Data Protection Assessment FAQ

What is the Data Protection Assessment

  1. What is the Data Protection Assessment?

    1. The Data Protection Assessment is an annual requirement for apps accessing certain types of data. The questions in the assessment are designed to determine whether developers are complying with our Platform Terms as it relates to the use, sharing and protection of Platform Data.
  2. What is Platform Data?

    1. “Platform Data” means any information, data, or other content you obtain from us, through Platform or through your App, whether directly or indirectly and whether before, on, or after the date you agree to these Terms, including data anonymized, aggregated, or derived from such data. Platform Data includes app tokens, page tokens, access tokens, app secrets, and user tokens.
    2. All data you receive from Meta through the app is considered Platform Data. For example, UserID, User email and User friends are all Platform Data.
  3. How does this differ from App Review and Data Use Checkup?

    1. App Review is a forward-looking process that allows you to request approval for individual permissions and features.
    2. Data Use Checkup is an annual process for developers to self-certify that their continued use of and access to specific data via Meta APIs is in compliance with our Platform Terms and Developer Policies.
    3. Data Protection Assessment is an annual questionnaire that asks developers about their data use, sharing and data security with regard to Platform Data.
  4. What can I do to prepare for the Data Protection Assessment?

    1. Make sure you are reachable:
      1. Update your developer or business account contact email and notification settings.
      2. Update the contact email on the app’s basic settings page.
      3. Review the administrators listed on the app, and consider adding your legal and data security team members as admins so they can also answer questions.
    2. Review the questions in the Data Protection Assessment and engage with your teams on how best to answer these questions.
    3. Review our Platform Terms, and our Developer Policies.

Why is the Data Protection Assessment necessary

  1. Why do I need to complete the Data Protection Assessment?

    1. With the ever-changing privacy regulatory landscape and continuously evolving threats to people’s privacy, we all have a responsibility to ensure we are working to build the trust of people who use our products and services, beginning with how their data is being used, shared and protected across the internet.
      1. The Data Protection Assessment is required for developers who have apps that access certain types of data on our platform.
      2. We have already seen success stories of developers implementing new data security measures because of our standards. If we partner on this together, we’ll raise the standards across the internet and gain the trust of the billions of people who use our services around the world.

Process

  1. How will I know if I need to complete the Data Protection Assessment?

    1. If you are an administrator for an app that requires Data Protection Assessment, you will receive notifications on your My Apps page, the App Dashboard, Alert Inbox, and the email associated with your developer account.
      1. You will receive a notification to prepare for the assessment by completing the following steps:
        1. Make sure you are reachable:
          1. Update your developer or business account contact email and notification settings.
          2. Update the contact email on the app’s basic settings page.
          3. Review the administrators listed on the app, and consider adding your legal and data security team members as admins so they can also answer questions.
        2. Review the questions in the Data Protection Assessment and engage with your teams on how best to answer these questions.
        3. Review our Platform Terms, and our Developer Policies.
  2. How do I ensure I’m receiving email communications related to Data Protection Assessment?

    1. If you are an administrator for an app that requires Data Protection Assessment, you will receive notifications on your My Apps page, the App Dashboard, Alert Inbox, and the email associated with your App Admin account.
    2. If you are an App Admin, make sure you are reachable:
      1. Update your developer or business account contact email and notification settings.
      2. Update the contact email on the app’s basic settings page.
  3. How will I know when I can begin the Data Protection Assessment?

    1. If you are the administrator for an app that requires Data Protection Assessment, you will be notified in the following ways:
      1. Notifications:
        1. An email is sent to the developer or business account contact email. Edit your personal developer notification settings here.
        2. A message is sent to the Alert Inbox on the App Dashboard.
        3. A notification is sent to the App Admin via the App Dashboard.
      2. Required Actions:
        1. On your My Apps page, you will see a ‘Required Action’ on the app information card called ‘Data Protection Assessment.’
        2. On the App Dashboard, you will see a ‘Required Action’ at the top.
        3. On the basic settings page for the app, you will see a ‘Data Protection Assessment’ record.
  4. Am I able to ask questions about the topics covered in the Data Protection Assessment?

    1. Yes. If you need clarification about the questions asked in the Data Protection Assessment, you can reach out to Meta directly.
      1. Within the Data Protection Assessment, on the left-hand side you will see a section titled ‘Need Help?’ Under this section, click ‘Ask a question’ and you will receive a pop-up through which you can submit a clarifying question.
      2. To access this feature, you will need to have a Business Manager account, and make sure to add the app and app admins. Please refer to those links for step-by-step guides.
      3. You will receive a response from Meta via an email alert as well as a notification on the App Dashboard.
    2. Meta has published Data Security Requirements on our developer documentation site, but this content is only available to users who are logged into their Facebook account. If you aren’t able to open this page and access the documentation, make sure that you:
      1. Are logged into your Facebook account, and
      2. Have accepted Facebook’s developer terms here.

Submission

  1. How long do I have to submit the Data Protection Assessment?

    1. You have 60 calendar days from first notification to complete the Data Protection Assessment.
  2. How often will I need to submit the Data Protection Assessment?

    1. This is an annual requirement.
  3. The questionnaire is very long. Can I save my progress and complete it over multiple sessions?

    1. Yes. The form will auto-save so you will be able to pick up where you left off.
  4. How will I know the status of my submission?

    1. Here are the definitions for all the scenarios of your assessment:
      1. Not started: Developer received a notification that Data Protection Assessment is required, and there is a future deadline, but they have not started filling out the form.
      2. Past due: The deadline to submit the Data Protection Assessment form has passed. Developer must complete the assessment or the app will be restricted.
      3. Submitted, more info needed: Developer submitted answers to the Data Protection Assessment questions, Meta has begun the review, and needs clarification.
      4. Submitted, violations found: Developer submitted answers to the Data Protection Assessment questions, Meta has begun the review, and confirmed that the app is violating one or more of our Platform Terms. Depending on the severity of the violation, there may be an opportunity to resolve the violation prior to the restriction. However, the most severe violations do not include a warning period.
      5. Submitted, in review: Developer submitted answers to the Data Protection Assessment questions, or has responded to the request for more information. Meta has begun the review. The assessment is not completed yet.
      6. Submitted, no violations found: Developer submitted answers to the Data Protection Assessment questions, Meta completed the review, and found that the app is in compliance with our Platform Terms. No further action is necessary.
  5. I did this last year, where can I download my answers?

    1. Unfortunately, you will not be able to download your answers from the previous assessment, as we have changed the questionnaire to provide more clarity. After you submit your answers for this assessment, you will be able to view and download your submission as a PDF.

Review

  1. Will Meta reviewers reach out to me with additional questions before making any sort of decision?

    1. If, based on your response, Meta reviewers need more information, we will reach out with clarifying questions and you will be notified in the following ways:
      1. Notifications:
        1. An email is sent to the developer or business account contact email. Edit your personal developer notification settings here.
        2. A message is sent to the Alert Inbox on the App Dashboard.
        3. A notification is sent to the App Admin via the App Dashboard.
      2. Required Actions:
        1. On your My Apps page, you will see a ‘Required Action’ on the app information card called ‘Data Protection Assessment’
        2. On the App Dashboard, you will see a ‘Required Action’ at the top
      3. Status: Action Required
  2. I received an email saying that Meta needs more information. Why?

    1. This is an opportunity for you to work with Meta reviewers, who need additional information to be certain before deciding whether the app is complying with our Platform Terms.
  3. How do I reply to the Meta reviewers’ questions?

    1. The notifications you received (described above) will have a link to the assessment, where you will see information at the top of the page that provides details on what additional information Meta reviewers are looking for. Please respond in the form and upload documentation if needed. Make sure you click ‘Submit’ after you have completed your response.
  4. How long do I have to reply to the Meta reviewers’ questions?

    1. If you receive a ‘More information needed’ request, you will have 5 business days to respond. If you do not respond within the initial 5-day window, you will receive two auto-extensions for a total of 15 business days.

Responding to Violations

  1. Will my app be restricted for violations to Platform Terms?

    1. Yes. Depending on the violation, different restrictions could be placed against the app.
  2. Will Meta reviewers reach out to me if a violation is found?

    1. Yes. If, based on your responses to the assessment, Meta reviewers identify a violation, you will be notified in the following ways:
      1. Notifications:
        1. An email is sent to the developer or business account contact email. Edit your personal developer notification settings here.
        2. A message is sent to the Alert Inbox on the App Dashboard.
        3. A notification is sent to the App Admin via the App Dashboard.
      2. Required Actions:
        1. On your My Apps page, you will see a ‘Required Action’ on the app information card
          1. If the violation stems from failure to submit the assessment by the deadline, the information card will say ‘Data Protection Assessment: Past due’
          2. If the Data Protection Assessment was submitted and a violation was found, the information card will say ‘Violations found’
        2. On the App Dashboard, you will see a ‘Required Action’ at the top
      3. Status: Violations found
  3. What happens if I do not respond?

    1. Failure to respond is considered a Platform Terms violation.
      1. After 60 days of non-response, your app will be deactivated.
        1. Restore the app by completing and submitting the assessment.
        2. If you receive a ‘More information needed’ request, you will have 5 business days to respond. If you do not respond within the initial 5-day window, you will receive two auto-extensions for a total of 15 business days. After 15 business days, enforcement action will be taken against your app.
        3. If you have passed these timelines, the app may be restricted. You can go to the My Apps page to view any apps that are restricted due to violations.
          1. Depending on the severity of the violation, there may be an opportunity to resolve the violation prior to the restriction. However, the most severe violations do not include a warning period. To resolve a violation, look on the My Apps page for the ‘Resolve’ link, and follow the outlined steps.
  4. How do I request an extension to respond to the violation?

    1. For each violation, if the deadline to respond is within 3 days, a ‘Request extension’ button will appear. You can request two automatically approved extensions, each equal to the length of the warning period (which varies depending on the violation).
  5. How do I resolve a violation identified from the Data Protection Assessment?

    1. If a violation is found and the app has been restricted, you will be able to resolve the violation by providing a response and submitting evidence showing the violation has been remediated. Once a response has been submitted, a Meta reviewer will review it and respond directly in the ‘Resolve violations’ form.

Platform Terms 3 (Data Use)

  1. The jurisdiction my app is based in, or that my users are based in, does not have an applicable law that requires me to delete user data when a user requests it (such as the European Union or California). Under Meta policies, is my app still required to honor deletion requests from users and am I still required to include instructions on how users can request deletion in my privacy policy?

    1. Yes, even if the jurisdiction the app is based in, or that your users are based in, does not require you to delete user data when a user requests deletion, our Platform Terms still require that you 1) delete Platform Data when a user requests deletion, and 2) explain in your privacy policy how users can request deletion of their data - unless there is an applicable law or regulation that prevents you from complying with those two Meta requirements.
  2. My company uses Platform Data to allow users to pick their desired criteria for matches on dating apps or to age-gate certain content (such as alcohol sales) from underage users. Does this mean we are using Platform Data to disadvantage certain people based on race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition?

    1. No, using Platform Data to allow users to set criteria on dating apps or to age-gate content does not count as disadvantaging certain people based on race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition in the Data Protection Assessment.
  3. What happens if my app does not delete Platform Data in some or all of the scenarios Meta requires because we do not have the ability to delete data automatically?

    1. Deletion does not have to be done automatically. When answering questions in the Data Protection Assessment about whether you delete Platform Data in certain scenarios, please answer in the affirmative if your policy is to delete either manually or automatically.
  4. Meta terms require that I delete a user's Platform Data when that user deletes their account with my app. However, I am unable to do so because my app lacks the callback functionality necessary to know when a user has deleted their Facebook account. Does this mean my app is in violation of Meta terms?

    1. No, this does not mean the app is in violation of Meta terms. Callback functionality is not required. However, if you do have callback functionality in place, you are required to delete Platform Data when a user deletes their Facebook account. You are also required to delete Platform Data when a user deletes their in-app account.
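      Callback functionality is optional, but for developers who do implement Meta’s data deletion request callback, the request arrives as a signed_request parameter that must be verified with the app secret. The following is a minimal, hedged sketch in Python; the confirmation-code scheme, status URL, and deletion helper are placeholders invented for the example, and the exact payload fields should be confirmed against Meta’s current callback documentation.

```python
import base64
import hashlib
import hmac
import json

APP_SECRET = "your-app-secret"  # keep server-side; never embed in client code


def parse_signed_request(signed_request: str, app_secret: str) -> dict:
    """Verify and decode the signed_request sent to a data deletion callback."""
    encoded_sig, payload = signed_request.split(".", 1)

    def b64url_decode(value: str) -> bytes:
        # URL-safe base64 without padding, so pad it back out before decoding
        return base64.urlsafe_b64decode(value + "=" * (-len(value) % 4))

    expected = hmac.new(app_secret.encode("utf-8"), payload.encode("utf-8"), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(encoded_sig)):
        raise ValueError("signed_request signature mismatch")
    return json.loads(b64url_decode(payload))


def delete_platform_data_for(user_id: str) -> None:
    """Stub: delete or queue deletion of all Platform Data stored for this user."""
    ...


def handle_deletion_callback(signed_request: str) -> dict:
    """Return the JSON body the callback responds with (status URL + confirmation code)."""
    data = parse_signed_request(signed_request, APP_SECRET)
    user_id = data["user_id"]
    confirmation_code = f"del-{user_id}"  # hypothetical scheme; use your own tracking IDs
    delete_platform_data_for(user_id)     # hypothetical helper standing in for your deletion logic
    return {
        "url": f"https://example.com/deletion-status/{confirmation_code}",  # assumed status page
        "confirmation_code": confirmation_code,
    }
```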

Platform Terms 4 (Privacy Policy)

  1. My app's privacy policy contains a statement telling users that they may request the deletion of their data. Is this sufficient?

    1. This statement, alone, is not sufficient. It is necessary to include instructions on how users can submit their request to you.

Platform Terms 5 (Service Providers and Tech Providers)

  1. What is a service provider?

    1. Service provider means an entity you use to provide you services in connection with Platform or any Platform Data. Google Cloud and Amazon Web Services (AWS) are examples of common, large service providers.
  2. What is a sub-service provider?

    1. A sub-service provider is a service provider that is used by another service provider to provide them services in connection with Platform or any Platform Data.
  3. What is a tech provider?

    1. Tech provider means a Developer of an App whose primary purpose is to enable Users thereof to access and use Platform or Platform Data.
  4. What qualifies as a written agreement for “Other” purposes, as described in question 4?

    1. Examples of a written agreement include terms of service, a standard non-negotiated agreement, or a signed contract.

Platform Terms 6 (Data Security)

  1. What do I need to do to demonstrate compliance with Meta’s Data Security Platform Terms Section 6.a.i?

    1. To demonstrate compliance, we'll ask whether you have an information security certification from a third-party auditor. Such a certification is not obligatory, but it is one way to demonstrate compliance. We will also ask about specific protections you implement. Meta will evaluate your responses using our internal criteria, and we will follow up with you if we have further questions or if we require you to enact additional protections.
  2. Am I required to obtain an Information Security Framework (ISF) or Cybersecurity Framework (CSF) certification from a third-party auditor?

    1. No, you’re not required to get an auditor’s certification. Organizations that do have SOC 2, ISO 27001, ISO 27018, or PCI DSS certifications can submit any of these as evidence.
      1. The certification must be relevant to the computing environment in which Platform Data is being stored or processed.
    2. Please note that we will ask about specific protections you implement regardless of whether or not you submit a security certification.
  3. If I do not submit a security certification, what questions will I be asked to answer about my data security protections?

    1. All developers that are required to complete the Data Protection Assessment will be asked whether they have implemented the following:
      1. Enforcing encryption at rest for all Platform Data storage (e.g., all database files, backups, object storage buckets)
      2. Technical and/or administrative controls to protect Platform Data that is stored on organizational devices (e.g., laptops or removable media that are lost or stolen). A variety of controls could be acceptable, including, for example, written policy documentation with annual training or reminders that Platform Data should not be stored on such devices, or technical controls.
      3. Enabling TLS 1.2 or higher encryption for all public network connections where Platform Data is transmitted
    2. In addition, developers of apps that have access to higher-risk data will be asked whether they have implemented these additional security protections:
      1. Testing the app and systems for vulnerabilities and security issues at least every 12 months
      2. Protecting sensitive data like credentials and access tokens
      3. Testing the systems and processes used to address data security incidents (e.g., data breaches and cyberattacks)
      4. Requiring multi-factor authentication for remote access
      5. Having a system for maintaining accounts (assigning, revoking, reviewing access and privileges)
      6. Having a system for keeping system code and environments updated, including servers, virtual machines, distributions, libraries, packages, and anti-virus software
      7. Note: This list is not exhaustive and does not serve as a replacement for an appropriate information security program for your organization.
  4. I’m being asked whether my organization enforces encryption at rest for all Platform Data storage. What is the scope of this question? Does it apply to both cloud and client storage?

    1. Encryption at rest protects Platform Data by making the data indecipherable without a separate decryption key. This provides an additional layer of protection against unauthorized read access.
    2. On servers or in a cloud environment, where Platform Data related to all of an app’s users tends to be concentrated, encryption at rest reduces the risk of a data breach. For example, encryption at rest protects against threats like unauthorized access to a database backup, which may not be protected as tightly as the production database itself.
    3. If your organization does store Platform Data in the cloud, you MUST protect that data with encryption at rest, or with suitable compensating controls. Either application-level (e.g., software encrypts/decrypts specific columns in a database) or full-disk encryption are acceptable. Although we recommend that industry-standard encryption (e.g., AES, BitLocker, Blowfish, TDES, RSA) be used, we do not require any particular algorithm or key length.
    4. If your organization does NOT store Platform Data in the cloud then this requirement is not applicable.
    5. For the avoidance of doubt, data that is persisted within web or mobile clients for individual users of your service is not in scope for this question.
    6. If your organization stores Platform Data via a cloud IaaS product (e.g., AWS EC2, Microsoft Azure IaaS, or Google Compute Engine), or you use self-hosting or a hybrid approach, then this question does apply.
    7. However, there are other backend hosting models that are special cases:
      1. Use of SaaS products - if your organization stores Platform Data via a SaaS product (e.g., MailChimp or Salesforce), this question does not apply. You should instead describe this relationship in the Service Provider section of the Data Protection Assessment.
      2. Use of PaaS hosting - if your organization stores Platform Data via a PaaS product (e.g., AWS Elastic Beanstalk, Google App Engine, Force.com), this question does not apply. You should instead describe this relationship in the Service Provider section of the Data Protection Assessment.
      3. Use of BaaS hosting - if your organization stores Platform Data via a BaaS product (e.g., AWS Amplify, Azure Mobile Apps, Azure PlayFab, Google Firebase, and MongoDB Stitch), this question does not apply. You should instead describe this relationship in the Service Provider section of the Data Protection Assessment.
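      As an illustration of the application-level encryption option above (a sketch only, assuming Python and the third-party cryptography package, neither of which Meta requires), a single database field can be encrypted before it is written and decrypted when read; a real deployment would load the key from a KMS or secrets manager rather than generating it in code.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# For the sketch the key is generated in-process; in practice load it from a KMS/secrets manager.
key = Fernet.generate_key()
fernet = Fernet(key)


def encrypt_field(plaintext: str) -> bytes:
    """Encrypt one column value before it is written to the database (encryption at rest)."""
    return fernet.encrypt(plaintext.encode("utf-8"))


def decrypt_field(ciphertext: bytes) -> str:
    """Decrypt the stored value when the application legitimately needs to read it."""
    return fernet.decrypt(ciphertext).decode("utf-8")


# Example: protect an email address obtained from the API before persisting it
stored = encrypt_field("user@example.com")
print(decrypt_field(stored))
```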
  5. I’m being asked how I prevent Platform Data from being stored on organizational and personal devices. What sorts of prevention measures are acceptable?

    1. Developers must have technical and/or administrative controls (for example, policies and rules) to protect Platform Data that is stored on organizational devices (e.g., laptops or removable media that can be lost or stolen). A variety of controls could be acceptable, including, for example, written policy documentation with annual training or reminders that Platform Data should not be stored on such devices, or technical controls.
  6. I’m being asked whether my organization enables TLS 1.2 or greater for all network connections where Platform Data is transmitted. What is the scope of this question?

    1. This question pertains to any transfer of data over the internet that includes our Platform Data, whether that transfer is from a web or mobile client to your cloud environment, or transfers between cloud environments that traverse the internet.
    2. Platform Data transmitted across the internet is accessible to anyone that can observe the network traffic. Therefore it must be protected with encryption to prevent those unauthorized parties from being able to read the data. Encryption in transit protects Platform Data when it is transmitted across untrusted networks (e.g., the internet) by making it indecipherable except for the origin and the destination devices. In other words, parties in the middle of the transmission would not be able to read Platform Data even if they can see the network traffic (this is also called a man-in-the-middle attack). TLS is the most prevalent form of encryption in transit because it’s the technology that browsers use to secure communications to websites like banks.
    3. The following diagram illustrates network connections that may involve the transfer of Platform Data. The yellow dashed lines represent connections that are your responsibility to secure using TLS encryption.
      For transfers of data that are entirely within private networks, for example an AWS or GCP Virtual Private Cloud (VPC), TLS is recommended but not required.
      The following summarizes the TLS encryption requirements for transfers of Meta Platform Data:
      1. To and from end user devices (mobile phones, PCs, tablets, etc.) and your server or cloud infrastructure: TLS 1.2 or greater must be enabled for compatible devices; TLS 1.0 and 1.1 may be enabled for compatibility with older devices.
      2. To and from your server or cloud infrastructure and any remote server, cloud infrastructure, or 4th party service: TLS 1.2 or greater must be enforced.
      3. To and from components entirely within your private data center, server, or cloud infrastructure: TLS encryption is recommended but not required for Platform Data transfers that are entirely within a private cloud network.
      4. To and from Facebook and any device, server, or cloud infrastructure: out of scope for the Data Protection Assessment; Facebook controls the TLS policy for these transfers.

      Consult your Chief Information Security Officer (CISO) or equivalent role for your organization, or a qualified cybersecurity firm, to guide you on appropriate encryption and other compensating controls for the application.
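      As one small example of enforcing the ‘TLS 1.2 or greater’ policy on a server-to-server connection (a sketch assuming a Python client; the URL is a placeholder and your stack will differ), the standard-library ssl module can refuse to negotiate anything older than TLS 1.2.

```python
import ssl
import urllib.request

# Client-side context that rejects TLS versions below 1.2
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder endpoint: the handshake fails if the server only offers TLS 1.0/1.1
with urllib.request.urlopen("https://example.org/", context=context) as response:
    print(response.status)
```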
  7. I’m being asked whether my organization tests our app and systems for vulnerabilities and security issues at least every 12 months. What sorts of tests are acceptable?

    1. App developers that access Meta’s platform write software to access and process Platform Data and provide services to people using the software. This software - and related system configurations - may contain security vulnerabilities that malicious actors can exploit, leading to unauthorized access to Platform Data. Developers must therefore test for vulnerabilities and security issues so that they can be discovered proactively, ideally preventing security incidents before they happen.
    2. Security testing should occur in a variety of forms throughout the application lifecycle. Typically, a combination of testing methods ranging from simple vulnerability scanning, to application security scanning, to penetration testing are employed, especially where sensitive data such as personal information are processed.
    3. You must have tested your software used to process Platform Data for security vulnerabilities by either conducting a penetration test or a vulnerability scan / static analysis of your software. The output of the test must show that there are no unresolved critical or high severity vulnerabilities. The test must have been completed within the past 12 months.
    4. Additionally, if your organization processes Platform Data in the cloud, an acceptable security vulnerability test must have specifically covered your cloud software, by conducting either a penetration test of your app/system or a vulnerability scan/static analysis. You must also have tested your cloud configuration for security issues. This requirement applies irrespective of the hosting approach (e.g., BaaS, PaaS, IaaS, self-hosted, or hybrid).
  8. I’m being asked whether my organization protects sensitive data like credentials and access tokens. What is the scope of this question?

    1. App secrets and access tokens are fundamental to the security of Meta's APIs in that they provide access control to the data available via these APIs. If an unauthorized party gains access to these credentials they could call Meta’s APIs - impersonating the real developer - and take any of the actions that we have granted the app (e.g., reading data from our APIs about an app’s users).
    2. Developers have access to sensitive credentials as a part of their use of Meta’s Platform. Specifically:
      1. App Secret - Meta shares an app secret with developers with the expectation that only trusted parties (e.g., app admins) within the developer organization have access to this secret
      2. Access Token - When people authorize an app, the developer gets a credential called an access token that’s used in subsequent API calls
    3. An unauthorized party who is able to read these sensitive credentials can use them to call Meta APIs as if they are the developer (this is sometimes called token impersonation) leading to unauthorized access to Platform Data. Therefore these credentials must be protected from unauthorized access to prevent impersonation.
    4. App Secret - one of these two things must be true:
      1. Developer never exposes the app secret outside of a secured server environment (e.g., it is never returned by a network call to a browser or mobile app and the secret is not embedded into code that’s distributed to mobile or native/desktop clients)
      2. Or the developer must have configured their app with type Native/Desktop so that our APIs will no longer trust API calls that include the app secret
    5. Access Tokens - all of the following must be true:
      1. On client devices - Meta access tokens must not be written such that another user or process could read it
      2. In the cloud - If the developer processes or stores Meta access tokens in the cloud, those access tokens:
        1. Must be protected by a vault or with application encryption (where access to the decryption key is limited to the application), and must not be written to log files
        2. Or must be protected with suitable compensating controls
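      A minimal sketch of the application-encryption option for stored access tokens (assuming Python, the third-party cryptography package, a key supplied via an environment variable named TOKEN_ENCRYPTION_KEY, and an in-memory dict standing in for a datastore; all of these are assumptions for illustration, and a managed vault or KMS is preferable):

```python
import logging
import os

from cryptography.fernet import Fernet  # pip install cryptography

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tokens")

# Key comes from the environment for brevity; a vault/KMS with restricted access is preferable.
fernet = Fernet(os.environ["TOKEN_ENCRYPTION_KEY"])


def store_access_token(user_id: str, access_token: str, db: dict) -> None:
    """Persist only the encrypted token and keep raw token material out of log lines."""
    db[user_id] = fernet.encrypt(access_token.encode("utf-8"))
    log.info("stored access token for user %s", user_id)  # log the user id, never the token


def load_access_token(user_id: str, db: dict) -> str:
    """Decrypt the token only at the moment the application needs to call the API."""
    return fernet.decrypt(db[user_id]).decode("utf-8")
```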
  9. I’m being asked whether my organization tests our security incident response systems and processes at least every 12 months. What is the scope of this question?

    1. Security incidents happen to all companies sooner or later, so it is essential that organizations have planned ahead for who needs to do what to contain the incident, communicate with stakeholders, recover and learn from what happened. If a security incident occurs, having a plan or playbook ready - with a team of people who are trained in what to do - can reduce the duration of the incident and ultimately limit the exposure of Platform Data.
    2. Although different organizations will have different levels of incident response sophistication, we require at least a basic plan that includes the key activities - detect, react, recover, and review - along with named personnel assigned roles and responsibilities. You should also have documented evidence that the plan has been tested recently (within the past 12 months) and that all personnel named in the plan participated.
  10. I’m being asked whether my organization requires multi-factor authentication for remote access. What is the scope of this question?

    1. A common technique used by adversaries to gain access to confidential data is to start by gaining access to tools that the developer uses to build or operate their app/system. Sophisticated tools exist to obtain user credentials (e.g., phishing attacks) in order to hack into accounts that are protected only by passwords; multi-factor authentication provides an additional layer of security to guard against this risk.
    2. Software developers use a variety of tools to build, deploy, and administer their apps/systems. It’s common to use these tools remotely over the internet (e.g., an employee working from home and shipping a new software feature or updating the cloud configuration). Tools that are protected with single-factor authentication (username and password) are highly susceptible to account takeover attacks. For example, attackers can use tools to try username and password combinations that have leaked from one tool to gain access to another tool. Multi-factor authentication protects against such attacks by requiring an additional factor besides a password upon login, e.g., a code generated by an authenticator app.
    3. Related to an organization’s processing of Platform Data, remote access to these tools must be protected with multi-factor authentication (i.e., not simply a password):
      1. Tools used to deploy and manage code/configuration changes to the app/system
      2. Administrative access to a cloud or server environment, if applicable
      3. Specifically:
        1. Collaboration / communications tools - for example business email or Slack
        2. Code repository - e.g., GitHub or another tool used to track changes to the app/system’s code/configuration
      4. And, if the organization processes Platform Data in a cloud/server environment:
        1. Software deployment tools - tools used to deploy code into the cloud/server environment, e.g., Jenkins or another Continuous Integration / Continuous Deployment (CI/CD) tool
        2. Administrative tools - portal or other access used to manage / monitor the cloud or server environment
        3. Remote access to servers - SSH, remote desktop, or similar tools used to remotely login to servers running in the cloud or server environment
  11. I’m being asked whether my organization has a system for maintaining accounts (assigning, revoking, reviewing access and privileges). What is the scope of this question?

    1. Having good account management hygiene is an important part of preventing unauthorized use of accounts to gain access to Platform Data. In particular, developers must make sure that access to resources or systems is revoked when it’s no longer needed. Accounts are the basic unit of management for granting people access to systems, data, and administrative functions. Accounts are granted permissions that enable specific actions; good practice is to grant only the minimum permissions an account needs – this is called the principle of least privilege.
    2. When a person departs an organization it’s critical that their user accounts are disabled promptly for a couple of reasons:
      1. To prevent access by that person (i.e., the former employee), and
      2. To reduce the likelihood that an unauthorized person could use the account without being noticed. For example, a malicious actor could use social engineering to cause an IT helpdesk to reset the password for the account. If this account belongs to a current employee, that employee is likely to report their inability to login, whereas if the account is still active but belongs to a departed employee it’s less likely to be noticed.
    3. With this in mind, organizations must have a systematic way for managing accounts, granting permissions or privileges, and revoking access when it’s no longer needed.
    4. Developers must have a tool or process for managing accounts for each of these tools/systems/apps:
      1. Those used to communicate with one another, e.g., Slack or business email
      2. Those used to ship software, e.g., a code repository, and
      3. Those used to administer and operate their system (as applicable to processing Platform Data)
    5. Developers must regularly review (i.e., not less than once every 12 months) access grants and have a process for revoking access when: (1) it’s no longer required, and (2) it is no longer being used.
    6. They must also have a process to promptly revoke access to these tools when a person departs the organization.
  12. I’m being asked whether my organization has a system for keeping system code and environments updated, including servers, virtual machines, distributions, and anti-virus software. What is the scope of this question?

    1. Software components are routinely updated or patched to resolve security vulnerabilities, and eventually these components will reach their end of life when they are no longer supported. Developers who package or rely on these components must keep up to date to avoid running software with known vulnerabilities.
    2. App developers rely on a variety of 3rd party software to run apps/systems that process Platform Data. For example, a developer will rely on some or all of these:
      1. Libraries, SDKs, Packages - developers package these with their own custom code to build an app
      2. Virtual Machine images, app containers, and operating systems - an app runs inside one or more of these containers, which provide services like virtualization and access to networks and storage
      3. Browsers, operating systems, and other applications used by the developer’s employees / contributors - software that runs on the mobile devices and laptop computers that the developer uses to build and run their system
    3. Security vulnerabilities are routinely found in these components, leading to patches being released. Vulnerabilities fixed by patches are then disclosed in public databases with a severity rating (low, medium, high, or critical). Therefore, developers on our platform must have a systematic way to manage patches by:
      1. Identifying patches that are relevant to their app/system
      2. Prioritizing the urgency based on exposure, and
      3. Applying patches as an ongoing business activity
    4. For the following software components, as applicable, the developer must have a systematic way of identifying available patches that resolve security vulnerabilities, prioritizing based on risk, and applying patches as an ongoing activity:
      1. Libraries, SDKs, packages, app containers, and operating systems used in a cloud or server environment
      2. Libraries, SDKs, packages used on client devices, e.g., within mobile apps
      3. Operating systems and applications used by the organization’s members to build and operate their app/system, e.g., operating systems and browsers running on employee laptops
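      As a small illustration of the ‘identify relevant patches’ step for one layer only (Python libraries, assuming pip is the package manager, which this FAQ does not require), a script can list dependencies with newer releases so they can be triaged; a vulnerability-specific scanner such as pip-audit can then map versions to published advisories.

```python
import json
import subprocess

# Ask pip which installed packages have newer releases available (one input to patch triage).
result = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True,
    text=True,
    check=True,
)

for pkg in json.loads(result.stdout):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```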
  13. What should I do if I do not have one or more of the security controls in place but have implemented compensating security controls?

    1. If you don’t think one of the controls that Meta is asking you to enact makes sense for your use (e.g. storage or processing) of Meta Platform Data, or if you are implementing compensating controls for one or more of these protections, please explain your circumstances and provide evidence to resolve the violation. If you've received an email from us regarding compliance requirements, you should follow directions in the email to respond on App Dashboard. If you cannot locate your email notification, please visit the My Apps page or App Dashboard for this app.
  14. I am being asked whether there is a publicly available way for people to report security vulnerabilities to us. Do we need to have a specific channel for that?

    1. While our policies do not require a separate channel for reporting security vulnerabilities (any regularly monitored contact method, such as an email address, contact form, or phone number, is compliant), we recommend that developers have a structured program for this purpose, for example by operating a Vulnerability Disclosure Program (VDP) on BugCrowd or HackerOne.
  15. I am seeing a reference to a question number in the communications I have received from the DPA review team. What do these numbers refer to?

    1. You may see a reference to a question number in communications you receive from our review team. The primary use of these numbers is for internal review processes, but for your information this is the reference:
      1. Q9 - Encryption at rest in a cloud or server environment.
      2. Q10 - Protecting Platform Data on Organizational or Personal Devices.
      3. Q11 - Protecting Platform Data with Encryption in Transit.
      4. Q12 - Testing the app and systems for security vulnerabilities.
      5. Q13 - Protect the Meta App Secret and Access Tokens.
      6. Q14 - Have an Incident Response Plan and Test the Incident Response Systems and Processes.
      7. Q15 - Require Multi-Factor Authentication for Remote Access.
      8. Q16 - Have a System for Maintaining User Accounts.
      9. Q17 - Keep Software Up to Date.
  16. I received feedback from the DPA review team about missing or insufficient policy or procedure evidence. What is policy or procedure evidence?

    1. You may get a follow up question about policy or procedure evidence related to one or more of the data security questions, which may be referred to by one of the following numbers by the Meta review team: Q9, Q10, Q11, Q12, Q13, Q14, Q15, Q16, and Q17.

      In all of these cases, policy or procedure evidence is written documentation that explains your organization’s approach for implementing a specific data security protection. For example, your organization might have a written:
      1. Authentication policy document that states that all business, development, and system administration tools where Platform Data could be accessed must be protected with Multi-Factor Authentication.
      2. Encryption policy that states that all Meta Platform Data must be protected with encryption in transit and encryption at rest.
      3. Patching procedure that states how frequently to check for available security patches that affect the organization’s cloud application, how to do an impact assessment for each, the maximum time to deploy patches based on severity, and how to deploy patches.
      These examples would all be acceptable forms of policy or procedure documentation for the relevant question. Meta’s DPA reviewers will review the documents you provide to confirm that your approach meets our requirements. For more information, including additional examples and details on how to prepare policy evidence for submission, see our developer documentation on preparing evidence.
  17. I received feedback from the DPA review team about missing or insufficient implementation evidence. What is implementation evidence?

    1. You may get a follow up question about implementation evidence related to one or more of the data security questions. Data security questions are sometimes referred to by a number: Q9, Q10, Q11, Q12, Q13, Q14, Q15, Q16, and Q17.

      In all of these cases, including implementation evidence is important to demonstrate implementation of your documented policy or procedure. The evidence could include screenshots of admin systems and outputs from tools that show how an organization has actually executed their approach to a protection. Acceptable implementation evidence must demonstrate that your organization’s implementation meets Meta’s requirements for the protection. You must submit at least one piece of implementation evidence for each protection.

      For more information, see our developer docs on preparing evidence for each protection:
      1. Encryption at rest in a cloud or server environment.
      2. Protecting Platform Data on Organizational or Personal Devices.
      3. Protecting Platform Data with Encryption in Transit.
      4. Testing the app and systems for security vulnerabilities.
      5. Protect the Meta App Secret and Access Tokens
      6. Have an Incident Response Plan and Test the Incident Response Systems and Processes.
      7. Require Multi-Factor Authentication for Remote Access.
      8. Have a System for Maintaining User Accounts.
      9. Keep Software Up to Date.
  18. I am being asked about how I protect platform data on organizational and personal devices. What are Meta’s requirements?

    1. You may get a follow up question about how your organization protects Platform Data that may be stored on organizational and personal devices against loss. These questions may also be referred to by a number, Q10a or Q10b.

      Meta requires you to demonstrate at least one of the following:
      1. Your organization takes steps to block access to systems that contain Meta Platform Data except from devices that have one or both of the following:
        1. Full disk encryption enabled, for example via FileVault or BitLocker.
        2. Endpoint DLP software running, for example Microsoft Intune or Symantec DLP.
      2. Your organization has a policy that forbids anyone in the organization from storing Meta Platform Data on any organizational or personal device, and you have written evidence that people in your organization are obligated to follow this policy.
      3. Your organization has a policy that states that:
        1. Only those individuals who need access and have an authorized business purpose may process Platform Data on organizational devices, AND
        2. Individuals must delete Platform Data from these devices when the authorized business purpose no longer exists.
      4. Meta Platform Data is never accessible to anyone in your organization because of the way your software is built (e.g., Meta Platform Data is only stored in transient memory in end-customer devices).
  19. What is a Facebook or Meta App Secret?

    1. You may get a follow up question about how you protect the Facebook or Meta App Secret. This question may also be referred to by a number, Q13.

      The app secret is a parameter associated with Facebook apps that can be used as an access token in certain API calls to change the configuration of your app, for example configuring Webhook callbacks. You can find your app’s Facebook App Secret in your developer dashboard under Settings > Basic. For more information about the app secret, refer to our developer documentation on Login Security.

      For more information on our requirements to protect the app secret and user access tokens including any evidence you may be required to provide, see Protect Meta App Secret and Access Tokens.
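      Relatedly, because the app secret must stay server-side, Graph API calls made from your servers can include an appsecret_proof parameter, an HMAC-SHA256 of the access token keyed with the app secret, so that stolen tokens alone are not enough to call the API as your app. The sketch below shows how such a proof is computed in Python; the token and secret values are placeholders, and the parameter details should be confirmed in the Login Security documentation referenced above.

```python
import hashlib
import hmac


def appsecret_proof(access_token: str, app_secret: str) -> str:
    """HMAC-SHA256 of the access token keyed with the app secret (sent as appsecret_proof)."""
    return hmac.new(
        app_secret.encode("utf-8"),
        access_token.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()


# Placeholder values; append &appsecret_proof=<value> to server-side Graph API requests.
print(appsecret_proof("USER_ACCESS_TOKEN", "APP_SECRET"))
```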
  20. I am being asked about how I manage and maintain accounts. What are Meta’s requirements?

    1. You may get a follow up question about whether your organization has a system for managing accounts and some related processes. This question may also be referred to by a number, Q16.
      1. You must have a tool or process for granting and revoking access to the apps and systems your organization uses to communicate within the organization, develop and ship software, and administer your systems / application - to the extent these apps and systems are applicable to processing Platform Data.
      2. At least once every 12 months, you must have a procedure for reviewing previously-granted access and revoking grants that are no longer required and no longer used.
      3. You must have a process to promptly revoke access from all apps and systems when a person departs your organization.
      For more information, including for more information about any evidence you may be required to provide, see our developer documentation - Have a System for Maintaining User Accounts.
  21. I am being asked about how I test my system / software for vulnerabilities. What are Meta’s requirements?

    1. You may get a follow up question about whether your organization tests your system / software for vulnerabilities. This question may also be referred to by a number, Q12.

      If your organization processes Meta Platform Data in a cloud environment (e.g., within AWS, Azure, GCP, Alibaba Cloud, Tencent Cloud):
      1. You must have tested your cloud software for security vulnerabilities via:
        1. A penetration test or external scan of the app or system; or
        2. A static or dynamic analysis of the software using a security tool; or
        3. A Vulnerability Disclosure Program that meets our requirements
      2. You must have also tested your cloud asset configuration for vulnerabilities via:
        1. A tool provided by the cloud host; or
        2. Another commercial or open source tool; or
        3. A Vulnerability Disclosure Program that meets our requirements
        4. If a cloud configuration review is not applicable to your organization, you must explain why as part of your submitted evidence
      3. You must demonstrate that both of the above activities have occurred within the past 12 months.
      Meta generally requires that all critical or high severity vulnerabilities have been resolved.

      If your organization processes Meta Platform Data in a server environment hosted in a different way:
      1. You must have tested your server software for security vulnerabilities via:
        1. A penetration test or external scan of the app or system; or
        2. A static or dynamic analysis of the software using a security tool; or
        3. A Vulnerability Disclosure Program that meets our requirements
    Meta generally requires that all critical or high severity vulnerabilities have been resolved.

    For all other organizations:
    1. You must have tested your client software for security vulnerabilities via:
      1. A penetration test or external scan; or
      2. A static or dynamic analysis using a security tool; or
      3. A Vulnerability Disclosure Program that meets our requirements
    2. You must demonstrate that this activity has occurred within the past 12 months
    Meta generally requires that all critical or high severity vulnerabilities have been resolved.

    For more information, including regarding any evidence you may be required to provide, see our developer documentation - Test the App and Systems for Vulnerabilities and Security Issues.
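    One concrete (and hedged) example of the static analysis option: for a Python codebase, the open-source Bandit scanner can be run over the source tree as part of CI. The src directory name is an assumption for the example, and whether any given tool satisfies Meta’s requirement depends on your stack and on review of your evidence.

```python
import subprocess
import sys

# Run Bandit recursively over the source tree; it exits non-zero when it reports findings.
result = subprocess.run(["bandit", "-r", "src"])

if result.returncode != 0:
    print("Bandit reported findings; triage and resolve critical/high issues before release.")
    sys.exit(result.returncode)
print("No Bandit findings reported.")
```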


  22. I am being asked about how I keep my systems up to date. What are Meta’s requirements?

    1. You may get a follow up question about whether your organization has a system for keeping system code and environments updated. This question may also be referred to by a number, Q17.
    2. You must demonstrate a defined and repeatable process to:
      1. Identify patches that resolve security vulnerabilities and are relevant to the software you use to process Meta Platform Data,
      2. Prioritize these patches based on risk, and
      3. Apply patches as an ongoing activity.
      Be sure to include enough detail about how your software processes Platform Data so that we are able to verify that your process meets all of the above requirements. For example, it is not acceptable to explain only that your organization has a process for patching the server OS without explaining your approach for other layers of the stack (e.g., the virtualization layer, application container layer, and libraries or dependencies within your application).

      For more information, see our developer documentation - Keep Software Up to Date.

Please note that these FAQs are intended to provide helpful information and links to assist you in responding to the DPA, however, the Data Security Requirements govern our assessment of the DPA. The FAQs are only a guide and should not be read to contradict or supersede the requirements.
