Private Computation: Setup Guide

Step 3: Private Computation Environment Setup (30 Minutes)

  • Navigate to https://<private-computation-infrastructure.instance.url>/hub/ui

You should see the following window:


  • Enter your credentials and log in.
  • Navigate to the Deployment Menu and click Start deployment.

  • A modal will pop up listing the prerequisites you need to complete the deployment.

  • Click Continue to go to the next step. You should see a screen like the one below.

  • Enter your business ID and the Graph API token generated in Step 2, then click the Get Meta VPC details button. The AWS region and the peered Meta-side VPC ID should pop up (for reference). If you want to sanity-check the token first, see the sketch after the note below.

Please only click Use advanced settings if advised by a Meta representative. The advanced settings option is described here.
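For a quick sanity check of the token before entering it, the sketch below calls the Graph API debug_token endpoint. This is only an illustration: the API version is a placeholder, and using the token to inspect itself is an assumption that works for most token types.

    import requests

    GRAPH_API_VERSION = "v19.0"  # placeholder; use a current version
    TOKEN = "<graph-api-token-from-step-2>"  # placeholder

    # debug_token reports whether a token is valid, when it expires, and
    # which scopes it carries. Using the token to inspect itself is an
    # assumption; an app access token also works as access_token.
    resp = requests.get(
        f"https://graph.facebook.com/{GRAPH_API_VERSION}/debug_token",
        params={"input_token": TOKEN, "access_token": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    print("valid:", data.get("is_valid"), "expires_at:", data.get("expires_at"))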


  • Click Continue to go to the next screen.
  • You should now be on the credentials screen, as shown below.

  • Enter your AWS Access Key ID, Secret Access Key, and AWS account ID, then click Continue. These credentials should have admin access to create new components: S3 buckets, Kinesis streams, VPCs, subnets, and ECS clusters.
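Before pasting the credentials into the UI, you may want to confirm they resolve to the intended account. A minimal sketch with boto3; the profile name "pcs-deploy" is a hypothetical local AWS CLI profile holding the same credentials:

    import boto3

    # Hypothetical AWS CLI profile holding the credentials you plan to enter.
    session = boto3.Session(profile_name="pcs-deploy")
    identity = session.client("sts").get_caller_identity()

    # The Account field should match the AWS account ID you enter in the UI.
    print("Account:", identity["Account"])
    print("Caller ARN:", identity["Arn"])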

  • In Step 4, you can customize the environment. Fill in the required fields and click Continue.

Environment tag: a string appended to the names or tags of the AWS resources to be created, making it easier to identify which resources belong to this deployment. For convenience, a tag is pre-generated for you (in <month><day> format), but you can change it to a name of your choice.
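For reference, a tag in the default <month><day> format can be reproduced like this (a trivial sketch; the UI pre-generates it for you):

    from datetime import datetime

    # Default environment tag in <month><day> format, e.g. "0314" for March 14.
    print(datetime.now().strftime("%m%d"))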

Data bucket: the S3 bucket where data for the computation is stored. If you are redeploying PCS, you can either reuse the existing data bucket or create a new bucket for this deployment.

Note: If a suitable bucket could not be found, the screen will look as shown below:
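If you are unsure whether an existing data bucket is visible to your credentials, you can check from your own machine. A minimal sketch with boto3; the bucket name is a placeholder:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    bucket = "<your-existing-data-bucket>"  # placeholder

    try:
        # head_bucket succeeds only if the bucket exists and is accessible.
        s3.head_bucket(Bucket=bucket)
        print(f"{bucket} exists and is accessible; it can be reused.")
    except ClientError as err:
        # "404" means the bucket was not found; "403" means access is denied.
        print(f"Cannot reuse {bucket}:", err.response["Error"]["Code"])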


  • Data Ingestion Settings:
    • We have enabled the Manual event upload pipeline by default. This is required for using the Events Uploader modal.
    • Toggle the “Consumption of Pixel data” checkbox to "off" if you do not wish to send pixel events back to Meta as Conversions API events, for example, because you already have a Conversions API integration in place.

Share diagnostic data settings:

  • To help clients better troubleshoot issues and improve the product, we recommend opting in to diagnostic data sharing with Meta. When turned on, diagnostic data sharing automatically uploads logs to Meta within 5 minutes after a calculation run. The diagnostic data contains: console logs from the study runner (that is, the run coordinator), console logs from the worker containers, and logs from the PC data pipeline (Athena, Glue, Kinesis, Lambda). For more details on diagnostic data sharing, see Sharing Diagnostic Data with Meta.
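If you keep diagnostic sharing off, you can still pull similar logs yourself when troubleshooting. A hedged sketch that tails a CloudWatch log group with boto3; the log group name is hypothetical, since the real names depend on your deployment (they typically include the environment tag):

    import time
    import boto3

    logs = boto3.client("logs")

    # Hypothetical log group name; look up the real names in the CloudWatch
    # console -- they typically include the environment tag from Step 4.
    LOG_GROUP = "/ecs/<environment-tag>-worker"

    # Fetch events from the last 30 minutes (startTime is in milliseconds).
    start_ms = int((time.time() - 30 * 60) * 1000)
    for page in logs.get_paginator("filter_log_events").paginate(
        logGroupName=LOG_GROUP, startTime=start_ms
    ):
        for event in page["events"]:
            print(event["message"].rstrip())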

Next, review and deploy. This is the final step before the actual infrastructure deployment starts, so please review the information for a moment before you click the Deploy button.


About 10 minutes later, you should see the following screen confirming the successful deployment.


  • VPC peering status: If VPC peering has completed, you should see a “Completed” status under “VPC peering setup”. If it has failed, you should see the “Failed” status shown below. Please follow the reference here on how to retry a failed VPC peering connection and how to proceed.
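You can also confirm the peering state directly in your AWS account. A minimal sketch with boto3; the Meta-side VPC ID placeholder is the value shown earlier in the UI, and whether the Meta side is the accepter or the requester of the connection is an assumption to verify:

    import boto3

    ec2 = boto3.client("ec2")
    META_VPC_ID = "<peered-meta-side-vpc-id>"  # shown earlier in the UI

    # Assumes the Meta-side VPC is the accepter; if nothing matches, try the
    # "requester-vpc-info.vpc-id" filter instead.
    resp = ec2.describe_vpc_peering_connections(
        Filters=[{"Name": "accepter-vpc-info.vpc-id", "Values": [META_VPC_ID]}]
    )
    for pcx in resp["VpcPeeringConnections"]:
        # A status code of "active" corresponds to completed peering.
        print(pcx["VpcPeeringConnectionId"], pcx["Status"]["Code"])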

  • Automatic diagnostic data sharing status:
    • To help clients better troubleshoot issues and improve the product, we recommend opting in to diagnostic data sharing with Meta. With sharing enabled, diagnostic data is automatically uploaded to Meta within 5 minutes after a failed run. The diagnostic data contains: console logs from the study runner (that is, the run coordinator), console logs from the worker containers, and logs from the PC data pipeline (Athena, Glue, Kinesis, Lambda). For more details on diagnostic data sharing, see Sharing Diagnostic Data with Meta.
    • You can click the Edit button to bring up the “Share diagnostic data with Meta” dialog and enable or disable automatic diagnostic data sharing. Your selection will apply to future calculation runs.




Verify Infra Completeness and Connectedness

Before moving forward, please:

  • Confirm with your Meta POC that they have made the required changes to routing tables and granted access to the Elastic Container Registry (ECR) repositories. You can follow the instructions to run the PCE Validator to ensure the setup is correct.
  • (Recommended) Run an ad-hoc system diagnosis to validate the cloud infra setup; a spot-check sketch follows this list.
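Independently of the PCE Validator, the two items above can be spot-checked from your machine. A hedged sketch with boto3; the VPC ID is a placeholder, and the expected route destinations must come from your Meta POC:

    import boto3

    VPC_ID = "<your-pce-vpc-id>"  # placeholder

    # 1. List routes in the PCE VPC's route tables; the entries added by
    #    Meta should appear here (your Meta POC knows the expected ones).
    ec2 = boto3.client("ec2")
    tables = ec2.describe_route_tables(
        Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
    )["RouteTables"]
    for table in tables:
        for route in table.get("Routes", []):
            print(table["RouteTableId"], route.get("DestinationCidrBlock"))

    # 2. Confirm you can authenticate against ECR at all; pull access to the
    #    Meta-provided repositories must still be verified separately.
    ecr = boto3.client("ecr")
    auth = ecr.get_authorization_token()["authorizationData"][0]
    print("ECR auth token expires:", auth["expiresAt"])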

Data Ingestion

Create Data Sources

  1. Log in to the Private Computation Infrastructure UI via the subdomain defined in your CloudFormation step during the “Configure AWS and Install” process above. Use the email and password you set while creating the CloudFormation stack.
  2. Go to the “Data sources” page and create a new data source.
  3. Data sources are how you ingest conversion/signal data and use it for Private Computation. Conversion signals can optionally be sent to data sources through the Meta pixel.
  4. Conversion signals can also be uploaded to data sources manually.

There are two different ways to ingest your data:

Automated data ingestion and computation with pixel:

  • Connect to Meta pixel while creating a new data source. This will facilitate automatic data ingestion.
  • Depending on your needs and study setup, different wait times could apply. Your Meta representative will guide you on the exact wait time.

Prepare your own conversion data:

  • Use the semi-automated ingestion pipeline (Manual Data Upload). This process generally takes less than 30 minutes to ingest multi-month conversion data.
  • UI option for uploading conversion events data in CSV format.
    • Navigate to the deployment summary page: https://<private-computation-infrastructure.instance.url>/hub/pcs/deployment
    • Click on the ‘Upload events’ button under the ‘S3 Data Ingestion Bucket’ section.
    • Prepare your data in the semi-automated events data format (see Reference A1). Open the ‘sample file’ link for an example of this data format.
      • Maximum upload size per file is 5 GB.
    • Upload the events files to the upload modal, either by selecting the file(s) or by dragging and dropping them.

    • Note: If you see an error, try refreshing the page and reopening the uploader modal. If the error persists and you are unable to resolve it, please reach out to your Meta representative.
    • If you see the JOB_NOT_PROVISIONED_ERROR, please refer to this section for some ideas on how to resolve it.
    • If you see the BUCKET_CORS_MISSING_ERROR, please refer to this section for some ideas on how to resolve it.

  • Troubleshooting
    • If the file was uploaded but the expected events are missing during a computation run, double-check that the semi-automated Glue job is up to date.
      • This issue can often be resolved by removing spaces and special characters from your file name (a pre-upload check sketch follows this list). If the issue persists after renaming the file, redeploy the Private Computation infra.
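Based on the constraints above (5 GB per-file limit; spaces and special characters in file names can cause issues), a minimal local pre-upload check might look like the following. The allowed character set is an assumption, not a documented rule:

    import os
    import re

    MAX_BYTES = 5 * 1024**3  # 5 GB per-file upload limit

    def preflight(path: str) -> str:
        """Check the size limit and return a sanitized file name."""
        size = os.path.getsize(path)
        if size > MAX_BYTES:
            raise ValueError(f"{path} is {size} bytes; split files below 5 GB")
        name = os.path.basename(path)
        # Assumed safe character set: alphanumerics, dot, dash, underscore.
        return re.sub(r"[^A-Za-z0-9._-]", "_", name)

    # Example: preflight("my events march 2024.csv") -> "my_events_march_2024.csv"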

(Optional) PL Synthetic testing

  • While you wait for real data to accumulate, you/advertisers can run a “synthetic” lift study (that is, fully fabricated data on both sides) to test the pipeline end to end, including the AWS infra setup, the correctness of the PL binaries, and the VPC connection with the Meta side. This provides a more streamlined onboarding experience by enabling a faster feedback loop to flag errors along the pipeline. Please reach out to your Meta representative for more details.
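For illustration only, a generator for fabricated test events might look like the sketch below. The column names are hypothetical; the real schema is defined in Reference A1 and should be confirmed with your Meta representative:

    import csv
    import hashlib
    import random
    import time

    # Hypothetical column set -- the real schema is defined in Reference A1;
    # confirm it with your Meta representative before generating test data.
    COLUMNS = ["event_time", "event_name", "value", "currency", "email_sha256"]

    with open("synthetic_events.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(COLUMNS)
        now = int(time.time())
        for i in range(1000):
            email = f"user{i}@example.com"
            writer.writerow([
                now - random.randint(0, 90 * 24 * 3600),  # within ~90 days
                "Purchase",
                round(random.uniform(5.0, 200.0), 2),
                "USD",
                hashlib.sha256(email.encode()).hexdigest(),
            ])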