
Presence Platform | An overview

April 25, 2023 | By Navyata Bawa

Meta’s Presence Platform is a set of technologies and design principles used to create immersive virtual reality (VR) and mixed reality (MR) experiences on Meta Quest devices. It includes a variety of features like advanced tracking and motion sensing technology, high-quality graphics and audio, and intuitive controls and interfaces that work together to create seamless immersive experiences. In addition to providing a platform for developers to create these innovative applications, Presence Platform also includes social features that help people connect and interact with each other in virtual spaces, including voice chat and shared experiences.

Let’s take a look at some of the features and tools available to you through Presence Platform so you can build the future for how we play, create, connect, and work on Meta Quest devices.

Mixed reality, interactions, and social presence icons

Mixed Reality

Presence Platform provides developers with mixed reality tools and features that blend the physical and virtual worlds, delivering more than a purely virtual experience: one that encompasses and leverages your surroundings to unlock new, more engaging experiences. These mixed reality tools let people see and interact with both virtual and physical objects simultaneously, creating a more immersive and engaging XR experience.

Some of the foundational mixed reality tools that Presence Platform provides are Passthrough, Scene, and Spatial Anchors.

Passthrough

Passthrough provides a real-time 3D visualization of the physical world inside Meta Quest headsets. The Passthrough API allows developers to integrate the Passthrough visualization with their virtual experiences. Passthrough is a key feature when developing mixed reality apps and one that enables you to see your surroundings in the headset. To learn more about Passthrough, check out our overview documentation where we go over how Passthrough works, how to set it up, and how to enable it in your own experiences.

Passthrough can also be customized based on your use case and your application. To learn how, check out our customization documentation, where we cover how styling, composite layering, and surface-projected Passthrough can be used to achieve effects like occlusion and Passthrough windows.

Passthrough visualization
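If you're working in Unity, a Passthrough layer can also be styled from script. Here's a minimal sketch using the OVRPassthroughLayer component from the Oculus Integration package; treat the exact property names as assumptions and verify them against the Passthrough documentation for your SDK version.

```csharp
// Minimal sketch: fade the Passthrough feed slightly and highlight edges,
// two of the styling options mentioned above. Assumes an OVRPassthroughLayer
// component is set up on the OVRCameraRig.
using UnityEngine;

public class PassthroughStyler : MonoBehaviour
{
    [SerializeField] private OVRPassthroughLayer passthroughLayer;

    private void Start()
    {
        passthroughLayer.textureOpacity = 0.8f;          // slightly transparent feed
        passthroughLayer.edgeRenderingEnabled = true;     // draw detected edges
        passthroughLayer.edgeColor = new Color(0f, 1f, 1f, 1f); // cyan edge tint
    }
}
```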

Passthrough can also be used over a Meta Quest Link cable, allowing a Passthrough-enabled app to run while using Meta Quest Link. This eliminates the need to build the app on PC and deploy it to a Meta Quest device every time you test during development, which significantly decreases iteration time for Passthrough-enabled apps. To learn more about using Passthrough with Meta Quest Link, check out our documentation where we go over the prerequisites, setup, and steps to enable Meta Quest Link.

Scene

Scene empowers you to quickly build complex and scene-aware experiences with rich interactions in the user’s physical environment. Scene includes two important concepts: Scene Capture and Scene Model.

Scene Capture lets users walk around and capture their scene to generate a Scene Model.

Scene Model is a single, comprehensive, up-to-date representation of the physical world that’s easy to index and query, providing a geometric and semantic representation of the user’s space so you can build room-scale mixed reality experiences.

The fundamental elements of a Scene Model are Scene Anchors, each of which has geometric components and semantic labels attached. For example, the system organizes a user’s living room around individual anchors with semantic labels such as the floor, ceiling, walls, desk, and couch, with each anchor also associated with a simple geometric representation: a 2D boundary or a 3D bounding box.

Scene Anchors graphic
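As a rough illustration of how a Scene Model can be queried from Unity, the sketch below looks for anchors labeled as a couch once Scene Capture has run. The OVRSceneAnchor and OVRSemanticClassification names come from the Oculus Integration package; treat the label string and method names as assumptions to confirm against the Scene documentation for your SDK version.

```csharp
// Hedged sketch: find couch anchors in the loaded Scene Model.
// Scene anchors are instantiated by the OVRSceneManager when the Scene Model
// loads; each carries semantic labels describing what it represents.
using UnityEngine;

public class CouchFinder : MonoBehaviour
{
    private void Start()
    {
        foreach (var anchor in FindObjectsOfType<OVRSceneAnchor>())
        {
            if (anchor.TryGetComponent(out OVRSemanticClassification labels) &&
                labels.Contains("COUCH")) // semantic label string is an assumption
            {
                Debug.Log($"Found a couch anchor at {anchor.transform.position}");
            }
        }
    }
}
```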

To learn more about Scene and how it works, check out our documentation where we discuss how Scene works, how to build mixed reality apps using Scene, and how to use the Scene Model.

Spatial Anchors

Spatial Anchors are world-locked frames of reference you can use as origin points to position content that can persist across sessions. This persistence is achieved by creating a Spatial Anchor at a specific 6DOF pose and then placing virtual content relative to it. With Spatial Anchors, developers can create applications that enable users to leave virtual objects in a specific location, and those objects can remain anchored in that location even when the user leaves the area. To learn more about Spatial Anchors and what they’re capable of, check out our documentation where we go over their capabilities and how to persist content across sessions.

Spatial Anchors graphic
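In Unity, creating and persisting an anchor might look like the sketch below, which uses the OVRSpatialAnchor component from the Oculus Integration package. Verify the exact save and load APIs for your SDK version; this is only a sketch of the flow described above.

```csharp
// Minimal sketch: create a world-locked anchor at a chosen 6DOF pose,
// wait for it to initialize, then persist it so content can be restored
// in a later session by its UUID.
using System.Collections;
using UnityEngine;

public class AnchorPlacer : MonoBehaviour
{
    public IEnumerator PlaceAnchor(Vector3 position, Quaternion rotation)
    {
        var go = new GameObject("Content Anchor");
        go.transform.SetPositionAndRotation(position, rotation);
        var anchor = go.AddComponent<OVRSpatialAnchor>();

        // The anchor initializes asynchronously.
        yield return new WaitUntil(() => anchor.Created);

        // Persist it for future sessions.
        anchor.Save((savedAnchor, success) =>
        {
            if (success)
                Debug.Log($"Saved anchor {savedAnchor.Uuid}");
        });
    }
}
```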

Spatial Anchors also allow multiple users to share a common reference point in space, which enables them to interact with virtual objects and data in a collaborative and shared environment. This is useful when building local multiplayer experiences by creating a shared world-locked frame of reference for multiple users. For example, two or more people can sit at the same table and play a virtual board game on top of it. To learn more about how Shared Spatial Anchors work, check out our documentation, where we discuss prerequisites and how anchors can be shared in more detail.
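Conceptually, sharing a saved anchor with other users might look like the sketch below. The Share call and the OVRSpaceUser type are assumptions based on the Shared Spatial Anchors API; confirm the exact signature and the required platform setup in the documentation before relying on it.

```csharp
// Hedged sketch: share an already-created and saved Spatial Anchor with a
// list of users so everyone shares one world-locked reference point.
using System.Collections.Generic;
using UnityEngine;

public class AnchorSharer : MonoBehaviour
{
    public void ShareWith(OVRSpatialAnchor anchor, List<OVRSpaceUser> users)
    {
        anchor.Share(users, result =>
        {
            Debug.Log($"Share completed with result: {result}");
        });
    }
}
```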

Interactions

Presence Platform provides you with tools and features that let you leverage natural inputs, including hands, voice, and controllers, as you build immersive experiences. These tools include Interaction SDK, Hand Tracking, Voice SDK, Tracked Keyboard, and Audio SDK.

Interaction SDK

Interaction SDK provides a library of components for adding controllers and hand interactions to your experiences, such as ray, poke, and grab, which incorporate best practices and heuristics for user interactions on Meta Quest devices. For hands specifically, Interaction SDK provides hand-specific interaction models and pose and gesture detection, as well as hands-centric visual affordances.

Hand gesture detection with Interaction SDK graphic
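Because Interaction SDK is component-based, most setup happens in the Unity Editor, but your scripts can react to interaction state changes at runtime. The sketch below listens for a select on any interactable (ray, poke, or grab); the IInteractableView, InteractableState, and Interface attribute names come from the Oculus.Interaction namespace and should be treated as assumptions to verify against the Interaction SDK documentation.

```csharp
// Hedged sketch: log when a user selects an object with any interactor.
using Oculus.Interaction;
using UnityEngine;

public class SelectionLogger : MonoBehaviour
{
    // Assign any interactable here (e.g., a RayInteractable or PokeInteractable).
    [SerializeField, Interface(typeof(IInteractableView))]
    private MonoBehaviour interactableView;

    private void OnEnable()
    {
        ((IInteractableView)interactableView).WhenStateChanged += OnStateChanged;
    }

    private void OnDisable()
    {
        ((IInteractableView)interactableView).WhenStateChanged -= OnStateChanged;
    }

    private void OnStateChanged(InteractableStateChangeArgs args)
    {
        if (args.NewState == InteractableState.Select)
            Debug.Log("Object selected");
    }
}
```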

To learn more about Interaction SDK, check out our detailed tutorial on how to build intuitive interactions in VR.

Be sure to visit our blog, where we dive deeper into how to get started with Interaction SDK, how to set it up, tutorials, and best practices when integrating interactions in your own experiences.

Voice SDK

Voice SDK allows you to build fully customizable voice experiences in your game. It provides developers with a set of tools, libraries, and resources they can use to add voice recognition and natural language processing capabilities to their VR and MR applications. Voice SDK is powered by the Wit.ai Natural Language Understanding (NLU) service, and it’s compatible with Meta Quest headsets, mobile devices, and other third-party platforms.

Voice SDK graphic

Using Wit.ai, you can easily train apps to use voice commands with no prior AI/ML knowledge required. The combination of Voice SDK and Wit.ai empowers you to focus on the creative and functional aspects of your app, while enabling powerful voice interactions.
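As a small example of what voice integration can look like in Unity, the sketch below activates listening and logs the final transcription. Names like Oculus.Voice.AppVoiceExperience and VoiceEvents.OnFullTranscription are assumptions to verify against the Voice SDK documentation for your version.

```csharp
// Hedged sketch: activate voice input and log the final transcription.
using Oculus.Voice;
using UnityEngine;

public class VoiceCommandListener : MonoBehaviour
{
    [SerializeField] private AppVoiceExperience voiceExperience;

    private void OnEnable()
    {
        voiceExperience.VoiceEvents.OnFullTranscription.AddListener(OnTranscription);
    }

    private void OnDisable()
    {
        voiceExperience.VoiceEvents.OnFullTranscription.RemoveListener(OnTranscription);
    }

    // Start listening, e.g., bound to a button press or gesture.
    public void StartListening() => voiceExperience.Activate();

    private void OnTranscription(string transcription)
    {
        Debug.Log($"Heard: {transcription}");
    }
}
```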

To learn more about Voice SDK, check out our documentation where we dive deep into how to set it up, steps to integrate Voice SDK, tutorials, and best practices.

Tracked Keyboard

The Tracked Keyboard SDK provides users with an efficient way to interact with their physical keyboard while inside a VR environment. By rendering a user’s hands on top of a VR representation of their keyboard, the SDK overcomes the limitations of virtual keyboards and blind touch typing.

Tracked keyboard graphic

To learn more about Tracked Keyboard SDK and how to use it, check out our documentation where we go over how to get started with the SDK, how to integrate it in your own applications, and sample scenes that showcase how it works in action.

Audio SDK

Audio is crucial for creating a persuasive VR or MR experience. The Meta XR Audio SDK provides spatial audio functionality including head-related transfer function (HRTF) based object and ambisonic spatialization, as well as room acoustics simulation. Some of the features supported by Audio SDK include audio spatialization, near-field rendering, room acoustics, ambisonics, attenuation and reflection, and many more experimental features for developers to try out.

Audio SDK graphic
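In Unity, a spatialized sound source combines a standard AudioSource with the SDK's spatializer. The sketch below is a rough example; the MetaXRAudioSource component name is an assumption about the Meta XR Audio SDK, and the SDK's spatializer also needs to be selected as the Spatializer Plugin in the project's audio settings.

```csharp
// Hedged sketch: play a clip as a fully 3D, spatialized source.
using UnityEngine;

public class SpatializedChime : MonoBehaviour
{
    [SerializeField] private AudioClip chimeClip;

    private void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = chimeClip;
        source.spatialize = true;   // route through the spatializer plugin
        source.spatialBlend = 1f;   // fully 3D positioning
        source.Play();

        // Hypothetical: attach the SDK component for HRTF-based spatialization
        // and room acoustics; verify the component name in the Audio SDK docs.
        gameObject.AddComponent<MetaXRAudioSource>();
    }
}
```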

To learn more about how Audio SDK works, the features it supports, and how to integrate it into your own applications, check out our documentation where we go over these topics in more detail.

Social Presence

Presence Platform provides you with tools and resources so you can build high-fidelity digital representations of people that create a realistic sense of connection in the virtual world. This is achieved through body, face, and eye tracking, provided by Movement SDK.

Movement SDK for Unity uses body tracking, face tracking, and eye tracking to bring a user’s physical movements into VR and enhance social experiences. Using the abstracted signals that tracking provides, developers can animate characters with social presence and provide features beyond character embodiment.

Movement SDK graphic
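To make the idea of "abstracted signals" concrete, the sketch below reads one face tracking weight and drives a blendshape with it. The OVRFaceExpressions component and the FaceExpression enum come from the Oculus Integration package; treat the exact member names as assumptions and check the Movement SDK documentation.

```csharp
// Hedged sketch: drive a smile blendshape from a face tracking signal.
using UnityEngine;

public class SmileDriver : MonoBehaviour
{
    [SerializeField] private OVRFaceExpressions faceExpressions;
    [SerializeField] private SkinnedMeshRenderer faceMesh;
    [SerializeField] private int smileBlendshapeIndex;

    private void Update()
    {
        if (!faceExpressions.ValidExpressions) return;

        // Tracking provides a weight in [0, 1] for each facial movement.
        if (faceExpressions.TryGetFaceExpressionWeight(
                OVRFaceExpressions.FaceExpression.LipCornerPullerL, out float weight))
        {
            faceMesh.SetBlendShapeWeight(smileBlendshapeIndex, weight * 100f);
        }
    }
}
```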

To learn more about how Movement SDK works and the prerequisites for using it, check out our documentation. For details on Body, Face, and Eye Tracking, see their respective documentation pages, and see how they're used in action in samples such as Aura and High Fidelity.

Other resources

To learn more about Presence Platform, check out our documentation where we go over all the SDKs we discussed above in more detail. Get started with Presence Platform by downloading the Oculus Integration package for Unity and for Unreal.

Our team has worked on several samples to help you get started when integrating these tools and SDKs into your own applications:

  • The World Beyond: A Presence Platform showcase demonstrating usage of Scene, Passthrough, Interaction, Voice, and Spatializer.
  • First Hand: Presence Platform Interaction SDK showcase demonstrating the use of Interaction SDK in Unity with hand tracking. This project contains the interactions used in the “First Hand” demo available on App Lab.
  • Unity-Movement: A package that showcases Meta Quest Pro’s Body, Eye, and Face Tracking capabilities, enabling developers to populate VR environments with custom avatars that bring the expressiveness of users into the virtual worlds they create.
  • Whisperer: Presence Platform Voice SDK showcase demonstrating the use of Voice SDK in Unity. This project contains the source code for the “Whisperer” demo available on App Lab.
  • Unity Shared Spatial Anchors: A sample demonstrating how to use the Shared Spatial Anchors API in the Unity game engine, showcasing the creation, saving, loading, and sharing of Spatial Anchors.

Check out the sessions from Connect 2022 where we discuss how to use Presence Platform to build mixed reality experiences, how to incorporate hand tracking in your apps, and much more.