The “Contributor’s Story” series is intended to provide a face and voice to our major open source contributors and community members, an overview of the projects they are working on, and the successes and challenges contributors face when developing.
In this blog post, we will be talking to Soham, a WebXR contributor working on creating immersive AR/VR examples using WebXR media layers through the Major League Hacking (MLH) Fellowship.
“I have discovered that the best way to get better at software development is to not only practice it but to use it to solve real world problems.”
I am a software developer, currently in my final year, creating open source projects and writing about software development, competitive coding, machine learning, cybersecurity and information security awareness. I am also the founder of and lead maintainer at Devstation, a not-for-profit organisation with an aim to encourage startups and organisations to adopt open source tech.
The first time I heard about the MLH Fellowship was through a previous fellow who had worked on Jest as part of his fellowship. A few months back, I had created a feature request for a Jest plugin for Puppeteer to add video recording for tests. Knowing that I could contribute through the fellowship to exciting frameworks like Jest that solve real-world problems is something that really caught my attention.
I first came into contact with open source through the speech-to-code engine Dragonfly. Since then, I have been an active contributor to the Cloud Native Computing Foundation for the past few years. However, my first formal introduction to open source was through a program similar to the MLH Fellowship called Google Summer of Code, where I worked with the Wikimedia Foundation as part of the Release Engineering Team and absolutely fell in love with the open source community.
For the fellowship, I am working with Zhixiang Teoh on creating immersive AR/VR examples on the web that incorporate the WebXR Layers specification, particularly the WebXR media layers. The media layers specification makes creating and interacting with video layers in a virtual environment not only more performant but also crisper, reducing the dependence on the CPU and leveraging the GPU in an efficient fashion.
We have particularly focused our attention on achieving these implementations through an existing library like Three.js, which makes interfacing with the browser's WebGL and WebXR APIs relatively simple by abstracting them away behind helper classes and functions. Our particular goals for the fellowship were to create examples that show how multiple media layers with different 3D characteristics (equirectangular, quad, or cylindrical) can be created, and to attach controller interactions to them, such as fluid resizing, a toolbar for video play/pause and playback control, and moving the layers in the virtual 3D space.
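In rough outline, creating such a media layer looks something like the sketch below. This is a hypothetical illustration, not our exact code: the function name `addVideoLayer` and the specific `radius`/`centralAngle` values are ours, and it assumes a browser implementing the WebXR Layers module with a session requested with the `layers` feature.

```javascript
// Hypothetical sketch: attach a video element as a cylinder media layer
// to an active immersive WebXR session. Assumes the browser implements
// the WebXR Layers module; addVideoLayer and the numbers are illustrative.
async function addVideoLayer(session, video) {
  // XRMediaBinding lets the compositor present the video directly,
  // bypassing the app's WebGL pipeline for sharper, cheaper playback.
  const binding = new XRMediaBinding(session);
  const space = await session.requestReferenceSpace('local');

  // A cylinder layer curves the video around the viewer; quad and
  // equirect layers are created with analogous factory methods
  // (createQuadLayer, createEquirectLayer).
  const layer = binding.createCylinderLayer(video, {
    space,
    radius: 2,                   // metres to the cylinder surface
    centralAngle: Math.PI / 2.5, // arc the video subtends
  });

  // The session's layers array controls compositing of this layer
  // alongside any existing layers.
  session.updateRenderState({
    layers: [layer, ...(session.renderState.layers ?? [])],
  });
  return layer;
}
```

Quad and cylinder layers suit flat or gently curved video panels, while an equirectangular layer wraps 360° footage around the viewer.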
We were lucky to have some reference implementations that previous fellows had worked on, but the existing code wasn't structured in the best way: each example lived in a single file, which made navigating the code rather convoluted.
Our first step was to migrate the existing code examples to a modular structure and use as many of the abstractions provided by Three.js as possible, so that future readers of the code have an easier time following it. We also commented the code thoroughly and added support for Snowpack to leverage ES Modules.
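For reference, a minimal Snowpack setup for serving examples as native ES Modules might look like the following. This is a sketch under an assumed project layout (sources under `src/`), not our exact configuration:

```javascript
// snowpack.config.js — a hypothetical minimal config, assuming the
// example sources live under src/ and are served as native ES Modules.
module.exports = {
  mount: {
    src: '/', // serve src/ at the site root
  },
  devOptions: {
    port: 8080, // local dev server port
  },
};
```

With a config like this, `npx snowpack dev` serves the examples unbundled, so each module can be inspected in the browser as written.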
One of the major roadblocks we've had so far in our contribution is that WebXR Layers are a relatively new addition to the WebXR world. WebXR itself is quite new, and while browser support for immersive experiences on the web has been on a steady rise, much of the work required a thorough understanding on our part of other areas of the WebXR API spec itself. This was rather challenging, since the spec is intended to serve as a reference for browser implementers. The process, however, was very exciting, since it got us familiar with much of what happens under the hood of a browser to make an immersive experience on the web achievable.
As the fellowship nears its close, we have achieved most of the deliverables with well-commented code. Currently we are refining the existing code, filtering out potential bugs, adding documentation for the caveats and workarounds we have incorporated, and making the code more performant where we can.
The last time I had worked with Three.js was two years ago, primarily to learn it while building my personal website. Since peeking into Three.js's implementation of the native WebGL and WebXR APIs became a recurring theme, working with WebXR as part of the fellowship gave me a chance to explore Three.js in much more depth. I now feel confident contributing to Three.js and the Immersive Web community, and ready to dive deep into complicated codebases.
There is one key experience I particularly value as the fellowship comes to a close, and that is getting to know and work with Teoh. I have a tendency to over-engineer, and Teoh was always helpful in keeping me on track, stopping me from spending too much time refining a feature, and reminding me to take breaks when necessary. I've learnt that quick pair programming sessions are a great way to work on complicated tasks and a great way to get to know your fellow developers.
Overall, contributing to open source and knowing that our examples will serve as a reference for future implementations of the layer specification fills me with a sense of satisfaction.
WebXR looks more intimidating than it actually is. I think the best way to approach it is to first read the MDN Web Docs on the WebXR specification and then read the official WebXR API spec. The official WebXR specification is meant for browser implementers, which makes it easy to get lost. The idea is to use the specification only to understand the corresponding documentation on MDN in more detail.
Three.js does a wonderful job of wrapping WebGL. Using the abstractions provided by Three.js has really helped us work with the WebGL and WebXR APIs with relative ease.
We discovered another such framework, A-Frame, halfway into the fellowship. A-Frame makes building immersive experiences on the web simpler, and I highly recommend future contributors take a look at it.
If you’d like to learn more about Facebook Open Source, follow us on Twitter, Facebook, and YouTube for relevant updates, and check out the WebXR website for how to get started. Also, we recently sponsored Open Web Docs, where we hope to do our part to continue the MDN tradition of providing quality web documentation on a variety of technologies, including XR.