Build .NET apps for the metaverse with StereoKit
By Simon Bisson, Contributor
Analysis
Aug 03, 2022 | 7 mins
APIs | C# | Development Libraries and Frameworks
Microsoft’s open source mixed-reality tools make it easy to build OpenXR apps in .NET.
Much of the Windows mixed-reality platform depends on Unity. However, that’s not always the best option, for reasons that include a licensing model still very much focused on the games market. There are alternatives. You could use WebXR in an embedded browser, or work with the Power Platform’s cross-platform tools built around Babylon.js’s React Native implementation. But if you’re working with .NET code and want to extend it into augmented reality and virtual reality, you still need a set of .NET mixed-reality libraries.
OpenXR: an open mixed-reality standard
Luckily, there’s an open, standards-based approach to working with mixed reality and a set of .NET tools for working with it. The Khronos Group is the industry body responsible for graphics standards such as OpenGL and OpenCL that help code get the most out of GPU hardware. As part of its remit, it manages the OpenXR standard, which is designed to allow you to write code once and have it run on any headset or augmented-reality device. With runtimes from Microsoft, Oculus, and Collabora, among others, OpenXR code should run on most platforms that can host .NET code.
OpenXR’s cross-platform and cross-device nature makes it possible to have one code base that can deliver mixed reality to supported platforms, provided you’re using a language or framework that works across all those platforms. As the modern .NET platform now supports most of the places you’re likely to want to host OpenXR applications, you should find the Microsoft-sponsored StereoKit tool an ideal way to build those apps, especially with cross-platform UI tools like MAUI hosting non-OpenXR content. You can find the project on GitHub.
As it’s being developed by the same team as the Windows Mixed Reality Toolkit, there are plans for StereoKit to evolve toward supporting Microsoft’s Mixed Reality Design Language. That should allow the two tools to support a similar feature set, so you can bring what would have been Unity-based applications to the wider C# development framework.
Working with StereoKit
StereoKit is designed purely to take your 3D assets and display them in an interactive mixed-reality environment, with a focus on performance and a concise (the documentation calls it “terse”) API that simplifies writing code. It’s designed for C# developers, though there is additional support for C and C++ if you need to get closer to your hardware. Although it was originally designed for HoloLens 2 augmented-reality applications, the tool is also suitable for building virtual-reality code and for augmented reality on mobile devices.
Currently, platform support is focused on 64-bit applications, with StereoKit shipping as a NuGet package. Windows desktop developers currently get access only to the x64 code, though you should be able to use the ARM64 HoloLens Universal Windows Platform (UWP) build on other ARM hardware, such as the Surface Pro X. The Linux package supports both x64 and ARM64; Android apps will run only on ARM64 devices (though testing should work through the Android Bridge technology used by the Windows Subsystem for Android on Intel hardware). Unfortunately, StereoKit can’t be completely cross-platform at present: there’s no iOS implementation because there’s no official OpenXR build for iOS. Apple is focusing on its own ARKit tool instead, so as a workaround, the StereoKit team is working on a cross-platform WebAssembly implementation that should run anywhere there’s a WebAssembly-compatible JavaScript runtime.
Developing with StereoKit shouldn’t be too hard for anyone who’s built .NET UI code. It’s probably best to work with Visual Studio, though there’s no reason you can’t use any other .NET development environment that supports NuGet. Visual Studio users will need to ensure that they’ve enabled desktop .NET development for Windows OpenXR apps, UWP for apps targeting HoloLens, and mobile .NET development for Oculus and other Android-based hardware. You’ll need an OpenXR runtime to test code against, with the option of using a desktop simulator if you don’t have a headset. One advantage of working with Visual Studio is that the StereoKit development team has provided a set of Visual Studio templates that can speed up getting started by loading prerequisites and filling out some boilerplate code.
Most developers are likely to want the .NET Core template, as this works with modern .NET implementations on Windows and Linux and gets you ready for the cross-platform template under development. Cross-platform .NET development is now focused on tools like MAUI and WinUI, so it’s likely that the UWP implementation will become less important over time, especially if the team ships a WebAssembly version.
Build your first C# mixed-reality app
Building code in StereoKit is helped by well-defined 3D primitives that simplify creating objects in a mixed-reality space. Drawing a cube (the mixed-reality version of “Hello, world”) can be done in a handful of lines of code; another sample, a free-space drawing app, takes just over 200 lines of C#. The library handles most of the interactions with OpenXR for you, letting you work with your environment directly rather than implementing low-level drawing functions or writing code to manage different cameras and screens.
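Here’s a minimal sketch of that cube-drawing “Hello, world,” based on StereoKit’s documented startup pattern; the app name and cube dimensions are arbitrary choices for illustration:

```csharp
using StereoKit;

class Program
{
    static void Main()
    {
        // Start an OpenXR session; StereoKit falls back to its desktop
        // simulator when no headset runtime is available.
        if (!SK.Initialize(new SKSettings { appName = "HelloCube" }))
            return;

        // A 10 cm rounded cube using the default material.
        Mesh     cube     = Mesh.GenerateRoundedCube(Vec3.One * 0.1f, 0.02f);
        Material material = Material.Default;

        // SK.Run invokes this delegate once per frame until the app quits.
        SK.Run(() =>
        {
            // Draw the cube half a meter in front of the user.
            cube.Draw(material, Matrix.T(0, 0, -0.5f));
        });
    }
}
```

Everything that appears in the scene is drawn inside that per-frame callback, which is the pattern the rest of StereoKit’s API builds on.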
When writing code, you’ll need to consider some key differences between traditional desktop applications and working in StereoKit. Perhaps the most important is managing state. StereoKit redraws its UI elements every frame, storing as little state as possible between frames. There are aspects of this approach that simplify things considerably. All UI elements are hierarchical, so toggling one element off automatically toggles its child elements.
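To make that concrete, here’s a hedged sketch of what per-frame UI looks like inside the SK.Run callback from the earlier example. The window name, labels, and the UI.PushEnabled grouping are illustrative choices; only the pose and toggle flag persist between frames.

```csharp
// Persistent state: just a window pose and a toggle flag. The UI
// itself is declared afresh every frame (immediate-mode style).
Pose windowPose = new Pose(0, 0, -0.4f, Quat.LookDir(0, 0, 1));
bool showExtras = false;

SK.Run(() =>
{
    // Rebuild the window each frame; its label also acts as its ID.
    UI.WindowBegin("Settings", ref windowPose);

    UI.Toggle("Show extras", ref showExtras);

    // Elements between PushEnabled and PopEnabled inherit this state,
    // so switching the toggle off disables the whole group.
    UI.PushEnabled(showExtras);
    if (UI.Button("Extra action"))
    {
        // React to the press this frame.
    }
    UI.PopEnabled();

    UI.WindowEnd();
});
```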
This approach lets you attach UI elements to other objects in your model. StereoKit supports many standard 3D object formats, so all you need to do is load a model from a file, define its interactions, and add a layout area on the model. That area acts as the host for UI elements and makes the object the top of the UI hierarchy, as in the sketch below. It’s important not to reuse element IDs within a UI object: they form the basis of StereoKit’s minimal interaction-state model and are used to track which elements are currently active and available for user interactions.
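A sketch of that pattern, assuming a hypothetical widget.glb asset: UI.Handle gives the loaded model a grabbable root pose, and anything drawn relative to that pose moves with the object.

```csharp
// Load once at startup; StereoKit reads common formats such as
// glTF/GLB, OBJ, STL, and PLY. The file name here is hypothetical.
Model model     = Model.FromFile("widget.glb");
Pose  modelPose = new Pose(0, 0, -0.6f, Quat.Identity);

// Inside the per-frame step:
// "Widget" is this handle's ID; reusing it elsewhere in the same UI
// would confuse StereoKit's interaction-state tracking.
UI.Handle("Widget", ref modelPose, model.Bounds);
model.Draw(modelPose.ToMatrix());
```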
StereoKit takes a “hand-first” approach to mixed-reality interactions, using hand sensors such as HoloLens’s tracking cameras where they’re available, or simulating hands for mouse and gamepad input. Hands are displayed in the interaction space and can be used to place other UI elements relative to hand positions, for example, keeping a control menu close to the user’s hands no matter where they are in the application space.
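As an illustrative fragment (again running inside the per-frame step), Input.Hand exposes tracked hand poses, which you can use to pin a small menu to the user’s palm; the window label and size are arbitrary:

```csharp
// Follow the user's left palm whenever the hand is tracked.
Hand hand = Input.Hand(Handed.Left);
if (hand.IsTracked)
{
    // Reassigned every frame, so the window simply follows the palm.
    Pose menuPose = hand.palm;
    UI.WindowBegin("Hand menu", ref menuPose, new Vec2(0.06f, 0));
    if (UI.Button("Action"))
    {
        // Handle the press.
    }
    UI.WindowEnd();
}
```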
If you need inspiration for how to implement specific features, a useful library of demo scenes is in the StereoKit GitHub repository. These include sample code for working with controllers and managing hand input, among other necessary elements of a mixed-reality interaction. The code is well documented, giving you plenty of tips on how to use key elements of the StereoKit APIs.
Removing Microsoft’s dependency on Unity for mixed reality is a good thing. Having its own open source tool ensures that mixed reality is a first-class citizen in the .NET ecosystem, supported on as much of that ecosystem as possible. Targeting OpenXR is also key to StereoKit’s success, as it ensures a common level of support across mixed-reality devices like HoloLens, virtual-reality headsets like Oculus, and augmented reality on Android. You’ll be able to use the same project to target different devices and integrate with familiar tools and technologies such as MAUI. Mixed reality doesn’t need to be a separate aspect of your code. StereoKit makes it simple to bring it into existing .NET projects without having to make significant changes. After all, it’s now just another UI layer!