Starting today, some developers can use the popular Unity software to create apps and games for Apple’s upcoming Vision Pro headset.
A partnership between Unity and Apple was first announced during Apple’s WWDC 2023 keynote last month, in the same segment that introduced the Vision Pro and visionOS. At the time, Apple noted that developers could immediately begin creating visionOS apps using SwiftUI in a new beta version of the company’s Xcode IDE for Macs, but it also promised that Unity would begin supporting the Vision Pro this month.
Now it’s here, albeit in a slow, limited rollout to developers who sign up for a beta. Unity says it will gradually admit a wide range of developers to the program over the coming weeks and months, but it hasn’t gone into detail about the criteria it uses to choose them, other than to say it isn’t just targeting AAA game makers.
Once developers get access, the workflow should feel familiar: it closely mirrors how they already work on iOS. They create a project targeting the platform, generate an Xcode project from it, and can quickly preview or play back their work from the Unity editor via an attached Vision Pro devkit or Xcode’s Simulator for visionOS apps.
Shared Spaces, RealityKit and PolySpatial
Unity is best known as a 2D and 3D video game engine, but the company offers a range of tools aimed at making it something of a one-stop shop for interactive content development – gaming or otherwise. The company has a long history on Apple’s platforms; many of the early 2D and 3D games on the iPhone were built with Unity, which helped establish the engine’s reputation.
Unity has also since been used to create some popular VR games and apps for PC VR, PlayStation VR and VR2, and Meta Quest platforms.
There are a handful of specific contexts in which a Unity-made app can appear on visionOS. 2D apps that run in a flat window within the user’s space are the easiest to implement. It should also be relatively easy (though not necessarily trivial) to port fully immersive VR apps to the platform, provided the project in question uses Unity’s Universal Render Pipeline (URP). If it doesn’t, the app won’t get access to features like foveated rendering, which is important for both performance and efficiency.
Still, that’s a walk in the park compared to the other two contexts. AR apps that place content in the user’s visible physical environment will be more complicated, and so will apps that want to present interactive 3D objects and spaces alongside other visionOS apps, i.e. apps that support multitasking.
To make that happen, Unity is launching “PolySpatial,” a feature that allows apps to run in visionOS’ Shared Space. Everything in the Shared Space is rendered through RealityKit, so PolySpatial translates Unity materials, meshes, shaders, and other assets into their RealityKit equivalents. Even within that context there are limitations, so developers will sometimes have to make adjustments, build new shaders, and so on to get their apps working on Vision Pro.
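For a sense of what that translation targets, the sketch below shows roughly what a piece of Shared Space content looks like on the RealityKit side in Swift: an entity built from a mesh and a material, displayed in a RealityView. It’s a hedged illustration of RealityKit’s own API rather than PolySpatial’s actual output, and the view and entity names here are just placeholders.

```swift
import SwiftUI
import RealityKit

// A minimal sketch of the RealityKit primitives that Shared Space content
// is ultimately expressed in. The specific mesh, material, and view are
// illustrative; PolySpatial performs this kind of translation
// automatically from Unity assets.
struct CubeView: View {
    var body: some View {
        RealityView { content in
            // A simple box mesh paired with a basic material -- roughly the
            // RealityKit counterpart of a Unity mesh plus material.
            let mesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .blue, isMetallic: false)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            content.add(entity)
        }
    }
}
```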
It’s worth noting that, ostensibly in the name of privacy, visionOS doesn’t give apps direct access to the cameras, and there’s no way around working with RealityKit.
Much of the discussion so far has been about tweaking existing apps to get them onto Vision Pro in time for the product’s launch next year, but this is also an opportunity for developers to get to work on completely new apps for visionOS. It’s been possible to use SwiftUI and other Apple toolkits to create apps and games for visionOS for about a month now, but Unity has a robust library of tools, plugins, and other resources, especially for game creation, which could take care of much of the legwork compared to working in SwiftUI – at least for some projects.
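For comparison, a bare-bones visionOS app built with Apple’s toolkits alone can be a single SwiftUI scene. The sketch below assumes the stock structure of Xcode’s visionOS app template; the type names are placeholders, and it covers only the flat-window case rather than anything immersive.

```swift
import SwiftUI

// A minimal visionOS app written entirely in SwiftUI.
// "HelloVisionApp" and "ContentView" are placeholder names.
@main
struct HelloVisionApp: App {
    var body: some Scene {
        // A standard 2D window floating in the user's space.
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        VStack(spacing: 12) {
            Text("Hello, visionOS")
                .font(.largeTitle)
            Text("A plain SwiftUI window, no Unity required.")
        }
        .padding()
    }
}
```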