WWDC26 Wish List

My wish list heading into WWDC26 — what I'd like to see in visionOS, RealityKit, and Reality Composer Pro for the year ahead.

WWDC is the main event of my developer year. As has become tradition, here's my wish list. We'll find out on June 8, 2026, when Apple previews the version 27 updates for all the things: iOS, visionOS, macOS, tvOS, and watchOS.

visionOS

  • Full Liquid Glass: Make it easier to write cross-platform apps by supporting glassEffect everywhere (a sketch of today's adoption follows this list).
  • A Camera in RealityView: I would love to be able to add a camera within a RealityView and record or render that view elsewhere. Using it as the source for a PortalComponent could also be very interesting.
  • ARKit body detection of the wearer: Having more of the visionOS user's skeleton would unlock a lot of exciting game and health use cases.
  • Widgets in environments: I want to leave a widget in a specific place on the Moon, or on the surface of Haleakalā, and have it still be there the next time I'm in that environment.
  • Even more flexible widget placement and window pinning: I have objects on my walls that interfere with widgets, and I'd like to use any spot along a wall regardless of what's physically there. One idea: if I shake a widget enough, or hold it in place long enough, it breaks through the restrictions and lets me position it against the wall plane itself instead of the room scan.
  • Shared coordinate spaces between iOS and visionOS: We have shared in-person experiences from visionOS to visionOS, and from iOS to iOS, but nothing that crosses that bridge. A lot of my app ideas would be unlocked by this, which is why I've been working to build it manually, but I would love native support in iOS 27 and visionOS 27.
  • More automatic media synchronization between visionOS and tvOS: I'd like to start watching something on the TV and in Vision Pro at the same time, with playback controls synced automatically (a SharePlay sketch follows this list).
  • AirPlay by gesture: Perhaps a gesture to point at a device in space and start casting to it.
  • Spatial templates and SharePlay for more than five participants: I want to add "flat" FaceTime participants from iOS, macOS, or the web to an immersive template and give them spatial positions in the scene, instead of grouping them on the shared canvas. Literally let me place people around a virtual table, whether they're Spatial Personas or flat video heads.
  • ARKit information from immersive scenes: Let me place virtual content in the user's selected immersive space as if it were a real space they're standing in, from the contours of the volcano to the craters and rocks on the Moon.
  • Render and interact with a captured space: Gaussian splats, perhaps, with an associated mesh exposed as ARKit anchors so I can treat it like a real space for placement and physics.
  • "Leave behind" objects into the shared space I want to offer that a user can take elements out of my apps and set them on a table, a bookshelf, wherever, where they remain interactive as new windows of my app in the shared space. Think of this like tear off toolbars from the Mac. I want to be able to "pin" something from my mixed immersive experience to the real world and have it stay in the Shared Space when I leave.

RealityKit

  • More pre-made assets: Objects, textures, sounds, everything. Please bring the models from the old Reality Composer app forward to current USD, and keep adding more.
  • More lights, and more lighting options: More lights at one time in a RealityView scene, and more lighting effects. I'd love light masks so lights cast unique patterns, the ability to animate them, and more (the first sketch after this list shows today's options).
  • Live previewing of Components and Systems inside Reality Composer Pro: Let me test and run my Systems in the same preview environment where I'm authoring the scene.
  • Precise camera control: I would like to precisely position the camera and control its focal properties in RealityView (the second sketch after this list shows the iOS approach).
  • Immersive on iOS and macOS: I would like some equivalent to Immersive Spaces on iOS and macOS. Going full screen into a 3D view, treating the screen as a portal you are looking through, could be a really compelling way to bring visionOS experiences to iOS and macOS.
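
On the lighting item: for context, here's the current surface area, a minimal sketch of attaching one spot light in RealityKit today. The intensity and cone angles are arbitrary values of mine; there's no mask, pattern, or baking support to show.

```swift
import RealityKit

// Today's RealityKit lighting: attach a light component to an entity.
// No light masks or baked lighting; just a handful of primitives.
func makeKeyLight() -> Entity {
    let lightEntity = Entity()

    var spot = SpotLightComponent()
    spot.intensity = 6_000        // brightness; value chosen arbitrarily
    spot.innerAngleInDegrees = 30 // full-brightness cone
    spot.outerAngleInDegrees = 45 // soft falloff edge

    lightEntity.components.set(spot)

    // Aim the light at the scene origin from above and behind.
    lightEntity.look(at: .zero, from: [0, 2, 1], relativeTo: nil)
    return lightEntity
}
```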
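
And on camera control: iOS and macOS RealityView already allow an explicit virtual camera, which is roughly the control I'm asking for on visionOS. A minimal sketch assuming the iOS 18-era API, with FramedScene as a made-up view:

```swift
import SwiftUI
import RealityKit

// iOS/macOS today: a RealityView rendered through a scripted camera.
// visionOS has no equivalent, since the wearer is always the camera.
struct FramedScene: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual // render via our own camera entity

            let camera = PerspectiveCamera()
            camera.camera.fieldOfViewInDegrees = 45
            camera.look(at: .zero, from: [0, 1.2, 2], relativeTo: nil)
            content.add(camera)
        }
    }
}
```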

Reality Composer Pro

  • Dramatically faster compile times: Builds come to a long stall while compiling rkassets. Any performance improvement here would be very welcome.
  • Baked lighting: If I could mark lights or objects as static in an RCP scene and have that lighting baked into the textures at save/load time, scenes would look dramatically better with no extra runtime cost.

Foundation Models Framework and CoreAI

  • I would like some built-in tools for the Foundation Models Framework that provide access to information and context about the user (with their permission, of course), enabling more capable and interesting personalized reasoning (see the sketch after this list).
  • There is a lot more to explore with on-device reasoning, and I'm really excited to see where things expand beyond the Foundation Models Framework.
  • I've seen rumors of a CoreAI SDK, and would be excited to see what additional tools Apple might bring to provide developers with more powerful reasoning.
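
To make the tools idea concrete, here's the shape of a custom tool in today's Foundation Models Framework. This is a sketch from memory of the WWDC25 API, so treat the exact signatures as approximate, and CalendarContextTool is entirely hypothetical; a built-in, permission-gated family of tools like this is what I'm wishing for.

```swift
import FoundationModels

// A hypothetical tool surfacing user context to the on-device model.
// Today you build and permission-gate this yourself; the wish is for
// Apple to ship tools like it as part of the framework.
struct CalendarContextTool: Tool {
    let name = "calendarContext"
    let description = "Returns the user's calendar events for a given day."

    @Generable
    struct Arguments {
        @Guide(description: "The day to look up, e.g. 'today' or 'tomorrow'")
        var day: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Placeholder: a real tool would query EventKit with permission.
        "Events for \(arguments.day): 9:00 standup, 13:00 design review."
    }
}

// The session decides when to invoke the tool while reasoning.
func planMyDay() async throws -> String {
    let session = LanguageModelSession(tools: [CalendarContextTool()])
    let response = try await session.respond(to: "What does my day look like?")
    return response.content
}
```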

MapKit

  • Camera control with 3D and street-level views: I want to animate a camera moving along a street with a street-level perspective. Imagine a leisurely walk down a street on any device (a sketch of today's closest approximation follows).
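
For reference, today's closest approximation is animating a pitched MapCamera at low altitude, sketched below with made-up coordinates. The wish is this same scripted motion, but through real street-level imagery.

```swift
import SwiftUI
import MapKit

// Hypothetical endpoints of a short walk down one street.
private let walkStart = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
private let walkEnd = CLLocationCoordinate2D(latitude: 37.3380, longitude: -122.0090)

// Today's approximation: fly a low, pitched camera down the street.
struct StreetWalk: View {
    @State private var position: MapCameraPosition = .camera(
        MapCamera(centerCoordinate: walkStart, distance: 120, heading: 0, pitch: 70)
    )

    var body: some View {
        Map(position: $position)
            .onAppear {
                // Glide the camera to the far end of the street.
                withAnimation(.linear(duration: 10)) {
                    position = .camera(
                        MapCamera(centerCoordinate: walkEnd, distance: 120, heading: 0, pitch: 70)
                    )
                }
            }
    }
}
```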

ARKit

  • Cross-platform: I want more closely aligned sensor data between iOS and visionOS.
  • Multiple body detection: I would like to detect and track multiple figures and body skeletons and get 3D positioning for all of them. Bonus if this uses the ultra-wide camera (today's single-body API is sketched below).
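
For contrast, here's today's single-body path on iOS, as a minimal sketch: ARBodyTrackingConfiguration delivers one tracked body at a time.

```swift
import ARKit

// Today on iOS: body tracking follows a single person.
// The wish: multiple ARBodyAnchors at once, ideally via the ultra-wide camera.
final class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            let hips = body.transform                           // root joint in world space
            let head = body.skeleton.modelTransform(for: .head) // joint relative to the root
            _ = (hips, head)
        }
    }
}
```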

Miscellany

  • Gaussian Splat SDK: My dream would be a full SDK with tools for creating, editing, and combining splats in RealityView scenes.

That's the list. Some of it is incremental, some of it is a big stretch, and hopefully at least one or two items will show up in June.