
Apple’s VisionOS takes a bold leap in computer interfaces

    Just like everyone who got to test Apple’s new Vision Pro after its unveiling this week at the Worldwide Developers Conference in Cupertino, California, I couldn’t wait to experience it. But when an Apple technician at the ad hoc testing facility used an optical device to check the strength of my lenses, I knew there might be a problem. The lenses in my glasses have prisms to address a condition that otherwise causes me to see double. Apple has a set of pre-cut Zeiss lenses to handle most of us who wear glasses, but none could solve my problem. (Since the Vision Pro is about a year away from launch, I didn’t expect it to accommodate every prescription in this beta; even after years of trying, Warby Parker still can’t get my lenses right.) Sure enough, my fears were justified: when I arrived in the demo room, the eye-tracking setup – a critical feature of the device – was not working, and I was only able to experience part of the demos.

    What I did see was enough to convince me that this is the world’s most advanced consumer AR/VR device. I was amazed at the fidelity of both the virtual objects and the icons floating in the artificially rendered room I was in, and at the alternate realities delivered in immersion mode, including sporting events that put me on the sidelines, a 3D mindfulness dome that enveloped me in comforting flower petals, and a stomach-churning mountaintop excursion that rivaled the best VR I’ve ever tasted. (You can read Lauren Goode’s description of the full demo.)

    Unfortunately, my eye-tracking problem meant I couldn’t get a taste of what is arguably the most important part of the Vision Pro: Apple’s latest leap in computer interface. Without a mouse, keyboard, or touch-sensitive display, the Vision Pro lets you navigate by gazing at the images beamed onto two high-resolution micro-OLED screens and using finger gestures such as tapping to choose menu items, scrolling, and manipulating artificial objects. (The only other controls are a button called a digital crown and a power button.) Apple describes this as “spatial computing,” but you could also call it naked computing. Or maybe that designation should wait until the roughly 1-pound, dive-style face mask is replaced with supercharged goggles in a future version. Those who did test it said they mastered the tools almost immediately and found they could easily pull up documents, browse Safari, and take photos.

    VisionOS, as it’s called, is a major step in a half-century journey away from the original prison of computers: the clumsy and rigid command line, where nothing happened until you summoned a stream of alphanumeric characters with your keyboard, and everything that came back was an equally rigid stream of characters. Beginning in the 1960s, researchers launched an attack on that command line, starting with Doug Engelbart of the Stanford Research Institute, whose networked “augmenting computing” system introduced an external device called the mouse to move the cursor and select options from menus. Later, scientists at Xerox PARC adapted some of those ideas to create what would be called the graphical user interface (GUI). PARC’s most famous innovator, Alan Kay, envisioned an ideal computer he called the Dynabook, a kind of holy grail of portable, intuitive computing. After viewing PARC’s innovations during a lab visit in 1979, Apple engineers brought the GUI to the mass market, first with the Lisa computer and then with the Macintosh. More recently, Apple supplied a new paradigm with the iPhone’s multi-touch interface; those pinches and swipes were intuitive ways to access the digital powers of the small but mighty phones and watches we carried in our pockets and wore on our wrists.

    The mission of each of those computing shifts was to lower the barrier to interacting with the powerful digital world, making it less daunting to take advantage of what computers had to offer. This came at a price. The natural gestures we use when we’re not on a computer are intuitive and free, but it is expensive to make a computer as user-friendly and vibrant as the natural world. It took a lot more computation when we moved from the command line to bitmapped displays that could render alphanumeric characters in different fonts and let us drag and drop documents into file folders. The more the computer mimicked the physical world and accepted the gestures we use to navigate the real one, the more work and innovation was required.

    Vision Pro takes that to the extreme. That’s why it costs $3,500, at least in this first iteration. (There’s an argument to be made that the Vision Pro is a 2023 version of Apple’s 1983 Lisa, the $10,000-plus computer that first brought bitmapping and the graphical interface to a consumer device, only to be shunted aside by the Macintosh, which was 75 percent cheaper and much cooler too.) Inside that face mask, Apple crammed one of its most powerful microprocessors; another piece of custom silicon designed specifically for the device; a 4K-plus display for each eye; 12 cameras and a lidar scanner; a suite of sensors for head and eye tracking, 3D mapping, and hand-gesture recognition; audio pods with two drivers each; an exotic textile for the headband; and a special light seal to prevent the light of reality from seeping in.