No question about it: Nadella's Microsoft is a triumph. Finally, in the 2020s, Microsoft focused on the most innovative technology since the PC itself. And while revenue from AI products can't yet offset Microsoft's massive investments, the company has the confidence, and the resources, to wait until the products improve and users find them useful.
But can Microsoft really avoid the kind of hubris that set it back so badly before? Consider what happened in May of this year with a product called Recall.
The feature was supposed to embody Microsoft's integration of AI into its hardware, software, and infrastructure. The idea was to give users something like a personal version of the Internet Archive. Recall would continuously record everything that happens on your machine: what you read, what you write, the photos and videos you view, the sites you visit. Simply describe to your machine what you're looking for: What were those carpet samples I was considering for my living room? Where is that report on the ecology of the Amazon? When did I go to Paris? Those moments would magically appear, as if you had a homunculus that knew everything about you. It sounds scary, a bit like having Big Brother on board, but Microsoft insisted that users could feel safe: everything stays on your computer!
Almost immediately, critics labeled it a privacy nightmare. For starters, they noted that Recall was on by default and gobbled up your personal data, no matter how sensitive, without asking permission. And although Microsoft emphasized that only the user has access to Recall, security researchers discovered "holes you could drive a plane through," as one tester put it.
"Within about 48 hours we went from 'Wow, this is extremely exciting!' to people expressing concerns," says Brad Smith. As the press piled on, Smith was on a plane to meet Nadella in Washington, DC. By the time he landed, he had concluded it would be wise to allow Recall to work only if users opted in; Nadella agreed. Meanwhile, Microsoft's senior executives in Redmond poured into conference rooms to figure out how to scale back the product. Luckily, they didn't have to recall Recall, since the feature hadn't been released yet. They postponed the launch. And they would add security features like just-in-time encryption.
"People pointed out some obvious things that we should have done and things that we should have noticed," Nadella says. But his own Responsible AI team missed them too. A certain amount of "know-it-all" had led to the announcement of a product that fell short, suggesting that Microsoft, even if led by a so-called empath, still retains too many of its old character flaws. Only now it's a $3 trillion company with locked-in access to the products of the leading AI operation.
"There are two ways to think about it," says Brad Smith. "One is, 'Gosh, I wish we had thought about this sooner.' Hindsight is a wonderful thing. Or two: 'Hey, it's good that we're using this to make this change; let's be explicit about why.' It was really a learning moment for the entire company."
That's fine. But after fifty years, it's a lesson that Microsoft – and Nadella – should have learned a long time ago.