At the corner of Iris Avenue and Folsom Street in North Boulder, my Tesla Model 3 was driving itself when it spotted two human drivers in the lane directly ahead of us. All three of us were turning left, two real drivers and my robot, when the two humans broke a basic rule of the road by veering wide into the right lane.
The Tesla kept to the inside lane, as the rules of the road prescribe. I wish my teenage son had been watching.
Milo is 15 and has a learner’s permit. I imagine that when he gets his driver’s license, he will develop the muscle memory, routine behaviors, and everyday habits that self-driving software, in its short lifespan, has not yet had time to acquire. On the other hand, my son is far less likely than Tesla’s software to suddenly let go and stop steering altogether, forcing me to take over.
The machine and the adolescent both have developing brains: a human mind shaped by millions of years of evolutionary biology, and algorithms shaped by decades of engineering. Seen through the lens of cognition and neuroscience, the contrast says a lot about the next generation of drivers.
So far, only the Tesla (not my son) has been involved in accidents. The federal government reported in April that Tesla’s Autopilot technology was involved in 956 crashes between January 2018 and August 2023, including 29 fatalities. The National Highway Traffic Safety Administration’s report concluded that “Autopilot’s system controls may not be adequate for a driver-assist system that requires constant human driver supervision.”
Elon Musk also recently wrapped up talks with regulators in Beijing to clear the way for Autopilot on Chinese roads. Many other companies, General Motors, BMW, Mercedes, Lincoln and Kia among them, are developing their own versions, most of which take some control in limited situations, such as on the highway.
In short, have no doubt that these cars will be on the road soon, just as surely as my son will have his driver's license within the year. He inherits a daunting task; statistically speaking, driving is the most dangerous regular activity most of us will do in our lives.
After years of reporting on driver safety, I can say with certainty why we face such risks: a mismatch between the capacity of the human brain and the complexity of the road. The road presents an onslaught of rapidly changing stimulation, input, and risk. Cars, pedestrians, and cyclists dart in and out of our frame; our brains get tired, get distracted, miss an input. We are humans, with biologically limited attention and cognition, piloting what amounts to a rocket at highway speed.
From this perspective, I watch my son learn in our Toyota Highlander. To stay focused, he prefers the radio off and the volume of parental commentary turned down. His solemn task is evident in his tight grip on the steering wheel, the hunched posture of his body, as if he wants to get a little closer to what’s happening outside the windshield. As he gets into a rhythm, I ask him to identify the various inputs around him: the car in his blind spot, the cyclist turning right without signaling, the pedestrian looking down at a phone. He’s getting tired. Driving safely takes effort. For all the control and adolescent glory that comes with taking the wheel, sometimes he would just as soon not have to do it at all.
When it comes to monitoring our leased Tesla, there is one aspect of the technology that, for me, highlights the study in contrasts: on the screen, where the map is displayed, an animation shows the surrounding input that the Tesla picks up with its multiple cameras and sensors. Cars materialize around us, intersections appear as we approach, dotted with the presence of cyclists or pedestrians. It seems to see everything, everywhere, all at once, processing multiple streams of information in parallel. For example, when the Tesla “sees” something to the right, it does not do so at the expense of seeing something to the left; my son can only see one way at a time.
The algorithm drives at night and in the rain. It is extremely rule-driven. In fact, its strict adherence to the speed limit has frustrated other drivers, and my teenage passengers. One teenage passenger said to me, “It’s sus,” meaning suspicious, “because it’s only going 20 miles per hour.” We were in a school zone.
“Although automated vehicles are not perfect, they work surprisingly well and are getting better,” David Strayer, a cognitive neuroscientist at the University of Utah and one of the world’s leading experts on driver distraction, told me. “We really need to focus on relative risk,” he added, meaning that computers can save far more lives than they risk taking, largely because of their cognitive advantage.
“They don't get distracted. They don't get tired. They don't get drunk or high. They only speed when the driver tells them to.”
They do stutter. Over the past few months, as I’ve been testing the technology, the Tesla’s software has occasionally disengaged, requiring me to immediately take control. I feel like an attendant on a high-speed Disney ride that inexplicably jumps the tracks and heads for the Burger Hut. It’s jarring. Take control! Save us all!
To be fair, the system constantly warns me to keep my hands on the wheel and be ready to take over. Sometimes the auto-driving disengages because I pull too hard on the wheel, a sign that I’m tempted to take over. Other times, when it gets stuck, who knows what happened inside that mysterious algorithm? Did a one get swapped with a zero?
The federal government's latest report on Tesla's crash record says the car still relies too much on human supervision, and that people aren't always up to the job either. Crash outcomes can be severe “when a driver is not involved in a Tesla vehicle operating in Autopilot and the vehicle encounters a condition that is outside of Autopilot's object or event detection response capabilities.”
What a powerful statement: the car and the human must both pay attention, even as neither brain is yet fully suited to the task.
But for now, I’m still going to trust my son to drive me home before I trust the Tesla. With the software, I just don’t know when it’s going to fail, why, or what I can do to prevent it.
Soon, humans will hand over control. When that happens, I’m not sure it will be because robots have virtually unlimited cognition and will keep us safer, as true as that may be. The real reason robots are taking over the wheel is that humans have better things to do than drive. Stream a show, stretch our legs, take a nap. At that point, when machines rule the road, I’ll be able to tell my teenager to watch TikTok while driving, though by then I suppose someone will have invented self-watching social media, too.