Boom means whoa: how one accident paralyzes a thousand robot taxis

‘End ride’. That prophetic message appears on a button in the ceiling of the hundreds of Cruise robot taxis that drive the streets of San Francisco. Drove, one should say: after a serious accident, Cruise lost its license because, according to the DMV regulator, its robot taxis are not safe.

On October 2, a passerby was trapped under a Cruise car on 5th Street. She had just been hit by another vehicle, whose driver fled the scene. The Cruise car ran over the victim, stopped on top of her and then pulled toward the sidewalk to clear the roadway. The victim, screaming in pain, was dragged several meters and suffered far more serious injuries as a result.

The robot taxi wrongly concluded that it had been hit from the side. That is why Cruise announced this week that it is recalling all of its cars, almost a thousand of them, because of this ‘software bug’. Automaker GM, the main investor, has halted production of the vehicles. One accident could spell the end for a company that lost more than $700 million last quarter. Developing robot taxis requires a lot of money and a lot of patience, and both are now in short supply at Cruise.

The robot taxis had just been given free rein: since August, both Cruise and Waymo, a Google spin-off, have been allowed to run their unmanned fleets 24 hours a day and charge money for rides. The tech companies have to scale up much further if they ever want to make money with their cars full of very expensive sensors. The oversight costs money too: even though no one is behind the wheel, the robot taxis are closely monitored remotely by human ‘operators’, guardian angels with a joystick. According to The New York Times, Cruise employs one and a half workers per car to intervene when needed.

These operators have their hands full, observed Dariu Gavrila, professor of intelligent vehicles at TU Delft. He took a few robot rides around San Francisco in May, with both Cruise and Waymo. He felt safe along the way and was impressed by the technology. We already have the robot vacuum cleaner, but this is still a historic moment: “These are the first robots that we encounter on a large scale in our daily lives.”

Waymo has the best credentials when it comes to safety reporting and makes relatively few mistakes. Cruise does not seem to be that far along yet. During Gavrila’s test rides, Cruise employees had to assist with boarding and disembarking, and the car responded unexpectedly to the flashing lights of emergency services. Instead of making one left turn, the robot taxi decided to turn right three times: less dangerous, and you get there all the same.

Move fast and lose billions

Robot taxis cannot afford serious mistakes. That is the lesson from Uber, the taxi service that trained self-driving cars in Arizona to replace its drivers. Uber was in a hurry, was sloppy and abandoned the project after a fatal accident in 2018. Uber’s cowboy approach (‘move fast and break things’) led to a billion-dollar flop, and that fate now hangs over Cruise’s head. The renewed skepticism about robot taxis also rubs off on competitors such as Waymo and Zoox, Amazon’s minibus without a steering wheel.

Under most circumstances, robots cause fewer accidents and traffic fatalities than humans. The difficulty lies in the exceptional scenarios. No matter how much training data you collect or how many test kilometers you drive, there are always unforeseen cases, edge cases, in which the software fails.

According to Gavrila, this raises a philosophical question. Not the classic ethical dilemma of whether the car should be programmed to hit an elderly lady or a pram in an emergency. No, the question is when we will accept the robot taxi on the street. What if, statistically speaking, it drives as safely as a human? Or should the robot taxi be ten times safer? If you focus too much on rare failures, you lose sight of the gains from all the accidents that are avoided.

Philip Koopman, an American expert in car safety and critic of AVs (autonomous vehicles), is on holiday for a week but still answers the phone. He is happy to calculate off the top of his head why he thinks Cruise’s approach is too aggressive compared to Waymo’s. According to Cruise, the mistake of October 2 would on average occur ‘only’ once every 10 million to 100 million miles. “But suppose you have a fleet of ten thousand taxis that drive 200 miles every day, then statistically that is every week.” Unacceptable, says Koopman. “It’s nice that you can take passengers from A to B comfortably. But safety is not about the days when everything goes well. It’s all about that one bad day, and how often it happens.”
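Koopman’s back-of-the-envelope arithmetic is easy to check with the numbers quoted in the article (fleet size and daily mileage are his hypothetical figures, not Cruise’s actual operations):

```python
# Back-of-the-envelope check of Koopman's estimate, using the article's numbers.
fleet_size = 10_000      # hypothetical taxis in the fleet
miles_per_day = 200      # miles driven per taxi per day
daily_fleet_miles = fleet_size * miles_per_day  # 2,000,000 fleet miles per day

# Cruise's stated failure rate: once per 10 to 100 million miles
for miles_per_failure in (10_000_000, 100_000_000):
    days_between_failures = miles_per_failure / daily_fleet_miles
    print(f"1 failure per {miles_per_failure:,} miles "
          f"-> one every {days_between_failures:.0f} days")
```

At the optimistic end of Cruise’s own range that works out to one such failure every five days, which is roughly the “every week” Koopman cites; even at the conservative end it is several times a year.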

A black box with a bumper

Cruise’s blunder gives pause for thought. A pause button? At the very least a well-considered introduction of robot taxis, in which safety is continuously tested and guaranteed, says Dariu Gavrila. According to him, the large-scale use of robot taxis remains limited as long as they are built on supervised learning: the training data relies largely on people labeling the ‘objects’ in traffic in advance. Every conceivable pedestrian, cyclist, car, zebra crossing and traffic light.

If you gave artificial intelligence complete free rein in interpreting that data, it would outperform what humans can program. But then you create a black box on wheels that cannot be understood or corrected afterwards. Alongside the AI, you can equip cars with a second, much simpler operating system: basic software that intervenes as soon as a collision is imminent. A bumper for the moments when the all-knowing AI no longer knows.
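The idea of such a simple safety layer beside the AI can be sketched in a few lines. The function names, the two-second threshold and the command interface below are illustrative assumptions, not anyone’s actual design; real systems are far more elaborate:

```python
# Illustrative sketch of a rule-based fallback beside an AI driver.
# All names and thresholds here are hypothetical, for illustration only.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinity if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def fallback_command(distance_m: float, closing_speed_mps: float,
                     ai_command: str, ttc_threshold_s: float = 2.0) -> str:
    """Override the AI's command with braking when a collision is imminent."""
    if time_to_collision(distance_m, closing_speed_mps) < ttc_threshold_s:
        return "EMERGENCY_BRAKE"   # the simple rule overrules the black box
    return ai_command              # otherwise defer to the AI

# Obstacle 30 m ahead, closing at 20 m/s: time to collision is 1.5 s,
# below the 2 s threshold, so the fallback brakes regardless of the AI.
print(fallback_command(30.0, 20.0, "CONTINUE"))  # EMERGENCY_BRAKE
```

The point of the sketch is the division of labor: the learned system handles the driving, while a small, verifiable rule retains the last word on one question only, namely whether a crash is about to happen.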

The steady adoption of self-driving cars stands in stark contrast to the wild AI race that has erupted since the introduction of ChatGPT. Everyone and their mother now uses artificial intelligence that makes things up on its own. No one can estimate what the social consequences will be. Optimists coo with delight at every trick their AI model learns, while critics count down to Armageddon. It will probably end up somewhere in the middle.

The Cruise accident is an apt metaphor for the way AI is entering everyday life. How do we prevent technology from unintentionally dragging us in a direction we do not want to go? We can learn from the robot taxis. They must meet strict conditions: there are traffic rules, permits, supervisors with real power and operators with angelic patience. But if you make a mess of it, the ride ends. Boom means whoa.