For some time now, we've known that the future of mobility is self-driving cars. From Google to Tesla, from startups to automakers, it's the technology that everyone is working on and the first iterations are already out there.
Until now, the market has been focused on the technology—how to create the level of artificial intelligence (AI) that's necessary to drive a car and give drivers the confidence to be mere passengers in their everyday vehicle.
But the focus has changed with MIT's Moral Machine survey. The survey took the old trolley problem and presented it to people as a problem for autonomous vehicles. The original ethical brainteaser asks you to imagine a runaway trolley about to run over five innocent people tied to the tracks. If you pull a lever, the trolley diverts to a siding on which one innocent person is standing. What's the ethical thing to do?
Researchers from MIT, France's CNRS and the University of Oregon revamped the trolley problem for the modern age: if you were in a self-driving car that was about to hit 10 pedestrians, would you want that car to swerve and kill you, but save them, or not? When presented with problems like this, most people say they want the altruistic outcome, in this case that the car swerve, endangering the passenger but saving 10 lives, even if they (or a family member) were in the car. But when asked whether they themselves would buy such a car, the answer is no.
This presents an interesting dilemma for automakers and startups in this sector. How they design the artificial intelligence behind their autopilots, how much control the autopilot has over the vehicle and how they market their self-driving cars will all have a huge impact on whether customers actually want to buy them.
The Case for Self-driving Cars
The future that automakers and startups are trying to sell is one where robot-driven cars erase the problem of human error and thereby dramatically reduce road accidents.
Between 2005 and 2007, the National Highway Traffic Safety Administration conducted a national survey of accidents and their causes and found human error to blame in a whopping 94 percent of cases.
From major mistakes, such as inattention and falling asleep, to errors in judgment, such as misjudging another car's speed or taking a curve too fast, human beings are subject to behind-the-wheel errors that cost tens of thousands of lives every year.
In a golden future of self-driving vehicles, AI software would save those lives. Self-driving cars would never fall asleep or get distracted. Plus, they would have instant access to the kind of information that would allow them to make much better decisions.
Autonomous cars will likely beam out information about their speed, direction and upcoming turns for the next few minutes to all other cars in the vicinity, giving nearby self-driving vehicles the information they need to make the safest decisions about their own speed and direction. Traffic lights and pedestrian crossings will be able to tell passing cars when they're about to change, long before they actually do.
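As a rough illustration of what such a broadcast might contain, here is a minimal Python sketch. The message fields and format are assumptions made for this example, loosely inspired by (but not implementing) standardized vehicle-to-vehicle formats such as the SAE J2735 Basic Safety Message.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class VehicleStatus:
    """One vehicle-to-vehicle status message (fields are illustrative)."""
    vehicle_id: str
    timestamp: float      # seconds since epoch
    latitude: float       # degrees
    longitude: float      # degrees
    speed_mps: float      # meters per second
    heading_deg: float    # 0-360, clockwise from north
    planned_turns: list   # upcoming maneuvers for the next few minutes

def broadcast(status: VehicleStatus) -> bytes:
    """Serialize the status for a radio broadcast to nearby vehicles."""
    return json.dumps(asdict(status)).encode("utf-8")

# Example: a car announcing its speed, heading and next planned turn.
msg = VehicleStatus(
    vehicle_id="car-42",
    timestamp=time.time(),
    latitude=42.3601,
    longitude=-71.0589,
    speed_mps=13.4,
    heading_deg=90.0,
    planned_turns=[{"in_seconds": 45, "maneuver": "left_turn"}],
)
packet = broadcast(msg)
```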
In fact, it's not beyond the realm of possibility that cities will some day have such well-oiled, connected roads that self-driving cars will know exactly when it would be best to arrive at each light in order to get greens all the way.
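As a toy example of that idea: if a car knows its distance to the next light and when the light will turn green, choosing an arrival speed is simple arithmetic. The function below is a hypothetical sketch; real signal-coordination systems are far more involved.

```python
def green_wave_speed(distance_m: float,
                     seconds_until_green: float,
                     speed_limit_mps: float) -> float:
    """Return a cruising speed (m/s) that arrives at the light just as it
    turns green, capped at the speed limit. Illustrative only."""
    if seconds_until_green <= 0:
        return speed_limit_mps  # light is already green: just drive
    ideal = distance_m / seconds_until_green
    return min(ideal, speed_limit_mps)

# 300 m from a light that turns green in 30 s, in a 50 km/h (13.9 m/s) zone:
# 300 / 30 = 10 m/s (36 km/h) arrives exactly on the green.
print(green_wave_speed(300, 30, 13.9))  # -> 10.0
```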
Accidents Will Still Happen
However, it won't be possible to eradicate road accidents completely. There will still be vehicle faults, such as sensors that fail and detection devices that miss a bald tire or other simple mechanical problems. Although the robot driver will also probably be hooked into weather reports, forecasts are imperfect and there will be times when there's unexpected ice on the road. Deer will still run out in front of cars and so will people.
It's in these cases that self-driving cars will have to decide what action to take, and that action must be based on an ethical algorithm of some kind. It's highly unlikely that the choices will be as stark as death for the pedestrians or death for the driver. Even if the underlying ethic dictates that the car value the lives of the many over the lives of the few, it's still going to first save the many and then do everything it can to save the few. Not every swerve away from disaster is going to end in the occupants' deaths.
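To make "an ethical algorithm of some kind" concrete, here is a deliberately simplified Python sketch that scores candidate maneuvers by expected harm and picks the lowest. Every field, number and rule in it is an illustrative assumption, not how any automaker actually weighs these outcomes, and whether software should weigh lives at all is, as discussed below, contested.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_occupants: float    # estimated probability of harming occupants
    p_harm_pedestrians: float  # estimated probability of harming pedestrians
    n_occupants: int
    n_pedestrians: int

def expected_harm(m: Maneuver) -> float:
    """Expected number of people harmed, treating all lives equally.
    A purely illustrative utilitarian scoring, not any real policy."""
    return (m.p_harm_occupants * m.n_occupants
            + m.p_harm_pedestrians * m.n_pedestrians)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest expected harm."""
    return min(options, key=expected_harm)

# Illustrative scenario: two occupants, ten pedestrians ahead.
options = [
    Maneuver("brake_hard", 0.05, 0.10, 2, 10),       # expected harm 1.1
    Maneuver("swerve_off_road", 0.60, 0.01, 2, 10),  # expected harm 1.3
    Maneuver("continue", 0.00, 0.90, 2, 10),         # expected harm 9.0
]
print(choose_maneuver(options).name)  # -> "brake_hard"
```

Note that even in this toy scenario the best option is not a stark trade of one group for another: hard braking lowers the risk for everyone at once, which is the far more typical real-world case.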
It may be that the actual number of deaths from road accidents becomes vanishingly small, reaching one or two a year, or even none. Compared to the tens of thousands now, that would be a world-changing advance.
Still, the ethical problem remains. Objectively, people want an outcome that serves the greater good, but when it comes to buying a car, they want one that will save their life.
Everyone with a stake in the shiny future of self-driving vehicles—from automakers and technology companies to startups and governments—needs to start addressing this problem if they want to create a market for these cars.
Legal and Liability Issues Need to Be Resolved
There are also both legal and liability ramifications to deal with, regardless of whether or not companies try to create altruistic self-driving vehicles. Mercedes-Benz seemed to imply in October 2016 that its products would value the lives of passengers above those of pedestrians.
“If you know you can save at least one person, at least save that one. Save the one in the car. If all you know for sure is that one death can be prevented, then that's your first priority,” Mercedes-Benz executive Christoph von Hugo told Car and Driver at the Paris Motor Show.
But the firm later insisted to Fortune magazine that von Hugo had been misquoted and its official position is that “neither programmers nor automated systems are entitled to weigh the value of human lives.” It added that the company wouldn't legally be allowed to favor one life over another in Germany or other countries.
The legal issues around the ethical conundrums will have to be thrashed out between governments, car manufacturers and other self-driving firms, but once they have been, those rules could reassure the buying public.
Taking responsibility for liability in the event of an accident could be another way to reassure the markets. Volvo has already announced that it will pay for any injuries or property damage caused by its fully autonomous IntelliSafe Autopilot system when it gets deployed in its cars starting in 2020.
“Liability is crucial. We don't believe it's a very bold statement to say 'when the car is in autonomous mode it's a product liability issue, if that system malfunctions it's our responsibility.' I think if you are not prepared to make this statement then you really have no product to offer. Who wants an autopilot you have to supervise? Either you do this or you shouldn't be in the business,” Volvo President and CEO Håkan Samuelsson has said.
He has also said that Volvo sees this as part of the necessary framework for getting autonomous vehicles on the road and called for governments and automakers to work together to make self-driving mobility a reality.
For the market, the idea of a car whose maker assumes liability for accidents has to be hugely attractive. It must also be reassuring to potential customers that Volvo is willing to bet money on its autopilot system being safe.
Interim Solutions
As governments and automakers work out the legality and liability of the moral dilemma, innovative corporations and startups are coming up with new ways to reassure the market.
Some companies are getting potential users comfortable with the idea of self-driving cars by giving them stepping stones towards the technology. Parts supplier Delphi has a concept car that offers partial autonomous driving. The vehicle is designed to encourage drivers to trust the car, but also to remain vigilant and ready to take over if necessary.
San Francisco startup Cruise started out with a retrofit kit that allowed luxury cars to take over from the driver on long stretches of highway. The firm has since been acquired by GM and is looking to extend its technology into a product it can sell to automakers to make their cars entirely self-driving.
Other companies are focusing on making their vehicles more human. Startup Drive.ai is using deep learning to develop its self-driving AI into something more like a human mind, both in how it makes decisions and in how it communicates with people. Instead of programmers instructing the car how to drive with a long list of hand-coded rules, deep learning helps the AI "learn" to drive from examples, just as we do.
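The "learning from examples" approach described here is broadly known in the field as behavioral cloning, or end-to-end learning. The PyTorch sketch below shows the bare idea on random stand-in data: a small network is trained to imitate a human driver by mapping camera images to steering angles. It is a toy illustration, not Drive.ai's actual system.

```python
import torch
import torch.nn as nn

# Toy end-to-end "learning to drive from examples" (behavioral cloning):
# map a camera image to a steering angle by imitating recorded human driving.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 13 * 13, 64), nn.ReLU(),
    nn.Linear(64, 1),  # predicted steering angle
)

# Stand-in data: 64 random 3x64x64 "camera frames" with human steering angles.
images = torch.randn(64, 3, 64, 64)
steering = torch.randn(64, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    predicted = model(images)
    loss = loss_fn(predicted, steering)  # distance from the human's steering
    loss.backward()
    optimizer.step()
```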
Drive.ai also wants to make its cars friendlier to other road users, with an LED sign on the roof to communicate with pedestrians, cyclists and other drivers. It could, for instance, let pedestrians know that the car has seen them and that it's safe to cross at a crosswalk.
Public Transport and Freight
Not surprisingly, Drive.ai is initially planning to use its technology in route-based fleets, such as freight delivery vehicles or public transport.
Other companies are exploring this option too. In April 2016, trucks from a number of European manufacturers took part in a semi-autonomous platooning experiment. DAF, Daimler, Scania and Volvo were among the companies behind the test, which saw convoys of trucks wirelessly following a lead, human-driven vehicle.
The trucks still carried human drivers for safety, but the experiment showed the possibility of a future where long-haul, truck-based freight could run almost like trains. Unlike train cars, however, a group of trucks could travel with set gaps between the vehicles, allowing traffic to flow naturally around them.
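One building block of platooning is gap control: each following truck continuously adjusts its speed to hold a set distance behind the vehicle ahead. The proportional controller below is a hypothetical, heavily simplified sketch of that idea, not the control law used in the European tests.

```python
def follower_speed(lead_speed_mps: float,
                   actual_gap_m: float,
                   desired_gap_m: float,
                   gain: float = 0.5) -> float:
    """Simple proportional gap controller for a following truck.
    Speeds up when the gap is too wide, slows down when too narrow."""
    error = actual_gap_m - desired_gap_m
    return max(0.0, lead_speed_mps + gain * error)

# Lead truck at 25 m/s; the follower is 30 m back but wants a 20 m gap,
# so it briefly runs faster (30 m/s) to close the distance.
print(follower_speed(25.0, 30.0, 20.0))  # -> 30.0
```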
Chances are that if the market for autonomous vehicles is ever to succeed, self-driving cars will first have to prove themselves as taxis on pre-approved routes, such as between an airport and the city center or along common tourist routes around towns.
Public transit is one area where people are already fairly comfortable with robot drivers. Most people know that tram drivers, train drivers and airline pilots are aided to some degree by technology, and they're comfortable with that assistance because the vehicle follows a set route.
Getting autonomous road vehicles to do the same would be another path to opening the market: familiarize people with the technology before asking them to pay for it.
Looking Forward
The moral dilemma of self-driving cars is a thorny issue. Given that life-or-death decisions will likely be extremely rare, the trolley problem is perhaps not the most useful way to frame the problem. However, some form of ethical algorithm will need to be employed in decision-making.
Meanwhile, there are many paths to addressing the moral dilemma of autonomous vehicles and bringing this new technology to market. A gradual introduction of driverless cars through public transit systems and road freight fleets will help consumers get comfortable with the idea, and these systems provide ample opportunity for carmakers and startups to develop and hone the artificial intelligence driving these machines.
The fact that some autonomous vehicles are already on the road shows that there's an appetite out there for this technology, even if the customers right now are early adopters. By the time the golden vision of safe autonomous driving comes around, companies will have primed the consumer to be not just ready, but eager for their own driverless car.
Want to learn more about the future of autonomous vehicles? Stay up to date on the launch of our Mobility Tech Accelerator to discover new emerging technologies and disruptive startups in the space.
Interested in more reads like this? Subscribe to our Corporate Innovation Blog!