Like it or not, self-driving cars are becoming a reality. Companies such as Google’s Waymo, Tesla and Uber are already working on these vehicles.
One of the reasons for the boom in self-driving technology is the promise that these cars will be safer. They are predicted to greatly reduce the number of fatal car accidents in the U.S., and that may turn out to be true. There are, however, open questions about the ethics of how self-driving cars will attempt to avoid accidents.
Your car may already have some self-driving capabilities
Many newer cars on the road today already have some self-driving or driver assist functions such as:
- Automatic braking
- Automatic parking
- Lane-keep assist
- Steering-assist systems
- Adaptive cruise control
These features are designed to enhance driver safety. But questions remain about other aspects of self-driving vehicles, especially how they will make decisions in life-and-death situations.
Accidents are not always avoidable, even in self-driving cars
The reality is that self-driving vehicles with numerous safety features can still get into accidents. In these cases, what will the vehicle be programmed to do?
A 2017 study found that people generally agreed a vehicle facing an unavoidable accident should be programmed to kill the fewest people. The question of who those people are, however, is trickier.
If a car’s programming tells it to kill the fewest people, who will die? A pedestrian? A motorcyclist? Or the occupants of the self-driving vehicle?
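To make the dilemma concrete, here is a deliberately simplified, hypothetical sketch of what a “kill the fewest people” rule might look like in code. Nothing here reflects how any manufacturer actually programs its vehicles; the maneuver options, casualty estimates and tie-breaking behavior are invented purely for illustration.

```python
# Hypothetical illustration only -- not how any real vehicle is programmed.
# Each maneuver the car could take is paired with an estimate of how many
# people would likely be killed, and who those people are.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str                 # e.g. "swerve left", "brake hard"
    estimated_deaths: int     # predicted fatalities for this choice
    who_is_at_risk: str       # pedestrians, a motorcyclist, or the occupants

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # The "fewest deaths" rule: pick the option with the lowest estimate.
    # Note what this rule does NOT decide: whose lives count for more, how
    # ties are broken, or whether occupants may be sacrificed to save others.
    return min(options, key=lambda m: m.estimated_deaths)

if __name__ == "__main__":
    options = [
        Maneuver("stay in lane", estimated_deaths=2, who_is_at_risk="pedestrians"),
        Maneuver("swerve left", estimated_deaths=1, who_is_at_risk="motorcyclist"),
        Maneuver("swerve right into barrier", estimated_deaths=1, who_is_at_risk="occupants"),
    ]
    choice = choose_maneuver(options)
    print(f"Chosen: {choice.name} (at risk: {choice.who_is_at_risk})")
```

Run as written, this sketch “chooses” to swerve toward the motorcyclist simply because that option appears first in the list; the equally deadly option of sacrificing the occupants loses an arbitrary tie-break. That arbitrariness is exactly the ethical gap the study highlights.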
There are currently no good answers to these questions, yet computers in vehicles will be making these kinds of decisions in the near future. Unfortunately, programming cars to completely avoid accidents, and fatalities, is not possible. Regardless of how sophisticated self-driving cars become, there will always be unforeseen events that lead to accidents.