Picture a self-driving car traveling at highway speed toward an unavoidable collision. The scenario is still theoretical, but it gets closer to reality every year. Pedestrians have entered the road. The oncoming lane is clear. The vehicle has about 200 milliseconds to act. And somewhere in its code, an answer already exists. Someone wrote it. Someone approved it. And very few of the people who bought that vehicle know what it says.

| Topic | The Ethical Dilemmas of Programming Morality Into Self-Driving Cars |
|---|---|
| Field | Autonomous Vehicles (AVs) / AI Ethics / Transportation Technology |
| Key Companies Involved | Tesla, Google (Waymo), GM, BMW, Mercedes-Benz, Ford, Toyota, Audi |
| Core Ethical Frameworks | Trolley Problem, Utilitarianism, Kantianism, Duty of Care |
| Annual Global Road Deaths | ~1.2 million (WHO estimate) |
| Human Error in U.S. Crashes | ~93% of ~5.5 million annual crashes (Eno Center for Transportation) |
| Projected Accident Reduction from AVs | Up to 90% (various studies) |
| Key Academic Voice | Prof. Chris Gerdes, Stanford University, Center for Automotive Research |
| Legal Principle Guiding AVs | Traffic law plus a duty of care to other road users |
| Primary Ethical Dilemma | Passenger safety vs. public safety in unavoidable crash scenarios |
| Reference Website | Stanford Center for Automotive Research (CARS) |

Beneath all the excitement surrounding autonomous vehicles lies a quiet, unsettling truth. For years, engineers at Tesla, Waymo, Ford, and GM have been refining sensors, improving lane-detection software, and shortening stopping distances. Yet beneath those accomplishments sits a problem that radar and lidar cannot solve: moral decision-making, programmed into a machine by people who are not sure of the right answer themselves.
Philosophers have wrestled with a version of this for decades. The trolley problem reads like a classroom exercise: should you pull a lever to divert a runaway trolley so that it kills one person instead of five? It no longer is one. Autonomous vehicle designers must answer this exact question, slightly modified, before their vehicles ever reach a public road.
What the car does in those 200 milliseconds reflects decisions made months or years earlier, in engineering meetings, ethics reviews, corporate legal departments, and regulatory frameworks. Most customers have likely never considered this at all.
Chris Gerdes, a Stanford University mechanical engineering professor and co-director of the Center for Automotive Research, has spent years working through this problem. His viewpoint is more grounded, and possibly more useful, than the trolley-problem framing suggests. Engineers, he contends, do not need to invent a new ethical framework; one is already embedded in more than a century of traffic laws, court rulings, and jury instructions. The social contract we uphold as drivers, the duty of care and the expectation of reasonable behavior, can direct AV behavior in ways that are both morally and legally sound.
The framing helps, but it does not fully ease the tension, because traffic law was written with humans in mind and assumes human judgment at every stage. What happens when the judgment is mechanical? What happens when the "reasonable driver" standard is applied to a system that has code but no instinct, fear, or conscience?
Roughly 1.2 million people lose their lives in traffic accidents each year, according to WHO data. Research from the Eno Center for Transportation attributes approximately 93% of the roughly 5.5 million annual U.S. crashes to human error: distraction, intoxication, exhaustion, and bad split-second decisions. On those figures alone, the case for autonomous cars is strong.
Some studies project that widespread AV adoption could cut traffic accidents by up to 90%. Stand at a busy intersection in any major city and watch the near-misses accumulate over twenty minutes, and it is hard not to feel that something needs to change.
And yet Google's own autonomous vehicle program, regarded as one of the most meticulously developed in the world, recorded 11 minor collisions over 1.7 million miles of test driving. Small numbers, yes. But each one quietly poses the same question: what principle was the car following at the precise moment it made its decision? And who decided that principle was correct?
The utilitarian solution, minimize overall harm and save the greatest number of lives, seems reasonable until you sit with it. It implies the car might one day calculate that your life is worth sacrificing for three strangers. AV manufacturers have largely abandoned this framing. Ford's corporate policy is explicit: always abide by the law.
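To see why that framing unsettles people, consider what "minimize overall harm" looks like once it is reduced to code. The sketch below is purely illustrative: the maneuvers, probabilities, and headcounts are invented for the example and do not reflect any manufacturer's actual logic.

```python
# Toy utilitarian planner: pick the maneuver with the lowest expected
# casualties. All numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_passengers: float  # probability the car's occupants are harmed
    n_passengers: int
    p_harm_bystanders: float  # probability bystanders are harmed
    n_bystanders: int

    def expected_casualties(self) -> float:
        return (self.p_harm_passengers * self.n_passengers
                + self.p_harm_bystanders * self.n_bystanders)

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    # Pure harm minimization: occupants and strangers weigh the same.
    return min(options, key=lambda m: m.expected_casualties())

options = [
    Maneuver("brake straight", 0.1, 1, 0.9, 3),   # likely hits the pedestrians
    Maneuver("swerve off road", 0.8, 1, 0.0, 3),  # likely harms the passenger
]
print(utilitarian_choice(options).name)  # -> "swerve off road"
```

The arithmetic is trivial. The discomfort is in the weighting: the occupant counts for exactly one in the sum, the same as any stranger, which is precisely the calculation most buyers say they do not want made against them.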
In a recent paper, Gerdes and his Ford colleagues made the case that the moral conduct expected of human drivers—observe traffic laws and only violate them when necessary to prevent an accident—also offers a practical model for autonomous vehicles. A car may have technically broken the traffic code but not the moral contract if it crosses a double yellow line to avoid colliding with a cyclist when there is no oncoming traffic. It used its discretion. It upheld its duty of care.
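Gerdes's model lends itself to a different structure: legality first, with a narrow exception for preventing harm. Here is a simplified sketch of that priority ordering, assuming three invented predicates per maneuver; it is a reading of the idea described above, not the paper's actual algorithm.

```python
# Illustrative "traffic law + duty of care" priority rule: stay legal by
# default, and permit an illegal maneuver only when it avoids imminent
# harm without endangering anyone else.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    legal: bool             # complies with the traffic code?
    avoids_collision: bool  # prevents the imminent collision?
    endangers_others: bool  # puts other road users at risk?

def duty_of_care_choice(options: list[Option]) -> Option:
    # 1. Prefer a maneuver that is both legal and collision-free.
    for opt in options:
        if opt.legal and opt.avoids_collision:
            return opt
    # 2. Otherwise allow breaking the code, but only if the breach itself
    #    endangers no one (e.g., crossing a double yellow line while the
    #    oncoming lane is clear).
    for opt in options:
        if opt.avoids_collision and not opt.endangers_others:
            return opt
    # 3. If nothing avoids the collision, fall back to the first legal
    #    option (a real system would minimize impact speed here).
    return next((o for o in options if o.legal), options[0])

options = [
    Option("hold lane and brake", legal=True, avoids_collision=False, endangers_others=False),
    Option("cross double yellow line", legal=False, avoids_collision=True, endangers_others=False),
]
print(duty_of_care_choice(options).name)  # -> "cross double yellow line"
```

The ordering encodes the moral contract the paper describes: the law is the default, and discretion is the exception, exercised only in service of the duty of care.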
There is something quietly comforting about that strategy, but it is also incomplete. It handles the cleaner cases. What about the truly impossible ones, where every available action harms someone? In a multi-country survey on attitudes toward self-driving cars, most respondents said they wanted to own one, understood the moral dilemmas involved, and believed a car should prioritize protecting its own driver. That contradiction is not irrational. It is profoundly human. People will buy a car capable of a cold calculation. They simply don't want to be the ones who lose.
Laws and regulations for autonomous vehicles are still developing. In most jurisdictions, questions of moral responsibility remain genuinely unresolved: who is liable when an AV's algorithm kills someone, the programmer, the manufacturer, or the owner?
Some researchers suggest that applied software-engineering ethics may eventually offer structured methods for building moral reasoning into AVs. But ethics committees and algorithm designers are not judges and juries, and the genuine conflict between those two domains has yet to be resolved.
As the field develops, it becomes ever clearer that technical progress is outpacing moral clarity. The cars are getting smarter. The conversations about the values they should uphold are falling behind. Sooner than most people expect, a self-driving car in a real city will face exactly the kind of situation philosophers have debated in seminar rooms for decades. The decision will take 200 milliseconds. Its effects will last far longer. Whether the people who wrote the code have truly reckoned with that remains an open question.
