iXperience
January 10, 2022
The year is 2025. A woman slides into her autonomous car, headed for her office. She pulls out a tablet to read the morning news as the car pulls into the street. A green light — the car proceeds through the intersection. Just then, a ball bounces onto the road and a child darts into traffic after it — too late. The car’s computer calculates that a crash is imminent. There are two outcomes. The car attempts to stop, but it will hit and kill the child. Or the car swerves to avoid the child, but it will ram into the median and kill its passenger. What should the car do? Save its passenger or the innocent child?
What would you do?
But the car won’t really be making a choice; it will do what its algorithms dictate. The real choice was made years earlier, when the algorithm was designed. That real choice is happening now. We are the ones deciding whom self-driving cars will spare and whom they will kill.
If you need proof that this choice is upon us right now: Waymo (owned by Alphabet, Google’s parent company) launched a driverless rideshare service in Phoenix in December 2018. Meanwhile, Tesla has claimed since 2016 that its cars ship with the hardware needed for full autonomy. And major automakers expect to achieve autonomous driving by the early 2020s.
But who is actually making those decisions? Right now, it’s engineers. Engineers are great at building cars. Engineers are great at building software to drive cars. But engineers aren’t qualified to codify life-or-death ethics into software. Liberal Arts majors, however, have the grounding in philosophy and ethics necessary to make those choices.
This isn’t a critique of engineers. The ones building self-driving software are doing some of the most impactful work in tech today. But, like anyone, engineers aren’t good at what they haven’t learned to do. Experts in self-driving software aren’t experts in ethical principles. We wouldn’t expect engineers to do the marketing for their products; it’s just a different skill set. Yet that is exactly what we are expecting them to do with ethics.
Self-driving cars will affect the lives of billions of people. That impact is too big to leave to engineers’ best guesses. These cars need teams of Liberal Arts-educated ethicists to guide their ethical development.
But this is bigger than just cars. This is an inflection point in public morality.
On a macro level, this is an opportunity for us to choose the moral code we want for society, written down for all to see. That’s a big deal. Until now, individuals, businesses, and governments have certainly been guided by moral ideas, but those ideas weren’t transparent.
For example, if I cut you in line, you don’t know what moral code I used to justify that action. But when a car chooses to kill its passenger, we will all know what moral code it used. Once written, software will determine how cars react in unavoidable accidents with deadly consequences. Those lines of code will follow an ethical principle: a principle we will have agreed to, whether through intentional dialogue or by remaining silent.
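To make that concrete, here is a deliberately simplified, hypothetical sketch of what hardcoding such a principle could look like. Nothing here resembles any real vehicle’s software; the names and the crude "minimize deaths" rule are illustrative assumptions only.

```python
# Hypothetical illustration only -- no real autonomous-vehicle system
# is this simple, and the rule below is one debatable choice among many.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "brake" or "swerve"
    expected_deaths: int  # predicted fatalities if this action is taken

def choose_action(outcomes: list[Outcome]) -> str:
    """Pick the action with the fewest predicted fatalities.

    This single rule is where a moral principle (here, a crude
    utilitarian "minimize deaths") gets frozen into software.
    Swap the key function and the car's ethics change.
    """
    return min(outcomes, key=lambda o: o.expected_deaths).action

# The crash scenario from the opening: both options kill one person,
# so this rule ties -- and the tiebreak (min() keeps the first item)
# silently becomes a moral decision too.
print(choose_action([Outcome("brake", 1), Outcome("swerve", 1)]))  # brake
```

Even in this toy version, the tiebreak between two fatal outcomes is itself a moral choice, made by whoever wrote that line.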
This generation is uniquely situated in time to bring ethics out of academia and into everyday life. For the first time, these principles will be hardcoded into our products and services. They will affect how cars determine which lives to save, but more importantly, they will set society’s moral compass for how other technology is developed.
A Liberal Arts education provides the best grounding for building this moral compass. It’s time for Liberal Arts to join the tech industry.