Moral Philosophy of Autonomous Vehicles

Self-driving cars are quickly becoming a reality. Everyone from Volvo to Google is jumping on the bandwagon to design and build autonomous vehicles for the commercial market. California is even in the process of drafting legislation to allow driverless vehicles on the road without a licensed driver.

Taking the human out of the loop is almost always a good engineering decision when it comes to reducing error. The move to autonomous vehicles will undoubtedly make our roads safer and significantly reduce the number of road accidents. However, as we have already seen with Tesla in Germany, accidents are still bound to occur for one reason or another. The question is: when accidents do happen, how are they going to be managed?


Naturally, there are the legal questions, such as: Who is financially responsible for a collision, the car owner or the car manufacturer? Who is allowed to operate an autonomous vehicle? Will we be allowed to drink and drive now?

But the far more interesting questions (in my opinion) arise from the ethical decisions inherently programmed into the vehicles themselves. The good thing about having humans in the loop is that we are conscious beings capable of making moral judgements based on a given set of information. When it comes to autonomous vehicles, however, these kinds of decisions will have to be left to the machine. Of course, the machine can only do what it is programmed to do (at least at this point in time), so most of these philosophical questions need to be foreseen and decided on in advance by the car manufacturers.

How will this work? Will each car model have a different moral code embedded in its software? Will the consumer have any choice in whether they purchase the altruistic or the egocentric model: one which sacrifices their own life for the greater good versus one which saves them every time? Can we finally enforce philosophical ideals in the real world? How much information can the car's sensors gather to allow it to make the best decision possible?
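To make the "altruistic versus egocentric model" idea a little more concrete, here is a purely hypothetical sketch of how a consumer-selectable moral policy might be parameterised in software. Every name in it (MoralPolicy, CollisionOption, the harm scores) is invented for illustration; it does not reflect how any real manufacturer's software works.

```python
# Toy illustration only: a hypothetical consumer-selectable "moral policy".
# All names and numbers are made up; real vehicles do not work this way.
from dataclasses import dataclass
from enum import Enum


class MoralPolicy(Enum):
    ALTRUISTIC = "altruistic"   # minimise total harm, even at the occupant's expense
    EGOCENTRIC = "egocentric"   # always prioritise the occupant


@dataclass
class CollisionOption:
    description: str
    harm_to_occupants: float  # crude 0-1 estimate from the car's sensors
    harm_to_others: float     # crude 0-1 estimate from the car's sensors


def choose_option(options: list[CollisionOption], policy: MoralPolicy) -> CollisionOption:
    """Pick the 'least bad' manoeuvre according to the configured policy."""
    if policy is MoralPolicy.ALTRUISTIC:
        # Everyone's harm is weighted equally.
        key = lambda o: o.harm_to_occupants + o.harm_to_others
    else:
        # Occupant harm dominates; harm to others only breaks ties.
        key = lambda o: (o.harm_to_occupants, o.harm_to_others)
    return min(options, key=key)


options = [
    CollisionOption("swerve into barrier", harm_to_occupants=0.7, harm_to_others=0.1),
    CollisionOption("brake in lane", harm_to_occupants=0.3, harm_to_others=0.6),
]
print(choose_option(options, MoralPolicy.ALTRUISTIC).description)  # swerve into barrier
print(choose_option(options, MoralPolicy.EGOCENTRIC).description)  # brake in lane
```

Even this toy example shows how uncomfortable the question becomes: the two policies choose different manoeuvres from identical sensor data, and someone has to decide which one ships with the car.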

MIT has developed a very interesting exercise to get the general public thinking about these sorts of questions. I encourage you to have a play on their Moral Machine website and see how tricky some of these problems can get. As a bonus, the site gathers data about your choices, which may end up influencing some of the decisions being made by the manufacturers.
