Autonomous Vehicles Ethics (Programming)

Over the past several years, more and more autonomous features have been embedded in cars. And just a couple of months back, Tesla released a video in which it boasted about having achieved "Full Self-Driving."


A Techopedia article reported that even earlier Tesla cars contained "the necessary hardware for autonomous driving," though activating the ability depended on a software update. 

The article also envisioned how the autonomous cars being built today will differ from those of the future.



Currently, Tesla cars are equipped with the necessary hardware for autonomous driving, but software updates are required to fully enable the feature.

While it will allow fully autonomous driving, it will also still allow the human driver to take control when the situation calls for intervention.

The next generation of autonomous vehicles, however, would not need steering wheels, pedals or transmissions. The advantage of such cars is the possibility of reducing accidents and providing necessary transportation for people who are incapable of driving, such as the elderly or those with vision or physical disabilities.

But there is also a potential downside: the humans who set up the car's programming have to foresee all possible scenarios and direct the car to make the kinds of judgments people have to make when a scenario calls for action that will inevitably cause some form of harm.



While Tesla may be the most famous name on the AI front for vehicles, it certainly is not the only player in this growing market. Some far more venerable names in the industry have also gotten into the act.

The last fatality Tesla had reported was the one involving a Tesla Model X on March 23, 2018. The driver of the car died when it hit a highway barrier. Tesla blamed the crash on the barrier's interference with the vehicle's autonomous driving system:

"The reason this crash was so severe is that the crash attenuator, a highway safety barrier which is designed to reduce the impact into a concrete lane divider, had been crushed in a prior accident without being replaced," Tesla said in its statement.
The company added: "We have never seen this level of damage to a Model X in any other crash."

Unfortunately, though, that was not the end of fatal crashes involving Tesla's self-driving features. A number of them occurred this year.

Among the incidents was one on March 1, 2019. The US National Transportation Safety Board (NTSB) confirmed that the semi-autonomous Autopilot software was engaged on a Tesla Model 3 when it slammed into a tractor-trailer that was attempting to cross a Florida highway, killing the car's driver.




Though such accidents are still relatively rare compared to those caused by human drivers, the fact that there are any accidents and fatalities caused by self-driving cars at all has made people concerned about their safety and programming. In fact, this year Quartz cast some doubt on Tesla's safety claims.

Like that Tesla accident, most autonomous car accidents result in the death of the person sitting in the driver’s seat. However, there have been cases of people outside the car struck and killed by autonomous cars.

The most infamous incident of that sort may be the one involving Uber in the March 2018 death of Elaine Herzberg. The 49-year-old woman was walking and pushing her bicycle across Mill Avenue in Tempe, Arizona, when the Uber car struck her.

As a result, Uber adopted a policy of making sure to include human drivers in its cars. The story was reported here: Uber Puts Self-Driving Cars Back to Work but With Human Drivers.
This is a way for Uber to circumvent the problem that we will have to confront if and when fully autonomous cars become the norm: how to program them to incorporate the instinct to preserve human life.

Programming AI with concern for ethics


As we saw in another article, Our Brave New World: Why the Advance of AI Raises Ethical Concerns, with the great power of AI comes great responsibility to ensure that technology does not end up making situations worse in the name of progress.

 The study of ethics for AI has captured the attention of people who think about what needs to be done ahead of implementing automated solutions.

One of those people is Paul Thagard, Ph.D., a Canadian philosopher and cognitive scientist, who brought up some of the issues we now have to confront with respect to programming ethics into AI in How to Build Ethical Artificial Intelligence.



He raises the following three obstacles:



1. Ethical theories are highly controversial. Some people prefer ethical principles established by religious texts such as the Bible or the Quran. Philosophers argue about whether ethics should be based on rights and duties, on the greatest good for the greatest number of people, or on acting virtuously.

2. Acting ethically requires satisfying moral values, but there is no agreement about which values are appropriate or even about what values are. Without an account of the appropriate values that people use when they act ethically, it is impossible to align the values of AI systems with those of humans.

3. To build an AI system that behaves ethically, ideas about values and right and wrong need to be made sufficiently precise that they can be implemented in algorithms, but precision and algorithms are sorely lacking in current ethical deliberations.
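To make that third obstacle concrete: turning a value into something an algorithm can act on means reducing it to explicit numbers and rules. The toy sketch below, in Python, uses entirely made-up weights and a hypothetical Outcome type of my own choosing; it is not taken from Thagard or from any carmaker's software. It shows what that forced precision looks like, and why choosing the weights is itself the unresolved ethical argument.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical, drastically simplified summary of what one candidate action would cause."""
    expected_fatalities: float
    expected_serious_injuries: float
    property_damage_usd: float

# Assumption: these weights are invented for illustration. Picking them *is* the
# unsettled ethical question; there is no agreed-upon exchange rate between
# lives, injuries, and property.
FATALITY_WEIGHT = 1_000_000.0
INJURY_WEIGHT = 10_000.0
PROPERTY_WEIGHT = 1.0

def moral_cost(outcome: Outcome) -> float:
    """Collapse an outcome into a single number so that an algorithm can compare options."""
    return (FATALITY_WEIGHT * outcome.expected_fatalities
            + INJURY_WEIGHT * outcome.expected_serious_injuries
            + PROPERTY_WEIGHT * outcome.property_damage_usd)

if __name__ == "__main__":
    swerve = Outcome(expected_fatalities=0.1, expected_serious_injuries=0.5, property_damage_usd=20_000)
    brake_hard = Outcome(expected_fatalities=0.3, expected_serious_injuries=0.2, property_damage_usd=5_000)
    # The comparison is trivial; agreeing on the weights that drive it is not.
    print("swerve:", moral_cost(swerve), "brake hard:", moral_cost(brake_hard))
```

Everything in the sketch that is not arithmetic is a moral commitment smuggled in as a constant, which is exactly the precision gap Thagard points to.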


Thagard says he does offer an approach to overcoming those challenges and references his book, Natural Philosophy: From Social Brains to Knowledge, Reality, Morality, and Beauty. However, in the course of the article, he does not offer a solution that specifically addresses self-driving car programming.




Self-driving cars and the Trolley Problem



Ideally, drivers avoid hitting anything or anyone. But it is possible to find oneself in a situation in which it is impossible to avoid a collision, and the only choice is which person or people to hit.

This ethical dilemma is what is known as the Trolley Problem, which, like the trolley itself, goes back over a century. It’s generally presented as follows:

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track.

You have two options:

Do nothing and allow the trolley to kill the five people on the main track;
Pull the lever, diverting the trolley onto the side track where it will kill one person.


Of course, there is no really good choice here. The question is which one is the lesser of two bad options.
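Different ethical theories do not just disagree in the seminar room; once encoded, they return different answers. The sketch below is a minimal, assumption-laden illustration; the action names, the actively_diverts flag, and the idea of a rule-based veto are my own simplifications, not anyone's deployed system. It shows how a utilitarian rule and a deontological refusal to actively redirect harm diverge on exactly this scenario.

```python
# Hypothetical encoding of the trolley scenario: each action maps to the number
# of people it kills and whether it actively redirects harm onto someone.
ACTIONS = {
    "do_nothing": {"kills": 5, "actively_diverts": False},
    "pull_lever": {"kills": 1, "actively_diverts": True},
}

def utilitarian_choice(actions: dict) -> str:
    """Pick the action with the fewest deaths: the greatest good for the greatest number."""
    return min(actions, key=lambda name: actions[name]["kills"])

def deontological_choice(actions: dict) -> str:
    """Refuse any action that actively redirects harm, even if more people die overall."""
    permitted = [name for name in actions if not actions[name]["actively_diverts"]]
    # If every option counts as actively causing harm, fall back to least harm.
    return min(permitted or actions, key=lambda name: actions[name]["kills"])

if __name__ == "__main__":
    print("utilitarian choice:  ", utilitarian_choice(ACTIONS))    # pull_lever
    print("deontological choice:", deontological_choice(ACTIONS))  # do_nothing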


It was just this kind of dilemma that the Green Goblin presented to Spider-Man in the 2002 movie, attempting to force him to choose between rescuing a cable car full of children or the woman he loves:

Being a superhero, Spider-Man was able to use his web-spinning abilities and strength to save both.
But sometimes even superheroes have to make a tragic choice, as was the case in the 2008 film The Dark Knight, in which Batman chose to leave the woman he loved in the building that exploded.

So even those who have superior abilities cannot always save everyone, and the same situation can apply to AI-enabled cars.
The question then is: Which code of ethics do we apply to program them to make such choices?


What should the self-driving car do?



MIT Technology Review drew attention to some researchers working on formulating the answers a few years ago in How to Help Self-Driving Cars Make Ethical Decisions.


 Among the researchers in the field is Chris Gerdes, a professor at Stanford University who has been looking into "the ethical dilemmas that may arise when vehicle self-driving is deployed in the real world."

He offered a simpler scenario: a child running into the street, which forces the car to hit something but allows it to choose between the child and a van on the road.

For a human, it should be a no-brainer that protecting the child is more important than protecting the van or the autonomous car itself.

But what would the AI think? And what about the passengers in the vehicle who may end up sustaining some injuries from such a collision?

Gerdes observed, "These are very tough decisions that those that design control algorithms for automated vehicles face every day.”
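To make the trade-off Gerdes describes tangible, here is a minimal sketch of one assumed, invented way a cost model could privilege vulnerable road users over property while still accounting for risk to the car's own passengers. The Obstacle type, the penalty constants, and the risk numbers are all hypothetical; no manufacturer's actual control algorithm is being described.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    is_vulnerable_road_user: bool  # e.g., a child, pedestrian, or cyclist
    occupant_injury_risk: float    # estimated risk to the car's own passengers, 0.0-1.0

# Invented weights: hitting a person is treated as orders of magnitude worse than
# hitting property, and risk to the car's occupants also carries a cost.
VULNERABLE_PENALTY = 100.0
PROPERTY_PENALTY = 1.0
OCCUPANT_PENALTY = 10.0

def collision_cost(obstacle: Obstacle) -> float:
    """Score how bad hitting this obstacle would be, given that a collision is unavoidable."""
    base = VULNERABLE_PENALTY if obstacle.is_vulnerable_road_user else PROPERTY_PENALTY
    return base + OCCUPANT_PENALTY * obstacle.occupant_injury_risk

def choose_least_bad_target(obstacles: list) -> Obstacle:
    """Return the least-bad thing to hit under this assumed cost model."""
    return min(obstacles, key=collision_cost)

if __name__ == "__main__":
    child = Obstacle("child", is_vulnerable_road_user=True, occupant_injury_risk=0.05)
    van = Obstacle("van", is_vulnerable_road_user=False, occupant_injury_risk=0.40)
    # Chooses the van, even though that choice carries more risk for the passengers.
    print("hit:", choose_least_bad_target([child, van]).label)
```

Even this toy version exposes the open question Gerdes and his colleagues wrestle with: how much risk to the car's own occupants the programming is allowed to accept on behalf of someone outside it.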

The article also quotes Adriano Alessandrini, a researcher working on automated vehicles at the University of Rome La Sapienza in Italy, who served as head of the Italian portion of the European-based CityMobil2 project to test automated transit vehicles.