Just a week ago came the first fatal accident in which an autonomous vehicle was implicated. Or rather, a Tesla Model S driving in semi-autonomous mode, with a system still in beta. In the aftermath of this accident, rivers of ink have been spilled about cars able to drive themselves, while some have begun to see them as a far greater risk than they really are. However, the evolution of the autonomous car continues on its way, unstoppable, and soon it will be required to make moral decisions, playing God.
Would you get into a car that could kill you, if it considered that the least bad option when things go wrong?
Would you get into a car that is programmed to kill you? We have already discussed this idea at length, in an opinion piece we published some months ago. If you can't be bothered to read it, I'll summarise the conclusions: the time will come when the autonomous car has to decide between sacrificing its occupants and causing the death of pedestrians or other drivers. This could be due to a mechanical or electronic failure or, simply, an unexpected situation caused by the unpredictability of human, or animal, behavior.
Take the classic example: a child chasing a ball across the road, on a blind curve of a two-lane highway. A human's reaction could vary in such a situation, but the computer that governs the autonomous car must be fast, infallible and unforgiving. If avoiding the child means swerving into the opposite lane, occupied by another vehicle, and this results in the death of the occupants of both vehicles, the autonomous car could decide to run the child over.
Since human-level artificial intelligence does not exist, we have to program its moral algorithms ourselves.
"It is the least bad option," its electronic brain would reason. As we have not yet managed to develop artificial intelligence at an advanced level, only narrow artificial intelligence able to perform a single task outstandingly well, it is we who must program its behavior. A brief aside: if you want to know more about artificial intelligence and the road to a superintelligence, I recommend reading this fantastic article on WaitButWhy; it is simply revealing.
Returning to our self-driving car, in which a family travels happily, looking at their smartphones or watching a movie, we will be the ones who have to program its behavior in the event of the unforeseen. It is we who must decide whether the car should protect its occupants at all costs, or sacrifice them if the harm an accident would cause is greater than the value of the human lives travelling inside. It is, definitely, playing at being a god: a superior being, omnipotent and relentless.
How can an autonomous vehicle play God?
Many believe that it would be ideal to apply Isaac Asimov's laws of robotics, literary in origin but considered almost a philosophical canon, to the driverless car, which could be considered a robot. What are Asimov's laws of robotics?
1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given by human beings, except where such orders would conflict with the 1st Law.
3. A robot must protect its own existence as long as such protection does not conflict with the 1st or the 2nd Law.
Since artificial intelligence is not yet at such a high level, embedding these complex rules in the brain of a driverless car seems unfeasible, at least for the moment. For an autonomous car to decide on the morality or ethics of its actions, it must learn from experience; experience that we will have to load into its electronics, on the basis of a multitude of scenarios with their predefined outcomes. MIT has started a project called "Moral Machine" that will help the autonomous vehicle to make decisions.
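To make the idea of "scenarios with predefined outcomes" concrete, here is a minimal sketch, not any real vehicle's code: all names, weights and numbers are hypothetical. It scores each candidate maneuver by the harm its predefined outcome would cause and picks the "least bad" one, which is exactly the cold calculation described above.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predefined outcome of one candidate maneuver (hypothetical example)."""
    action: str              # e.g. "brake" or "swerve_left"
    occupant_deaths: int     # expected deaths inside the car
    pedestrian_deaths: int   # expected deaths outside the car

def harm(outcome: Outcome, occupant_weight: float = 1.0) -> float:
    """Total expected harm; occupant_weight encodes the moral choice of
    whether occupants count more, less, or the same as pedestrians."""
    return occupant_weight * outcome.occupant_deaths + outcome.pedestrian_deaths

def least_bad(outcomes: list[Outcome]) -> Outcome:
    """Pick the maneuver whose predefined outcome causes the least harm."""
    return min(outcomes, key=harm)

# The blind-curve scenario from above, with made-up numbers:
scenario = [
    Outcome("brake", occupant_deaths=0, pedestrian_deaths=1),
    Outcome("swerve_left", occupant_deaths=2, pedestrian_deaths=0),
]
print(least_bad(scenario).action)  # prints "brake": 1 death weighs less than 2
```

Note that the entire moral question is hidden inside `occupant_weight`: raise it above 1 and the car favors pedestrians, lower it and the car protects its occupants at others' expense. That single parameter is what projects like Moral Machine are, in effect, trying to calibrate.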
If you’re feeling creative, you can also design your own scenarios and moral choices.
A kind of game: a series of exercises in which you must decide what is right and what is wrong, how the car should act in situations of life or death. Should it run over a dog and a cat if that saves the life of its occupants? Should it sacrifice the lives of several athletes if that saves a pregnant woman? What is right and what is wrong? Complicated decisions, with many nuances, that only you can make. The results will form part of a paper, whose conclusions could contribute to the building of moral algorithms.
If you don't want your answers, which are completely anonymous, to be included in the sample data, you can say so when you finish the thirteen situations on which you must decide. To begin this macabre but necessary "game", you just have to click on this link.