You die or they die: the ethical problem of self-driving cars

The computers in self-driving vehicles can make life-and-death decisions in an instant. But should they? We now face the problem of bioethics in the machine age.

There is a well-known ethical problem you have probably heard: suppose you control the switch that decides which track a train will take. You discover that a school bus full of children is stuck on the track ahead, and the train, which cannot stop, is bound to kill them. Then you notice a second track branching off ahead; switching to it would spare the school bus, but a single child is on that rail: your own child, playing there while waiting for you to get off work. The switch is in your hands, and you can save the people on only one track. How do you choose?

This ethical dilemma is better known as the "trolley problem". The details differ from version to version, but they are essentially the same. Until now it was just an interesting thought experiment, because in such a sudden situation the human brain can hardly reach a reasoned decision at all. But this is the 21st century: self-driving vehicles are beginning to appear, and the question is turning from pure theory into a research topic of practical importance.

Google's self-driving cars have driven more than 1.7 million miles on American roads, and despite a few accidents, Google claims its cars were never the cause. Volvo says its self-driving cars will be on Swedish highways by 2017. Elon Musk has said that fully autonomous driving is quite close, and that Tesla's Autopilot models will begin testing this summer.

Are you ready?

This technology has arrived, but are we ready?

Google's self-driving cars can already cope with some real-world hazards. For example, when the car ahead of them swerves suddenly, Google's cars know how to evade it. But in some situations a collision is unavoidable, and in fact Google's prototypes have been involved in quite a few minor accidents during road tests, though Google attributes the responsibility for all of them to humans. So how should an autonomous vehicle handle the cases that have no "reasonable" solution? Suppose the vehicle ahead swerves sharply, a collision is unavoidable, and there are walls on both sides of the road, so the car's only choice is to hit the vehicle in front or hit a wall. Unlike a human, the onboard computer has enough computing power to make a deliberate decision: it has time to scan the scene and estimate the probability that each person involved will survive. Should the computer then make the decision that is best for its owner, even if the other party is a school bus full of children? Who should make this decision, and how?
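To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the two policies such an onboard computer could follow. The scenario, the survival probabilities, and every name in the code are invented for illustration; nothing here describes any real vehicle's software.

```python
# Purely illustrative: an unavoidable collision with two options,
# scored over everyone involved. All numbers are made up.

from dataclasses import dataclass
from typing import Dict


@dataclass
class Outcome:
    # Hypothetical survival probability for each person affected.
    survival_probs: Dict[str, float]

    def expected_survivors(self) -> float:
        return sum(self.survival_probs.values())


# Option 1: hit the wall (the owner is at great risk, the children are safe).
hit_wall = Outcome({"owner": 0.2, **{f"child_{i}": 1.0 for i in range(20)}})
# Option 2: hit the school bus (the owner is fairly safe, the children are not).
hit_bus = Outcome({"owner": 0.95, **{f"child_{i}": 0.9 for i in range(20)}})

options = [hit_wall, hit_bus]

# A purely utilitarian car maximizes expected survivors overall ...
utilitarian_choice = max(options, key=Outcome.expected_survivors)

# ... while an owner-first car maximizes only its own passenger's chances.
owner_first_choice = max(options, key=lambda o: o.survival_probs["owner"])

print(utilitarian_choice is hit_wall)    # True: sacrifice the owner
print(owner_first_choice is hit_bus)     # True: protect the owner
```

The point is not the numbers, which no real system could know with such precision, but that someone has to decide in advance which of these two policies the car will follow.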

“Ultimately, this comes down to a choice between utilitarianism and deontology,” said Ameen Barghi, an alumnus of the University of Alabama at Birmingham (UAB). He graduated this May and will enter Oxford University this fall as UAB's third Rhodes Scholar. He is a veteran of ethical dilemmas: as senior leader of the school's bioethics team he won the 2015 national Bioethics Bowl, debating topics that included clinical trials during the Ebola outbreak and a hypothetical drug that could make one person fall in love with another. In the previous year's Ethics Bowl, the team tackled another thought-provoking question about self-driving cars: if autonomous driving turns out to be safer than human driving, should the government ban human driving? (Their answer is summarized below.)

Death in the driver's seat

So should your self-driving car sacrifice your life to save others? Barghi says there are two philosophical approaches to this kind of question. "Utilitarianism tells us that we should always choose whatever brings the greatest good to the greatest number of people," he explained. In other words, if the choice is between driving you into a wall and hitting a bus full of children, your car should choose the wall.

Deontology, on the other hand, holds that "some values are simply always right," Barghi continued. "For example, murder is always wrong, and we should never do it." Going back to the original trolley problem, "even if switching the track would save five lives, we should not do it, because we would then be actively killing one person," Barghi said. So, strange as it sounds, on this view a self-driving car should not be programmed to sacrifice its own passengers in order to spare others.

There are other variants of the trolley problem: what if that one person is your own child? What if the other five are murderers? For Barghi, the first step is simple: "Let the user choose between deontology and utilitarianism." And if the answer is utilitarianism, there is a further sub-choice to make: rule utilitarianism or act utilitarianism.

"Rules of utilitarianism say that we must always choose the most favorable actions, without having to care about the environment - that is to say, according to this model, any version of the tram problem can easily get a result." Barghi said: calculation The number of individuals can be chosen for most benefits.

Act utilitarianism, he continued, "holds that we must consider each individual act as a separate decision," which means there are no quick and simple rules; every situation calls for its own specific consideration. So how could a computer possibly be taught to handle all of those situations?
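As a rough, purely hypothetical illustration of that difference (not anyone's actual proposal): a rule-utilitarian policy can be written down once and applied everywhere, while an act-utilitarian policy needs a judgment made for the particular case, which is exactly what is hard to program in advance.

```python
# Hypothetical illustration of the rule/act distinction; the scenario and
# the case-specific judgment are invented for the example.

def rule_utilitarian(options):
    """One fixed rule applied to every case: save the greatest number."""
    return max(options, key=lambda o: o["people_saved"])


def act_utilitarian(options, judge):
    """No fixed rule: the choice depends on a judgment made for this case."""
    return max(options, key=judge)


trolley = [
    {"name": "switch the track", "people_saved": 5},
    {"name": "do nothing", "people_saved": 1},
]

# The rule-utilitarian answer never changes: count heads and save the many.
print(rule_utilitarian(trolley)["name"])  # switch the track


def judge_this_case(option):
    # In this particular case, suppose the one person on the side track is
    # your own child, and that is judged (here, arbitrarily) to outweigh
    # the raw headcount.
    return -1 if option["name"] == "switch the track" else 0


# The act-utilitarian answer depends on that case-specific judgment.
print(act_utilitarian(trolley, judge_this_case)["name"])  # do nothing
```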

“A computer cannot be programmed to handle everything,” said Dr. Gregory Pence of the Department of Philosophy in UAB's College of Arts and Sciences. “We can see this from the history of ethics. Casuistry, the application of Christian ethics in the tradition of St. Thomas, tried to provide a prescription for every question, and it failed miserably: there are too many unique situations, and the prescriptions keep having to change.”

Prepare for the worst case

UAB's Ethics Bowl and Bioethics Bowl teams spend a great deal of time on questions like these, which blend philosophy with futurism. Both teams are led by Pence, a well-known ethicist who has trained UAB medical students for decades.

To reach a conclusion, the UAB teams debate fiercely among themselves, Barghi said. “With Dr. Pence joining in, we argue constantly, and everyone on the team can champion a particular case,” he said. “Before a competition we try to come up with as many scenarios and opposing positions as possible to argue against ourselves, so that we understand the topic more thoroughly. Sometimes we completely reverse our own position within a few days, because a new situation comes up that we had not considered.”

With driverless cars we are only at the beginning, and deeper problems may not have surfaced yet. Future technology may allow an autonomous vehicle to run detailed calculations in the instant of an accident and avoid catastrophe, but accidents will never be entirely avoidable. Before these cold-blooded high-speed machines go into practical use, it is only reasonable to think through the worst-case scenarios and build in plans for them, so that when an accident really does happen, the car does not make the wrong decision through misjudgment.

For example: what if the car concludes that, in order to save you in the driver's seat, it has to kill everyone else in the world? What should it do then?
