
What to Expect from Science in 2017: Philosophy

The debate about the concepts of ethics no longer applies solely to an individual’s moral principles and self-knowledge. Ethics has become an engineering problem. The question then arises: what principles should we build into autonomous machines that operate and make decisions in the real world?
Kirill Martynov

Associate Professor at the School of Philosophy

Interesting things are happening today in philosophy, and next year these trends will only intensify. Problems that had always seemed abstract and speculative are, in the modern world, taking on particular engineering and social importance. For example, the debate about concepts in ethics no longer relates only to an individual's moral principles and self-knowledge. Ethics has become an engineering problem. From now on it is also a question of which principles we should build into autonomous machines that operate and make decisions in the real world.

The obvious example of such machines is robotic (self-driving) vehicles. The world is now preparing for the introduction of the first commercial models. Twenty years from now, millions of these 'drones' may well be driving across the streets of our cities. They may well be safer than modern cars, and the number of road accidents will fall.

However safe they are, emergencies will still occasionally occur, and then the robots will have to make difficult decisions. For example, should a robot stick to its programmed route if that carries an 80 percent probability of killing a pedestrian who broke the rules? Or should it choose a 30 percent risk of causing severe injuries to its innocent passengers in the same situation? In dilemmas like this the variables are infinite, and robots will need to have the correct answers for each set of conditions.

Of course, the problem is that, in many cases, the ‘right’ answer depends not only on the facts but also on the ethical system you adhere to. For example, do you follow the doctrine of utilitarianism proposed by philosopher Jeremy Bentham? Or do you share the view of Immanuel Kant, who said that ethical behavior does not depend on utility?

In the first case the robot should definitely risk the passengers’ health, even if the pedestrian broke the rules. In other ethical systems there are other options. We have started to teach ethics to robots, and this means that soon we will be able to select the ethical systems of the robots we use – meaning that ethics will have become a market product.
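The utilitarian calculus described above can be made concrete with a minimal sketch. The harm weights and probabilities below are purely illustrative assumptions (the article gives only the two probabilities, not severity values), and `expected_harm` is a hypothetical helper, not part of any real vehicle-control system:

```python
# Hypothetical sketch of a utilitarian decision rule for the dilemma above.
# Harm weights are illustrative assumptions, not real data.

def expected_harm(probability: float, harm_weight: float) -> float:
    """Expected harm of an outcome: probability of the outcome times its severity."""
    return probability * harm_weight

# Assumed severity weights: a fatality is treated as worse than a severe injury.
HARM_DEATH = 1.0
HARM_SEVERE_INJURY = 0.5

# Option A: stay on the programmed route — 80% chance of killing the pedestrian.
# Option B: swerve — 30% chance of severe injury to the passengers.
options = {
    "stay_on_route": expected_harm(0.8, HARM_DEATH),        # 0.8
    "swerve": expected_harm(0.3, HARM_SEVERE_INJURY),       # 0.15
}

# A utilitarian controller picks the action with the lowest expected harm —
# here it swerves, risking the passengers rather than the pedestrian.
choice = min(options, key=options.get)
```

Under these assumed weights the Benthamite rule chooses to risk the passengers, as the text says; a Kantian rule would not reduce the decision to this arithmetic at all, which is precisely why the choice of ethical system matters.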

In addition, in creating ethics for robots we get a powerful tool for exploring our own views on good and evil – and this focus on practical philosophy is also a key trend. Finally, it means that we are, in effect, programming our robots to kill. Information on how such dilemmas work can be found on the webpage of MIT's Moral Machine (moralmachine.mit.edu). In general, philosophers will have a lot of work in 2017 and in the years to come.

Additional materials:

Review of the book ‘Moral Machines: Teaching Robots Right from Wrong’ by Wendell Wallach and Colin Allen

What to Expect in 2017 — a Research Forecast

On the eve of New Year’s, it is customary to take a look into the near future. We asked HSE experts in various fields to share their forecasts on which areas of research might be the most interesting and promising in 2017. They tell us what discoveries and breakthroughs await us in 2017, and how these could change our lives.

Read all forecasts

December 26, 2016