Can artificial intelligence-powered machines have ethics? Is it possible to formalize ethics and encode it in software? These and other questions were discussed by participants at the Open Innovations International Forum 2019 during the session "Ethical Dilemmas: Digital Identity in Modern Society."

The speakers agreed that any artificial intelligence development strategy must take the ethical and legal aspects of applying such technologies into account.

"It is important that people, while exploring the limits of what artificial intelligence can do, continue to observe ethical standards. Artificial intelligence must not make important decisions on its own. Fundamentally, it is a program designed to carry out complex calculations and provide a person with information for decision-making. We must not exclude the person from the decision-making process by making artificial intelligence responsible for everything," said Steve Crown, Microsoft's Vice President.

Andreas Steininger, a professor at the Wismar University of Applied Sciences, noted that when a state invests resources in artificial intelligence development, it is important to define the desired results and proceed from clearly set goals.

Speakers also stressed the need for centralized, state-level regulation of how artificial intelligence is disseminated.

Oksana Tarasenko, Deputy Minister of Economic Development of Russia, pointed out that Russia has an approved national artificial intelligence development strategy: "The strategy aims primarily at increasing the welfare of citizens. We are considering the introduction of artificial intelligence, in particular, as a means to increase labor productivity."

The session was also attended by Arkady Dvorkovich, Chairman of the Skolkovo Foundation; Valeria Zabolotnaya, Dean of Sberbank Corporate University; Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum; and Ryan Chilcote, Co-Founder of Ryan Chilcote Productions Ltd.