On August 20, 2021, discussion of the Code of Ethics in the field of artificial intelligence continued at the Analytical Center. The event was moderated by Sergey Nakvasin, deputy head of the Analytical Center and head of the Center of Expertise for the Implementation of the federal project “Artificial Intelligence”.
National guidelines on the ethics of artificial intelligence are being developed on the instructions of the President of Russia and in accordance with the provisions of the National Strategy for the Development of Artificial Intelligence until 2030, as well as the Concept for the Regulation of Relations in the Sphere of Artificial Intelligence Technologies and Robotics until 2024. The Analytical Center is working on the document together with the Ministry of Economic Development of Russia and the Alliance in the field of Artificial Intelligence. Experts are currently working out a specific framework that will ensure the ethical development of AI technologies.
“We want Russia to be among the top ten countries developing ethical standards in the field of artificial intelligence. We cannot wait until the standards our developers will have to work by are imposed on them from outside. It is better to develop our own standards and promote them at the global level,” explained Sofia Zakharova, an assistant at the Office for the Development of Information and Communication Technologies and Communications Infrastructure of the Presidential Administration.
She added that the Code should also fulfill an educational function. “The most important challenge is to build public confidence in technology. We will be able to achieve this if we openly disclose the principles and rules by which artificial intelligence technologies will be developed and implemented,” Zakharova explained. In addition, she noted that in the absence of legislative regulation, the Code will serve as a guarantee for developers themselves, protecting them from unfair competition.
According to Andrey Neznamov, managing director of the AI Regulation Center at PJSC Sber, the Code will govern the work of regulators, developers, customers, operators and users of artificial intelligence systems, AI experts, and other persons potentially able to influence AI. The norms of the Code will be advisory in nature and will apply to all market participants regardless of their nationality. “When developing the norms of the Code, we relied on the human-centered principle, according to which technologies should contribute to the realization of the full range of human capabilities; on the risk-oriented principle, according to which any technology should be considered not only from the point of view of its benefits but also of its risks to the interests of individuals and society; and on the principles of precaution and responsibility. Our task is to prevent harm as far as possible and to ensure the safety of developments,” Neznamov explained.
The expert listed the key principles of the AI Code of Ethics. According to him, the main priority in the development of AI technologies is the protection of human interests. In addition, the Code speaks of the need to recognize that responsibility for the creation and use of AI always lies with humans. “AI technologies can and should be implemented wherever they will benefit people. The interests of developing AI technologies should stand above the interests of competition,” Neznamov said. “Maximum transparency and truthfulness are important in informing about the level of development of AI technologies and about the opportunities, risks, successes and failures of their application.”
As the participants of the round table noted, the principles of ethics should evolve as new knowledge, challenges and opportunities emerge. The experts pointed out that, unlike norms of law, the provisions of the Code will have no legal force. The key mechanism for applying the Code should therefore be public attention to the topic and censure of those who violate ethical norms. A company that fails to comply with the requirements of the document will incur reputational and economic losses rather than legal ones.
Yuri Hohlov, chairman of the Board of Directors of the Institute of the Information Society, presented several theses in his speech.
First, the target audience of the Code must be clearly defined. On the one hand, it is being developed by the professional community, so the text is replete with professional jargon and special terms. On the other hand, the Code should be understandable to any citizen of Russia, because one of the goals of its creation is to overcome the population’s distrust of artificial intelligence. It is one thing to discuss with professionals how artificial intelligence systems should be designed to follow certain ethical principles, and quite another to explain to citizens how to ethically use a service or solution based on artificial intelligence technologies. These are two entirely different tasks, so he recommended first determining the target audiences and, possibly, preparing a separate document for each of them.
Secondly, not all of the main stakeholders are identified in the proposed Code. Everything related to developers of artificial intelligence systems is spelled out in great detail, down to subcategories, yet of the scientific and educational community only the expert part has been singled out. In fact, four main groups of stakeholders should be addressed: government, business, civil society and the already mentioned scientific and educational community.
Thirdly, the proposed text immediately formulates ethical principles for the development of artificial intelligence systems without stating which ethical system is taken as the basis. The ethical system of the Western world clearly differs radically from the ethics of Confucianism or Buddhism, and it must be well understood which will be used in Russia and which ethical principles should guide the developers and consumers of AI-based solutions and services. Without this, it is very difficult to envision the limitations that should be built into the artificial intelligence systems being created.
Fourthly, it is necessary to decide on the platforms where the main provisions of the Code will be discussed. Legislative authorities, the Public Chamber and others were mentioned, but no mention was made of the Commission of the State Council of the Russian Federation on “Communications and the Digital Economy”, which is also a very important platform.
The Artificial Intelligence Code of Ethics is slated to be presented in the fall.