On October 26, Moscow hosted the International Forum “Ethics of Artificial Intelligence (AI): The Beginning of Trust”. Among the participants was Yuri Hohlov, chairman of the Board of Directors of the Institute of the Information Society, head of the Coordination and Monitoring Department of the Center of Expertise for the federal project “Artificial Intelligence” at the Analytical Center under the Government of the Russian Federation, and project leader for big data monitoring and standardization at the NTI Competence Center on Big Data of Moscow State University.
The forum was the first dedicated platform in Russia where representatives of government, business, civil society, academia and developers of AI systems discussed steps for the effective implementation of AI ethics in priority industries. The key event of the forum was the presentation of the national AI Code of Ethics.
The discussion “Ethics of AI in ensuring security” was moderated by Natalya Kasperskaya, president of the InfoWatch Group, and Alexander Osadchuk, acting head of the Main Directorate of Research Activities of the Ministry of Defense of the Russian Federation.
In his speech, Yuri Hohlov noted that issues related to the development and use of ICT should be addressed by all stakeholders: government, business, academia and civil society.
The ethical use of AI technologies should be no exception. The code presented at the forum was developed mainly by business, with the involvement of representatives of the scientific and educational community and the government. Ideally, a platform is needed where all stakeholders can present and defend their interests; conflict is inevitable, but it must be resolved in a civilized manner, the expert said. “The Analytical Center under the Government of the Russian Federation could well become such a platform,” Yuri Hohlov said, supporting the idea previously voiced by Alexander Osadchuk.
He then made another point: it is important to determine, with the help of appropriate metrics, what can be (self-)regulated by ethical norms and what should become part of legal regulation. The expert also recalled an intermediate, but no less important, level of regulation: standardization. “Principles, including those reflecting our ethical ideas about what is good and what is bad, can and should be embedded in international and national standards, which represent the recorded knowledge of humanity,” he said.
Yuri Hohlov also named one of the main problems of AI and big data systems: the goals for which an information system is created are set before the system is developed, operated and decommissioned, and at the same time they depend on the ethical principles that guide the developers. The ideas and business requirements initially laid down in the development of a system lie outside the system itself. Looking at the data life cycle in such systems, from the generation and acquisition of data sets to their preprocessing, processing and visualization, ethical norms “live” outside this cycle, yet strongly, if indirectly, influence it.
This problem is being addressed at the international level of standardization: work is beginning on a new standard describing the data life cycle in AI and big data analytics systems.
The discussion was also attended by:
- Alexander Losev – general director of Sputnik Capital Management
- Elena Bryzgalina – head of the Department of Philosophy of Education, Faculty of Philosophy, Moscow State University
- Yuri Tsvetkov – advisor to the president of Skoltech, an expert in international regulation of AI and cross-cutting technologies
- Alexander Mokhov – head of the Department of Medical Law, Moscow State Law Academy
- Andrey Neznamov – managing director of the AI Regulation Center of Sber