With the development of Artificial Intelligence (AI) technologies, many governments and international organizations have begun to debate its various legal, economic and societal consequences. As part of European Union arrangements, Member States declared that they would elaborate country-specific strategies on the matter by the end of 2019. Some succeeded; others, like Poland or Hungary, are still on their way. Work on such policy documents is not limited to EU Member States: many other countries have prepared strategies or are in the process of preparing them.
Strategies are set in place mostly to define the role of public institutions in boosting AI initiatives, including in the private sector, by considering the steps that need to be taken in education, research, employment policy, taxes, the legal environment and more. They should also respond to the potential risks connected with the implementation of AI in public and private services. The latter goal is the main focus of the ePaństwo Foundation and the TransparenCEE Network.
Since 2017 we have been active in bringing more transparency to Automated Decision Making (ADM) processes, which may also rely on AI solutions. We conduct strategic litigation, advocacy and research, especially in Central and Eastern Europe. In 2019 we published a report on how public institutions deal with the transparency and accountability of ADM tools and how they are preparing for the challenges ahead.
This paper is based on an analysis of the AI strategies of selected countries in the CEE region. We were mostly interested in the approach of CEE governments to transparency, accountability, the prevention of potential risks of discrimination, and the projected legal environment regulating these topics. In other words, we examined to what extent human rights principles matter to CEE governments as they prepare the environment for AI development in their respective countries.
The strategies vary in complexity and size. For example, the Russian, Slovak and Serbian documents are more narrative, describing various aspects of Artificial Intelligence, while others, like the Estonian and Romanian ones, concentrate on goals and metrics rather than on more in-depth considerations.
We do not formulate an opinion on how the strategy is being implemented in each country; such research will be conducted later. We only describe the actual wording of the documents, looking for best practices in drafting AI strategies.
Every researched country’s strategy refers to some aspect of ethics and discriminatory risks. However, some mention them only by pointing to documents that will be prepared as a result of carrying out the strategy, which means that the process of building AI environments is a matter of permanent revision.
The transparency of Artificial Intelligence, in particular as used in Automated Decision Making processes in the public sector, is crucial to preventing discrimination and other risks to individuals and societies.
The Russian strategy explicitly lists transparency among the principles of developing AI. The authors of the strategy understand transparency as “explainability of the work of artificial intelligence and the process of achieving results, non-discriminatory access of users of products that are created using artificial intelligence technologies to information about the algorithms of artificial intelligence used in these products”.
The Lithuanian strategy includes transparency as one of its core principles – the “principle to encourage transparency and fairness in AI applications”. One of the mechanisms to achieve this goal is to “establish the safeguarding mechanism that would develop systems that are transparent, and intrinsically capable of explaining the reasons for their results to users.” The authors of the strategy also see the need for additional investment to “advance AI safety and security, including explainability and transparency, trust, verification and validation (…)”.
Slovakia also lists among its goals the necessity of designing and supporting the implementation of principles for the transparent and ethical use of AI, having in mind that “digital transformation must be carried out under supervision of the state sector in order to make sure that artificial intelligence as well as other benefits of the digital technologies are used for the purposes of supporting our social values and legal principles.”
According to the Serbian strategy, which is the most descriptive of the documents, “there is a comprehensive understanding of ethical aspects while planning, designing and implementing solutions in the field of artificial intelligence from a perspective of technical characteristics as well as from a perspective of effects of implementation, while taking into account the principles of preserving the freedom of individuals, fairness and equality, avoidance of damage, openness, transparency and sustainability.” The Serbian government also noted that “individuals who are subject to the decisions made by the AI model must have the right to an explanation and the right to transparency in connection with the algorithm. That is why it is necessary to enable: prevention of discrimination, enable early understanding and interpretation of the model and enable explanation of the decision.”
There are already a number of examples proving that the implementation of AI solutions in the public and private sectors carries the risk of discrimination. This seems to be acknowledged in all of the examined strategies.
The Serbian strategy again seems the most developed in this regard, thanks to its broad description of this risk and of the means that can mitigate it. According to the document, “it is necessary to prevent discrimination based on machine learning. Comparative practice has shown cases of unintended discrimination which can have considerable consequences for citizens. This is especially valid for artificial intelligence developed for the improvement of public services. For example, a system may be introduced that determines the level of risk of individuals committing a breach or crime. If the used data is not adequately reviewed, such a system might reproduce a discriminative pattern. There are no clear mechanisms to determine whether a particular artificial intelligence solution has met all the necessary requirements to be applied to a large number of users and whether it has been developed in such a way that the data used for training the model or its application is adequate, representative enough and protecting the personal data of citizens. The sudden increase of artificial intelligence implementation requires a responsible and inclusive development of artificial intelligence for the whole society and consequently the compliance with international guidelines, practice and regulations is indispensable and the implementation of these principles in practice must be guaranteed. (…) The ethical implementation of artificial intelligence should prevent the social exclusion of vulnerable social groups, the marginalization and discrimination of members of these groups like old persons, disabled persons, groups with the risk of multiple discrimination and others. Even if discrimination is not a new category, it can become systemic if it is established via machine learning, which is a risk that needs to be prevented.”
The Serbian government also proposes very concrete steps to prevent the discriminatory impact of AI, which go much further in the direction of regulatory measures than can be seen in other countries:
“1. Establishment of a set of practical ethical guidelines compliant with the Ethical Code which must be observed by all persons training algorithms and which include the recommendation that the team working on the development of artificial intelligence should be as diverse and representative as possible in order to prevent discrimination;
2. Organization of educational workshops with technical and non-technical personnel working on the development of artificial intelligence as primary target group in order to prevent discrimination;
3. Organization of competitions in the framework of which control and tool systems will be developed that can be used by the industry as preventive mechanisms in order to prevent the use of bias and discriminatory data characteristics for algorithm trainings;
4. Regulation of the discrimination prohibition in cases where discrimination occurs as a result of automated decision-making or assistance in decision-making, and clear definition of responsibilities in cases where machine learning leads to discrimination;
5. Establishment of the regulatory obligation that the data types used by machine learning for decision-making or assistance in decision-making are transparent and based on clear and simple schedules.”
Other countries’ strategies are not as descriptive and direct on the matter; they rather refer to phrases like “ethics” or “fundamental rights and freedoms”.
The Czech strategy sees an urgent need for the “Establishment of an expert platform and forum for continuous monitoring of legal and ethical rules and instruments at national and international level”. We can also read that by 2027 the Czech government wants to ensure that AI development and usage tools are implemented in accordance with ethical and legal rules (including the Ethical Guidelines for Artificial Intelligence Development and Use) and with human-centric AI.
The Russian government will support the “development of ethical rules for human interaction with artificial intelligence.”
The Lithuanian AI strategy proposes establishing an AI Ethics Committee and supporting research to minimize bias in AI systems. On a more practical level, it aims to create “quality marks for relevant companies that abide by standards set forth by the AI Ethics committee”.
The Slovak government will “regulate legal liability as well as related insurance frameworks for technological and innovative companies for their innovations so that, in the event of possible errors and risks, they worked on their removal in order to ensure trustworthy use of AI and its responsible deployment”. It will also adopt legislation that will “enhance communication and cooperation between state authorities and technological companies in order to better address social and other risks of digital transformation for citizens”.
The Romanian and Estonian authorities were rather modest in their proposals, leaving anti-discriminatory measures to general ethical or “responsible usage” guidelines.
While we believe that all the AI strategies described in this paper were elaborated with the best intentions, some of them are far too modest in addressing the need for transparency and for concrete anti-discriminatory measures, including in the form of regulation. The deepest reflection on these topics is visible in the Slovak and Serbian strategies, and to some extent in the Lithuanian one. They are worth discussing in depth and using as examples of strategies that openly and concretely address human rights concerns in the development of AI technologies.