HSE University Anti-Corruption Portal
A Report on Corruption Risks Arising from the Use of Artificial Intelligence Published

Transparency International (TI), an international not-for-profit organisation, has released a working paper on the corruption risks of artificial intelligence (AI).

The paper stresses that AI is increasingly used in various areas, including anti-corruption. In particular, AI has been used to:

  • forecast corruption offences on the basis of news reports, police archives and financial statements;
  • gather information on potential violations of rules by members of parliament, for example by using bots to trace their social media posts on expense reimbursements, data that can then be analysed further, for instance by investigative journalists;
  • detect money laundering, including the laundering of proceeds of corruption, through the analysis of big data on financial transactions (see the sketch after this list), etc.
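To make the last use case more concrete, the following is a minimal, purely illustrative sketch of anomaly detection on financial transactions; the feature set, thresholds and data are invented for illustration and are not drawn from the TI paper.

```python
# Hypothetical sketch: flagging unusual financial transactions as candidates
# for anti-money-laundering review. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transaction features: amount, hour of day, transfers per day.
normal = rng.normal(loc=[200.0, 14.0, 2.0], scale=[50.0, 3.0, 1.0], size=(1000, 3))
suspicious = np.array([[9500.0, 3.0, 40.0],   # large night-time transfer burst
                       [8700.0, 2.0, 35.0]])
transactions = np.vstack([normal, suspicious])

# IsolationForest labels outliers as -1; flagged items would go to human
# analysts for review, not be treated as proof of wrongdoing.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} transactions flagged for manual review")
```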

However, in spite of its considerable potential, the use of AI can also entail negative consequences:

  • unintended ones, stemming from biased data, flawed algorithms or irresponsible implementation;
  • deliberate ones, including the abuse of AI system functions by officials seeking personal gain.

The TI experts explore the latter group of negative effects in greater detail, identifying the following forms of abuse:

1) Development of AI systems to be used for corrupt purposes.

The risk of misuse can arise as early as the development stage: AI systems may be designed from the outset for subsequent use in illegal activities, in particular for:

  • producing fakes aimed at discrediting and/or intimidating officials' political opponents, including with the use of deepfake technology and similar tools;
  • conducting computational propaganda: on social networks, for instance, bots can pose as real users, promoting political agendas and manipulating public opinion, including on contentious issues related to anti-corruption, etc.

2) Manipulation of the code or training data for corrupt purposes.

AI systems originally created in the public interest can also be misused. In this case, dishonest officials can exploit their flaws, for instance by resorting to algorithmic capture, i.e. the manipulation of AI algorithms so that they systematically favour certain persons, including in exchange for a reward from those individuals. In particular:

  • in forecasting a patient's probability of surviving COVID-19, this approach can be used to prioritise the provision of medical services to selected persons;
  • in recruitment or university admission, AI systems that analyse CVs, test results, interview transcripts, etc. can be manipulated to select certain candidates;
  • in public procurement, designing the "right" algorithm can allow a certain company (or companies) to keep winning bids in the future, etc.
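To illustrate what algorithmic capture might look like in practice, here is a deliberately simplified, hypothetical sketch of a procurement bid-scoring function with a hidden bonus for a favoured vendor, together with a naive audit that recomputes the ranking without the suspicious term; all vendor names, weights and numbers are invented and do not come from the TI paper.

```python
# Hypothetical sketch: a procurement bid-scoring function with a hidden bias
# ("algorithmic capture") and a naive audit that compares winners with and
# without the suspicious term. All names and numbers are invented.

FAVOURED_VENDORS = {"vendor_042"}  # hypothetical captured parameter

def score_bid(vendor_id: str, price: float, quality: float, captured: bool = True) -> float:
    """Lower price and higher quality should win; the hidden bonus corrupts that."""
    base = quality * 10.0 - price * 0.0001
    if captured and vendor_id in FAVOURED_VENDORS:
        base += 30.0  # hidden bonus: the essence of algorithmic capture
    return base

bids = [
    ("vendor_007", 90_000.0, 8.5),
    ("vendor_042", 120_000.0, 6.0),   # worse offer, but favoured vendor
    ("vendor_311", 95_000.0, 8.0),
]

# A basic audit: rank bids with and without the hidden term and compare winners.
with_capture = max(bids, key=lambda b: score_bid(*b, captured=True))
without_capture = max(bids, key=lambda b: score_bid(*b, captured=False))
print("winner (deployed system):", with_capture[0])
print("winner (bias removed):   ", without_capture[0])
```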

3) Use of existing AI systems for corrupt purposes.

Finally, officials can use existing AI systems to implement corruption schemes.

The abuse of Pegasus, a tool developed to counter terrorism but used by officials to spy on, threaten and intimidate political opponents, business competitors and investigative journalists, is a telling example of this practice.

Additionally, existing AI systems can be manipulated through microtargeting, i.e. the tailoring of content to small groups of people on the basis of their preferences: for instance, politicians may abuse their office by resorting to microtargeted advertising campaigns to promote their parties (a sketch of the underlying mechanism follows below).
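As a rough, hypothetical illustration of how such personalisation scales, the sketch below clusters people by invented preference vectors and attaches one tailored message to each cluster; neither the preference dimensions nor the messages come from the TI paper.

```python
# Hypothetical sketch of how content personalisation scales in microtargeting:
# cluster people by preference vectors and attach a tailored message to each
# cluster. The preference dimensions and messages are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic preference vectors: interest in (economy, environment, security).
preferences = rng.random((300, 3))

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(preferences)

# One tailored message per cluster: the same mechanism that makes legitimate
# advertising efficient also makes political manipulation cheap to scale.
messages = {0: "economy-focused ad", 1: "environment-focused ad", 2: "security-focused ad"}
for cluster_id in range(3):
    group_size = int(np.sum(clusters == cluster_id))
    print(f"cluster {cluster_id}: {group_size} people -> {messages[cluster_id]}")
```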

The TI paper highlights that the following factors can facilitate the misuse of AI systems by officials:

1. technical:

  • ability of AI systems to operate autonomously;
  • lack of transparency in AI operations, especially in the case of machine learning;
  • scalable personalization of content (for example, in the microtargeting process);

2. human:

  • diffusion of responsibility by shifting the blame to AI systems;
  • high likelihood of escaping liability due to the difficulty of identifying the person responsible for an AI system's incorrect functioning;
  • psychological distance from the victims.

Consequently, the authors of the paper put forward the following recommendations to counter abuse and corruption in the use of AI systems:

  • develop and adopt a legal framework regulating the use of AI systems, as well as guiding principles for their ethical development and implementation, taking into account the recommendations of international organisations, such as the documents of the European Commission's High-Level Expert Group on AI, the OECD and UNESCO;
  • ensure transparency and accountability of the training data for AI systems and of their code;
  • conduct independent audits of AI systems, in particular, with the engagement of civil society organisations such as the Algorithmic Justice League or Algorithm Watch;
  • ensure the interoperability of the programming languages and frameworks used in machine learning in order to facilitate checks, in particular by resorting to ONNX (Open Neural Network Exchange), an open format for representing deep learning models (see the sketch after this list);
  • train the individuals engaged in developing, implementing and auditing AI systems, including in matters of compliance with ethics and anti-corruption principles, and/or adopt codes of conduct applicable to these persons;
  • raise awareness, among the general public and, above all, the employees of organisations that implement AI systems, of the fact that AI systems cannot independently disclose information about violations and of the importance of whistleblowing.
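As a brief illustration of the interoperability recommendation above, this hypothetical sketch exports a small PyTorch model to the open ONNX format and runs the standard structural check an external auditor could apply; the model architecture and file name are invented.

```python
# Hypothetical sketch: exporting a small scoring model to the open ONNX format
# so that auditors can inspect it with framework-independent tooling.
import torch
import torch.nn as nn
import onnx

# A stand-in for some decision-support model used by a public body (invented).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

dummy_input = torch.randn(1, 8)
torch.onnx.export(
    model, dummy_input, "scoring_model.onnx",
    input_names=["features"], output_names=["score"],
)

# An auditor can reload the file and verify its structure without access to the
# original training code or framework.
audited = onnx.load("scoring_model.onnx")
onnx.checker.check_model(audited)
print("ONNX model passed the structural check")
```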
Tags
ICT
