HSE University Anti-Corruption Portal
AI as a Means to Prevent Corporate Corruption

The Coalition for Integrity (C4I) has released a report on how to use machine learning to assess corruption risks and develop a corporate compliance system.

The report attempts to find out how the development (or acquisition) of artificial intelligence (AI) technology based on machine learning solutions can be useful for preventing corporate corruption and what measures a company should adopt to introduce it.

Possibilities of using AI

Although neither international instruments nor national legislation usually contains direct requirements to use AI to counter corruption, employing such technologies at the corporate level may be appropriate for a number of reasons.

In particular, the C4I experts highlight that the use of AI can strengthen the analysis of a corporate corruption prevention system. The updated 2020 version of the US Department of Justice (US DOJ) guidance Evaluation of Corporate Compliance Programs recommends that prosecutors check whether compliance and control officers have the direct or indirect access to relevant sources of data needed for timely and effective monitoring and/or testing of the company’s policies, controls, and transactions, whether there are any obstacles impeding access to those sources and, if so, what the company is doing to overcome them. A recent deferred prosecution agreement between the US DOJ and the Goldman Sachs Group (one of the major cases of violation of the US Foreign Corrupt Practices Act, covered in more detail in our infographics) stressed that the company had no electronic surveillance of the correspondence or activities of the head of the organisation that would have made it possible to detect that a third party previously accused of corruption was involved in a specific transaction.

Besides that, many organisations, particularly transnational ones operating in dozens of countries, have to work with large amounts of data and take a myriad of different factors into account when analysing them. Accumulating and processing such data without information technologies, including AI technologies, may be a daunting challenge for anti-corruption divisions. Machine learning, which is already widely used in the corporate environment to counter various kinds of crime, including fraud, identity theft and money laundering, makes it possible to detect unlawful activities (or the risks of such conduct) more effectively.

One vendor that offers fraud detection solutions to banks asserted that it helped a well-known European bank reduce its false positives by 60 percent and increase actual fraud detection by 50 percent. Another global bank, which regulators had directed to review some 20 million transactions dating back several years, used a machine-learning approach that ultimately not only satisfied the regulator, but also significantly decreased the number of alerts being generated and increased the productivity of the remaining alerts.

In the framework of corruption prevention, machine learning solutions can be used by an organisation to:

  • assess corruption risks, both enterprise and client/transactional risks;
  • conduct third-party due diligence and payments procedures;
  • carry out periodic testing of the anti-corruption compliance system and continuously improve it: businesses constantly evolve as customers, markets, products, services, and relationships change, and the corporate anti-corruption strategy should be adjusted accordingly.

Assessing the appropriateness of employing AI 

Before incorporating an anti-corruption machine learning solution into the corporate corruption prevention strategy, the organisation should address a number of issues concerning the appropriateness of employing AI.

1.  Will machine-learning-based AI be more appropriate than knowledge-based systems?

The company should assess its objectives and the volume of data available for analysis: if the scale of the organisation’s activities implies the need to analyse a large amount of information and to take numerous external factors into account, machine learning will make it possible to promptly detect links between them.

2.  Is the anti-corruption division of the company adequately staffed? 

However quickly AI may generate outputs, they will be useless if the company has insufficient staffing to make timely and effective use of the output of either type of system.

3.  Should AI address a broader range of risks than anti-corruption?

Any corporate risk management program needs to factor in a variety of risks that may arise throughout the activities of the company: strategic, operational, compliance, and information security risks, to name just a few. It would make little sense for a company to build a standalone solution addressing only third-party bribery and corruption risks, while using other solutions for other categories of third-party risks. A company may need to consider starting with a smaller and more discrete solution regarding corruption risks and make sure that that solution is working effectively before expanding its capabilities.

4.  What potential costs could a machine learning solution entail, and what return on investment could it bring?

The C4I experts stress that building a machine learning solution may involve significant costs, ranging from six figures to several million dollars depending on the complexity of a solution crafted to meet a particular company’s unique risks. At the same time, the use of AI may provide the company with a long-term return on investment. In this context, the authors believe that such expenditure may be warranted if the company concludes that (a) its current-state systems cost it multiple millions of dollars without reducing the relevant types of risk and (b) a machine learning solution may significantly reduce compliance costs.

Steps towards AI development 

Once the company has determined that there is a business case for upgrading its anti-corruption program with machine learning, the company’s development of that solution should proceed, according to the C4I experts, in a series of five major steps. 

1.  Framing the problem. This stage includes six key steps: 

  • define the anti-corruption problem in machine learning terms: determine the output the algorithm should produce and the prediction task to be performed, for instance under the anomaly detection approach (also known as outlier analysis), which involves searching for and identifying instances that do not conform to the typical data in a dataset, and define the data that could be used in a machine learning solution;
  • see what types of structured and unstructured data the company has that could be used in that solution; in this case, it is likely that the company will have “raw” data, which cannot be employed in a solution; consequently, it will have to allocate resources for collecting, curating, and labeling the data, as well as for designing the architecture of a machine learning solution;
  • design the data for the model, meaning that the company should anticipate how data for the model should be designed to make useful predictions, which data categories are necessary, how easy or difficult it would be to access certain data categories, whether it needs the full volume of data in those categories etc., and create a dataset with the target values for further supervised learning;
  • identify all sources of data, both structured and unstructured, that the company would potentially include in a possible model; here, it is important to know that if a model were to draw on data that have little or no relevance to the specific problem under consideration, it could potentially find patterns that human beings could not, but also could generate predictions that have a high number of false positives and negatives;
  • determine and prioritize, at least at the outset of problem framing, easily obtained inputs; the authors stress that at this stage no company needs to try to identify all potential inputs for the model it is developing; rather, its initial focus can be on one to three inputs that it can easily obtain (e.g., data relating to gifts and entertainment expenses and contracts with prospective and current third parties) and that it believes would produce a reasonable initial outcome;
  • determine quantifiable outputs of a model: the company should bear in mind that algorithms and neural networks generate quantifiable rather than qualitative outputs; this is why at this stage the company needs to decide what kinds of quantifiable outputs (e.g., numbers, labels, or clusters) it needs for specific purposes.
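
A minimal Python illustration of the outlier-analysis approach mentioned above: flag payments whose amounts deviate sharply from the rest of the dataset. The payment figures and the z-score threshold are purely hypothetical; a real solution would model many attributes jointly rather than a single column.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return the amounts whose z-score exceeds the threshold.

    A toy version of outlier analysis: instances far from the
    typical data in the set are flagged for review.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma > 0 and abs(a - mu) / sigma > threshold]

# Hypothetical vendor payments; the last one stands out sharply.
payments = [120, 130, 110, 125, 118, 122, 9_500]
print(flag_outliers(payments))  # [9500]
```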

2.  Constructing the dataset. Once the problem is framed, the company must determine a dataset (whether structured, unstructured, internal, or external) of sufficient volume to begin training the model. As a general proposition, the larger the dataset, the more complex the patterns that an anti-corruption machine learning solution can detect. In this context, when selecting the data the company should determine whether the data are relevant to the problem, accessible with reasonable effort, and clean (or capable of being cleaned within an acceptable timeframe and at acceptable cost).

3.   Transforming the data. Before a company can proceed to train an anti-corruption machine learning model, the next step in its development process is to engage in data transformation. Data transformation is a process in which the company takes data from its raw or normalized source state “and transforms it into data that’s joined together, dimensionally modeled, de-normalized, and ready for analysis”. The steps involved in data transformation can include changing data types, handling missing data, removing non-alphanumeric characters, and converting categorical data to numerical data.
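
The transformation steps named above can be sketched in Python; the field names and payment-type categories below are hypothetical, chosen only for illustration:

```python
import re

# Hypothetical payment-type categories used for integer encoding.
PAYMENT_TYPES = ["wire", "cash", "cheque"]

def transform_record(record):
    """Apply three of the transformation steps to one raw record:
    fill a missing amount, strip non-alphanumeric characters from the
    vendor name, and encode the categorical payment type as a number."""
    return {
        "amount": float(record.get("amount") or 0.0),                  # handle missing data
        "vendor": re.sub(r"[^0-9A-Za-z ]", "", record["vendor"]),      # drop non-alphanumerics
        "payment_type": PAYMENT_TYPES.index(record["payment_type"]),   # categorical -> numerical
    }

raw = {"amount": None, "vendor": "Acme & Co.!", "payment_type": "cash"}
print(transform_record(raw))  # {'amount': 0.0, 'vendor': 'Acme  Co', 'payment_type': 1}
```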

4.    Training the model. While machine learning discussions often refer to “the dataset” to be used, a company should recognize that the training process involves three distinct datasets: the training set, the validation set, and the testing set. The first, which constitutes the majority of the total data (around 60 percent), is also known as the historical dataset and “is the one used to train an algorithm to understand how to apply concepts such as neural networks, to learn and produce results and includes both input data and the expected output.” The second, the validation set, is used to select and tune the final machine learning model, while the third, the testing set, is used to test the trained model.
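
The three-way split can be sketched as follows; the 60/20/20 proportions follow the rough share mentioned above, and the helper itself is an illustrative assumption rather than anything prescribed by the report:

```python
import random

def split_dataset(rows, train=0.6, validation=0.2, seed=0):
    """Shuffle rows and split them into training, validation, and
    testing sets (roughly 60/20/20 by default)."""
    rows = rows[:]  # leave the caller's list untouched
    random.Random(seed).shuffle(rows)
    n_train = int(len(rows) * train)
    n_val = int(len(rows) * validation)
    return rows[:n_train], rows[n_train:n_train + n_val], rows[n_train + n_val:]

train_set, val_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 60 20 20
```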

The training phase is the critical point at which a company must ensure that the dataset on which it intends to draw for training, validating, and testing is sufficiently large to contribute to valid predictions. At least two significant problems can stem from a too-small dataset:

1) class imbalance: this is a circumstance in which there is one overrepresented class and one heavily underrepresented class in the dataset, and the task that the company has set for a machine learning solution is to detect a rare event; in essence, class imbalance is a lack of data diversity - that is, there is insufficient breadth and variety in the data labels and related attributes for effective training of the model on the variety of scenarios that it is expected to analyse and understand;

2) overfitting and underfitting: modeling errors in which the model performs well on the training dataset but poorly on the test dataset (overfitting), or does not do well with either the training dataset or the test dataset (underfitting).
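
One common mitigation for the class-imbalance problem, naive random oversampling of the minority class, can be sketched as follows. This technique is a standard remedy rather than one prescribed by the report, and note that duplicating rows balances the label counts but does not add genuine data diversity:

```python
import random

def oversample_minority(rows, label_key="label", seed=0):
    """Duplicate rows of underrepresented classes at random until
    every class matches the size of the largest one."""
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(row[label_key], []).append(row)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for group in by_class.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

# Hypothetical labels: "suspect" transactions are the rare event.
data = [{"label": "clean"}] * 8 + [{"label": "suspect"}] * 2
balanced = oversample_minority(data)
print(sum(row["label"] == "suspect" for row in balanced))  # 8
```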

Moreover, a company developing anti-corruption machine learning must be attentive to another major problem that can affect any machine learning solution, regardless of the size of the dataset. That problem is bias. The company should bear in mind that the data it uses are “always partial and biased”, as they are generated through a process of abstraction, so they are the result of human decisions and choices.

The stage of training the model concludes with evaluating its performance against such metrics as accuracy (the fraction of correct predictions made by the model), precision (the proportion of positive predictions that are true positives), recall (the proportion of actual positives that are identified correctly) and the F1 score (the balance between precision and recall).
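
The four metrics can be computed directly from confusion-matrix counts; the counts below are hypothetical:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from the counts of
    true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical evaluation of a corruption-risk classifier.
metrics = classification_metrics(tp=40, fp=10, fn=20, tn=130)
print(round(metrics["accuracy"], 2), round(metrics["f1"], 2))  # 0.85 0.73
```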

5.  Making predictions and assessing performance. The final step in developing an anti-corruption machine learning solution is to use the model to make predictions, i.e. to analyse the predictions the company’s model is making, to satisfy itself that there is sufficient goodness of fit (i.e., how closely the model’s predicted values match the observed or true values) and that there are no overfitting, underfitting, or bias concerns. This analysis should be conducted on a continuing basis: no machine learning model is static, and the company’s model will require retraining as new data become available and as external conditions change. The authors of the report stress that refining the model may take a number of months. The experts also remind companies of the need to document each step of the development, implementation, and revision of the solution, so that they are prepared to demonstrate its operation and effectiveness to auditors and regulatory agencies as needed.

Issues in using machine learning

Beyond the development process described above, it is recommended that organisations bear in mind a number of additional issues that will arise in the further operation of an anti-corruption machine learning solution, including:

  • data privacy: the collection, processing, use, and transfer of personally identifiable data by AI systems should comply with relevant data privacy laws;
  • cybersecurity: the company should ensure effective security to mitigate the risk of data theft and other, less obvious challenges such as unintentional memorization (circumstances in which models “memorize” certain text sequences and can auto-complete them after users enter a text prefix; for instance, the text prefix “My Social Security number is …” has caused a model to auto-complete the sequence with the number itself);
  • monitoring of machine learning: its implementation, at least at the outset, will require constant control over its results to detect wrong or undesirable outputs (for instance, ones discriminating against certain groups of individuals) in a timely manner, and to ensure that the model complies with the applicable law, including any amendments to it; to ensure such monitoring the organisation will have to hire additional staff or assign additional duties to its existing staff;
  • streamlining the existing governance and compliance structures and processes to ensure that the solution operates correctly: when data are collected under uniform rules across the organisation, a dataset of the necessary volume can be formed to make predictions covering all corporate divisions and areas of activity.

Examples of AI employment

The C4I report cites examples of several companies that have developed and successfully introduced machine-learning-based AI technology into their corruption prevention activities. One of them is Anheuser-Busch InBev (AB InBev), the largest brewer in the world. It operates in more than 80 countries and is consequently subject to numerous versions of anti-bribery and antitrust legislation, each with different requirements and obligations. For that reason, in 2015 the AB InBev leadership sought “to more uniformly manage risks that could emerge in different locations and focus on building a centralized compliance program”. To this end, it created a single enterprise-wide repository integrating data from finance, compliance, human resources, and other systems. That platform, eventually codenamed “Operation BrewRIGHT”, allows for more effective identification of transactions and third parties that pose a heightened corruption risk, based on a number of risk attributes (e.g., urgency of payment, payment to a political or state-owned entity, and high-risk vendor type) and the weighting of those attributes.
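
The attribute-weighting idea can be sketched as a simple additive score. The attribute names and weights below are hypothetical, as the actual BrewRIGHT weightings are not public:

```python
# Hypothetical risk attributes and weights (the real ones are not public).
WEIGHTS = {
    "urgent_payment": 2.0,
    "government_counterparty": 3.0,
    "high_risk_vendor_type": 2.5,
}

def risk_score(transaction):
    """Sum the weights of the risk attributes present on a transaction."""
    return sum(weight for attr, weight in WEIGHTS.items() if transaction.get(attr))

txn = {"urgent_payment": True, "government_counterparty": True}
print(risk_score(txn))  # 5.0
```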

BrewRIGHT allows for the detection of a number of different risks, such as money laundering, antitrust violations, conflicts of interest, third-party vendor risks, and travel and entertainment risks (the block “Free Beer”, whose name speaks for itself). Risk assessment includes different workflows that can be used either separately or in combination. For example, a workflow might begin with an algorithm that detects which suppliers are interfacing with the government and conclude with a profile and a risk score assigned to the vendor; that profile can be used to monitor whether certain economic activity is consistent with the purpose of the vendor as identified in the diligence process (i.e., was the vendor approved to perform activity A but ultimately paid to do activity B?); the risk score is then used to weigh and influence the risk scoring of each transaction with that vendor.

The solution reportedly has reduced the company’s costs associated with investigating suspect payments by millions of dollars: before BrewRIGHT, one investigation into a certain type of third-party vendor in three countries cost AB InBev about $1.8 million; in contrast, another investigation into the same type of vendor in six countries, using BrewRIGHT, cost about $250,000.

