Artificial Intelligence Against Financial Crime
By Victoria Sztanek
Introduction
While artificial intelligence (AI) may be a prevailing buzzword, its practical application in financial institutions' compliance programmes is gaining significant traction. AI can help create efficiencies across different segments of the anti-financial crime workflow, including Know Your Customer (KYC) and onboarding, customer due diligence, screening, and transaction monitoring. It can also facilitate the risk-based approach advocated by regulators and industry leaders by providing compliance teams with a more accurate view of a client's risk profile. With this more precise view, financial institutions can better understand the money laundering and terrorist financing risks they are exposed to and prioritise resources towards high-risk scenarios. For these reasons, AI adoption is growing among both traditional financial services firms and newer fintech companies.[1] Yet AI's implementation is not without its controversies, sparking protective measures in some jurisdictions and guidance from leading industry bodies regarding governance. Against the backdrop of AI's growing relevance, this article looks at how these advanced technologies impact the landscape of financial crime prevention.
The AI-advantage in financial crime prevention
Perhaps the most significant advantage of AI technology in anti-financial crime compliance is its ability to improve risk identification. AI tools excel at processing vast volumes of information and uncovering complex networks and connections across disparate data sets that would otherwise go unnoticed by human analysts. Machine learning, a subset of AI that improves through adaptive learning, is especially adept at spotting unusual patterns or anomalies that might indicate illicit activity. For example, it can help surface transactions and networks of individuals that reveal criminal activities like money muling, which often involves multiple small transactions of equal amounts.
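To make the money-muling pattern concrete, the sketch below flags senders who split small, equal-amount payments across several recipients. All account names, thresholds, and transactions are invented for illustration; a production machine-learning model would learn such patterns from far richer features rather than a hand-written rule like this.

```python
from collections import defaultdict

# Hypothetical transactions: (sender, receiver, amount)
transactions = [
    ("acct_A", "mule_1", 950.0),
    ("acct_A", "mule_2", 950.0),
    ("acct_A", "mule_3", 950.0),
    ("acct_B", "shop_1", 950.0),   # same amount, but a single recipient
    ("acct_C", "shop_2", 4200.0),
    ("acct_D", "shop_3", 17.5),
]

SMALL_AMOUNT = 1000.0   # illustrative ceiling for a "small" transaction
MIN_RECIPIENTS = 3      # fan-out suggesting a possible mule network

def flag_possible_muling(txns):
    """Flag (sender, amount) pairs where one sender sends the same
    small amount to several distinct recipients -- a simplified
    stand-in for a pattern an ML model might learn."""
    fan_out = defaultdict(set)  # (sender, amount) -> set of recipients
    for sender, receiver, amount in txns:
        if amount <= SMALL_AMOUNT:
            fan_out[(sender, amount)].add(receiver)
    return {key: sorted(recipients)
            for key, recipients in fan_out.items()
            if len(recipients) >= MIN_RECIPIENTS}

alerts = flag_possible_muling(transactions)
print(alerts)  # only acct_A's three equal 950.0 payments are flagged
```

The point of the sketch is the shape of the signal, repeated equal amounts fanned out across accounts, not the hard-coded thresholds, which a learned model would replace with data-driven boundaries.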
Because of AI’s ability to do this at scale, AI-enhanced tools not only better detect suspicious activity but also increase efficiency and accuracy by automating tasks and reducing the number of false alerts for analysts. False positives still account for a significant portion of anti-money laundering alerts, creating a financial and resource strain for firms[2] and negative customer experiences. By reducing this burden, compliance teams can focus on genuine instances of suspicious activity instead of sifting through false alerts, furthering anti-financial crime efforts.
Additionally, AI helps financial institutions take a more proactive approach. For example, AI-driven transaction monitoring enables firms to respond swiftly to fraud or money laundering indicators, allowing for a real-time approach to risk management. Conversely, traditional rules-based systems are inherently reactive, only flagging risks after they have already occurred or once a transaction fits into pre-established patterns. This AI-enabled proactive stance translates into a more robust risk management programme.
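The contrast between the two approaches can be illustrated with a toy example. The static rule below only fires once a transaction matches a pre-established pattern (a fixed threshold), while the risk score weighs the transaction against the customer's own history and can surface unusual activity immediately. The profile fields, weights, and thresholds are all invented for the sketch.

```python
def rules_based_flag(txn):
    """Traditional static rule: fires only when a transaction
    matches a pre-set pattern (here, a fixed amount threshold)."""
    return txn["amount"] > 10_000

def risk_score(txn, profile):
    """Illustrative behavioural score: compares the transaction to
    the customer's usual activity. Weights are invented for the
    sketch, not taken from any real scoring model."""
    deviation = abs(txn["amount"] - profile["avg_amount"]) / max(profile["avg_amount"], 1)
    new_country = 0.0 if txn["country"] in profile["usual_countries"] else 0.5
    odd_hour = 0.3 if txn["hour"] < 6 else 0.0
    return min(1.0, 0.4 * min(deviation, 1.0) + new_country + odd_hour)

# A modest payment that is nonetheless highly unusual for this customer
profile = {"avg_amount": 80.0, "usual_countries": {"GB"}}
txn = {"amount": 2_500.0, "country": "RU", "hour": 3}

print(rules_based_flag(txn))               # False: under the static threshold
print(round(risk_score(txn, profile), 2))  # 1.0: maximal score despite the modest amount
```

The static rule misses the transaction entirely, while the behavioural score flags it at once, which is the real-time, proactive posture described above.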
Challenges of AI in anti-financial crime
Despite these enormous potential benefits, AI's adoption faces significant obstacles. One major concern is the risk of perpetuating biases, especially when a model trains on skewed or incomplete data. These biases can lead to the unfair profiling of individuals based on factors like nationality, ethnicity, age group, or geographic location, potentially leading to financial exclusion or unfair denial of services. For instance, if AI models are trained on data that over-represents a particular nationality as being associated with financial crimes, they might disproportionately flag the financial activities of those belonging to that nationality.
Another critical challenge is explainability and transparency. AI models that utilise deep learning can become ‘black boxes’, making decisions that are extremely difficult to decipher. This opaqueness poses a problem as regulators require justifications for decisions and a clear understanding of rationale. Technical complexities and concerns with safeguarding data privacy further intensify the challenges of AI’s use.
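The bias mechanism described above can be shown in miniature. In the sketch below, the training labels are deliberately skewed: group "X" was historically reviewed far more often, so positive labels over-represent it regardless of actual behaviour. A naive model that learns only per-group flag rates inherits that skew directly. The groups, counts, and labels are entirely hypothetical.

```python
from collections import Counter

# Deliberately skewed, hypothetical training labels: nationality "X"
# was investigated far more often, so it accumulates more positives.
training = [("X", 1)] * 40 + [("X", 0)] * 60 + [("Y", 1)] * 4 + [("Y", 0)] * 96

def learned_prior(data):
    """A naive model that learns only the per-group flag rate --
    exactly the shortcut a poorly validated model can fall into."""
    pos, tot = Counter(), Counter()
    for group, label in data:
        tot[group] += 1
        pos[group] += label
    return {g: pos[g] / tot[g] for g in tot}

rates = learned_prior(training)
print(rates)  # a tenfold disparity learned from skewed labels alone
```

Nothing in the "model" looks at behaviour; the tenfold gap in flag rates comes purely from who was labelled in the past, which is why bias testing and representative training data matter.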
As the threat landscape evolves rapidly, bad actors increasingly use AI tools to further their illicit activities. One report by Europol warned that criminal use of large language models like ChatGPT could accelerate crime in areas such as fraud and social engineering, disinformation, and cybercrime.[3] Another recent global study of financial institutions found that more than half of respondents saw an increase in financial crimes involving advanced technologies like AI.[4] These trends highlight a notable shift towards more sophisticated and professional criminal methods, marked by the rising use of advanced tools to circumvent controls and enhance criminal operations.
Regulatory perspectives
Overall, industry bodies have supported adopting new technologies like AI and machine learning. The global money laundering and terrorist financing watchdog, the Financial Action Task Force (FATF), released a report in 2021 noting these technologies' ability to make processes “faster, cheaper, and more effective”.[5] The FATF recognised AI's strengths in better identifying risks and monitoring suspicious activity through outlier detection and improved data analysis, while also discussing areas of concern like financial exclusion. More recently, the Wolfsberg Group, a non-governmental association of thirteen global banks, published a document outlining its support for AI and machine learning and principles for responsible use, especially regarding governance.[6]
Many national regulators have echoed this positive sentiment. The Monetary Authority of Singapore (MAS) has long supported innovation and already uses machine learning to identify fraud and other suspicious activities.[7] In the United States, the Financial Crimes Enforcement Network (FinCEN) has openly encouraged banks to consider innovative approaches to meeting their anti-money laundering obligations.[8] However, awareness of the potential challenges has led to mitigation measures such as the European Union's forthcoming AI Act and the United States' Blueprint for an AI Bill of Rights, which outlines principles for ethical AI use, such as algorithmic discrimination protections. These developments reflect a careful approach towards AI adoption, with regulators and policymakers striving to mitigate risks while encouraging financial institutions to leverage its advantages.
Horizon scanning
Looking forward, AI’s role in combating financial crime presents both opportunities and obstacles. While AI enhances risk identification and operational efficiency, issues like bias and a lack of transparency remain. Efforts are being made to tackle these challenges, evinced by a growing trend towards ‘explainable AI’ and measures to actively counteract bias. With criminals exploiting ever more sophisticated technologies, incorporating AI as a defence against new types of crime becomes increasingly critical. The challenge for financial institutions is finding ways to harness AI's capabilities while proactively reducing the potential harms.