By Marina Antoniou | Risk Management Expert | United Kingdom
Integrating Gen AI into regulatory compliance and risk management has been a transformative shift, and organisations are constantly looking for opportunities to leverage this technology.
Gen AI use cases in Risk Management
Gen AI has a number of applications in finance, such as supporting compliance monitoring, fraud detection and anti-money laundering (AML) investigations.
Compliance monitoring: Gen AI can compare, combine and extract regulatory requirements across jurisdictions. It can also be used to update internal organisational policies and procedures to comply with new or changed regulatory requirements.
Fraud: Gen AI can help combat credit card fraud. For example, it can produce synthetic data, such as fake card numbers, which can be used to test monitoring systems against actual fraudulent transaction patterns and refine risk strategies accordingly.
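To make the synthetic-data idea concrete, the sketch below generates fictitious card numbers that pass the standard Luhn checksum, so they behave like real numbers inside a test pipeline without belonging to anyone. The `400000` prefix and function names are illustrative assumptions, not a real issuer range or a specific vendor's API.

```python
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit for a card number missing its last digit."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:          # rightmost digit of the partial number is doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def synthetic_card_number(prefix: str = "400000", length: int = 16) -> str:
    """Generate a Luhn-valid but fictitious card number for testing fraud controls."""
    body = prefix + "".join(random.choices("0123456789", k=length - len(prefix) - 1))
    return body + luhn_check_digit(body)
```

A fraud team could seed a test dataset with thousands of such numbers and verify that monitoring rules flag anomalous activity on them without touching production card data.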
AML: In today’s digital age, there is a surge in financial crime, particularly money laundering. Money launderers are becoming increasingly sophisticated, and traditional AML methods can struggle to keep up. The evolution of Gen AI could introduce more sophisticated detection capabilities.
Machine Learning vs Gen AI
Machine Learning models rely on predefined features, rules and specific training data. Large language models (LLMs), by contrast, are pre-trained on extensive datasets, allowing them to generalise across various tasks. As such, LLMs are more flexible and adaptable: they can learn from their own mistakes, inaccuracies and new data. They are also far more scalable and could achieve higher overall success rates.
Humans versus LLMs
Gen AI models can be used to augment human expertise. A collaborative approach is needed, in which the integration of LLMs provides powerful tools for professionals to perform their jobs faster and more efficiently. The key is to combine the strengths of both Gen AI models and humans to create a holistic, robust risk management framework.
Gen AI and AML
– LLMs and Big Data
LLMs work by analysing vast amounts of text data, learning patterns and structures in language, and relationships within data. LLMs can process and generate AML-related content in response to user prompts or predefined criteria. To produce reliable output, LLMs need to be trained, which involves feeding the models massive datasets drawn from various sources of unstructured data.
Data sources of relevance to AML include:
Web/social media: LLMs analyse vast amounts of data from news and social media sources, identifying potential risks.
Regulatory requirements: The AML regulations firms need to comply with.
Customer data and transactions: The model develops a deeper understanding of customer behaviour by analysing transactions and their correlations.
Sanctions lists: Scanning data for links to sanctioned entities.
Internal data: Customer data received by relationship managers.
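As a minimal illustration of the sanctions-list source above, the sketch below screens a customer name against a list using fuzzy string matching. The entries, the similarity threshold and the helper names are all illustrative assumptions; real screening would use official lists (e.g. OFAC, UN, EU) and far more robust matching than a simple ratio.

```python
from difflib import SequenceMatcher

# Hypothetical sanctions entries, for illustration only.
SANCTIONS_LIST = ["Ivan Petrov", "Acme Shell Holdings Ltd", "Maria Gonzalez"]

def normalise(name: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting differences don't hide a match."""
    return " ".join(name.lower().split())

def screen_name(customer_name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return sanctions entries whose similarity to the customer name meets the threshold."""
    hits = []
    for entry in SANCTIONS_LIST:
        score = SequenceMatcher(None, normalise(customer_name), normalise(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits
```

In practice an LLM layer could sit on top of such a matcher, resolving transliterations and aliases that pure string similarity misses.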
Before these data sources can be used, we need to ensure that high-quality, bias-free data is used. If the dataset is incomplete or biased, the LLM output will be too. It is therefore very important to invest in data pre-processing and prepare the data used to train the LLM. The aim is to train the LLM to think like money launderers: to scan datasets, identify potential vulnerabilities, simulate various laundering scenarios and, ideally, act as a preventative measure that anticipates and blocks illicit activities.
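The pre-processing step described above can be sketched as follows. This is a toy example: the field names (`customer_id`, `amount`, `description`) are assumptions about a transaction record, and a real pipeline would add many more checks (schema validation, outlier handling, bias audits).

```python
import re

def preprocess_records(records: list[dict]) -> list[dict]:
    """Minimal pre-processing sketch: drop incomplete rows, normalise free text,
    and deduplicate before the data reaches a training pipeline."""
    seen = set()
    cleaned = []
    for rec in records:
        # Drop records missing required fields -- incomplete data skews training.
        if not rec.get("customer_id") or rec.get("amount") is None:
            continue
        # Normalise whitespace and case in free-text fields.
        desc = re.sub(r"\s+", " ", rec.get("description", "")).strip().lower()
        key = (rec["customer_id"], rec["amount"], desc)
        if key in seen:  # remove exact duplicates
            continue
        seen.add(key)
        cleaned.append({**rec, "description": desc})
    return cleaned
```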
LLM Output and Limitations
An LLM can produce diverse output, including items such as:
- AML alerts: Historically, a common issue in traditional AML screening processes has been the high volume of false-positive alerts. Gen AI can distinguish between false positives and genuine alerts with a higher degree of precision. In addition, LLMs could identify patterns indicative of fraudulent activity and generate real-time alerts.
- Complex networks and visualisation: Money laundering can involve complex networks, structures and relationships. The LLM examines the data in depth and could identify hidden clues and create visualisations that reveal complex money laundering networks. AML professionals can then use this information to investigate.
- Scenario simulation: LLMs can use massive amounts of data to reconstruct different types of money laundering scenarios by creating different combinations or simulations of fund transfers. AML professionals can then focus their investigations on the higher-risk scenarios.
- Streamlining tasks: LLMs could help automate and streamline time-consuming tasks in AML operations, such as transaction monitoring, customer due diligence, etc.
- Educational material: The LLM could also generate educational materials for new joiners.
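The alert-prioritisation idea in the list above can be sketched with a toy scoring function. The weights, fields and thresholds here are invented for illustration; in practice an LLM or ML model, trained on historical alert outcomes, would replace this hand-written heuristic.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float
    country_risk: float        # 0 (low) to 1 (high) -- illustrative scale
    prior_false_positives: int # how often similar alerts proved benign

def triage_score(alert: Alert) -> float:
    """Toy prioritisation score: higher means investigate first.
    A history of false positives on similar alerts lowers the score."""
    amount_factor = min(alert.amount / 10_000, 1.0)
    fp_penalty = 1 / (1 + alert.prior_false_positives)
    return round(0.5 * amount_factor + 0.3 * alert.country_risk + 0.2 * fp_penalty, 3)
```

Sorting a day's alerts by `triage_score` in descending order gives investigators a queue where likely false positives sink to the bottom, which is exactly the effect the Gen AI tooling aims for at scale.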
At the same time, the following limitations need to be addressed to ensure a more meaningful output from the LLMs:
- ‘Black Box’ issue: LLMs often lack transparency and interpretability, making it difficult to understand how they generate responses. This is where explainable AI comes into play: techniques that provide insight into model behaviour and the decision-making process.
- ‘Hallucinations’: A sophisticated model could still generate inaccurate or misleading information, so ongoing LLM monitoring is vital.
- Inputs: LLMs can exhibit unpredictable behaviour, especially when exposed to new inputs. As such, robust testing and validation procedures are required.
- Data: Adherence to data privacy regulations across jurisdictions must be ensured. This can be addressed by applying data anonymisation and encryption techniques. Training datasets can also be biased, so a detailed review of the data and output is needed.
- Continuous training and evaluation: To be effective, LLMs need to be continuously updated and trained on the latest money laundering techniques, especially in light of emerging Gen AI-enabled money laundering schemes.
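The anonymisation point above can be illustrated with a simple pseudonymisation sketch: direct identifiers are replaced with a keyed hash so records remain joinable for training without exposing raw customer IDs. The key value here is a placeholder assumption; in practice it would live in a secrets manager, and pseudonymisation alone does not satisfy every privacy regime.

```python
import hashlib
import hmac

# Assumption: in a real deployment this key is stored in a vault, not in code.
SECRET_KEY = b"replace-with-a-vaulted-key"

def pseudonymise(customer_id: str) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA256 digest.
    Deterministic, so the same customer maps to the same token across records."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]
```

Because the mapping is keyed rather than a plain hash, an attacker without the key cannot simply hash candidate IDs to reverse the tokens.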
Conclusion
The integration of Generative AI for fraud, money laundering and risk detection purposes has brought about remarkable advancements.
Challenges remain, but they can be addressed through various techniques and by drawing on the input and expertise of trained staff. Regulatory standards, such as the EU AI Act, could also lead the way in the ethical and responsible use of AI and address some of these challenges.
With the increasing complexity of modern threats, mitigating risks effectively is becoming more challenging. This is technology-assisted decision-making, and it requires the added value of human oversight over LLM processes and output. Collaboration between AI experts, data scientists and AML professionals is essential to ensure that growing threats are effectively addressed through the use of LLMs.