Fraud and money laundering are no longer separate battlegrounds; they’re merging at breakneck pace as criminals exploit the gaps between them. What once seemed like static, well-defined patterns of illicit flow now morph under the influence of AI, social engineering and decentralised finance. In both the UK and EU, regulators such as the Financial Conduct Authority (FCA) and the newly established Anti-Money Laundering Authority (AMLA) increasingly demand that fraud detection and AML systems work hand in glove. This article examines five evolving fraud typologies that threaten to outpace legacy systems. It asks a pressing question: Can our machine-learning architectures adapt quickly enough to stay ahead of criminal innovation?

Authorised Push Payment (APP) Fraud

APP fraud has surged to become one of the most common scam types in the UK, with losses of over £450 million reported in recent years. Under new reimbursement rules from the UK's Payment Systems Regulator (PSR), customers can be reimbursed up to £85,000 for APP losses, with sending and receiving payment firms sharing the cost equally.

Unlike typical fraud, APP scams rely on the victim authorising the transfer, often through impersonation or social engineering. For example, a fraudster might pose as HMRC and demand immediate payment for a “penalty,” or advertise a non-existent designer bag on a marketplace, tricking the buyer into transferring funds. Since the payment is “authorised,” conventional AML systems struggle to flag it as suspicious.

To counter this, leading banks are piloting behavioural biometrics and contextual AI. This could include monitoring factors like typing rhythm, device fingerprinting, navigation flow, or hesitation in a transaction to detect signs of coercion. The goal is to intercept manipulative instructions before the payment is finalised.
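As a minimal sketch of the idea, the session signals mentioned above (typing rhythm, hesitation, navigation flow) can be compared against a customer's own historical baseline; a live session that deviates sharply may indicate coercion. The feature names and thresholds here are illustrative assumptions, not a production design:

```python
from statistics import mean, stdev

def session_anomaly_score(baseline: list[dict], session: dict) -> float:
    """Average absolute z-score of a live session against the user's history."""
    scores = []
    for feature in session:
        history = [s[feature] for s in baseline]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue
        scores.append(abs(session[feature] - mu) / sigma)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical historical sessions for one customer.
baseline = [
    {"keystroke_ms": 180, "hesitation_s": 2.0, "pages_visited": 5},
    {"keystroke_ms": 175, "hesitation_s": 1.8, "pages_visited": 6},
    {"keystroke_ms": 190, "hesitation_s": 2.2, "pages_visited": 5},
]
# A possibly coerced session: slow, hesitant typing and an unusually
# direct path to the payment screen.
live = {"keystroke_ms": 320, "hesitation_s": 9.5, "pages_visited": 2}
score = session_anomaly_score(baseline, live)
```

In practice banks would use far richer features and learned models rather than z-scores, but the principle is the same: the customer's own behaviour is the reference point, not a population average.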

In the UK and EU, real-time cross-bank data sharing is becoming essential. Under the EU’s Digital Operational Resilience Act (DORA) and the UK’s Economic Crime Plan, banks may soon be required to collaborate on fraud intelligence and jointly flag suspicious flows, thus helping to spot scams that cross institutional boundaries.

Trade-Based Money Laundering (TBML)

In recent years, criminals have begun abusing the so-called “green economy” to launder funds; fraudulent carbon credit trades are one notable example. In 2024, a €5 billion carbon trading scam was exposed in Europe, revealing how fictitious credits were used to move money under the guise of environmental investment.

The real challenge for ML systems is that financial transaction monitoring rarely ingests the wealth of trade data, including customs records, bills of lading, shipping manifests or IoT sensor data from ports. Without that context, over-invoicing or under-delivery schemes slip through.

True innovation requires an integrated ML approach that fuses financial flows with logistics intelligence: mapping container movements, reconciling declared weights and volumes against invoices, and spotting low-probability trade routes. A recent study proposes anomaly detection models combining supervised and unsupervised learning to flag suspect letters of credit and trade guarantees.
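To make one of these reconciliation checks concrete, here is a simplified sketch of an over-invoicing test: the implied unit price on an invoice is compared against a benchmark price for the declared commodity. The benchmark figures, tolerance, and commodity names are invented for illustration:

```python
# Hypothetical benchmark unit prices (USD per kg) for declared commodities.
BENCHMARK_PRICE_PER_KG = {"copper_cathode": 9.2, "cotton": 2.1}

def over_invoicing_flag(commodity, invoice_usd, declared_kg, tolerance=0.5):
    """Flag a shipment whose implied unit price exceeds the benchmark
    by more than `tolerance` (expressed as a fraction of the benchmark)."""
    benchmark = BENCHMARK_PRICE_PER_KG[commodity]
    implied = invoice_usd / declared_kg
    deviation = (implied - benchmark) / benchmark
    return deviation > tolerance, round(deviation, 2)

# An invoice of $460,000 for 20 tonnes of copper implies $23/kg,
# 150% above the benchmark: a candidate over-invoicing case.
flag, dev = over_invoicing_flag("copper_cathode",
                                invoice_usd=460_000, declared_kg=20_000)
```

A real system would of course use live commodity pricing, route data, and counterparty history rather than a static lookup table, but the check illustrates why customs and shipping data must reach the monitoring system at all.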

On the regulatory front, the EU’s AMLA is expected to centralise trade-finance data sharing across member states. In the UK, the Joint Money Laundering Intelligence Taskforce (JMLIT) already runs dedicated cells focusing on TBML, providing cross-industry alerts and intelligence sharing.

Money Mule Schemes

Criminals are becoming far more sophisticated in recruiting money mules, especially via gamified social media campaigns or micro-influencers targeting younger people under cost-of-living pressure. Many offers pitch the role as “easy money for helping with payments” or “just forwarding funds as a favour.” In the UK alone, banks detected over 39,000 accounts showing mule-like behaviours in 2022.

Traditional transaction-monitoring systems often flag only larger or unusual transfers. Criminals have responded by fragmenting funds, in other words moving them in small amounts across multiple digital wallets, challenger banks and instant payment rails, making detection far harder. The mules’ accounts may show perfectly ordinary inflows and outflows, individually benign, but harmful in aggregate.

To evolve, banks are piloting AI-driven network analysis, looking across institutions to detect clusters of mule behaviour. Firms that collaborate in real time can spot “mule chains” spanning banks before funds exit the system. In the UK, the National Fraud Strategy 2023 elevates mule detection as a priority. Meanwhile, in the EU, AMLA plans to create a “single access point” database of high-risk accounts to support cross-border intelligence on mule networks.
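The “mule chain” pattern described above can be sketched as a graph problem: follow transfers where most of the incoming value moves straight on, and flag paths that hop through several accounts. The transfer data, hop threshold, and pass-through ratio below are illustrative assumptions:

```python
from collections import defaultdict

# Toy cross-institution transfer records: (source, destination, amount).
transfers = [
    ("victim", "mule1", 5000),
    ("mule1", "mule2", 4900),
    ("mule2", "mule3", 4850),
    ("mule3", "exchange", 4800),
    ("alice", "bob", 120),  # ordinary payment, not part of a chain
]

def find_chains(transfers, min_hops=3, passthrough=0.9):
    """Return paths where funds hop through >= min_hops accounts,
    each hop forwarding at least `passthrough` of the value received."""
    graph = defaultdict(list)
    for src, dst, amt in transfers:
        graph[src].append((dst, amt))

    chains = []
    def walk(node, amt, path):
        extended = False
        for dst, out_amt in graph[node]:
            if out_amt >= passthrough * amt:  # most of the value moves on
                walk(dst, out_amt, path + [dst])
                extended = True
        if not extended and len(path) - 1 >= min_hops:
            chains.append(path)

    incoming = {dst for _, dst, _ in transfers}
    for src, dst, amt in transfers:
        if src not in incoming:  # start walks only at chain origins
            walk(dst, amt, [src, dst])
    return chains

chains = find_chains(transfers)
```

The hard part in reality is not the traversal but the data: each hop may sit at a different institution, which is exactly why the cross-bank collaboration mentioned above matters.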

Crypto-Enabled Fraud and Laundering

Cryptocurrencies are no longer niche but central to fraud and laundering networks. Criminals increasingly exploit DeFi platforms and NFTs not just to move funds but to post them as collateral for synthetic loans, obscuring origins. One real case involved wash-traded NFTs being pledged for loans, enabling illicit proceeds to be “cleaned” through repayment cycles that appeared legitimate.

This highlights blockchain’s transparency paradox: while every transaction is visible, pseudonymous wallets hide the ultimate owner. Traditional AML tools, built for bank transfers, struggle in this new environment.

Innovation is coming through graph AI and chain analytics, which map wallet clusters and link on-chain flows with off-chain identifiers such as IP addresses, exchange records or behavioural signals. These techniques are helping compliance teams untangle webs of micro-transactions that once looked impenetrable.
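At its simplest, the wallet-clustering step works by entity resolution: two wallets that share an off-chain identifier (an IP address, an exchange account) are merged into one cluster, typically with a union-find structure. The wallets and identifiers below are invented; this is a sketch of the linking logic only, not of any vendor's analytics:

```python
parent = {}

def find(x):
    """Find the cluster representative of x, with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    """Merge the clusters containing a and b."""
    parent[find(a)] = find(b)

# Observed (wallet, off-chain identifier) pairs -- hypothetical data.
observations = [
    ("0xabc", "ip:203.0.113.7"),
    ("0xdef", "ip:203.0.113.7"),       # same IP as 0xabc
    ("0xdef", "kyc:exchange_user_42"),
    ("0x999", "kyc:exchange_user_42"), # same exchange account as 0xdef
    ("0x111", "ip:198.51.100.9"),      # unrelated wallet
]

for wallet, ident in observations:
    union(wallet, ident)

# All wallets landing in the same cluster as 0xabc.
cluster = {w for w in ("0xabc", "0xdef", "0x999", "0x111")
           if find(w) == find("0xabc")}
```

Real chain-analytics platforms layer heuristics and graph ML on top of this, but the core move is the same: pseudonymity collapses once any one wallet in a cluster touches a known identity.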

In the UK, the FCA’s extended registration regime demands that crypto firms meet AML standards. In the EU, the Markets in Crypto-Assets (MiCA) regulation now requires crypto-asset service providers to adopt equivalent AML safeguards. Together, these rules ensure crypto faces the same scrutiny as traditional finance, but only if machine-learning systems adapt fast enough.

Synthetic Identity Fraud

Fraudsters are now leveraging generative AI to mass-produce synthetic identities, combining deepfake photos, fabricated documents and manufactured personas that look astonishingly real. In the UK, synthetic identity fraud soared by 60% in 2024 and now constitutes nearly a third of all identity fraud cases.

Legacy KYC systems, which rely on document checks and basic data matching, struggle to spot high-quality synthetic identities. These identities often combine real elements (addresses, dates of birth) with invented components, making them appear credible and exploiting gaps in onboarding and credit systems.

To counter this, many banks are investing in multi-factor identity proofing: biometric liveness detection, device and user behaviour analytics, and AI tools that detect subtle digital manipulation.
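One simple signal that complements these checks is attribute reuse: synthetic identities often recycle a small pool of genuine data points (a date of birth, a phone number) across many otherwise unrelated applications. The records and threshold below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical onboarding applications.
applications = [
    {"name": "A. Smith", "dob": "1990-03-01", "phone": "07700 900001"},
    {"name": "B. Jones", "dob": "1990-03-01", "phone": "07700 900001"},
    {"name": "C. Brown", "dob": "1990-03-01", "phone": "07700 900001"},
    {"name": "D. Green", "dob": "1985-07-12", "phone": "07700 900002"},
]

def reuse_alerts(applications, threshold=3):
    """Flag attribute values shared by `threshold` or more distinct names."""
    seen = defaultdict(set)
    for app in applications:
        for attr in ("dob", "phone"):
            seen[(attr, app[attr])].add(app["name"])
    return {key: names for key, names in seen.items()
            if len(names) >= threshold}

alerts = reuse_alerts(applications)
```

Each application here looks plausible on its own; only the aggregate view reveals that one date of birth and one phone number are anchoring three different personas, which is precisely the gap legacy one-application-at-a-time KYC misses.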

Looking ahead, the EU’s forthcoming eIDAS 2.0 (electronic identification, authentication and trust services) digital identity wallet may help close verification gaps by standardising identity credentials. In the UK, regulators are considering a national digital ID initiative designed to root out synthetic fraud at scale.

Can ML Systems Adapt?

Authorised push payment scams, trade-based laundering, mule networks, crypto-enabled fraud and synthetic identities all highlight one uncomfortable truth: the boundary between fraud and money laundering is dissolving. Traditional monitoring systems, designed for static red flags, cannot keep pace with adaptive, AI-driven criminal strategies.

To remain effective, ML-based detection must evolve into full ecosystem intelligence, drawing not just on transaction data but on a wider net of signals such as behavioural biometrics, verified identity credentials, logistics and trade data, and blockchain analytics. The future of financial crime prevention lies in collaboration rather than siloed monitoring. The EU’s AMLA, the UK’s JMLIT partnership, and regulatory sandboxes run by the FCA are already experimenting with AI-driven models to spot emerging risks.

Yet the challenge remains that criminals iterate faster than compliance teams. If fraudsters are training AI to exploit systemic weaknesses, the question becomes urgent: can banks, regulators and technology providers move quickly enough to stay one step ahead?

And what about you…?

  • In your own organisation, do fraud prevention and AML teams work together, or do they still operate in silos? What risks might that create?
  • If fraudsters are already using AI to innovate faster, what new collaborations, tools, or approaches would you want your business to adopt to avoid being left behind?