The rapid evolution of artificial intelligence (AI) is ushering in a new era of sophisticated social engineering attacks. As AI becomes more advanced and accessible, the cyber-threat landscape is in flux, creating new challenges and opportunities for defence.
Financial services firms must be acutely aware of these emerging capabilities and start developing resilience strategies to counter threats effectively.
Rising tide of AI-enhanced scams
Even tech giants are not immune to social engineering attacks. A notable example is Evaldas Rimasauskas, who used fraudulent invoices to steal more than $100 million combined from Google and Facebook. The high-profile case underscored the vulnerability of even the most technologically advanced companies to sophisticated social engineering attacks. As AI technology becomes more accessible and powerful, we can expect existing fraud schemes to be enhanced and new attack vectors to emerge, potentially facilitating even more damaging breaches.
Enhancing existing attacks
Attackers will most likely start by using AI to run existing scams more effectively. Because AI is expensive to run, some attacks, such as leaving malware-infected devices in public places (known as “baiting”), are unlikely to see significant near-term benefits from GenAI. A good starting point, therefore, is to consider which existing attacks can easily be enhanced by GenAI.
Phishing emails, long a staple of cybercriminals, are becoming more difficult to detect. Gone are the days of easily spotted spelling errors, awkward phrasing and obvious linguistic indicators of fraud. Scammers use tools such as ChatGPT to construct and refine their templates, making detection more challenging. These AI-powered tools generate convincing text in multiple languages, adapt to different corporate communication styles and even mimic specific individuals’ writing patterns.
Spear phishing attacks target specific individuals. To execute such an attack effectively, an attacker must profile the intended victim as comprehensively as possible. When the target is an organisation, that profiling extends to its employees, especially younger recruits who are less prepared to identify a targeted attack.
With AI, the ability to build complete profiles and make good guesses about relationships means that phishing attacks risk becoming ever more convincing. Even the phish itself can be constructed by GenAI, allowing scammers to target more people than is possible through simple automation.
“Whaling”, which targets high-profile individuals such as C-suite executives, traditionally requires significant effort and resources. GenAI lowers that barrier, making this extreme form of spear phishing even more tempting. Because whales often produce significant quantities of video, audio and writing online, scammers can use AI to produce convincing imitations of their communications.
Emerging AI-enabled threats
There are several areas where AI enables new attacks. AI’s ability to clone voices means companies relying on voice-based authentication must seriously consider new defences. At present, the author considers voice authentication dangerous and would not be comfortable relying on it.
Another emerging threat is AI’s capacity to accelerate the process of finding vulnerabilities in both open-source and proprietary software. AI models can analyse code much more quickly and thoroughly than humans, permitting the discovery of previously unknown vulnerabilities.
This capability could lead to more efficient exploitation of security weaknesses in the financial system. For example, an AI system could analyse a bank’s mobile app, identify a subtle flaw in its encryption and generate an exploit in a fraction of the time it would take a human hacker.
Defensive strategies
Financial services firms can protect themselves from these new threats. Threat actors will not suddenly become invincible with AI, but companies must adapt to shifting risks. Firms can apply some of the same techniques used by bad actors to defend against attacks and, importantly, to become more resilient. Financial companies would be well advised to consider the following strategies and tactics.
Enhanced training and awareness
Training is a firm’s best first line of defence. Institutions should implement comprehensive AI awareness training for all employees, from entry-level staff to top executives. Such programmes should go beyond traditional cybersecurity training and focus on the unique challenges posed by AI-enhanced attacks.
It is crucial to update training materials regularly to reflect emergent AI-enabled threats. The rapidly evolving nature of AI technology means that attack techniques can change quickly. Establishing a dedicated team to detect new AI-powered threats and update training material accordingly can help keep an organisation ahead of criminals.
Leverage AI for defence
While AI presents new threats, it also offers powerful defensive tools. Financial firms should consider implementing a multi-layered AI defence strategy to protect against sophisticated attacks.
The first defence is obvious but worth emphasising: AI can be used to detect suspicious emails. Running GenAI on every email to and from an organisation would be expensive, but low-cost saliency detectors can trigger GenAI only when needed. If the GenAI judges a message suspicious, it can alert users to “Think!” before responding. GenAI could also provide an interactive Q&A to help users and alert supervisors when an employee is confused.
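As a concrete illustration, the sketch below wires a low-cost keyword stage in front of a GenAI check. It is a minimal sketch, not a production design: the cheap_score heuristic, the SALIENCY_THRESHOLD value and the model name are illustrative assumptions, and a real saliency detector would be a small trained classifier.

```python
# Minimal two-stage email screen: a cheap saliency stage gates an
# (assumed) OpenAI-compatible GenAI check. All names are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SALIENCY_THRESHOLD = 0.7  # tune on your own mail corpus

def cheap_score(email_text: str) -> float:
    """Stand-in for a low-cost saliency detector; a real one would be a
    small trained classifier (e.g. logistic regression over TF-IDF)."""
    suspicious_phrases = ("urgent", "wire transfer",
                          "verify your account", "gift card")
    hits = sum(p in email_text.lower() for p in suspicious_phrases)
    return min(1.0, hits / 2)

def screen_email(email_text: str) -> str | None:
    """Return a warning for the user, or None if the mail looks benign."""
    if cheap_score(email_text) < SALIENCY_THRESHOLD:
        return None  # the cheap stage passes most traffic; no GenAI cost
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You screen emails for social engineering. "
                        "Answer SUSPICIOUS or OK, then one short reason."},
            {"role": "user", "content": email_text},
        ],
    )
    verdict = resp.choices[0].message.content or ""
    if verdict.startswith("SUSPICIOUS"):
        return "Think! This message may be a scam. " + verdict
    return None
```

The point of the two-stage design is cost: most mail never reaches the GenAI call, so the expensive model is reserved for the small fraction of traffic that looks salient.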
Additionally, AI can significantly enhance penetration testing scenarios. By simulating a wide range of attack vectors, organisations can identify and address vulnerabilities before genuine threats exploit them.
Another step is to implement AI-driven anomaly detection systems to monitor user behaviour and transaction patterns and identify unusual activities that might indicate a breach or attack.
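To make this concrete, here is a minimal sketch using scikit-learn’s IsolationForest over a toy three-feature view of transactions (amount, hour of day, new-payee flag). The features, threshold and synthetic data are illustrative assumptions; a production system would use far richer features and per-customer baselines.

```python
# Toy anomaly detection over transaction patterns with IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history: columns are [amount, hour_of_day, is_new_payee].
history = np.column_stack([
    rng.normal(120, 40, 1000),       # typical payment amounts
    rng.normal(14, 3, 1000),         # mostly daytime activity
    rng.integers(0, 2, 1000) * 0.1,  # new payees are down-weighted
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A large, late-night transfer to a new payee should stand out.
candidate = np.array([[9500.0, 3.0, 1.0]])
if model.predict(candidate)[0] == -1:  # -1 marks an anomaly
    print("Flag for review: unusual transaction pattern")
```

Because the model learns what “normal” looks like from the data it is fitted on, the main design decisions are retraining cadence and whether to build per-customer or per-segment models.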
Rethinking authentication
All financial firms should carefully consider their authentication system. It would be wise to assume that images and audio may no longer be effective ways to authenticate people, especially high-profile targets.
A multi-factor authentication approach that combines methods less susceptible to AI spoofing should be implemented. This could include hardware tokens or SSH keys, both built on public-key cryptography. These techniques are not yet widely used by regular consumers because of their technical complexity, but SSH keys may become easier to use, with AI agents helping to set them up and protect them for users.
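For illustration, the sketch below shows the challenge-response pattern behind SSH keys and many hardware tokens, using an Ed25519 key pair from Python’s cryptography package. Key storage, enrolment flows and transport security are deliberately out of scope.

```python
# Minimal challenge-response authentication with an Ed25519 key pair.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: the customer's device generates a key pair and registers
# only the public key with the bank.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Login: the bank sends a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it with the private key (which never leaves it)...
signature = device_key.sign(challenge)

# ...and the bank verifies the signature against the registered key.
try:
    registered_public_key.verify(signature, challenge)
    print("Authenticated: signature matches registered key")
except InvalidSignature:
    print("Rejected: signature does not match")
```

The important property is that the private key never leaves the customer’s device, so there is nothing voice-like or image-like for an AI to clone.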
It is also crucial to educate customers about new authentication methods and the reasons for adopting them. Clear communication can help ease the transition and ensure customer cooperation in maintaining security.
Improving code security
As AI enhances attackers’ ability to find and exploit code vulnerabilities, financial institutions must intensify their secure coding and vulnerability management practices. They can repurpose the same AI tools that allow hackers to find vulnerabilities.
Regular audits and security updates, especially for critical systems, are more important than ever. They should include not just in-house software but also third-party libraries and services. AI can continuously monitor for new vulnerabilities in deployed code and related dependencies, alerting security teams to potential risks immediately upon discovery.
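One way to approximate this today is to check pinned dependencies against a public advisory database on a schedule. The sketch below queries the OSV database (osv.dev) for known vulnerabilities in PyPI packages; the pinned versions are illustrative, and an AI layer could sit on top to triage and prioritise what the queries return.

```python
# Minimal dependency monitor against the public OSV vulnerability
# database. A real pipeline would read pins from a lockfile and run in CI.
import requests

PINNED_DEPS = [("requests", "2.25.1"), ("urllib3", "1.26.0")]  # example pins

def known_vulns(name: str, version: str) -> list[str]:
    """Query OSV for advisories affecting one pinned PyPI package."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": "PyPI"},
              "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

for name, version in PINNED_DEPS:
    ids = known_vulns(name, version)
    if ids:
        print(f"ALERT {name}=={version}: {', '.join(ids)}")
```

Run on a timer or on every build, this turns periodic audits into continuous monitoring of third-party code.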
Public education and communication
Firms are also responsible for educating their customers about emerging AI-enabled threats. Developing comprehensive public education campaigns can help create a more security-conscious customer base and reduce the overall attack surface.
Awareness campaigns should describe new AI-enabled threats in clear, non-technical language. It is crucial to provide clear guidelines for official communication channels and authentication procedures. Firms should regularly remind customers that the institution will never request sensitive information by email or phone and that all official communication will follow specific, verifiable patterns.
Best incident response practices
Despite the best preventive measures, breaches will still occur. Companies must plan for resilience and for how to recover from an attack.
Communication is vital during a security incident. Firms should have pre-prepared communication templates and channels to quickly and transparently inform affected parties and regulators about a breach. Messaging should be clear and factual and provide actionable steps for customers to protect themselves.
Thorough post-incident analysis is crucial for improving defences against future AI-enhanced intrusions. In addition to examining an attack’s technical features, firms must also assess contributing human factors. Were there gaps in employee training? Did authentication procedures fail? Understanding these elements can help refine both technical defences and organisational policies.
Embracing the challenge
The rise of AI-enhanced social engineering attacks demands a radical rethink of cybersecurity strategies for financial services firms. To stay ahead, companies must proactively invest in employee training, robust security measures and a culture of security awareness.
By embracing this challenge, financial institutions can protect their assets, maintain customer trust and thrive amid intensifying cyber threats. The key is to stay vigilant, continuously update defences against emerging risks and transform the power of AI into a security advantage.
This article was originally published on Thomson Reuters.
By Oliver King-Smith | smartR AI