AI, social engineering driving fraud risks for accounts payable teams
Cyber security specialists have warned that AI and social engineering tactics, including impersonation and manipulation, are driving fraud threats for accounts payable teams.
Social engineering tactics such as impersonation and manipulation are among the most common methods used by fraudsters to steal funds from businesses, cyber security and payment fraud experts have warned, while AI is increasingly part of scammers’ toolkits.
Jon Soldan, chief executive of payment fraud prevention firm Eftsure, told Accounting Times that impersonation tactics were often used by cybercriminals to extract funds from accounts payable teams.
“What we see the most of is vendor impersonation, executive impersonation, where fraudsters are impersonating their suppliers or executives to get accounts payable teams in particular to make fraudulent payments,” Soldan said.
“A lot of supplier communication happens over email, which isn’t a particularly secure channel.”
The Australian Signals Directorate (ASD) found that phishing – a type of email- and text-based social engineering – was recorded in 60 per cent of cyber incidents reported in the 2024–25 financial year.
“Social engineering techniques are used by malicious cyber actors to direct individuals or staff into performing specific actions such as opening an attachment, visiting a website, revealing credentials, disclosing sensitive information, or transferring funds,” the ASD noted in its Annual Cyber Threat Report 2024–25.
Speaking at a Q&A session with Eftsure last Thursday (20 November), hacker turned cyber security advocate Bastien Treptel noted that robust procedures could mitigate the risks of social engineering attacks.
“If you can stop social engineering by putting controls in place or by making it so that another person has to approve a payment or it has to go through a gateway like Eftsure, for example, then that makes it all too hard for me as a hacker,” Treptel said.
“And then I would move on to the next easy target because it’s guaranteed there won’t be many organisations doing what I’ve just suggested that you do.”
He noted that AI had enabled hackers to gather detailed information about targets much more quickly, increasing the sophistication of social engineering attacks.
Previously, hackers had to manually compile information to learn a target business’s structures. Now, AI could be used to produce a ‘one-pager’ on a business or individual, complete with personal details.
“Back in the day when I was hacking [a Big Four bank], I had to learn all the structures, I had to teach myself to code, I had to teach myself all this stuff about the business and it took a long time,” he said.
“Now I [could] just get an AI tool to get a one-pager on you guys [including] your pet’s names, your partner’s name, where you went to school.
“The thing about the internet is it’s like footprints on the moon. If you’ve locked your Facebook profile today, but it wasn’t locked a year ago, I [could] just go and get a snapshot of the internet a year ago.”
Advances in deepfake technology have also given attackers fresh avenues to impersonate trusted personnel and manipulate staff, Treptel said.
To mitigate the risk of fraud through social engineering, Soldan recommended that firms set up formalised systems to ensure cybercriminals could not take advantage of trusted relationships and informal arrangements.