By Marelise Gilbert, Entersekt's VP: Marketing
I saw it on Facebook and Instagram. She saw it on Facebook and TikTok. Born and bred in South Africa, and now the richest man in the world, Elon Musk not only carries a lot of credibility with fellow South Africans but also epitomizes success and business acumen to investors across the globe.
When I saw the ad where Elon Musk was offering a lucrative crypto-investment opportunity online, I was lucky. I was immediately suspicious – thanks to the continual security training we receive here at Entersekt. But not everyone was that lucky. Unfortunately, for one unsuspecting 62-year-old woman, the AI-generated deepfake video "from Elon" resulted in her investing more than $10,000. And that was the last she saw of her hard-earned savings.
The problem is, AI technology is getting so good, it’s really easy for cybercriminals to create convincing deepfake scams. And, it’s really hard for a lot of us to spot them!
GenAI is making social engineering scams cheaper and easier
Today’s fraudsters have better technology at their disposal, like Generative AI (GenAI). With that in their toolbox, they can quickly and easily create convincing deepfakes for these social engineering scams.
"Deloitte’s Center for Financial Services forecasts that GenAI content fraud losses could reach $40 billion in the U.S. by 2027, from $12.3 billion in 2023."
I’m sure that most of you also remember the Hong Kong deepfake incident, where a firm sent $25 million to fraudsters after an employee was instructed to make a transfer by the company's chief financial officer on a video call. At least, they thought it was their CFO... Clearly the deepfake was that good!
What’s scary is that as these tools become better and more readily available, these scams become easier and cheaper for fraudsters. And that’s not great news for banks or their customers.
AI-enhanced digital banking fraud is growing
I can see why fraudsters are diving headfirst into this new type of fraud. That's also why banks can’t rely on old-school fraud defenses anymore. Their systems will probably miss the latest fake videos, voice recordings, and documents that impersonate a genuine customer or trick a customer by impersonating a bank official.
In a Deloitte report that I read on AI and deepfake fraud in banking, they suggest fraudsters will up their focus on fraud types like business email compromise. Another snippet I picked up was from the US Treasury Department’s Financial Crimes Enforcement Network (FinCEN). They recently issued a warning directly to banks about an increase in deepfake schemes targeting financial institutions, with fraudsters using GenAI to manipulate customer identity and verification systems.
Risk-based authentication: Using AI to protect customers
Here at Entersekt, our expert teams are always a few steps ahead of fraudsters’ sneaky tactics. I love that we’re continually finding new ways to keep consumers’ transactions safe without making the user experience clunky and frustrating.
And, while I know Entersekt safeguards digital payments with our world-class authentication solutions, what really impresses me is how our technology uses advanced AI tools and risk intelligence to proactively detect and prevent fraud.
Ok, let me use an example to explain. Say I’ve gone away on holiday with some of my family, and I need to make an instant payment to a family member to cover my portion of the accommodation. But I’ve never sent money to them before – let alone requested to clear the funds immediately.
As I set the payment in motion, the technology starts to collect risk signals, like the device being used, the context of the transaction, and location data. In other words, it uses risk intelligence to make a decision about whether a transaction is legitimate or likely fraudulent. In this case, the risk scoring mechanism flags my payment as suspicious. Why? Because I’m trying to make a rather large immediate payment to a new recipient, from an unfamiliar location, and the risk data suggests that something might be wrong. So, it wants to verify that it’s actually me making the payment, and that I’m aware of the context of the transaction.
Active authenticators kick in to verify my identity, and I receive a step-up challenge – in this case, I have to scan my face to prove it’s actually me. I do, and then confirm I want to make the payment.
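To make that flow a little more concrete, here’s a minimal sketch in Python of how a risk engine might turn signals like these into a step-up decision. Everything here – the RiskSignals fields, the weights, and the score_transaction and authentication_decision functions – is a hypothetical illustration for this example, not Entersekt’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RiskSignals:
    """Hypothetical signals a risk engine might collect for one payment."""
    known_device: bool       # has this device been seen on the account before?
    familiar_location: bool  # does the location fit the customer's usual pattern?
    new_recipient: bool      # is this the first payment to this payee?
    amount: float            # transaction amount
    instant_clearing: bool   # was immediate settlement requested?

def score_transaction(s: RiskSignals) -> int:
    """Combine the signals into a single 0-100 risk score (illustrative weights)."""
    score = 0
    if not s.known_device:
        score += 30
    if not s.familiar_location:
        score += 20
    if s.new_recipient:
        score += 25
    if s.instant_clearing and s.amount > 500:
        score += 25
    return min(score, 100)

def authentication_decision(s: RiskSignals) -> str:
    """Map the risk score to an authentication path."""
    if score_transaction(s) >= 50:
        return "step_up"  # active challenge, e.g. a facial biometric scan
    return "silent"       # passive signals suffice; no extra friction for the user

# My holiday payment: large, instant, new recipient, unfamiliar location
holiday_payment = RiskSignals(
    known_device=True,
    familiar_location=False,
    new_recipient=True,
    amount=1200.00,
    instant_clearing=True,
)
print(authentication_decision(holiday_payment))  # -> "step_up"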
"Even though it’s an extra step, I know that my financial institution has got my back and is proactively protecting me from cybercriminals."
The next day, I need to pay the same cousin back for my half of the restaurant bill. The risk score is now much lower as I’m still in the same location and paying a smaller amount of money to a confirmed recipient. And this time the payment goes through frictionlessly thanks to silent authenticators at play in the background. I don’t have to do anything... love it!
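In the sketch above, the second day’s payment looks very different to the risk engine – the recipient is now confirmed, the amount is smaller, and the location matches yesterday’s verified context – so the same decision function comes back with the silent path:

```python
# Day two: smaller amount, recipient confirmed by yesterday's step-up,
# and the holiday location now matches the verified context.
restaurant_split = RiskSignals(
    known_device=True,
    familiar_location=True,   # location was verified during yesterday's challenge
    new_recipient=False,      # the payee is now a confirmed recipient
    amount=45.00,
    instant_clearing=True,
)
print(authentication_decision(restaurant_split))  # -> "silent"
```

Of course, a production risk engine would rely on machine-learned models and far richer signals than these hand-picked weights, but the shape of the decision is the same: score the context, then choose between a silent check and an active step-up.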
At the end of the day, I love how technology makes payments easier. But I also hope that all banks have advanced banking and payment fraud prevention tools to prevent social engineering ‘fires’ before they even start!