
A look at how fraudsters are using AI to perpetuate payments fraud | Episode 51

Updated on May 13, 2025

Fraud in the payments industry is evolving rapidly, and AI is playing a significant role in shaping its future. In a recent episode of PayFAQ: The Embedded Payments podcast, host Ian Hillis sat down with Gary Pontecorvo, Senior Director for Financial Fraud Investigations, Operations and Intelligence at Worldpay, and Brian Rust, Deputy Chief Information Security Officer at Worldpay. Their conversation provided a deep dive into how fraudsters are using AI to refine their tactics and what businesses, particularly software platforms engaged in payments, can do to protect themselves. 

Payment fraud and the role of AI 

Card-not-present fraud, skimming, direct deposit account takeovers, and stolen terminals are common types of fraud that have persisted for years. What has changed, however, is how these attacks are executed. AI has significantly enhanced fraudsters’ ability to manipulate and deceive individuals and organizations. For example, business email compromise—one of the most prevalent fraud types—has become more sophisticated. With AI-generated text, scammers can now craft highly convincing emails that lack the usual red flags like poor grammar or awkward phrasing. 

Brian Rust explained that AI has made fraud more effective, rather than fundamentally changing attack vectors. Cybercriminals still use tried-and-true methods, such as impersonating authority figures to create a sense of urgency. However, AI-generated voice and video deepfakes are making impersonation even more convincing. Fraudsters can now mimic executives or vendors in real time, requesting fraudulent transactions or account changes with an authenticity that was previously difficult to achieve. 

One striking point discussed was how cybercriminals operate like legitimate businesses. Fraud groups are highly organized, well-funded, and adaptable. They keep an eye on the news and respond quickly to exploit financial shifts. For example, when Silicon Valley Bank collapsed, fraudsters swiftly moved to impersonate affected companies, trying to redirect payments to fraudulent accounts. AI enables them to automate and scale these efforts more efficiently than ever. 

Advice for platforms navigating payments fraud  

For software companies navigating this new fraud landscape, the implications are significant. AI has lowered the barriers to entry for fraud, making it easier for bad actors to launch attacks without deep technical knowledge. Brian and Gary emphasized the importance of robust cybersecurity measures, including secure coding practices, regular threat modeling, and strong policies and procedures. Businesses must also prioritize cyber hygiene—basic security practices such as securing credentials, enforcing multi-factor authentication, and training employees to recognize fraud attempts. 
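On the multi-factor authentication point, the following is a minimal sketch, not discussed in the episode, of what verifying a second factor can look like in code. It assumes the third-party pyotp library and a hypothetical enrollment flow; the function names are illustrative only.

```python
# pip install pyotp  (third-party library, assumed available)
import pyotp

def enroll_user() -> str:
    # Generate a per-user base32 secret; in practice this is stored encrypted
    # server-side and shared with the user's authenticator app via a QR code.
    return pyotp.random_base32()

def verify_second_factor(secret: str, submitted_code: str) -> bool:
    # Accept the code only if it matches the current 30-second TOTP window.
    return pyotp.TOTP(secret).verify(submitted_code)

if __name__ == "__main__":
    secret = enroll_user()
    current = pyotp.TOTP(secret).now()              # what the authenticator app would show
    print(verify_second_factor(secret, current))    # True
    print(verify_second_factor(secret, "000000"))   # almost certainly False
```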

A key takeaway was that sometimes the best defense against high-tech fraud is a low-tech approach. Simple measures, like setting up verbal code words with family or using daily passphrases in finance approvals, can be highly effective in thwarting AI-driven fraud. Additionally, continuous intelligence gathering is crucial. Businesses should stay informed about emerging fraud trends, collaborate with industry peers, and scrutinize anomalies in financial transactions. 

AI is making fraud more sophisticated; businesses can fight back by strengthening their defenses and staying vigilant. Fraudsters rely on deception and urgency, but organizations that prioritize security, follow strict verification processes, and educate their teams will be far better positioned to protect themselves in an AI-driven fraud landscape. 

Transcript

Ian Hillis: Hi, everyone, and welcome to PayFAQ: The Embedded Payments podcast, brought to you by Payrix and Worldpay. I'm your host, Ian Hillis, and today I'm talking with Gary Pontecorvo, Senior Director for Financial Fraud Investigations, Operations, and Intelligence at Worldpay, and Brian Rust, Deputy Chief Information Security Officer at Worldpay, about how fraudsters are using AI to perpetuate fraud in the payments world. Audience, we're in for a special treat today. Out of all the titles I've read out loud on this podcast, these have to be two of the most interesting ones. I'm excited to introduce you to them in just a second.

Ian Hillis: Brian, Gary, welcome to the show. 

Gary Pontecorvo: Hey, thank you. 

Brian Rust: Yeah, thank you for having us. 

Ian Hillis: All right, this is going to be an exciting conversation. Brian and Gary are both prominent experts in their fields, so I want to take a minute to give an overview of each of their backgrounds before we jump into today's topic, starting with Gary. Gary has over 30 years of law enforcement experience, 27 of which were spent as a special agent with the FBI investigating narcotics, fraud, and domestic and international terrorism. During this time, Gary was deployed to Afghanistan for two years to conduct intelligence operations with the US and foreign military, US and international intelligence agencies, and Afghan local authorities. He also spent five years of his professional life working with Citibank with a focus on financial fraud. He's been with Worldpay for three years, and his team leads all complex fraud investigations for the organization. Turning to Brian: Brian is a seasoned cybersecurity executive with a proven track record in leading and transforming information security programs. With over 15 years of experience, he has held leadership positions at major financial institutions and corporations. His expertise spans a wide range of cybersecurity disciplines, including incident response, threat management, vulnerability management, and risk assessment. Brian is a CISA and Six Sigma Green Belt holder, and his work has been published in IEEE Security and Privacy. Audience, I don't know what more you could want in experts on this topic. This is going to be a fantastic conversation. So, let's start with a level set. I know people could be coming into this conversation with various levels of understanding of fraud. My first question is: what are the types of fraud that exist today, and how should we bucket them for the audience?

Gary Pontecorvo: I can start with that, Ian. Thank you. These fraud types, unfortunately, have been out there a long time. It's the same fraud we've always seen: card-not-present fraud, skimming and e-skimming, which Brian can go into in a lot more detail, and the use of compromised cards. On the card-present side, there's more tap-to-pay fraud going on, with some new apps being used by the threat actors. And you still have, unfortunately, your DDAs, your direct deposit accounts, and account takeovers of those through social engineering and the portals. Much more is going on there. The AI piece we'll talk about today is unfortunately enhancing all of that. Plus, we're still seeing stolen terminals. If you have a tap-to-pay, card-present piece of hardware, there are still a lot of stolen terminals out there, and I think that's increasing as other mitigation efforts take place. So your standard fraud is still out there. It's just being enhanced with AI and the other capabilities I think we'll talk about.

Brian Rust: Yeah, hey, this is Brian. I just want to chime in here. Gary covered a number of different avenues, but one I wanted to make sure we touched on is business email compromise, which is still a very prevalent way that threat actors get into your systems and take over from the business side, not necessarily your traditional fraud on the consumer side. That is something where artificial intelligence has definitely increased the sophistication of the attacks. From the time ChatGPT really got going, the things you typically see, like poor grammar and awkward formatting in the emails, almost disappeared. That's where artificial intelligence has really reduced the barriers to entry for the fraudsters to perpetuate these business email compromises, which has frankly increased the number of people able to deliver an effective lure and get someone to click.

Ian Hillis: That's really helpful, Brian. So I'm hearing two things here. First, Gary, you gave us an overview of the different types of fraud. The same types have persisted over time, maybe with slight evolutions, but the same buckets remain of where you're exposed and how this happens, with new avenues or channels, like email, emerging as technology evolves. And Brian, you're starting to steer us into the conversation I'd love to jump into, which is how AI has impacted each of those pieces. You started with the example of email, where you used to be able to spot the poor grammar. Where else has AI impacted some of these traditional buckets of fraud?

Brian Rust: Yeah, and as I said, it really comes down to their effectiveness. They're not necessarily changing their attack vector. There's a concept in cybersecurity called the cyber kill chain, and if you look at the cyber kill chain, their tactics, techniques, and procedures, the TTPs, haven't changed dramatically because of artificial intelligence, except in one area: they've gotten more sophisticated. We have seen, and this is not just hypothetical, people using augmented video and voice to try to create that sense of urgency to get someone to do something they wouldn't otherwise do. If the CEO of the company doesn't typically text you asking you to go procure gift cards, they're probably not going to want you to do that now, whether they text you or FaceTime you. What we're seeing is that they're using video chat, augmented through AI, to make that attack vector more sophisticated than it previously was. So it's not that they're doing things drastically differently; the approach is just easier. They've always wanted to impersonate someone to create a sense of urgency and get you to do something you wouldn't normally do. They use certain key phrases: you need this now, this is imperative, I'm in a position of authority and I'm telling you to do this. They're trying to create that sense of urgency, because that's where the mistakes are made. That's where people who typically wouldn't fall for this, if they had a moment to sit and think, get caught. These fraudsters are very, very good at social engineering, and they are using AI to reduce the barriers to entry and convince more people that they are who they say they are. I hope that helps answer the question you had there.

Ian Hillis: It does. You touched on two themes there in terms of the common traits we see: impersonation of authority, and creating a sense of urgency. Are there any other themes or things to look out for where you'd give best advice and say this doesn't feel right? Any other attributes you'd point to?

Brian Rust: Yeah. Gary, is that something you want to talk towards? 

Gary Pontecorvo: Yeah, thanks, Brian. Brian's spot on with what he's saying. The attributes to look for, once again, are the urgency, but also the smaller anomalies. This goes for your everyday client or customer coming in or online, but think about it for your supply chain, too, your partners. That impersonation piece Brian's speaking of: they're impersonating the vendor that provides X to you. They want to create invoices and have you start paying them at a different DDA. There's that urgency: oh, guess what, like Brian's saying, we had a big flood in our warehouse, we had to switch banks and locations, so here's the new DDA and here's our new email. Anything you would take for granted because you trust a vendor, partner, or client, you have to scrutinize a little more now. Not to the point of paranoia; you'd go out of business doing that. But if there is some anomaly, something that doesn't seem right, you need to dive into it deeper today than you ever did before.

Brian Rust: Threat actors definitely trade on the news. Take Silicon Valley Bank. We remember when it failed, what was it, almost two years ago now it feels like. The number of people who tried to impersonate companies and say, hey, our banking information has changed, we used to use Silicon Valley Bank and now we use this bank. They were using information obtained through previous breaches or public sources: hey, we know XYZ Company uses Silicon Valley Bank, so now we can try to redirect legitimate payments into our account instead of theirs by impersonating accounts payable or things of that nature. They're looking to trade on the news, and artificial intelligence, as I said earlier, reduces their barriers to entry and lets them respond faster to those kinds of news events, take the intelligence they have, and act on it. These threat actors work as a business. When the Conti malware group fractured over the Russian invasion of Ukraine, because they had people in Ukraine and people in Russia, one of the Conti employees leaked their internal files and playbooks, and it turns out these threat actor groups have the same issues you have: this person got a promotion before I did, I can't hire. They are running a business. They are well organized and well funded. This is not a couple of people working out of their basement or garage. These are well-organized, well-formulated companies whose ultimate goal is to figure out how to extract money from people and defraud them. Artificial intelligence really just helps reduce those barriers to entry, in many of the same ways our companies are using it. You turn on the news and hear what companies are talking about in their earnings releases, how they're trying to figure out the right way to use AI, just like you probably are in your own business: how can I make this better, how can I use the tools available to make certain tasks easier or faster? The threat actors are doing the same thing, because they are operating like a business, just like the rest of us.

Ian Hillis: Brian and Gary, if you think about the primary audience listening here with us today, software platforms that are engaged with payments, what are the implications here for software companies in particular as they're navigating through this new era of AI and fraud? 

Gary Pontecorvo: I'll defer to Brian's expertise, but from my side of it, not being a cyber expert, it shortens the learning curve. It reduces that learning curve. I don't need to know SQL or Python; I can just go buy that capability. If I have some crypto or some money they'll take, that makes the attack vector and attack radius much deeper and wider for our clients here. The attacks can come from anybody. Someone like me who knows maybe a little coding, without Brian's expertise, can buy that stuff and start attacking with a bot, and all they need to know is a couple of things. In the past, I think you had to know all of those things yourself; you worked independently. Now, with basic fraud-as-a-service and subcontractors, I think the implications are unfortunately increased. Everybody who wants to be on the bad side, if you will, can do that activity. Those are the implications that I see.

Brian Rust: So, for software platforms, it comes down to making sure you have secure design principles, making sure you are sanitizing your inputs, and making sure you are doing things like threat modeling. They call it cyber hygiene, for lack of a better term: you have got to make sure you're doing the basics well. Verizon does a data breach report every year, and has for a number of years now. If you go back, I don't know, 15 years to the first one and compare it to the one they do now, you'd think you would see some significant movement in the main trends. Unfortunately, there isn't, because at the end of the day, compromised credentials and social engineering and things like that, the same themes, persist through time. As the defenders get better, the attackers get better, so that cat-and-mouse game continues. But at the end of the day, you have to do the basics well. You have to have solid policies and procedures. In the example I used earlier around accounts payable and vendor management, that's where having really solid procedures and processes that you actually follow is so important. Sometimes it just comes down to: do you have a process, do you have a policy, are you following it? Are you doing your secure coding practices? Are you doing your SAST and DAST scans? Are you doing your software composition analysis? Do you understand what's in your software bill of materials and how it can change on you without you actually making an update to your code, because of open-source libraries you may have implemented? All of these things are little, but added up, they are quite significant.
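To make the "sanitize your inputs" point above concrete, here is a minimal sketch, not taken from the episode, contrasting string-built SQL with a parameterized query in Python; the merchants table and function names are hypothetical.

```python
import sqlite3

def find_merchant_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so a crafted name like "x' OR '1'='1" changes the query itself.
    return conn.execute(
        "SELECT id, name FROM merchants WHERE name = '" + name + "'"
    ).fetchall()

def find_merchant_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver passes the value separately from the
    # SQL text, so the input is treated as data, never as executable SQL.
    return conn.execute(
        "SELECT id, name FROM merchants WHERE name = ?", (name,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE merchants (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO merchants (name) VALUES ('Acme Coffee')")
    print(find_merchant_safe(conn, "Acme Coffee"))  # [(1, 'Acme Coffee')]
```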

Ian Hillis: That's fantastic and I think practical, actionable advice. Gary and Brian, if you think about kind of parting advice for software companies navigating some of these complexities, you mentioned having clear processes and procedures in place. Any parting thoughts related to that? Just clear examples of what that could look like when done well. Or any parting advice you'd have for the software platforms out there that are navigating this? 

Brian Rust: So, my parting advice is that sometimes you have to defeat the higher-tech stuff with lower-tech approaches. An example is how I interact with my parents to defeat the AI voice stuff. I've got a code word set up with each of them so I can verify who they are. I've done this with my parents, and I've done it with other loved ones, so there's a way, not stored or written down anywhere, that is just, "Hey, Mom, what's our word?" She gives it to me, and then I know I've got verification that I'm actually talking to her. I know it seems super low tech, but sometimes you don't have to get overly fancy to defeat these things. And that's where I get back to the cyber hygiene: you have to be good at that stuff. You have to follow procedures, do the training, practice it, work through it. That's really how you stay on top of these things as a developer of software: you continue to do the little things well, and that makes you a hardened target that's harder to get into. And as I said, these folks are working like a business, so they're going to go for the softer targets. Do your best to set up procedures and policies, follow them, and stick to them.

Gary Pontecorvo: Yeah, agreed, Brian, 100%. And also on the low-tech piece: I've advised people, friends and others in the business, on something very simple for the BEC piece. Say your finance person or your accounts payable person needs authorization, which you handle by email, as I'd assume most do, or within a system. Do you keep a simple word of the day somewhere? What's your favorite thing, a flower of the day, a sports team? If that word's not in the request, it's not you; it's not the authority to approve it. Something simple, like Brian's saying, that's off the grid and not stored goes a long way. The other piece is awareness, and I spoke to this before: it really comes down to intelligence. What's going on out in the sector that you can learn through associations, consortiums, and the business partners you trust? What's changing, what's the latest trend? As we all know, these things come back cyclically. They tried this six months ago; now it's coming back, and with AI it may be a little harder to spot. It's difficult, you're running a business, but now you also have to be a collector of intelligence. You have to conduct analysis on that intelligence and make it actionable. What are you going to do with it? What's currently happening? I think that's your best defense, along with the things Brian said. And unfortunately, like I said before, don't take anything for granted. You have to look into every little anomaly that comes through. If it's benign, that's great; you should feel better about it, but at least you checked. So overall, it's more action you have to take, and you have to be more proactive.

Ian Hillis: Fantastic. And I love that this is actionable advice across all of us rather than just theoretical. Gary, Brian, thank you so much for being on the show. This has just been a wealth of wisdom you've been able to share today. 

Gary Pontecorvo: No, thank you. It was fun. 

Ian Hillis: Excellent. We want to be a trusted resource for software providers who are out there trying to make sense of embedded payments and finance to help them get the education they need to make the business decisions their customers and investors will thank them for. Thank you to everyone joining us today, and I look forward to continuing the conversation in our next episode.  
