Smart Money

Artificial intelligence is revolutionising tech—and payments with it

Ralph Schneider’s big idea came to him back in 1950. The lawyer was struck to hear his client, Frank McNamara, recount an embarrassing incident when, in a Manhattan restaurant, he had the dreaded realisation that he had forgotten his wallet. McNamara had to wait for his wife to drive in from the suburbs to pay. There should be a system where people could pay for their meals later, Schneider thought. So together the pair created Diners Club, the world’s first credit card. In exchange for $5 a year, Diners Club members received a cardboard “credit identification card” enabling them to put meals from participating restaurants on a tab, and then settle up by check at the end of the month.

Schneider’s idea anticipated a shift that would liberate the idea of money from physical cash. It was the beginning of a series of innovations that have revolutionised payments over the past 70 years, taking them digital, international and, more recently, mobile. “The payments industry evolves in waves,” says Tony Craddock, Director General of trade group The Payments Association. And now, he says, we are on the cusp of a new wave that is set to reshape our experience of payments all over again: artificial intelligence (AI). “It is going to be a bigger wave than we’ve seen before.”

AI is an umbrella term that describes machines simulating the capabilities of human intelligence. AI has been part of computer science for decades, but in the last 10 years it has made particular strides. This has been fueled by the rise of machine learning (ML). In simple terms, ML involves an algorithm ingesting vast amounts of historical data to discern patterns. These patterns then allow the algorithm to make sense of new data.
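That training-then-prediction loop can be sketched in miniature. The toy example below (all figures and labels invented for illustration) reduces labelled historical transactions to average profiles, then assigns each new transaction to the closest one; real payment systems use far richer models and features, but the principle is the same:

```python
from statistics import mean

# Toy illustration: "learn" a pattern from labelled historical
# transactions (amount, hour of day), then classify new ones.
historical = {
    "legitimate": [(25.0, 12), (40.0, 18), (15.0, 9)],
    "suspicious": [(900.0, 3), (1200.0, 4), (750.0, 2)],
}

# "Training": reduce each class to its centroid (average point).
centroids = {
    label: (mean(a for a, _ in points), mean(h for _, h in points))
    for label, points in historical.items()
}

def classify(amount, hour):
    """Assign a new transaction to the class with the nearest centroid."""
    def dist(label):
        ca, ch = centroids[label]
        return (amount - ca) ** 2 + (hour - ch) ** 2
    return min(centroids, key=dist)

print(classify(30.0, 14))   # a small daytime purchase -> 'legitimate'
print(classify(1000.0, 3))  # a large 3 a.m. transfer -> 'suspicious'
```

The historical data shapes the decision entirely: feed the same code different transactions and it discerns different patterns, which is why ML systems are only as good as the data they ingest.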

ML has been a boon for the payments world, as it helps address a number of core problems. One major use case is routing money around the planet’s patchwork system of “payment rails”, the dedicated networks that make electronic transfers possible, and automating the authorization and completion of those transactions. Another is credit scoring—crunching often disparate data points to judge risk. The ability to do this on the fly, especially with non-traditional data sources, has powered the recent wave of “buy now, pay later” credit offerings. But ML has also added value in a multitude of smaller ways. It is the force behind business tools that can analyse transaction histories to model future scenarios; it is the reason that payment errors are more readily detected and more easily resolved; it is the functionality that lets accountancy software read digits on invoices for automatic reconciliation. Moreover, ML is critical in detecting and preventing fraud.

But a new type of AI is beginning to gain a foothold. For years, ML has mostly been about “predictive” tasks, in the technical sense of predicting the correct classification of new data. This year, however, the world’s attention has been drawn to staggering advances in “generative” AI (gen-AI). These are models that can produce new content. When OpenAI’s ChatGPT burst onto the scene in November 2022, it sparked particular interest in the capabilities of “large language models” (LLMs), a class of gen-AI algorithm that can understand and generate text. Users were suddenly confronted with the range of tasks that an LLM could help them do—summarise vast amounts of information, debug code, or write emails. And it could do these things well. “I’ve been in AI for 30 years,” says Manuela Veloso, Head of AI Research at J.P. Morgan. “This is a major advancement.”

With their natural language interfaces, impressive output and ability to wrangle large, unstructured datasets, gen-AI tools have caught the imagination. They not only represent a new, conversational way to interact with machines, but a way for machines to perform tasks that were previously thought to be the preserve of humans. This has spurred the digital economy to embrace gen-AI with gusto, prompting entrepreneurs to launch new startups, and tech giants to rapidly introduce new software features. It has, in turn, put the AI field as a whole centre stage.

This new era of gen-AI is set to bring yet more changes to the world of payments. On the one hand, it will ratchet up the speed of innovation, because LLMs can function as a “copilot” to help write computer programs. “Developers will spend less time writing lines of code and more time designing new statistical models and mathematical tools for actuarial challenges,” says Daragh Morrissey, Director of AI at Microsoft Worldwide Financial Services. This should shorten prototyping and deployment cycles for pay-tech developers. It may also assist merchants in integrating those new products into their own systems. An LLM trained on the developer’s support documentation, or on a merchant’s own documentation about past implementations, could enable a chatbot to field specific technical queries.

But what new ideas could AI unlock for payments themselves? Here are three nascent concepts that may have an impact in years to come...

“Generative AI can move fraud forward at an industrial pace.”

David Britton, Experian

Artificial intelligence is unlocking new opportunities for payments


Virtual assistants were helping banking customers long before anyone had heard of OpenAI. Gen-AI may usher in a new generation of these tools, allowing users to have more personalised, conversational interactions.

On the one hand, this means better customer service. On the other, it could mean enabling customers to get more insights out of their payments data—and it’s arguably this that would be the more profound shift. Being able to query their data via a natural language chatbot would mean customers wouldn’t have to know how to use data manipulation tools, or limit themselves to an app’s pre-set data analysis features. Instead, they could simply type “What categories am I spending more money on this year compared to last year?”, for example, and in response receive a natural language answer.

That’s a vision that Microsoft believes could become reality. The Microsoft 365 productivity suite is already introducing an LLM-based “Copilot” to tools such as Excel and the business intelligence product Power BI. “We are enabling you to have a conversation with data,” says Microsoft’s Daragh Morrissey. “With Copilot in Excel, you can ask natural language questions to identify key trends and insights, generate visualisations, even explore ‘what if’ scenarios. With Copilot in Power BI, we can do all this and more with real-time analysis of data—with text summaries that update in real-time as the data changes.”

To those familiar with LLMs, this might seem counterintuitive. They are not designed to understand numerical data and solve mathematical problems. Anybody who has tried to use ChatGPT to do even basic arithmetic will know this all too well. The workaround for data analytics tasks is for the LLM to take the user’s natural language request and turn that problem into an instruction that can be passed to a dedicated analytics engine—or converted into ad hoc computer code. This separate entity can then perform the required task and return an accurate result that can be expressed to the user in an easy-to-understand format, whether that’s a graph or LLM-generated prose. 
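A minimal sketch of that hand-off pattern, with invented data and a keyword matcher standing in for the real LLM: the language model only translates the user’s question into a structured query, and a deterministic engine does the arithmetic, so the numbers in the answer cannot be hallucinated.

```python
# Invented sample of a customer's transaction history.
transactions = [
    {"category": "dining", "year": 2023, "amount": 40.0},
    {"category": "dining", "year": 2024, "amount": 95.0},
    {"category": "travel", "year": 2023, "amount": 300.0},
    {"category": "travel", "year": 2024, "amount": 120.0},
]

def translate(question):
    """Stand-in for the LLM: map free text to a structured query."""
    if "spending more" in question:
        return {"op": "year_over_year", "years": (2023, 2024)}
    raise ValueError("unrecognised question")

def analytics_engine(query):
    """Deterministic engine: exact sums, no hallucination risk."""
    a, b = query["years"]
    totals = {}
    for t in transactions:
        totals.setdefault(t["category"], {}).setdefault(t["year"], 0.0)
        totals[t["category"]][t["year"]] += t["amount"]
    return [c for c, years in totals.items() if years.get(b, 0) > years.get(a, 0)]

query = translate("What categories am I spending more on this year?")
print(analytics_engine(query))  # ['dining']
```

In a production system the final list would be passed back to the LLM to phrase as prose (“You’re spending more on dining this year”), keeping the model in charge of language and the engine in charge of numbers.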

Microsoft is helping organisations bring this kind of LLM-enabled analysis to their payments data by allowing them to integrate Copilot with their systems. “We do this through ‘plug-ins’, and these enable new use cases where you can bring in transactional data,” says Morrissey. “This will make it easier to embed Copilot capabilities into your mobile apps, and enable your customers to plan finances, ask questions about their account, and identify trends and patterns.”

LLMs could allow chatbots to become more proactive, initiating tailored, context-aware conversations to help users make better payments-related decisions at the right moments. “Intelligent spending advice could remind consumers how much of their budget they’ve spent in areas such as dining or retail, so they can adjust their spending habits accordingly,” says J.P. Morgan’s Manuela Veloso.

For businesses, this kind of smart, accessible data analytics could be an especially powerful tool. Payments data could be released from its silo and more readily used to inform decisions in an accessible way across departments, drawing on real-time insights that aren’t limited to what happens to be displayed on fixed data dashboards. Analysis of buying behaviours and transactions, for example, could help improve customer loyalty or influence marketing campaigns.

Financial institutions are already experimenting. “For our first wave of use cases and proofs of concept, we see banks innovating with generative AI inside their organisations, and we are working with banks to help them leverage this capability to understand patterns and trends in payments transactions,” says Morrissey.

The potential direction of travel is obvious: Could we one day have a chatbot that can independently offer nuanced, sophisticated financial advice? There is a challenge to overcome. LLMs have a notable error rate—they have a tendency to “hallucinate”, to use the jargon—which is a problem for high-stakes fields such as finance. Tackling this is a priority for AI companies, and the impact of gen-AI will likely be determined to a large degree by the extent to which this problem can be solved. There are a number of approaches, but one way to reduce hallucinations is to “fine-tune” an LLM for a specific task. This involves feeding it high-quality, diverse, domain-specific data, as well as adjusting the model’s own parameters and establishing the prompts that lead to the best outcomes. LLM-based software that performs legal drafting tasks, for example, will have been fine-tuned in this way.

For Greg Davies, Head of Behavioral Science at Oxford Risk, a consultancy that builds software to help people make better financial decisions, gen-AI’s potential for financial decision-making is not just about accessibility—it’s also about efficacy.

His company sells software that uses conventional algorithms to offer financial advice that’s tailored to users’ personality types as defined by a psychometric questionnaire. But the hard part comes next, he says. “The big problem for most people when it comes to financial decision-making is not that they don’t know what the right thing to do is—it’s that they don’t get around to doing it.”

For Davies, good financial advice that works in the long term is crucial. That’s why the way in which companies communicate that advice is so important, he says. “And a huge part of that is around personalization, not necessarily the advice I give you, but how I portray that advice to you; how I talk to you.”

That might mean sending a reassuring email to someone as they see their energy bill payments rise and encouraging them to keep contributing money to their savings pots, even as their income is squeezed. “If I’ve got a ChatGPT interface I can go, ‘Here are the three bullets that I want to convey’, and at the push of a button, it can generate for me 100 versions of this email that will portray those three bullets to each of my clients in a way that is tailored to their preferences.”


Not only are banks experimenting with AI personalization to help people manage their money, retailers are also testing out ways to help consumers spend it. In April, German shopping platform Zalando announced plans to integrate ChatGPT so users could ask questions such as, “What should I wear to a friend’s wedding in Greece?” The chatbot would then give shoppable recommendations based on factors such as the weather.

Some analysts imagine that, in the future, users may be able to give natural language prompts that automatically fill whole shopping baskets with relevant goods. Perhaps you would type: “I am planning a dinner for four adults, and we want to eat spaghetti bolognese.” Push a button, and there are your ingredients plus a recipe. Think of it as a new form of retail interface.

An obvious evolution of this idea would be to make it possible to pay by natural language, too. Chat-based commerce already exists. US-based Clickatell has developed a chat commerce platform that enables brands to use chatbots to communicate with customers and send them payment links over popular messaging apps, such as WhatsApp. A brand might use WhatsApp to message a customer if an item on their wishlist is available: “Hello, Marian—the yellow high-top sneakers you wanted are back in stock! You’re a size six, right? Would you like to go ahead and order a pair?” If Marian says yes, the chatbot could send her a secure and personalised payment link in the WhatsApp chat.

Chat commerce is innovating fast. In April, Clickatell launched a new feature so users could permit a company’s chatbot to save their payment details using technology called “card tokenization”. Card tokenization is a security measure that replaces card details with a series of algorithmically generated numbers called a token. That token can then be transmitted over the internet without revealing a person’s real payment details. For Clickatell, this streamlines the user experience, as customers no longer need to leave WhatsApp to make their payment. Instead, the chatbot would ask: “Hi, you have left an item in your shopping cart. Would you like to pay for it now using your card ending in 1395, YES OR NO?” If they answer YES, their card will be charged automatically.
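The core of tokenization can be shown in a few lines. This is a deliberately simplified sketch: real deployments rely on a certified token service provider with a hardened vault, not an in-memory dictionary, and the card number below is a made-up example.

```python
import secrets

# The "vault" maps tokens back to real card numbers. Only the
# tokenization provider ever sees this mapping.
vault = {}

def tokenize(card_number):
    """Replace a card number with a random 16-digit token."""
    token = "".join(str(secrets.randbelow(10)) for _ in range(16))
    vault[token] = card_number
    return token

def charge(token, amount):
    """Only the vault holder can resolve the token to a card."""
    card = vault[token]
    return f"charged {amount} to card ending in {card[-4:]}"

token = tokenize("4111222233331395")
print(token)                 # a random digit string, reveals nothing
print(charge(token, "£25"))  # charged £25 to card ending in 1395
```

If the token is intercepted in transit, it is useless to a thief: it carries no card data and can typically be restricted to a single merchant or transaction.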

For German software provider Serrala, chats are one of several channels through which it enables the online payment of bills and reminders—others include email and SMS. Whenever a bill isn’t paid automatically, companies need to reach people and get them to take care of payment. “That’s a very different paradigm than shoppers visiting your web shop or customer contact,” says the company’s Solution Architect Jeroen Dekker. “So the machine learning here is not about the payment itself, but about deciding the right message, place, and time for each interaction to drive attention and conversion.”


There’s a normal way to behave in a shop. Perhaps you walk through the front door, browse the products, maybe experience a moment of indecision, then make a payment. Online it’s no different. “There are certain characteristics that we can identify as harmless and likely to have been authorised by the individual,” says David Britton, Vice President of Strategy, Global Identity and Fraud at Experian, which uses AI to detect fraud on behalf of its retail or banking clients. It’s therefore potentially significant when standard patterns aren’t followed.

“Perhaps they don’t come through the front door, they enter the store through the equivalent of a side window,” says Britton. “And then instead of doing some shopping and putting some things in their cart and taking things out of a cart, they go right to the most expensive item, grab three of them and then go right to the checkout.”

To identify suspicious behaviour such as this, Experian uses a tandem of supervised machine learning (AI designed to look for characteristics known to be associated with fraud), and unsupervised machine learning (where AI is searching for signs of fraud not yet discovered). To find these patterns, algorithms search through mountains of payments data as well as information on factors such as typing speed and web-page navigation.
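The tandem can be sketched as follows, with all thresholds and figures invented: the “supervised” side flags characteristics already known to correlate with fraud, while the “unsupervised” side flags statistical outliers that no one has labelled yet.

```python
from statistics import mean, stdev

def supervised_flags(session):
    """Flag traits known from labelled fraud cases (invented rules)."""
    flags = []
    if session["typing_ms_per_key"] < 30:   # inhumanly fast typing
        flags.append("bot-like typing speed")
    if session["pages_visited"] <= 2:       # straight to checkout
        flags.append("skipped normal browsing")
    return flags

def unsupervised_flag(amount, history):
    """Flag amounts more than 3 standard deviations from the norm."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > 3 * sigma

# An invented customer history and a suspicious-looking session.
history = [20, 35, 25, 40, 30, 28, 33]
session = {"typing_ms_per_key": 12, "pages_visited": 1, "amount": 2000}

print(supervised_flags(session))
# ['bot-like typing speed', 'skipped normal browsing']
print(unsupervised_flag(session["amount"], history))  # True
```

Production systems replace these toy rules with models trained on billions of events, but the division of labour is the same: known patterns caught directly, unknown ones surfaced as anomalies.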

The problem is that AI is enabling an army of digital proxies to act on consumers’ behalf, and this is muddying the definition of what legitimate activity looks like. As we have seen, AI tools can find products we might like, notify us of payment options, or simply tell us what’s going on in our bank accounts. Bank aggregator software, which gathers information from a user’s multiple bank accounts in one place, is also growing in popularity. “Every time those systems go and look at your bank account balance, they’re acting as your proxy and logging in as you,” says Britton. Right now, he is preoccupied with a question: “How do I know that that particular bot or gen-AI entity has truly been authorised to go and act on a person’s behalf?” That unknown makes it challenging to learn what’s normal and use that information to sift out red-flag behaviour.

AI is also going to give fraudsters more tools to beat standard security measures, in turn putting yet more pressure on AI to detect ever subtler anomalies in the data. “Generative AI can be used to develop materials to move fraud forward at an industrial pace,” says Britton, who is already seeing criminals use the technology to generate fake identities, backed up with fake social media accounts and fake ID documents.

Many fraud detection companies are also witnessing criminals quickly adapt to using gen-AI. Fraudsters have been creating 2D and 3D deepfake masks to trick facial recognition security systems into authenticating payments or establishing fraudulent accounts. To try to stop this happening, identity verification company Onfido has been using gen-AI itself. “We use generative AI in the same way as fraudsters do—primarily to generate deepfakes using open source methods that spoofers are already using themselves to generate data,” says Therese Stowell, the company’s Vice President of Identity Verification. These can then be used as training data for anti-spoofing systems. “This helps our AI models distinguish between genuine faces and deepfake spoofs.”

It taps into a wider trend in the anti-fraud world: Using gen-AI to produce “synthetic data” of all stripes to better train machine learning-based tools. This type of work is important to combat overfitting, Stowell says. Overfitting is when an AI model relies too much on available training data, and struggles to detect new, emerging anomalies. In deepfake detection, for example, that means some AI models can rely too heavily on characteristics known to be suspicious, such as the position of a fake face within the camera frame. But position is easy for fraudsters to modify, she says. “So supposedly state-of-the-art defensive AI models often fail. As a result, we are using generative AI to help train our models with broader, more varied data to ensure simple adjustments do not bypass them.”
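The augmentation idea can be illustrated with a toy sketch (all field names and values invented): if a detector has overfitted to spoof faces appearing at one position in the frame, generating position-jittered variants of each training sample forces it to learn cues that survive such trivial changes.

```python
import random

def augment(sample, n_variants=5, max_shift=40, seed=0):
    """Create position-jittered synthetic copies of one training sample."""
    rng = random.Random(seed)  # seeded for reproducibility
    variants = []
    for _ in range(n_variants):
        copy = dict(sample)
        copy["face_x"] = sample["face_x"] + rng.randint(-max_shift, max_shift)
        copy["face_y"] = sample["face_y"] + rng.randint(-max_shift, max_shift)
        variants.append(copy)
    return variants

# One labelled deepfake sample becomes six training examples.
deepfake = {"label": "spoof", "face_x": 120, "face_y": 80}
for variant in augment(deepfake):
    print(variant)
```

Real pipelines go much further, using generative models to vary lighting, pose and texture rather than just coordinates, but the goal is identical: deny the detector any shortcut a fraudster could cheaply sidestep.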

Fraudsters are using AI to create deepfake facial IDs


As AI excitement grips the fintech world, there are warnings that the technology needs to be deployed in a way that’s transparent about what it is and how it works. “You’ve got this black box problem with AI models,” cautions Davies, of Oxford Risk. This is when a user can see the input and the output of an AI model but doesn’t know how it arrived at its decisions. As a result, you need to be clear to users about the tool’s limitations and—ideally—find ways to show the working behind those decisions.

“We’re starting to go down the route of potentially bespoke financial services,” says Bonnie Buchanan, Head of the Department of Finance and Accounting at the University of Surrey. This poses an ethical challenge for the industry. That bespoke element means it is harder to compare different outcomes. If two people apply to make a payment using a finance product such as BNPL, which uses AI to check their credit score, and one is rejected, how do they know what this was based on? “You just get a ‘computer-says-no’ response,” she says. “But it needs to be explainable.”

The payments industry has come a long way—from the Diners Club cardboard credit card to biometric security. For years, AI in payments has been most associated with efforts to fight fraud, such as the techniques used by companies like Experian to detect suspicious characteristics in huge mountains of data. But for consumers going about their legitimate business, these innovations have mostly been invisible. Thanks to recent advancements in AI, however, they will likely start to have a more visible impact. And the increasingly consumer-facing role of the payments industry means that it is now at the forefront of AI experimentation. For the moment, this is less about automating jobs than it is about providing new tools to allow humans to do their jobs better.

“When I was in payments 20 years ago, it was seen as a back office function, we were the plumbers of financial services,” says The Payments Association’s Tony Craddock. “But payments are seen as where the money is now, because it’s the enabler of the movement of money. It’s the point at which the value exchange between two parties takes place. And when you overlay artificial intelligence onto payments, then this makes for a very, very exciting future.”