AI is Transforming Fraud Detection in the Financial Services Industry

The age of artificial intelligence (AI) is here and, fortunately, Hollywood’s worst nightmares have yet to materialise - no time-travelling killer robots, wholesale enslavement of the human race or conversational bombs. Its impact on the financial services industry, however, has been substantial and beneficial. Indeed, this particular industry has been leading the application of AI in business, regularly finding new and innovative applications for the ever-developing technology.
One of the most important applications, particularly for customers, has been in fraud detection. While developments in digital banking and payments have made financial dealings faster and easier than ever, they have also brought their own new risks. Not least of these is the fact that there are now so many transactions taking place so quickly that it would be impossible for any analogue system to keep up. Even if such a system did catch some fraudulent activity in your account, traditional processes of review and resolution typically take months to complete - far too slow for the modern pace of finance and life generally.
But what are the ways in which AI is making a difference in fraud detection? That’s what we’re going to look at now.
What exactly is artificial intelligence?
It’s worth noting, at this point, that there is a very good reason why the computer programmes running in banks and other financial services firms have not yet overthrown their human overlords - they are simply not that sort of AI. These are not brains in boxes and they are certainly not sentient. Instead, it would be more accurate to say that they are examples of machine learning.
By scanning huge quantities of banking data, an AI can learn what is and is not typical behaviour for a specific customer. For example, it may notice a large expenditure each week at grocery stores as the customer in question does their weekly food shopping. They might regularly make large purchases at clothing stores or only use ATMs in a specific part of a specific country. Through this data, the AI gets a general impression of the customer’s typical spending habits.
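To make this concrete, here is a minimal Python sketch of what such a spending profile might look like. The sample transactions, field names and summary statistics are purely illustrative assumptions for this example, not taken from any particular bank’s system.

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative transaction records; a real system would read these from the
# bank's transaction history rather than a hard-coded list.
transactions = [
    {"customer": "c1", "category": "groceries", "amount": 82.40, "country": "GB"},
    {"customer": "c1", "category": "groceries", "amount": 76.10, "country": "GB"},
    {"customer": "c1", "category": "clothing",  "amount": 145.00, "country": "GB"},
    {"customer": "c1", "category": "atm",       "amount": 50.00, "country": "GB"},
]

def build_profile(history):
    """Summarise a customer's typical behaviour from past transactions."""
    amounts_by_category = defaultdict(list)
    countries = set()
    for tx in history:
        amounts_by_category[tx["category"]].append(tx["amount"])
        countries.add(tx["country"])
    profile = {"countries": countries, "categories": {}}
    for category, amounts in amounts_by_category.items():
        profile["categories"][category] = {
            "mean": mean(amounts),
            # stdev needs at least two samples; fall back to 0 otherwise
            "stdev": stdev(amounts) if len(amounts) > 1 else 0.0,
            "count": len(amounts),
        }
    return profile

profile = build_profile(transactions)
print(profile["categories"]["groceries"])  # e.g. the typical weekly food shop
```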
With this impression formed, it becomes much easier for the AI to detect activity that is out of the ordinary - perhaps there’s a huge cash withdrawal in a different country without the customer having alerted their bank that they are going on holiday. Perhaps there’s a sudden huge expenditure on something that the customer has never previously purchased, or from a shop they don’t generally use, delivering to an address that is not their own. These would indicate that the account has been compromised and the customer has fallen victim to fraud, at which point the AI can automatically freeze the account to prevent further loss.
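Continuing the illustrative sketch above, the following shows how a new transaction might be scored against that profile and how a freeze could be triggered. The thresholds and the “two red flags means freeze” rule are assumptions made for the example, not a description of any real bank’s policy.

```python
def score_transaction(tx, profile, z_threshold=3.0):
    """Return the reasons a transaction looks unusual for this customer."""
    reasons = []
    if tx["country"] not in profile["countries"]:
        reasons.append("country the customer has never used")
    stats = profile["categories"].get(tx["category"])
    if stats is None:
        reasons.append("spending category the customer has never used")
    elif stats["stdev"] > 0 and (tx["amount"] - stats["mean"]) / stats["stdev"] > z_threshold:
        reasons.append("amount far above the customer's normal spend")
    return reasons

def handle_transaction(tx, profile):
    reasons = score_transaction(tx, profile)
    if len(reasons) >= 2:
        return ("freeze_account", reasons)   # several independent red flags
    if reasons:
        return ("flag_for_review", reasons)
    return ("approve", reasons)

# A profile in the same (illustrative) shape as the previous sketch produces.
example_profile = {
    "countries": {"GB"},
    "categories": {"groceries": {"mean": 80.0, "stdev": 5.0, "count": 10}},
}
suspicious = {"category": "electronics", "amount": 2400.0, "country": "BR"}
print(handle_transaction(suspicious, example_profile))  # ('freeze_account', [...])
```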
Rapid responses
It is possible for the process outlined in the previous paragraph to backfire and cause inconvenience to the customer. Perhaps they went on holiday without telling the bank first and urgently need to withdraw cash, or maybe they were buying a birthday gift for a friend or family member whose tastes are very different to their own. More powerful algorithms have made even these false positives less likely to occur.
As an example of this, start-up bank Monzo had a bit of a PR disaster at the start of 2020 when it began freezing customer accounts in response to its AI detecting unusual transactions, many of which turned out to be genuine. Perhaps the system was a little too sensitive and failed to adapt to the ‘new normal’ that COVID-19’s emergence created.
One way to prevent such inconvenience is by limiting the response. While freezing all transactions will certainly prevent significant loss from a compromised account, it will also cause the most disruption in cases of false positives. Companies like Datavisor provide software that tracks fraudulent transactions; they claim 90 per cent accuracy and 30 per cent greater effectiveness than traditional fraud detection solutions, and they also offer real-time alerts. This not only means that genuine transactions can still be processed, but also that fraudulent transactions may be stopped before they have even been completed.
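The sketch below illustrates the general idea of limiting the response to a single transaction rather than freezing the whole account. It does not use Datavisor’s actual API; the scores, thresholds and function names are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    APPROVE = auto()
    HOLD_AND_ALERT = auto()   # hold just this transaction and alert the customer in real time
    DECLINE = auto()

@dataclass
class Decision:
    action: Action
    reason: str

def decide(fraud_score: float) -> Decision:
    """Map a model's fraud score (0-1) to a transaction-level response.

    The thresholds here are illustrative; a real deployment would tune them
    against historical false-positive and loss data.
    """
    if fraud_score >= 0.95:
        return Decision(Action.DECLINE, "score above hard decline threshold")
    if fraud_score >= 0.70:
        return Decision(Action.HOLD_AND_ALERT, "suspicious but not conclusive - ask the customer")
    return Decision(Action.APPROVE, "within normal range")

def notify_customer(decision: Decision) -> None:
    # Placeholder: in practice this would push an in-app or SMS alert so the
    # customer can confirm or reject the held payment within seconds.
    print(f"Alert sent: {decision.reason}")

d = decide(0.82)
if d.action is Action.HOLD_AND_ALERT:
    notify_customer(d)
```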
Alternatively, companies like Accenture offer an approach that inserts a team of human analysts between fraud detection and the start of any preventative action. Detected activity that is highly likely to be fraudulent (like money being withdrawn in a country on the other side of the planet at the same time as the account is being used in its home country) gets an immediate response, such as locking the account, while activity that is more nuanced and subtle is analysed and checked. Cases deemed fraudulent can then be escalated, while those deemed merely abnormal can be ignored.
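A rough sketch of that kind of human-in-the-loop triage might look like this. Again, this is not Accenture’s actual system; the score thresholds and the “impossible travel” flag are assumptions chosen purely for illustration.

```python
from queue import Queue

analyst_queue: Queue = Queue()

def route_alert(alert: dict) -> str:
    """Route a detected anomaly: auto-block the obvious, send the nuanced to analysts.

    'score' is assumed to come from the detection model; 'impossible_travel'
    flags the account being used in two distant countries at once.
    """
    if alert.get("impossible_travel") or alert["score"] >= 0.99:
        return "auto_lock_account"          # clear-cut: respond immediately
    if alert["score"] >= 0.60:
        analyst_queue.put(alert)            # nuanced: a human decides
        return "queued_for_analyst"
    return "ignored"                        # merely abnormal, not suspicious

print(route_alert({"score": 0.99, "impossible_travel": False}))  # auto_lock_account
print(route_alert({"score": 0.75, "impossible_travel": False}))  # queued_for_analyst
print(route_alert({"score": 0.30, "impossible_travel": False}))  # ignored
```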
Other kinds of fraud
Card skimming and hacked accounts are not the only forms of fraud. Insurance fraud costs up to $80 billion per year through a number of scams. Machine learning can help here too, in broadly the same way as it does for fraudulent transactions - by spotting anomalous claims that are likely to be fraudulent and highlighting them for human review.
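One common way to spot anomalous claims is an unsupervised anomaly detector such as scikit-learn’s IsolationForest. The sketch below uses made-up claim features and is only meant to illustrate the idea of flagging outliers for human review, not any insurer’s actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative claim features: [claim_amount, days_since_policy_start, prior_claims]
# Real insurers would use far richer features; these are placeholders.
claims = np.array([
    [1200,  400, 0],
    [ 900,  650, 1],
    [1500,  300, 0],
    [1100,  800, 2],
    [ 950,  500, 0],
    [9800,   12, 4],   # large claim made very soon after taking out the policy
])

# Unsupervised anomaly detector: isolates points that look unlike the rest.
model = IsolationForest(contamination=0.15, random_state=0).fit(claims)
flags = model.predict(claims)          # -1 = anomalous, 1 = normal

for claim, flag in zip(claims, flags):
    if flag == -1:
        print(f"Flag for human review: {claim.tolist()}")
```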
Turkish insurer AK Sigorta claims to have improved their fraud detection by 66 per cent through the use of such technology. Their predictive analytics tool can determine whether or not a claim requires investigation within about eight seconds of receiving it - far quicker than an all-human claims team could manage.
AI has still more applications in the world of financial services that help to reduce the risk of fraud. Plenty of financial services companies have started using biometric security like fingerprint scans, facial recognition and retina scans in their apps to help keep customer data secure. Barclays bank is going even further by partnering with Hitachi to deploy finger vein scanning systems.
Data is the key
As stated above, machine learning systems require data to form an impression of each customer’s typical routines in order to spot activity that does not fit the pattern. While much of this will inevitably be financial data, it can also include other sources such as social media activity, credit agency data and other open sources. In some cases, AIs use signals like the time it takes to complete a transaction on an online store, since a transaction completed by a bot would be near-instant while a human takes time to fill in all the details. The challenge for the developers of the technology lies in making it possible for the AI to draw on such a wide variety of sources but, once that is achieved, the results are likely to be more accurate.
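As a simple illustration, the sketch below merges signals from several hypothetical sources - the transaction itself, the checkout session and an open-data lookup - into a single set of features. All of the field names are assumptions made for this example.

```python
from datetime import datetime, timezone

def build_features(transaction: dict, session: dict, open_data: dict) -> dict:
    """Merge signals from several sources into one feature set.

    All field names here are illustrative: 'checkout_seconds' stands in for how
    long the buyer took to fill in the payment form (a bot is near-instant),
    and 'email_age_days' for an open-source signal about the account used.
    """
    return {
        "amount": transaction["amount"],
        "hour_of_day": datetime.fromtimestamp(
            transaction["timestamp"], tz=timezone.utc
        ).hour,
        "checkout_seconds": session["checkout_seconds"],
        "likely_bot": session["checkout_seconds"] < 2.0,   # humans take longer
        "email_age_days": open_data.get("email_age_days", 0),
    }

features = build_features(
    {"amount": 250.0, "timestamp": 1700000000},
    {"checkout_seconds": 0.4},
    {"email_age_days": 3},
)
print(features)  # likely_bot: True - worth extra scrutiny
```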
Initially, the AI will need human input to determine which data is significant and how to interpret it. Over time, however, it can begin to establish its own rules and contexts. Eventually, it can determine which issues represent what level of risk to the company, prioritising them and sending those of most significant concern immediately for investigation, while those of least concern can either be dealt with autonomously or saved for later consideration.
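A minimal way to express that prioritisation is a simple priority queue ordered by risk score, as in the sketch below; the risk scores and the cut-off for immediate investigation are, of course, illustrative.

```python
import heapq

# Each flagged issue carries a risk score set by the model; higher = riskier.
# heapq is a min-heap, so the score is negated to pop the riskiest issue first.
issues = [
    {"id": "txn-104", "risk": 0.97},
    {"id": "claim-31", "risk": 0.42},
    {"id": "txn-207", "risk": 0.71},
]

queue = [(-issue["risk"], issue["id"]) for issue in issues]
heapq.heapify(queue)

HIGH_RISK = 0.9   # illustrative cut-off for immediate investigation
while queue:
    neg_risk, issue_id = heapq.heappop(queue)
    risk = -neg_risk
    if risk >= HIGH_RISK:
        print(f"{issue_id}: send to investigators now (risk {risk:.2f})")
    else:
        print(f"{issue_id}: handle autonomously or park for later (risk {risk:.2f})")
```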
AI cannot claim to make fraud entirely a thing of the past, and banks will inevitably have to deal with some level of it for the time being, but it can at least significantly reduce it. The shift towards digital banking has created its own challenges and new opportunities for criminal activity, but it has also created the technology to counter that activity. As an added bonus, AI can often reduce the cost of checking for and investigating fraud by removing the need for huge teams of humans to check every transaction and insurance claim, leaving only a handful of investigators to look into those issues the AI deems suspicious.