Generative AI has been making waves since 2018, following the release of OpenAI’s first Generative Pre-trained Transformer (GPT). Users have been playing around with AI for years, perhaps without realising that it would one day turn serious.

With the launch of the more advanced GPT-4 in 2023, professionals began to explore its applications across various fields, including compliance. At Salv, we quickly recognised its potential and want to share our insights with you.

This blog covers the main practical use cases for generative AI in AML compliance, the challenges we encountered while using it at Salv, and how we addressed them. Let’s get started.

AI in AML compliance

Long before generative AI and ChatGPT became popular, AI was being pitched in AML compliance as the ultimate value proposition. It promised solutions that weren’t quite there yet in the real world: the level of accuracy everyone hoped for was missing, but the idea was that automation and gradual improvement through model training would offset these early shortcomings, even if it meant dealing with a few mistakes along the way.

AI algorithms rely on high-quality data, but providing that data was often a challenge, and poor data could lead to consistently inaccurate outcomes. Even with automation easing the compliance teams’ workload, there was still a clear need for human judgement to interpret the AI’s conclusions. A bit of online research will show you examples of high-street banks getting into trouble over failures in anti-money laundering processes, such as insufficient transaction monitoring or failing to report suspicious activity, despite adopting AI-powered compliance systems.

Read about the pros and cons of AI/ML for AML compliance, written by Jeff, our COO.

But it’s not all bad news. With the arrival of advanced generative AI models, there are more ways to save time and effort without compromising on the quality of the output. The challenges are still there, but now we know how to overcome them. So let’s not overlook the many applications of generative AI in compliance that are already available to you.

Generative AI use cases in AML compliance

Managing transaction monitoring alerts

Going through lengthy notes post-investigation can be cumbersome. Use Generative AI & ChatGPT to summarise the notes, prioritise and review alerts, and assign classifications efficiently. This way, you can speed up the review process and make sure that critical alerts are addressed promptly.
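To make this concrete, here is a minimal sketch of the summarisation step using the OpenAI Python client. The model name, the prompt wording, and the summarise_alert_notes helper are illustrative assumptions, not a description of Salv’s product.

```python
# Minimal sketch: summarise an alert's investigation notes and suggest a
# priority. Model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_alert_notes(alert_notes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an AML analyst. Summarise the investigation notes "
                    "in three bullet points and suggest a priority "
                    "(low / medium / high) with a one-line reason."
                ),
            },
            {"role": "user", "content": alert_notes},
        ],
    )
    return response.choices[0].message.content

print(summarise_alert_notes("Customer received three incoming transfers from new counterparties..."))
```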

Fine-tuning monitoring rules for fewer false positives

Custom monitoring rules can significantly reduce false positive alerts. Generative AI can enhance this process by using your existing rules, along with the alerts they generate, to train the model. Generative AI can spot the frequent patterns and help you analyse the reasons behind the recurring false positives. Through continuous improvement and recommendations, Generative AI adds context to these scenarios, demonstrating how they can be refined for better accuracy.

Using this method can save you a lot of time, because editing and improving the AI’s output is much quicker than starting from scratch. This is one of the most evident applications of generative AI in compliance; we’ve tried it ourselves and can say it really works. 👍
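As a rough illustration of that workflow, the sketch below hands an existing rule and a sample of its false-positive alerts to the model and asks for refinement ideas. The rule text and the alert fields are made-up examples, not our actual schema.

```python
# Sketch: give the model an existing rule plus a sample of its false-positive
# alerts and ask what recurring pattern explains them. All data is invented.
import json
from openai import OpenAI

client = OpenAI()

rule = "Flag any customer whose outgoing transfers exceed 10,000 EUR in 24 hours."
false_positives = [
    {"customer_type": "corporate", "amount": 12_000, "counterparty": "payroll provider"},
    {"customer_type": "corporate", "amount": 15_500, "counterparty": "payroll provider"},
]

prompt = (
    "Here is a transaction-monitoring rule and a sample of alerts it raised "
    "that analysts closed as false positives.\n\n"
    f"Rule: {rule}\n"
    f"False positives: {json.dumps(false_positives, indent=2)}\n\n"
    "What recurring pattern explains the false positives, and how could the "
    "rule be refined to exclude it without losing genuine risk coverage?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)
```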

Advanced name matching in AML name screening

Generative AI can significantly improve name matching by leveraging vector databases. Unlike traditional relational databases, vector databases store a numerical “fingerprint” (an embedding) for each name, enhancing the precision of matches. That fingerprint can capture name characteristics, cultural connotations, popularity variations across regions, and more.

For instance, the Estonian conductor “Anu Tali” would not be mistakenly matched with “Abu Talib” using vector-based matching. This method can potentially reduce false positives compared to simpler fuzzy-matching logic by integrating deeper cultural and contextual understanding.
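Here is a hedged sketch of the underlying idea: represent each name as an embedding and compare embeddings with cosine similarity instead of character-level fuzzy matching. The embedding model name is an assumption; any text-embedding model that handles names reasonably well would do.

```python
# Sketch of vector-based name matching: embed names, then compare with
# cosine similarity rather than edit distance. Model choice is an assumption.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(names: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=names)
    return np.array([item.embedding for item in resp.data])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = embed(["Anu Tali", "Abu Talib"])
# Expect this pair to score lower than true variants of the same name,
# even though simple fuzzy matching finds them suspiciously similar.
print(cosine(vectors[0], vectors[1]))
```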

Anomaly detection

Anomaly detection involves identifying patterns that deviate from the norm. Using classical machine learning techniques, you can segment your customers based on their transaction behaviour, together with signals such as true-positive alerts or at least a filed SAR. Generative AI can then provide an explainable analysis of how certain activities diverge from those of a similar cohort; such a deviation indicates a potential anomaly worth investigating.
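A minimal sketch of that pipeline, assuming a handful of illustrative behavioural features: a classical model flags the outliers, and the cohort statistics are what you would hand to a generative model for a plain-language explanation.

```python
# Sketch: classical anomaly detection on simple behavioural features,
# followed by the cohort context an LLM would need to explain the deviation.
# Feature names and sample values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# rows: customers; columns: monthly txn count, avg amount (EUR), share of cross-border txns
X = np.array([
    [12, 250.0, 0.05],
    [15, 300.0, 0.10],
    [11, 220.0, 0.02],
    [14, 280.0, 0.08],
    [13, 260.0, 0.06],
    [95, 9_500.0, 0.90],   # behaves very differently from the cohort
])

model = IsolationForest(contamination=0.2, random_state=0).fit(X)
flags = model.predict(X)              # -1 = anomaly, 1 = normal
print(np.where(flags == -1)[0])       # indices of customers worth a closer look

# Cohort means give the generative model the context for an explainable
# write-up, e.g. "95 txns/month vs a cohort average of 13".
cohort_mean = X[flags == 1].mean(axis=0)
print(cohort_mean)
```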

Feature engineering

Generative AI facilitates feature engineering by brainstorming potential indicators of money laundering risk. This way, you don’t need to start from scratch, but rather generate and iterate on ideas using AI, thus enhancing the effectiveness of your AML compliance systems.
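For example, the sketch below asks the model for candidate indicators and then implements one of them by hand after review. The column names, the dataset, and the high-risk country list are hypothetical.

```python
# Sketch: brainstorm candidate risk indicators with the model, then implement
# a reviewed one as an ordinary feature. All data below is invented.
import pandas as pd
from openai import OpenAI

client = OpenAI()

idea_prompt = (
    "Suggest 10 transaction-level features that could indicate money "
    "laundering risk for a retail payments dataset with columns: "
    "customer_id, amount, timestamp, counterparty_country."
)
ideas = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": idea_prompt}],
).choices[0].message.content
print(ideas)

# One such feature, implemented after human review: the share of a customer's
# volume going to high-risk countries (country list is illustrative).
txns = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "amount": [100.0, 900.0, 50.0, 60.0],
    "counterparty_country": ["EE", "XX", "EE", "FI"],
})
HIGH_RISK = {"XX"}
txns["high_risk_amount"] = txns["amount"].where(txns["counterparty_country"].isin(HIGH_RISK), 0.0)
totals = txns.groupby("customer_id")[["high_risk_amount", "amount"]].sum()
print(totals["high_risk_amount"] / totals["amount"])
```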

Challenges of AI in compliance & how to address them

Challenge: inaccurate output

If you’ve ever used ChatGPT to generate SQL-based monitoring rules, you know that even when a query is syntactically correct and looks like SQL, the business logic can be wrong. It’s good to be aware of this, because in AML compliance incorrect rules will likely lead to wrong business decisions. Thankfully, there is a straightforward remedy.

Solution: validating output

Adopt a mindset where validation is one of the most important steps: always check whether the output still aligns with the business logic. Generating and validating outputs go hand in hand; only together do they produce reliable results.
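One practical way to do this for generated SQL rules is to back-test them against a small, labelled fixture before they go anywhere near production. The rule and the fixture below are made up; the point is that syntactically valid SQL still has to reproduce the decisions an analyst expects.

```python
# Sketch: validate a generated SQL rule against a tiny labelled fixture.
# The rule, table, and expected outcome are invented for illustration.
import sqlite3

generated_rule = """
SELECT customer_id
FROM transactions
GROUP BY customer_id
HAVING SUM(amount) > 10000
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [(1, 6000), (1, 5000),   # should be flagged: 11,000 total
     (2, 4000), (2, 3000)],  # should not: 7,000 total
)

flagged = {row[0] for row in conn.execute(generated_rule)}
expected = {1}
assert flagged == expected, f"rule disagrees with analyst expectation: {flagged}"
print("rule matches expected alerts on the fixture")
```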

Challenge: hallucinations

Hallucinations are outputs containing details that simply aren’t real, which can be genuinely confusing. The risk is high because the generated text may seem logical and confident yet be completely unfounded.

It’s worth noting that working with these models should be a dialogue and involve mutual experimentation. Don’t expect the perfect answer immediately.

Solution: test & adjust your prompts

You will get more coherent results by being strict with your prompts. However, everything is constantly changing; what worked last week may not work this week. Test, iterate, and constantly adjust your prompts.
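A small evaluation harness helps here: run each prompt variant over a handful of labelled cases and compare the scores, so that “test and adjust” becomes a measured loop rather than a feeling. The cases and the classification task below are illustrative.

```python
# Sketch: compare prompt variants on labelled examples and keep the winner.
# Cases, labels, and prompt texts are invented for illustration.
from openai import OpenAI

client = OpenAI()

cases = [
    {"note": "Customer explained the transfer; payroll from a known employer.", "label": "close"},
    {"note": "Rapid pass-through of funds to a newly opened account abroad.", "label": "escalate"},
]

prompts = {
    "v1": "Classify this AML alert note as 'close' or 'escalate'. Answer with one word.",
    "v2": "You are an AML analyst. Decide whether this alert should be closed or "
          "escalated. Answer strictly with 'close' or 'escalate'.",
}

for name, system in prompts.items():
    correct = 0
    for case in cases:
        answer = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": case["note"]}],
        ).choices[0].message.content.strip().lower()
        correct += answer == case["label"]
    print(f"{name}: {correct}/{len(cases)} correct")
```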




But there is one challenge you can’t do much about, for now…

The reality we face with OpenAI’s technology is that its architecture remains a mystery to many, leaving us without clear oversight of our data and how it may be used in the future. The potential for unintended data leaks is an ongoing concern, given the somewhat opaque nature of data handling within this technology, and it is possible that some generated outputs could be inadvertently accessible to others. There is also a current legal dispute between OpenAI and the New York Times, highlighting concerns over the use of the newspaper’s content in training AI models and allegedly causing it “billions of dollars in statutory and actual damages”.

The advice? Proceed with caution and prioritise data privacy, especially with sensitive information like customer names. As we venture into this emerging field, we are likely to face challenges that we have never seen before. We are paving the road as we walk on it.


In many ways, generative AI and ChatGPT operate like a black box. The challenges remain the same: the quality of input and output, the need to test, iterate, and constantly adjust the prompts, and keeping a human in the loop to validate generated outputs and see how well they align with business logic.

Even given the challenges, there are many applications for generative AI in AML compliance that can’t wait. Because criminals never wait, and they are usually early adopters of new technologies.

We’ve discussed using generative AI to produce more accurate monitoring rules and reduce false positive alerts, prioritise and review alerts, and improve name matching in screening results. These use cases are not exhaustive, and with time we will learn more applications as new problems arise and we seek better, more efficient solutions.

It feels like building a plane while flying it.

If you like what you just read, and want to know how we make screening and monitoring processes more effective, let’s talk.

