Financial Services Compliance Blog - Thistle Initiatives

Building Confidence in Sanctions Screening: Testing, Assurance, and Synthetic Data

Written by Jessica Cath | Oct 29, 2025 12:00:27 PM

Thistle Initiatives recently hosted a webinar (watch the session on-demand here) on conducting effective testing of sanctions screening systems, tools, and processes. Jessica Cath (Managing Partner at Thistle Initiatives) was joined by an incredible panel of speakers to cover everything from test scenarios to synthetic data.

This article summarises the key findings and takeaways:

1. Why testing and assurance matter

The Financial Conduct Authority (FCA) has increased expectations around sanctions screening assurance. Under its Sanctions Modular Assessment Proactive Programme (SMAPP), over 90 firms were tested using a synthetic dataset of more than 100,000 names.

The aim was to assess whether firms understood how their tools worked and whether configurations met regulatory expectations. Thistle has supported various firms subject to the SMAPP (some of which are still engaging with the follow-up process). Key learnings from the SMAPP set out the FCA’s expectations and the risks of getting it wrong.

Understanding the system:

Firms must demonstrate a full understanding of why their system produces particular results, even if they use a third-party tool. Firms should be able to explain differences in results, thresholds, and match logic, in line with their sanctions risk exposure. For example, if you have switched off vessel matching because you only service retail individuals, then you must document the evidence and explain it.

“The FCA really expects firms to know what they were screening and, unfortunately, many simply couldn’t explain why the tool was producing certain results.”

Jessica Cath, Thistle Initiatives

Assuring the system:

Testing of your screening solution is essential to evidence that your configuration, tuning, and governance align with your stated risk appetite, both at the time when the tool was implemented and on an ongoing basis. Even where you use a third-party solution, you own the sanctions risk.

“Ultimately, when it comes to the difficult situations, it’s you stood in front of the regulator, not your vendor.”

Will Monk, Napier AI

Fines if you get it wrong:

Where sanctions screening is not implemented correctly, failures have been costly. Examples range from Starling Bank’s £29m fine for incomplete list coverage to Office for Financial Sanctions Implementation (OFSI) penalties for breaches caused by process gaps.

“The expectations have been raised in this area, and as a result, the risks are also high – fines, reputational damage, all of it.”

James Dodsworth, Thistle Initiatives

2. Approaches to testing and the role of synthetic data

With a clear understanding of the need to conduct testing and assurance of sanctions screening tools and configuration, the panel discussed practical approaches to building effective assurance plans.

Getting the basics right:

Before testing begins, firms need a solid foundation, including a defined policy, a clear sanctions risk appetite, documented control ownership, and clean datasets. The panel outlined the following as essential to a strong foundation:
 
  • Document your sanctions policy, clear risk appetite, and your sanctions risk assessment. Through the assessment of your sanctions risk exposure, you will set your screening list coverage, decide your screening frequency, and set basic standards for how screening will be executed.

“Do you have a screening policy? Do you know your risk appetite? What lists are you screening, how often, and for which products? That’s pillar number one.”

Will Monk, Napier AI

  • In building your sanctions procedures, you should map end-to-end processes to avoid duplicated tasks and inconsistent review of matches.
  • Finally, you should assess your data quality. No matter how good the screening tool is, it relies on the cleanliness of data inputs. If customer data is not captured clearly in appropriate fields, this may require a cleansing exercise to ensure names and identifiers are pulled into any screening tool effectively.  
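The data-quality point above can be made concrete with a small sketch. This is a minimal, illustrative example (the function name `normalise_name` and the specific cleaning steps are my assumptions, not anything prescribed by the panel) of the kind of cleansing a firm might apply so that names reach the screening tool in a consistent form:

```python
import re
import unicodedata

def normalise_name(raw: str) -> str:
    """Normalise a customer name field before it reaches the screening tool."""
    # Decompose accented characters and drop combining marks (e.g. "José" -> "Jose")
    text = unicodedata.normalize("NFKD", raw)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Strip punctuation that often pollutes name fields, keeping apostrophes and hyphens
    text = re.sub(r"[^\w\s'-]", " ", text)
    # Collapse repeated whitespace and standardise case
    return " ".join(text.split()).upper()

print(normalise_name("  José   O'Brien-Smith.  "))  # prints JOSE O'BRIEN-SMITH
```

In practice the cleansing rules should follow from the firm's own data issues, but the principle is the same: inconsistent inputs degrade even a well-tuned screening engine.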

Designing test scenarios:

Once you are ready to start testing, your test scenarios should be risk-based and realistic, reflecting the firm’s sanctions risk exposure, but also broad enough to capture emerging risks and meet FCA expectations (we know from the SMAPP testing that the test dataset was very broad).  

A few things to consider when designing your test scenarios:

  • Who are your customers and where are they based?
  • Who do your customers transact with, and where are they located?
  • What common naming structures and characters do you see in your customer base?
  • Do you onboard or transact with corporates and vessels, or just individuals?

“Think about your footprint — your client base, your geography, your products. You can focus on where the exposure really is but keep the data set broad enough for what might come tomorrow.”

James Dodsworth, Thistle Initiatives
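One way to keep scenarios risk-based and auditable is to record them as structured data rather than ad-hoc spreadsheet rows. The sketch below is purely illustrative (the `TestScenario` fields and IDs are hypothetical), showing how the questions above could translate into labelled test cases with expected outcomes:

```python
from dataclasses import dataclass

@dataclass
class TestScenario:
    """One risk-based screening test case (illustrative structure)."""
    scenario_id: str
    description: str
    entity_type: str        # "individual", "corporate", or "vessel"
    jurisdiction: str
    variation: str          # e.g. "exact", "typo", "transliteration"
    expected_match: bool    # should the screening tool alert?

scenarios = [
    TestScenario("TS-001", "Exact match against a listed individual",
                 "individual", "GB", "exact", True),
    TestScenario("TS-002", "Transliterated corporate name",
                 "corporate", "AE", "transliteration", True),
    TestScenario("TS-003", "Clean customer, no list match expected",
                 "individual", "GB", "exact", False),
]

# A quick coverage check: do the scenarios span the firm's entity types?
covered = {s.entity_type for s in scenarios}
print(sorted(covered))  # ['corporate', 'individual']
```

Recording expected outcomes alongside each scenario is what makes later results measurable rather than anecdotal.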

Using synthetic data effectively:

To conduct testing, firms should use synthetic data, crafted in alignment with their test scenarios. Synthetic data allows firms to generate controlled, labelled datasets without using real customer information, enabling scalable and safe testing.

The benefits of synthetic data sets:

  • Enables precise manipulation of data (typos, transliterations, OCR errors, swapped characters).

“Real data is messy. People mistype, systems corrupt characters, and bad formatting creeps in. Synthetic data lets you control those variables and really see what the tool is doing.”

Martyn Higson, FinCrime Dynamics

  • Supports repeatable, automated, and scalable testing programs.
  • Allows measurement of how systems handle known variations and stress conditions.

“Most firms are still doing testing with ten names in a spreadsheet. Synthetic data means you can scale that safely and automatically.”

Martyn Higson, FinCrime Dynamics
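The controlled variations mentioned above (typos, swapped characters, OCR errors) can be generated programmatically. Here is a minimal sketch, with hypothetical function names and a deliberately small OCR-confusion table, of how a firm might derive repeatable variants from a listed name:

```python
import random

# Common OCR confusions used to perturb names (illustrative subset)
OCR_SUBS = {"O": "0", "I": "1", "S": "5", "B": "8"}

def typo(name: str, rng: random.Random) -> str:
    """Delete one random character."""
    i = rng.randrange(len(name))
    return name[:i] + name[i + 1:]

def swap(name: str, rng: random.Random) -> str:
    """Transpose two adjacent characters."""
    i = rng.randrange(len(name) - 1)
    return name[:i] + name[i + 1] + name[i] + name[i + 2:]

def ocr_error(name: str, rng: random.Random) -> str:
    """Replace one character with a common OCR confusion, if any apply."""
    candidates = [i for i, ch in enumerate(name) if ch in OCR_SUBS]
    if not candidates:
        return name
    i = rng.choice(candidates)
    return name[:i] + OCR_SUBS[name[i]] + name[i + 1:]

def variants(name: str, seed: int = 42) -> list[str]:
    """Generate labelled variations of a listed name for testing."""
    rng = random.Random(seed)  # fixed seed -> repeatable test data
    return [typo(name, rng), swap(name, rng), ocr_error(name, rng)]
```

Because the seed is fixed, the same dataset can be regenerated for every test run, which is what makes testing repeatable and automatable at scale.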

Rules-based systems vs AI:

Screening engines may be built on rules-based logic, AI-driven matching, or a combination of both. AI is often used as an overlay to reduce false positives, improving outputs but often reducing explainability, making it harder to demonstrate that those outputs are accurate and appropriate.

“With older systems you can get your pen and paper out and show the regulator why it was 85% confidence. You can’t do that with AI fuzzy matching.”

Will Monk, Napier AI
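The "pen and paper" explainability of older rules-based systems can be illustrated with a sketch. This is not any vendor's actual algorithm; it simply shows a transparent similarity score (here, Python's standard-library `SequenceMatcher` ratio) where every step of the calculation can be reproduced by hand:

```python
from difflib import SequenceMatcher

def match_confidence(candidate: str, listed: str) -> float:
    """Transparent similarity score out of 100.

    The underlying ratio is 2*M/T, where M is the number of matched
    characters and T is the total length of both strings combined,
    so each score can be walked through manually for a regulator.
    """
    ratio = SequenceMatcher(None, candidate.upper(), listed.upper()).ratio()
    return round(ratio * 100, 1)

print(match_confidence("Jon Smith", "John Smith"))  # prints 94.7
```

With an AI fuzzy-matching overlay, no such closed-form decomposition of the score exists, which is exactly why larger, labelled test datasets become necessary.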

To test these systems, the basic approach of putting ten names into the system with name variations will not be sufficient. Instead, carefully crafted synthetic datasets with larger volumes will allow you to assess the model more effectively.  

“Synthetic data lets you test those AI systems holistically — lots of controlled examples so you can see how the model behaves across different types of variation.”

Martyn Higson, FinCrime Dynamics
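Once a labelled synthetic dataset has been run through the screening tool, the results can be scored objectively. The sketch below is illustrative (the metric names and the `evaluate` helper are my own framing, not a prescribed methodology): each record carries an expected outcome and the tool's actual outcome, from which a detection rate and false-positive rate fall out directly:

```python
def evaluate(results: list[tuple[bool, bool]]) -> dict[str, float]:
    """Score screening output against labelled synthetic data.

    Each item is (expected_alert, actual_alert) for one synthetic record.
    """
    tp = sum(1 for exp, act in results if exp and act)        # true matches caught
    fn = sum(1 for exp, act in results if exp and not act)    # true matches missed
    fp = sum(1 for exp, act in results if not exp and act)    # clean records alerted
    tn = sum(1 for exp, act in results if not exp and not act)
    return {
        "detection_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Example: 4 sanctioned variants (3 caught), 4 clean records (1 alerted)
labelled = [(True, True), (True, True), (True, True), (True, False),
            (False, False), (False, False), (False, False), (False, True)]
print(evaluate(labelled))  # detection_rate 0.75, false_positive_rate 0.25
```

A missed true match (a false negative) is far more serious in sanctions screening than a false positive, so the detection rate should be weighed accordingly against the firm's stated risk appetite.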

3. Practical engagement with vendors

Whilst firms may often use a third-party solution for sanctions screening, accountability for complying with regulatory requirements remains with the firm. Vendors can support, but they cannot own testing or assurance – this must be undertaken by the firm itself.

The panel discussed some practical tips on how to engage with vendors to ensure you have everything you need to conduct effective testing.

  • Ensure you have access to configuration settings and thresholds. Firms must ensure they understand and set these in alignment with risk appetite, even when using third-party solutions. Whilst some vendors enable you to configure settings easily through no-code dashboards, others may require code, and firms will need more support to understand, set, and document them.

“If you can’t see how your vendor tool is configured, you must ask. You need that information to test it properly.”

Jessica Cath, Thistle Initiatives

  • Confirm how frequently vendor lists are refreshed and how these are reconciled to official sources. The vendor will be able to share refresh schedules with you.
  • Consider independent third-party testing to assure your vendor performance, in alignment with regulatory requirements and the documented configuration settings you have set.

“Don’t let the vendors mark their own homework.”

Will Monk, Napier AI

4. Ongoing testing, governance & assurance

Testing should not be a once-a-year exercise. We have seen fines arising from incorrect data flows (meaning not all screening took place), as well as clients whose back-end configuration settings were changed without management knowledge, producing incomplete screening. Assurance needs to be continuous but risk-driven and proportionate, with clear oversight and governance, to ensure the system continues to operate as expected.

Key elements:

  • Run regular synthetic data tests after list or system changes (even if small).
  • Run comprehensive system testing on a periodic basis to assess configurations, thresholds, data inputs, processes, governance and ownership.
  • Feed results of testing into senior management reporting, with management information on alert trends, backlogs, emerging risks and tuning impacts.
  • Maintain a clear audit trail of all tests, outcomes, and remediation actions.
  • Ensure sufficient analyst and IT capacity to handle alerts or remediation tasks if they arise.

“Historically, reviews were annual, tied to large framework reviews. Synthetic data allows much more ongoing monitoring.”

Martyn Higson, FinCrime Dynamics

Closing takeaways from the panel

Will Monk: “Work through your checklist – policy, risk appetite, quality data, and know where you need help.”

Martyn Higson: “Be forward-looking. Geopolitical and tech change means your sanctions controls must keep evolving.”

James Dodsworth: “Resources, skills, and reporting are the three pillars of a strong operational framework.”

Jessica Cath: “Understand your system, test it rigorously, and be ready to explain it — that’s what the regulator expects.”

Meet the Speakers

Jessica Cath  |  MODERATOR
Managing Partner, Thistle Initiatives

Jessica is a financial crime leader, working with a range of firms to build, scale and assure all elements of the financial crime framework. She has worked with firms ranging from start-ups to Tier 1 banks to transform controls through growth phases or when facing regulatory enforcement. Jess has also conducted multiple US monitor and s166 Skilled Person reviews globally. She holds a Master’s degree in Intelligence and International Security and an ICA diploma in Financial Crime Prevention.

James Dodsworth
Senior Manager and Sanctions Lead, Thistle Initiatives 

James has worked in financial crime compliance across a range of sectors and firms for over 20 years. As a certified fraud investigator, James has experience across all three lines of defence: conducting investigations, designing and delivering fraud controls and risk assessments, as well as creating and reviewing policies and procedures.


Martyn Higson 
Chief Technology Officer, FinCrime Dynamics 

Martyn leads the engineering and delivery functions at FinCrime Dynamics as Chief Technology Officer. Prior to joining FinCrime Dynamics, Martyn ran implementation at the leading anti-fraud and anti-money laundering AI vendor Featurespace (acquired by Visa), where he worked for over 7 years helping the business grow from 60 to over 400 people. He deployed complex Machine Learning platforms into some of the world’s most demanding banking and processor infrastructures and scaled the Implementation Engineering team to over 30 engineers globally. Martyn holds a Master’s in Engineering from The University of Cambridge.

William Monk
Chief Product Officer, Napier AI

Will has 20+ years of experience leading global financial crime operations, transformation programmes, and product strategy and development across financial services, both as a Managing Director within banks and as an advisor/consultant. He previously served as Global Head of Financial Crime Quality & Standards at NatWest, and his experience comes from a breadth of leading global financial institutions including HSBC, Barclays, and UBS.