Model Risk in Anti-Money Laundering Programs

How the principles of SR 11-7 can help banks strengthen BSA/AML compliance

 

This thought leadership article was written by an external contributor, whose views and opinions may not match those of Chartis.

Beliefs drive actions that lead to results. I learned this way back when in my bank’s leadership training school, and its truth has stayed with me. What I think determines what I do.

Similarly, what my anti-financial crime (AFC) peers think about the tools they use to detect illegal activity determines how they govern the tools. Many believe that automated systems that comply with Bank Secrecy Act (BSA) and anti-money laundering (AML) regulations should not be called models.

Why do the nuances of the term ‘model’ matter? Tools classified as models face higher compliance standards than non-model queries and reports. This has been true since at least April 2011, when the Board of Governors of the Federal Reserve System (FRB) issued Supervisory Guidance on Model Risk Management (MRMG) as an attachment to Supervisory Letter 11-7 (SR 11-7). Concurrently, the Office of the Comptroller of the Currency (OCC) issued Bulletin 2011-12 (OCC 2011), also attaching MRMG.1 Banks that use models have had to increase model compliance costs tenfold to meet the heightened standards.

Why do my peers believe that AFC applications are not models? For one thing, BSA/AML tools are relatively simple. Many are based on rules; a rule is no more than an ‘if/then’ statement that flags certain transactions as potentially suspicious. The criteria could be as simple as the amount or type of transaction. These rules are not based on statistical formulae, and don’t predict anything. Another difference is that AFC models are built on human intuition, unlike statistical models built on mathematical theories. Bankers decide the rule’s logic based on their expert judgment of what is suspicious. Bankers, not machines, decide whether to take action. My peers wonder why these qualitative systems should be subject to the same model governance requirements as the complex quantitative models used to price structured finance products such as collateralized debt obligations.

Nonetheless, MRMG’s broad definition of a model means that many AFC tools are labeled as such. According to the guidance, a model could be any tool that transforms data into a business decision, even if the transformation relies on expert judgment.

How could the same model guidance apply to such different uses? While the guidance does not say exactly how MRMG applies to AFC models (in fact, BSA/AML compliance as a model use is not even mentioned), here’s my best answer to why MRMG applies. It’s because all models – the ones using ‘if/then’ statements and the ones using the Greek alphabet, the ones predicting default and the ones identifying suspicious transactions that have already occurred, the ones matching a name to a list, and the ones correlating risk across assets – all of these models, as statistician George Box famously said, “are wrong, but some are useful”.

Therefore, the guidance can be relevant to all models because the guidance understands this fundamental truth about models and provides universal principles for managing model risk.

I get concerned when I hear my AFC peers argue about whether our tools are models. Believing that they are not models could keep them from recognizing that these tools are wrong, and from identifying their weaknesses.

This inaction carries both model and regulatory risk. Indeed, the share of BSA/AML consent orders containing the word ‘model’ has risen from 12% in the 2010s to 32% since 2021, according to a recent analysis I did of enforcement actions issued by the OCC, FRB and Financial Crimes Enforcement Network (FinCEN) (see Figure 1). More on this later.

Given the increasing reliance on models in BSA/AML programs, understanding model risk is as important as ever. The principles outlined in MRMG can help us think about model risk in a way that reduces the risk of regulatory criticism and increases the effectiveness of these tools. These principles can help make our wrong models useful.

Models in BSA/AML programs

But first, how do we tell whether we’re using models? Consider MRMG’s definition: a model has three components, namely inputs, processing logic and outputs. If the output is used to make decisions, you may have a model on your hands.

Here’s an example of a simple transaction monitoring model – a report flagging cash transactions structured to avoid BSA/AML reporting requirements (this activity is known as ‘structuring’). Such a report has inputs (cash transactions), processing logic (‘if, on a single day, a client makes three cash deposits of $9,000 each, then flag’) and output (a report of flagged transactions). Since this output is used to decide whether to file a suspicious activity report (SAR), the report could meet the definition of a model.
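A rule of this kind can be sketched in a few lines of code. This is a minimal illustration, not a production detection system: the transaction records, field names and thresholds are all hypothetical, and real monitoring rules typically aggregate amounts, apply lookback windows and combine many more conditions.

```python
from collections import defaultdict

# Hypothetical transaction records: (client_id, date, amount in USD).
transactions = [
    ("C1", "2024-05-01", 9_000),
    ("C1", "2024-05-01", 9_000),
    ("C1", "2024-05-01", 9_000),
    ("C2", "2024-05-01", 12_000),
]

def flag_structuring(txns, per_txn=9_000, min_count=3):
    """Flag (client, day) pairs with several same-day cash deposits
    of a round amount just under the $10,000 reporting threshold."""
    counts = defaultdict(int)
    for client, date, amount in txns:
        if amount == per_txn:
            counts[(client, date)] += 1
    return [key for key, n in counts.items() if n >= min_count]

print(flag_structuring(transactions))  # [('C1', '2024-05-01')]
```

The flagged output would then feed the human decision described above: a banker, not the rule, decides whether the activity warrants a SAR.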

Another example is the Office of Foreign Assets Control (OFAC) add-on service in credit bureau screeners (such as Equifax). This add-on service could be a model if it uses fuzzy logic to assess whether your applicant’s name is reasonably, if not perfectly, similar to one on an OFAC sanctions list. These credit bureau tools have inputs (name), processing logic (matching algorithms) and output (an OFAC flag on the credit report). Since the OFAC flag prompts the bank to reconsider opening an account, this add-on service could be a model.
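A fuzzy name-matching check of this kind can be sketched with Python’s standard-library `difflib`. Everything here is an assumption for illustration: the list entries, the threshold and the similarity measure. Commercial screeners use proprietary matching algorithms and curated OFAC data, not `SequenceMatcher`.

```python
from difflib import SequenceMatcher

# Hypothetical watch-list entries, for illustration only.
SANCTIONS_LIST = ["IVAN IVANOVICH PETROV", "ACME TRADING LLC"]

def ofac_flag(applicant_name, threshold=0.85):
    """Return (best match, similarity score) if the applicant's name is
    sufficiently similar to a list entry, else None."""
    name = applicant_name.upper().strip()
    best = max(
        SANCTIONS_LIST,
        key=lambda entry: SequenceMatcher(None, name, entry).ratio(),
    )
    score = SequenceMatcher(None, name, best).ratio()
    return (best, round(score, 2)) if score >= threshold else None

print(ofac_flag("Ivan Ivanovic Petrov"))  # flagged despite the missing 'h'
print(ofac_flag("Jane Doe"))              # None
```

The threshold is itself a model assumption worth effective challenge: set it too high and near-miss aliases slip through; too low and the bank drowns in false positives.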

Footloose model risk management

Before the widespread adoption of SR 11-7 and MRMG, model risk management at most financial institutions was as footloose as Kevin Bacon in 1984. Model developers, users and validators were often the same person, which hindered the objective identification of model weaknesses. If model flaws were identified, the risks were not escalated and bank boards were not informed.

Another problem was a lack of scrutiny of models’ assumptions. In the years when the assumptions held – including the conviction that house prices would never decrease – bankers made tons of money. When the assumptions crashed, so did their models and the global economy.

But model risk practices were not the only problematic practices before the crisis. Model risk guidance itself, including that issued by the OCC in 2000, also needed revision. That guidance focused on model validation. This narrow focus missed several aspects of model risk, including how model stakeholders should think about using the tools.

The 2011 guidance added several guiding principles. Below are seven of them. These principles help banks understand the fundamental truths about models and their risk. From these beliefs, actions can be taken to build a strong model governance program.

Model risk management principles

1. ‘Effective challenge.’

The guiding principle of model risk management, according to MRMG, is “effective challenge”.

Effective challenge is “critical analysis by objective, informed parties”.2 The guidance does not limit “informed parties” to those with PhDs in statistics. Any model stakeholder with information about a model is an informed party. The problem is that most models have many informed parties, and these parties don’t always communicate. They may not even know who to communicate with.

For example, model developers should know how a tool is built, but they may not know how it’s used. Model developers may not even know how it was implemented into the bank’s information technology (IT) systems. In terms of model implementation, the bank’s IT department may be the informed party, and not the model developer.

The principle of effective challenge empowers stakeholders to be informed about aspects of the model beyond the parts they see. Effective challenge means asking questions. How do we know that the tool model development built was implemented successfully and is being used as intended? How do we know that the transaction monitoring rules monitor the right activity? What evidence do we have that the data ingested into the model is of sufficiently high quality?

Effective challenge is the antidote to blindly believing everything that comes from a model. As Felix Salmon suggested in an article in Wired in 2009, investors’ blind faith in the Gaussian copula’s ability to assess the risk of mortgages was one of the causes of the 2008–2009 financial crisis. The constant inquiry performed by those who have internalized the principle of effective challenge allows stakeholders to see the tools as they are and avoid blindly believing them. This awareness helps to identify potential risks.

2. ‘Business units are responsible for the model risk.’

SR 11-7 broadened the scope of model risk management from the validator, as was implied in prior guidance, to the business units that own, develop, implement and use the tool. This means model risk may be owned by multiple business units.

The model owner decides whether a model is needed and is accountable for its use and performance; the model developer creates the tool and owns the methodology; the implementation partner ensures that it is properly integrated into the bank’s IT systems; the model user leverages the output to make decisions; the internal auditor independently assesses the effectiveness of model governance, and – last but not least – the model validator independently verifies that the model is working as intended.

The principle of responsibility empowers all stakeholders to be engaged. Engaged stakeholders are more likely to provide effective challenge.

3. ‘Models are never perfect.’

Faulty thinking about faulty models was common before MRMG. The updated guidance informs us that models will never be perfect, as they are “simplified representations of real-world relationships”.

The principle that models are never perfect helps banks engage in conversations around their risks. Through these conversations, stakeholders can develop the necessary controls to mitigate the weaknesses. Banks that adopt this principle are much more likely to make their imperfect models useful.

4. ‘Documentation takes time and effort.’

The resistance to developing meaningful documentation is so common that SR 11-7 warns us about it: “Model developers and users who know the models well may not appreciate [documentation’s] value.” And yet, documentation is so valuable! The words ‘document’ and ‘documentation’ are used 39 times throughout MRMG.

In my experience, the first question regulators ask when beginning an exam is to see the model development documentation (MDD). They evaluate the model’s effectiveness by how well they can understand it through the MDD.

It may be humbling to realize that such an ancient technology – the written word – is so important in assessing today’s modern marvels. And yet, if it is not written, it does not exist. And if it is written poorly – using lots of jargon – reviewers may assume that the bank’s understanding of the model’s risk is also poor. To write clearly about these tools means to think clearly about them. This takes time and effort. Banks that have internalized this principle make documentation a priority. The result is an MDD that describes the model in a way anyone unfamiliar can understand. Those unfamiliar may very well be your examiners. If they can understand the model through its documentation, they will be less inclined to find fault.

5. ‘A strong governance framework.’

Even if a bank does everything right – picks the best system, confirms the data, validates the model independently, and engages all stakeholders – without an effective model governance framework, it could all be for naught. Just because a model works today does not guarantee it will work tomorrow.

The principle of strong governance encourages bank leaders to invest in establishing procedures and implementing controls that support model performance over time.

SR 11-7 says that strong governance starts from the top. Policies defining model risk management activities should be approved by the board. Staffing should be appropriate to execute them, and procedures detailed to perform them. Accountability checks must be in place to ensure that the policies and procedures are carried out as specified.

6. ‘Banks are expected to validate their own use of vendor products.’

While many big banks have their own internal model development groups, most others rely on third-party vendors to build and implement automated systems.

So often, I hear from bankers that one of the benefits of using third-party models is that the vendors will also own the model risk. That is a misconception. Since banks own the models, they also own the risks. SR 11-7 says vendor products should “be incorporated into a bank’s broader model risk management framework following the same principles as applied to in-house models”.

The principle of treating vendor products as in-house tools forces banks to own the model risk, which encourages actions to reduce it. At a minimum, have vendors provide in writing, preferably in the contract, how they will provide model governance support throughout the model lifecycle.

7. ‘A degree of independence.’

Prior to MRMG, independence between model developer, user and validator was not common. This lack of independence meant that the validator had a stake in the model’s performance, and was incentivized to emphasize its strengths, rather than identify its weaknesses and limitations.

Model weaknesses cannot be mitigated until they are identified. The principle of independence empowers all model stakeholders to identify model weaknesses free from repercussions when weaknesses are found.

Models enforced

Model mismanagement can impact a bank’s ability to execute BSA/AML compliance, according to the enforcement actions I reviewed. In 2022, FinCEN blamed “critically flawed” models for preventing a bank from properly assessing “customer risk” and failing to “identify high-risk accounts requiring EDD [enhanced due diligence]”.3

Where banks rely on model output, non-compliance with MRMG could be its own line item in an order. In 2022, the OCC cited failures to implement “processes for developing adequate [model] documentation and prompt reporting of validation findings and prompt resolution of deficiencies identified during model validation”. The order refers to the guidance in OCC Bulletin 2011-12.4

Because of MRMG, models have unique requirements that are separate from other information management systems. In its orders, the FDIC separates ‘systems’ from ‘models’ throughout (although both require proper validation and documentation).

In the data that I reviewed, the OCC was more likely to name model weaknesses in consent orders than either the FDIC or FinCEN. Since 2021, however, the likelihood that models will be mentioned in FDIC orders has increased (see Figure 2).

Model mischief can be managed

In 2018 remarks on the use of machine learning tools in the financial services industry, FRB Governor Lael Brainard questioned whether new regulatory guidance was needed to address the risk of this new type of model. She suggested: “The policy discussion should start by considering whether the existing regulations already adequately address the risk.” Ms Brainard concluded that the risk of machine learning systems is adequately addressed “within the bounds of [the] existing regulatory regime”, including SR 11-7.

In the Interagency Statement on Model Risk Management for Bank Systems Supporting Bank Secrecy Act/Anti-Money Laundering Compliance, released in 2021, the agencies appear to have come to a similar conclusion. The risks of models used in BSA/AML compliance do not require new guidance, as these models are covered by the principles in SR 11-7.

These principles – beliefs – transcend a particular model or even a model use. The principles even transcend time, and can be applied to new models that were inconceivable when the guidance was issued, such as generative artificial intelligence, quantum computing or whatever the future holds in computer-based tools.

As the financial crisis recedes into history, AFC models come to the fore. These tools grow more powerful by the day. Regardless of their power, they will never be perfect. To ensure they are useful, regulators apply MRMG in BSA/AML examinations. This means banks must apply it, too. Better thinking about models will lead to better thinking with models. We use these tools, after all, to help us think – to make decisions. And our decisions are important. AFC professionals keep our financial system safe, and with it our world.

Notes

1.  In 2017, the Federal Deposit Insurance Corporation (FDIC) formally adopted the guidance.

2.  Federal Reserve. April 4, 2011, SR 11-7: Guidance on Model Risk Management.

3.  Department of the Treasury, Financial Crimes Enforcement Network. October 3, 2022, In the Matter of: USAA Federal Savings Bank, Number 2022-01, page 9.

4. Department of the Treasury, Office of the Comptroller of the Currency. October 3, 2022, In the Matter of: ICICI Bank Limited, New York Branch, AA-ENF-2022-56, page 12.

Cara Wick is a financial crimes professional, with more than 20 years of experience in the banking industry.

The views expressed are hers alone, and do not necessarily reflect the views of any other entity or organization. She can be reached at: cjoany@gmail.com.
