Generative AI will radically overhaul banks' manual compliance responsibilities

The Bank for International Settlements completed proof-of-concept testing of Project Gaia, an initiative enabling analysis of climate-related risks in the financial system. Neil Vernon, Chief Product and Innovation Officer at global fintech firm Gresham Tech, believes the project will serve as a model for future AI-enabled applications.

It’s rare to see generative AI (gAI) mentioned in the same sentence as compliance. In fact, it’s rare to see gAI mentioned in the mainstream at all outside of its capabilities in creating mind-bending images, songs and videos.

More specific industry use cases for gAI are emerging, however. Just recently, the Bank for International Settlements completed proof-of-concept testing of Project Gaia, an initiative enabling analysis of climate-related risks in the financial system, and it’s easy to believe the project will serve as a model for future AI-enabled applications.

As an increasing number of regulators and central banks look to optimize their climate data reporting through automation and control solutions, the way companies report and feed that information back to regulators will undoubtedly change. This will likely drive more advanced automation of ESG measurement.

These rapid advancements could be overwhelmingly positive for the financial services sector at large, but they will need to be closely monitored and introduced carefully. Racing to incorporate these technologies could have far-reaching ethical implications, creating unfair outcomes based on inscrutable evidence and biases.

Pathway to automating data-driven compliance

Could the streamlining of processing based on AI set us further back? Could AI’s unverified findings influence the outcomes of ESG reports, resulting in some companies being fined when they shouldn’t be? Honestly, we don’t know. As Google found recently, dealing with bias in gAI models is far from easy: in attempting to remove the bias, it is all too easy to remove the truth. Yet while gAI is revolutionizing content creation, within its capabilities lies a massive opportunity for the financial services sector. Specifically, financial firms stand to gain from automating data-driven compliance, if it is implemented correctly.

When we consider data integrity – namely the accuracy, completeness and quality of data as it is maintained over time – firms most often risk losing it through the complexity of their systems infrastructure. It is all too easy for data repositories that should be consistent to become inconsistent. To meet compliance standards, firms must ensure their data is accurate, timely and complete.
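To make this concrete, here is a minimal sketch in Python of the kind of consistency check involved, comparing two repositories that should agree; the field names and sample records are illustrative assumptions, not taken from any real system:

    # Minimal sketch of a data integrity check between two repositories
    # that should hold the same trades. Records here are illustrative.
    def reconcile(source_a, source_b, key="trade_id"):
        """Return trades missing from one side, or inconsistent between the two."""
        a = {rec[key]: rec for rec in source_a}
        b = {rec[key]: rec for rec in source_b}
        missing_in_b = [k for k in a if k not in b]
        missing_in_a = [k for k in b if k not in a]
        mismatched = [k for k in a.keys() & b.keys() if a[k] != b[k]]
        return missing_in_b, missing_in_a, mismatched

    ledger = [{"trade_id": "T1", "notional": 1_000_000, "ccy": "USD"}]
    warehouse = [{"trade_id": "T1", "notional": 999_000, "ccy": "USD"}]
    print(reconcile(ledger, warehouse))  # T1 is flagged as inconsistent

Simple as it is, this is the shape of the problem: the hard part in practice is not the comparison but getting timely, complete feeds from every system that claims to hold the truth.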

Combining AI with the Common Domain Model (CDM) and Digital Regulatory Reporting (DRR) to spot gaps in data quality has the potential to revolutionize compliance management.

We are already seeing gAI help manage regulatory compliance and monitoring procedures, which in turn is enhancing due diligence and risk assessment through advanced data analysis and pattern recognition. Machine learning capabilities are enabling the detection and prevention of fraud. AI has also helped clients in the fintech sector to manage and assess risks, while allowing banks to better train and educate employees thanks to frequently updated adaptive learning algorithms.

Meanwhile, combining DRR, CDM and AI can help future-proof compliance with regulatory changes, enabling financial firms to proactively adjust their compliance strategies and policies. By analyzing historical data and market trends, AI systems can help anticipate potential compliance challenges and guide decision-making within the organization.

There is one major problem: the availability of sufficient data, both for the training and the ongoing maintenance of AI models. Using an existing LLM works extremely well for essentially language-based problems. Extracting meaningful ESG data from unstructured text is a case in point where we have had success with gAI: it is a language-based problem, and gAI is highly suitable. If your problem is less language-based – predicting the root cause of a trade settlement failure, for example – then you are left to train your own model with (probably) your own data, and so far this looks like a very expensive endeavor with questionable returns for all but the very largest of organizations.
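For the language-based case, a minimal sketch of what ESG extraction with an existing LLM can look like is below, assuming an OpenAI-style chat client; the prompt, model name and sample text are illustrative, and in practice the output would need validation before feeding any reporting pipeline:

    from openai import OpenAI  # any hosted LLM with a chat endpoint would do
    import json

    client = OpenAI()  # assumes an API key is set in the environment

    REPORT_EXCERPT = (
        "In 2023 our Scope 1 emissions fell 12% to 48,000 tCO2e, while water "
        "withdrawal rose to 3.1 million cubic metres across our EU sites."
    )

    prompt = (
        "Extract the ESG metrics from the text below as a JSON array of "
        "objects with keys 'metric', 'value', 'unit' and 'period'. "
        "Return the JSON only.\n\n" + REPORT_EXCERPT
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )

    # A real pipeline would validate and repair the model's output here
    metrics = json.loads(response.choices[0].message.content)
    print(metrics)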

T+1 provides a good testing ground

Research shows many firms in both the US and EU markets are not yet ready for T+1 requirements, due to come into effect in the US market this May. Even with the deadline on the horizon, vast numbers of firms have yet to align with T+1, and the transition will bring a plethora of problems.

Some commentators, myself included, predict a significant increase in trade failures as organizations’ existing batch-based T+1 data quality controls fail in the face of the need to run those same controls on T, in real time.

This is not just about “moving” those controls from T+1 to T. It is about the availability of the data on T, the ability to control it in real time, and the ability to handle an ever-changing picture. Batch-based controls are “one-shot”: the data generally doesn’t change. Real-time controls run continuously against a stream of data that carries not just new data but cancellations of, and amendments to, existing data.
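As a minimal sketch of that difference, consider a control run continuously over an event stream; the event shapes and the validation rule here are illustrative assumptions:

    # Minimal sketch of a real-time control over a trade event stream.
    open_trades = {}

    def validate(trade):
        """Illustrative control: settlement instructions must be present."""
        return bool(trade.get("ssi"))

    def on_event(event):
        action, trade = event["action"], event["trade"]
        key = trade["trade_id"]
        if action == "CANCEL":
            open_trades.pop(key, None)   # an earlier exception may no longer apply
            return
        open_trades[key] = trade         # NEW or AMEND replaces the current state
        if not validate(trade):
            print(f"exception: {key} failed control, chase before the T+1 cutoff")

    # Unlike a one-shot batch run, the picture keeps changing:
    on_event({"action": "NEW",   "trade": {"trade_id": "T9", "ssi": None}})
    on_event({"action": "AMEND", "trade": {"trade_id": "T9", "ssi": "DTC123"}})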

The role of gAI in this world is far from clear. We have had some success in simplifying onboarding using gAI, and undoubtedly both new and existing controls will require change as the world moves to T+1.

For larger organizations with significant amounts of data, it may well be possible to train AI models for break allocation, root cause analysis and automated repair tasks – solutions that may be in much demand following North America’s adoption of T+1. If you are a smaller organization, it is unlikely you will have enough data to produce models of this type, and you will be left with a good old-fashioned rules-based approach.

It may not be much solace, but you can at least console yourself in the knowledge that your rules are always repeatable – “rules don’t hallucinate” may become your mantra – and rules, for all their complexity, are explainable to an auditor. Explaining to an auditor a gAI model built on a billion parameters that occasionally hallucinates and produces completely incorrect outcomes is never likely to be a productive conversation.
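A minimal sketch of what that rules-based fallback looks like is below; the rules themselves are illustrative assumptions, but the point stands: the same break always gets the same answer, and every answer is traceable to a named rule.

    # Minimal sketch of an auditable, rules-based approach to break triage.
    RULES = [
        ("missing SSI",        lambda b: b.get("ssi") is None),
        ("economics mismatch", lambda b: b.get("our_amount") != b.get("their_amount")),
        ("late booking",       lambda b: b.get("booked_after_cutoff", False)),
    ]

    def classify(break_record):
        """Return every rule the break trips; each result names the rule that fired."""
        return [name for name, test in RULES if test(break_record)] or ["unclassified"]

    print(classify({"ssi": None, "our_amount": 100, "their_amount": 99}))
    # ['missing SSI', 'economics mismatch'] - same input, same answer, every time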
