
UK CFOs and regulation around AI, ML and data

Regulation around the use of AI, ML and data has been a focus for a considerable time. What does this imply for UK CFOs?

Back in June this year, the Minister for Implementation, Oliver Dowden, set out plans detailing how new technologies such as artificial intelligence (AI) can revolutionise businesses in the UK. What’s more, the Government Digital Service (GDS) and the Office for Artificial Intelligence (OAI) published joint guidance on how to build and use AI.

The new AI Guide covered how to assess whether using AI will help individual businesses meet the needs of their customers. It also helped in determining how businesses in the UK can best use and implement AI ethically, fairly and safely.

Since then, and perhaps even before this, the ethical use of data and the extensive use of AI in day-to-day customer interactions have been areas of increased focus for politicians and regulators globally, as well as for news and media channels. New centres of research, such as the UK Centre for Data Ethics and Innovation, have been established and sit alongside research developed by ‘big tech’ companies across the world.

Many businesses are currently developing and operationalising Robotic Process Automation (RPA) solutions and are beginning to experiment with true AI.

In its ‘Hype Cycle for Emerging Technologies, 2017’, Gartner identified AI, alongside transparently immersive experiences and digital platforms, as trends that will enable businesses to survive and thrive in the digital economy over the next five to ten years.

The dangers of data

According to a post on Oracle’s Data Science blog, 2.5 quintillion records of data are created every day; people are being defined by how they travel, surf the internet, eat and live their lives. This data is therefore a valuable commodity. At the same time, the scale of the potential liability for organisations that do not act now to mitigate risk is simply too great, spanning regulatory failure, contractual disputes and the loss of customer trust.

‘Ethical use of customer data in a digital economy’, a paper by UK Finance and KPMG, discusses the key ethical challenges facing financial institutions today.

The paper states: “If financial institutions lose their status as trusted custodians of customer data, they may well lose their licence to operate. In mainstream financial services, all forms of institutions are increasingly coming to understand the liabilities associated with data ownership and the use of autonomous technologies. While the amount of coverage in these areas has increased recently, for financial institutions the reality is that the ethical use of customer data has been a focus for some time. A good example is the ‘Principles of Reciprocity’ which were developed as a basis for sharing customer data with third-party providers in order to better undertake credit checks.”

The paper outlines the following five principles to both mitigate potential liability and secure priceless customer trust:

  • Respect human agency
  • Safeguard equality and fairness
  • Deliver transparency
  • Sponsor organisation-wide approach
  • Establish accountability

The paper concludes: “It is logical that the use of customer data combined with the autonomous Machine Learning and AI systems must be subject to challenge and be open to legal and regulatory scrutiny in order to establish much higher ethical standards.”

Regulation of machine learning

Machine learning (ML) is increasingly being used in UK financial services, according to a recent report from the Bank of England (BoE). The report notes that the application of ML methods in the financial services sector could enhance routine business processes, and that improved software and hardware, together with increasing volumes of data, have accelerated the pace of ongoing ML development.

One of the report’s highlights concerns the regulation of ML. Companies do not consider regulation a barrier and do not believe it will prevent or adversely affect ongoing ML deployment; however, many emphasised that clearer guidance on how to apply current regulation would help deployment. Legacy IT platforms and data limitations, by contrast, could slow down the adoption of ML-based systems.

ML-based systems are increasingly being used in the UK’s financial industry: two thirds (66%) of the 300 organisations responding to the survey said they currently use some form of ML, and around a third (33%) of ML-based programs are used for many different activities within specific business areas.

Other findings of the report were:

  • Application of ML: ML deployment is most advanced in the banking and insurance industries, where it is used in front- and back-office functions across a wide range of business areas. The technology is also used in anti-money laundering (AML), fraud prevention and customer-facing applications (such as customer services and marketing). Some companies use ML to improve credit risk management, trade pricing and execution, and general insurance pricing and underwriting.
  • Risks associated with ML: Firms believe ML does not necessarily create new risks, but it can amplify existing ones. For example, ML applications may not work properly if model validation and governance frameworks do not keep pace with the technology. Companies use safeguards to manage the risks associated with ML, including alert systems and “human-in-the-loop” mechanisms, which are useful for flagging when a model is not working properly (a minimal sketch of such a mechanism follows this list).
  • Validation: Firms validate ML applications both before and after they are deployed. Validation methods include outcome-focused monitoring and testing against benchmarks. Many companies say that ML validation frameworks will need to evolve as ML applications scale and become increasingly complex.
  • ML application development: In certain cases, ML development has entered advanced stages of deployment. Companies usually develop ML applications in-house, though they may rely on third parties for the underlying platforms and infrastructure (e.g. cloud computing). Most users apply existing model risk management frameworks to ML applications; however, many note that these frameworks will need to evolve as ML techniques become more advanced.
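
To make the “human-in-the-loop” and outcome-monitoring ideas above concrete, here is a minimal Python sketch of the kind of safeguard the report describes: a monitor that compares a live ML model against a benchmark and escalates divergent cases for human review. It is not drawn from the BoE report itself; the class, scoring functions and threshold are illustrative assumptions only.

```python
# Minimal sketch of a "human-in-the-loop" safeguard: compare the live ML
# model's output against a benchmark (e.g. the incumbent model) and queue
# an alert for human review whenever the two diverge too far.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HumanInTheLoopMonitor:
    benchmark: Callable[[dict], float]    # stand-in for an incumbent scoring model
    model: Callable[[dict], float]        # the new ML model being monitored
    divergence_threshold: float = 0.15    # illustrative tolerance, not a standard
    alerts: List[dict] = field(default_factory=list)

    def score(self, case: dict) -> float:
        ml_score = self.model(case)
        reference = self.benchmark(case)
        if abs(ml_score - reference) > self.divergence_threshold:
            # Escalate: record an alert and fall back to the benchmark score
            # until a human has reviewed the case.
            self.alerts.append({"case": case, "ml": ml_score, "benchmark": reference})
            return reference
        return ml_score

# Illustrative usage with toy scoring functions.
monitor = HumanInTheLoopMonitor(
    benchmark=lambda c: 0.5,
    model=lambda c: 0.9 if c.get("unusual") else 0.52,
)
print(monitor.score({"applicant_id": 1}))                    # close to benchmark: ML score used
print(monitor.score({"applicant_id": 2, "unusual": True}))   # diverges: flagged, benchmark used
print(f"{len(monitor.alerts)} case(s) queued for human review")
```

Falling back to the benchmark on divergence is one possible design choice; a firm might equally hold the decision entirely until a reviewer signs off, which is closer to a strict human-in-the-loop control.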

The UK’s tryst with digital regulation, data and AI

Four months ago, the UK’s Business Secretary, Greg Clark, announced that the new regulation rulebook for the Fourth Industrial Revolution would include a Smart Data Review to give consumers greater control over their data, keeping the UK at the forefront of innovation. The proposals from the Smart Data Review also included measures to protect vulnerable customers.

In April, the Home Secretary unveiled tough new regulation for tech companies, introducing what was billed as the world’s first online safety laws.

Under these rules, social media companies and tech firms will be legally required to protect users or face tough penalties if they do not comply.

Discussions around data: critical for UK CFOs

According to the recent FinTech Barometer survey, one third of the UK finance industry is falling behind on digital transformation. A data-driven digital economy is the way forward for the UK: other research showed that over half of consumers saw information sharing as essential to the modern digital economy, especially if they received personalised digital content in return.

The change to a real-time, digitally sound system also benefits CFOs directly, with 54% of CFO respondents to the FinTech Barometer survey indicating that they make decisions based on data and 41% using that data to make predictive analyses.
