Lucanet’s Sören Labrenz on making AI adoption audit-ready

From the leap to reasoning models like GPT-4.1 and the rise of AI agents, to emerging technologies such as the Model Context Protocol, accelerated AI development is freeing up finance teams for higher-value work. But it is also creating governance challenges that come with rapid adoption.

How can CFOs address bias mitigation, infrastructure gaps, and regulatory compliance to establish trust, transparency and oversight – preparing for a future where AI agents are embedded members of finance teams?

In this Q&A, Sören Labrenz, Head of Product AI at Lucanet, sets out a pragmatic path: start with contained, low-risk use cases; build the data foundation; and design for explainability and human sign-off from day one. We discuss where variability is acceptable, how to measure adoption and fairness, and the skills mix finance will need as AI agents move from pilot to production.

AI is evolving faster than most teams can adapt – how can CFOs govern solutions that don’t yet exist, but are just around the corner? 

CFOs don’t need to wait for the perfect AI solution to emerge; they need to start now with what’s available. For example, they can begin by applying AI to practical, low-risk use cases that streamline the preparation of presentations, accelerate decision-making, and automate reporting. These early wins build confidence and lay the groundwork for broader adoption.

At the same time, CFOs must think ahead. Just as today’s finance teams include experts in ERP systems, tomorrow’s teams will need professionals who understand how to harness AI effectively. That means investing now in upskilling, recruiting for emerging capabilities, and creating roles focused on AI governance and integration within finance operations. 

What new risks should CFOs and their teams be preparing for today? 

One of the most fundamental shifts with generative AI is its probabilistic nature. Unlike traditional systems that produce consistent outputs, AI may generate different results from the same input. CFOs must understand where this variability is acceptable, and where it isn’t. 

This raises critical questions: Which processes can tolerate probabilistic outcomes? Which demand deterministic precision? In finance, reliability is non-negotiable, so leaders must design AI use cases that align with this principle. 
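
To make the distinction concrete, here is a minimal Python sketch (our illustration, not a Lucanet implementation) of the two selection modes an AI system can apply to the same set of candidate outputs:

```python
import random

# Toy distribution over candidate outputs a model might produce
# for one and the same input.
candidates = {"approve": 0.55, "review": 0.30, "reject": 0.15}

def deterministic_choice(probs: dict) -> str:
    # Greedy selection: identical input always yields identical output.
    return max(probs, key=probs.get)

def probabilistic_choice(probs: dict) -> str:
    # Sampling: the same input can yield different outputs across runs.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(deterministic_choice(candidates))   # always "approve"
print(probabilistic_choice(candidates))   # varies from run to run
```

Generative models expose the same trade-off through sampling parameters such as temperature: a temperature of 0 approximates the deterministic case, while higher values widen the spread of possible outputs.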

Another growing risk is data quality. In an AI-driven environment, poor data doesn’t just create inefficiencies; it can drive flawed decisions. Building a robust, clean, and well-governed data foundation is now a strategic imperative.
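
What a well-governed data foundation means in practice will vary by organization, but a minimal, hypothetical quality gate over incoming ledger rows (the field names here are our assumption) might look like this:

```python
def validate_ledger_rows(rows: list[dict]) -> list[str]:
    """Flag low-quality records before they reach any AI-driven process.

    Assumes rows shaped like:
    {"entry_id": "JE-1", "account": "4400", "debit": 100.0, "credit": 0.0}
    """
    errors, seen_ids = [], set()
    for i, row in enumerate(rows):
        if row.get("entry_id") in seen_ids:
            errors.append(f"row {i}: duplicate entry_id {row['entry_id']}")
        seen_ids.add(row.get("entry_id"))
        if not row.get("account"):
            errors.append(f"row {i}: missing account")
        if row.get("debit", 0) < 0 or row.get("credit", 0) < 0:
            errors.append(f"row {i}: negative amount")
    return errors  # an empty list means the batch passed the gate
```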

How do you ensure auditability and explainability when AI models are becoming reasoning engines—far less predictable than today’s automation tools? 

Human oversight remains essential. Every AI-driven action, especially those with financial or operational impact, should be subject to human review before execution. This is particularly critical for high-risk tasks like data deletion or journal entries. 
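
A minimal sketch of such a review gate, with hypothetical types and names, could look like the following: the AI can propose a high-risk action, but nothing executes until a named human approves it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedAction:
    # An action the AI may suggest but can never execute on its own.
    description: str               # e.g. "post journal entry JE-1042"
    risk_level: str                # "low" or "high"
    approved_by: Optional[str] = None

def execute(action: ProposedAction) -> None:
    # High-risk tasks (deletions, journal entries) need explicit sign-off.
    if action.risk_level == "high" and action.approved_by is None:
        raise PermissionError("human approval required before execution")
    print(f"executing: {action.description} (approved by {action.approved_by})")

entry = ProposedAction("post journal entry JE-1042", risk_level="high")
entry.approved_by = "s.meyer"  # a named controller reviews and signs off
execute(entry)
```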

Equally important is maintaining a transparent audit trail. Just as ERP systems log human actions, AI systems must record what was done, why, and by whom, including which human configured or triggered the AI. This ensures traceability and accountability, much as we see, for example, in blockchain technology.
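
To make the blockchain parallel concrete, here is a hypothetical sketch of an append-only audit log in which each entry records what was done, why, and by whom, and is hashed together with its predecessor so that later tampering breaks the chain:

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log: list[dict] = []  # append-only; every entry chains to the last

def record(action: str, rationale: str, actor: str, triggered_by: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,               # what was done
        "rationale": rationale,         # why it was done
        "actor": actor,                 # which AI system acted
        "triggered_by": triggered_by,   # which human configured/triggered it
        "prev_hash": audit_log[-1]["hash"] if audit_log else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    audit_log.append(entry)

record("post journal entry JE-1042", "matched invoice INV-77",
       "ai-agent", "s.meyer")
```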

Finally, CFOs should demand explainability from their AI tools. If a model recommends a course of action, it must be able to articulate the rationale behind it, so finance leaders can confidently communicate with stakeholders and auditors. 
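
One practical pattern, offered here as an assumption rather than a described Lucanet feature, is to require the model to return its recommendation and rationale as structured output, and to reject any answer that omits the rationale:

```python
import json

def parse_recommendation(model_output: str) -> dict:
    # Expect JSON of the form {"recommendation": ..., "rationale": ...}.
    data = json.loads(model_output)
    if not data.get("rationale"):
        raise ValueError("rejected: no rationale provided with recommendation")
    return data

reply = ('{"recommendation": "increase the bad-debt provision", '
         '"rationale": "overdue receivables rose 18% quarter over quarter"}')
print(parse_recommendation(reply)["rationale"])
```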

What should CFOs be measuring now to prove that their AI adoption will stay compliant, fair, and trustworthy—particularly as new classes of AI enter the market? 

Start with the foundation by using certified, enterprise-grade AI solutions from trusted vendors. Most finance teams won’t have the in-house expertise to vet every technical detail, so working with reputable partners ensures compliance and builds auditor confidence. 

From there, track adoption metrics. Are your teams using AI tools effectively? Are they gaining the skills needed to work with AI? Set clear capability goals and measure progress against them. This not only demonstrates responsible adoption but also helps to build a culture of continuous learning and trust. 

How can finance leaders build trust with employees so AI is seen as workload reduction, not workforce replacement? 

Trust is built through transparency and tangible outcomes. Start by deploying AI to eliminate the repetitive, manual tasks that employees find most frustrating. When people see AI freeing them up to focus on more strategic, value-added work, they begin to view it as an enabler, not a threat. 

Be honest about the fact that roles will evolve. That’s not a negative; on the contrary, it’s an opportunity. Finance leaders should demonstrate to their teams how their careers can grow in an AI-enabled environment. Offer training, support, and a clear vision for the future. When employees see a path forward, they’re far more likely to embrace the change.
