
Governing AI in Compliance: 5 Takeaways from Industry Leaders

Written by Liam Driscoll | Sep 26 2025

In the last few years, Artificial Intelligence has moved from theory to practice across industries globally, changing the way we work, how we prioritise tasks, and where we spend time to get the most from our efforts. But in few places is the debate over its introduction as nuanced as in regulatory compliance - and for good reason.

In an area so closely tied to the reputation of both the individual and the firm, a balance in its use must be found, and clear delineations established. And as more and more firms adopt AI, its impact will take many forms, ranging from making filings and screening holdings to forcing a dialogue with regulators on shaping future policy.

So how do we separate hype from substance? When does automation become intelligence? How do organisations weigh innovation against accountability?

To answer these questions, FundApps invited industry experts from law, technology, and compliance to discuss the realities of AI adoption at Compliance Connect 2025. The annual event sparks debate through panels on the most pressing issues in compliance. The result? A candid look at the possibilities and pitfalls of AI in compliance, and five key takeaways that compliance professionals should consider in developing their own AI strategy.

Cutting Through the Hype: From Automation to Real AI

Not all “AI” is created equal.

In our conversation, panellists drew a distinction between simple rules-based automation and true machine learning. Much of what’s branded as AI in compliance is really enhanced process automation: scanning documents, managing workflows, or normalising data. While such tools are valuable to compliance teams, the consensus was this: organisations should be wary of “AI washing”, the marketing of traditional software or simple automation as artificial intelligence. Put simply, firms need to ensure that what they’re evaluating as a solution, or considering taking on board, isn’t just a repackaged version of legacy tools with new labels.

“Replacing your filing cabinet with a digital one isn’t intelligence, it’s just storage with a new label. True AI should move beyond convenience and actually reshape how decisions are made.”

Understanding where automation ends and genuine predictive or adaptive AI begins is the critical assessment compliance teams must make to position themselves for long-term success and avoid falling behind.
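
To make the distinction concrete, here is a deliberately simple Python sketch - the features, thresholds, and training data are all hypothetical. The first function is rules-based automation: a check a developer wrote down in advance. The second is machine learning in miniature: a model that infers its own decision boundary from labelled history.

```python
# Rules-based automation: a fixed, hand-written check. Useful, but not "AI".
def needs_disclosure(holding_pct: float, threshold: float = 5.0) -> bool:
    """Flag a holding once it crosses a hard-coded disclosure threshold."""
    return holding_pct >= threshold

# Predictive AI: a model that learns its decision boundary from labelled data.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [holding %, days held] and whether an analyst
# ultimately escalated the position (1) or not (0).
X = [[4.8, 2], [5.1, 30], [2.0, 5], [9.5, 1], [3.2, 60], [7.7, 14]]
y = [1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

print(needs_disclosure(4.9))                  # False: the rule never bends
print(model.predict_proba([[4.9, 3]])[0, 1])  # a learned probability of escalation
```

The rule will never change unless someone edits it; the model’s behaviour shifts as the data does, which is precisely why the governance questions discussed below matter.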

Innovation vs. Regulation: A Necessary Dialogue

Regulators are notoriously behind the curve on technology, but the SEC and others are under pressure to embrace AI themselves. Our panellists made the case that, under the current administration, compliance teams and outside counsel should proactively share their AI experiments with regulators rather than waiting for punitive action at a later date. This kind of transparency allows regulators to provide early feedback and helps firms avoid nasty surprises during examinations or enforcement. 

The key message was that compliance leaders should not fear engagement with governing bodies like the SEC. The more regulators see practical examples, both those fully fleshed out and those in development, the fairer and more realistic the eventual rules will be.

“The SEC has a mandate to embrace AI itself, which is a good thing. AI is one of these technologies where it is very hard to regulate it or tell people how to use it if you're not using it. We can all access AI. We can all use it. So we really, and we didn't feel this way during the prior administration, but we feel like it's a moment to go in and say, ‘You know, we've had three years of working on this. We're not close to perfect, but why don't we show you what we've done?...Why don't we get feedback now, rather than waiting until you come down really hard on us in an exam or we get an enforcement action’. That's [enforcement] not the only way to dialogue with industry. It just, unfortunately, has become the only way to dialogue with industry.”

Oversight and Accountability Still Matter

When considering the use and implementation of AI, and what that would look like in practice, the debate centred on the concept of “human in the loop” models. Our panel was of two minds, with one side advocating for oversight to prevent the rubber-stamping of AI outputs, and the other suggesting that too much caution risks slowing innovation over red flags that don’t provide meaningful value. Both arguments were sound, and they led to common ground: if AI makes a mistake, it is the firm, not the individual, that remains responsible for the outcomes.

Organisations must treat AI the same as any outsourced service by implementing governance, establishing rigorous testing and retesting protocols, ensuring cultural readiness to question outputs, and creating a mindset of continuous improvement. Cutting corners is simply not an option for responsible adoption.

“If you are kind of thinking about it from an outsourcing perspective, any time that you outsource, be it a machine, be it another human being, if you are not providing the right level of oversight, and you're just rubber stamping the output, you're still under the same amount of risks. And you know that humans in the loop have that obligation to make sure that they're not just rubber stamping.”

“I tend to liken it to the transition we've seen over time with cloud adoption, and how long it took us as software professionals like myself to figure out how to do cloud safely, cheaply, and performantly. It took time, and we came up with whole new design patterns for that. What we hear a lot right now with humans in the loop is a similar sort of thing to the early days of cloud. It's a control because we're not really sure how far to push, or how far we can push these things. I think that we're still sort of establishing those organisational design patterns, those points where the humans and the machines interact, figuring out how we want to use AI.”

The Workforce Will Change, But Not Disappear

AI is nibbling away at the repetitive. Filings, reconciliations, and coding boilerplate are all in scope as the first tasks to be automated away entirely. And our panel agreed - that’s good news for compliance professionals. The real value lies in the knock-on effects, where people are freed up to focus on higher-value, judgment-driven work.

Rather than eliminating jobs, AI will demand new skills: prompting AI effectively, managing outputs, and embedding ethical judgment into workflows. Just as automation reshaped industries like farming and manufacturing, AI will reshape compliance by removing the dull, repeatable tasks and creating space for intellectual problem-solving. The professionals who thrive will be those who adapt quickly, embracing AI as a partner rather than fearing it as a replacement.

“We are looking at recruiting developers that have a background in knowing how to prompt AI. And this is one example. That will happen with paralegals and risk and compliance teams - learning how to interpret regulations and use innovation to enhance compliance. It is not going to replace the intellectual work. It is going to replace the mundane.”

Building a Strategy Starts Small

One of the strongest calls to action from our discussion was for leaders to “start experimenting” with AI. 

As our panellists discussed, experimenting can, and likely should, be simple - and some advocated for closed environments, away from sensitive information and the internet. Experimenting can be as basic as automating a spreadsheet task or using AI to draft snippets of code; even small steps like these demonstrate quick wins and build confidence. Start with the tasks that “suck the life out of our staff”, as in the sketch below.
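
As a purely illustrative example, here is one way such a spreadsheet experiment might look in Python - the file name, column names, and review threshold are all hypothetical, and any real version would live in a closed environment as the panel suggested.

```python
# A minimal sketch of a "start small" experiment: normalising a holdings
# spreadsheet and totalling positions per issuer. The file name, column
# names, and review threshold are all hypothetical.
import pandas as pd

df = pd.read_csv("holdings.csv")                       # hypothetical export
df["issuer"] = df["issuer"].str.strip().str.upper()    # normalise issuer names
totals = df.groupby("issuer")["position_pct"].sum()    # aggregate per issuer

# Surface anything approaching a (hypothetical) disclosure threshold
# for human review, rather than deciding anything automatically.
print(totals[totals >= 4.5].sort_values(ascending=False))
```

Notably, the output here is a shortlist for a human to review, not a decision - a small, governed win of exactly the kind the panel described.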

The critical point then becomes combining technical experimentation with governance and cultural adoption, so that AI use is intentional: welcomed and observed, rather than a shadow practice. Firms that hesitate may find their people already familiarising themselves with AI tools on their own, leaving leadership playing catch-up, waiting for a sweeping enterprise solution that may not come. Given how prevalent AI is in our everyday lives, people are naturally inclined to use it with or without their employer’s blessing.

“If you’re not experimenting with AI, your people already are - you just don’t know it yet. Strategy means channeling that curiosity into safe, governed innovation.”

Looking Forward

By 2030, AI won’t be a differentiator; it will be the bare minimum to operate in compliance, as in an array of other industries. Our panellists agreed AI adoption is inevitable. The question is how it will enter compliance workflows, how quickly, and under what governance structures.

AI offers tremendous opportunities to reduce inefficiencies, free up employees for value-add work, and create additional transparency and consistency in compliance. At the same time, it presents the risks of disjointed adoption, over-reliance, and regulatory uncertainty if implemented without care. The organisations that succeed will be those that recognise both realities and plan accordingly.

As much as AI represents powerful technology, leadership and culture will define its responsible use.