Why You Need Good AI Governance

Nate Buchanan, COO & Co-Founder, Pathfindr

It may not be everyone's favorite corporate function…but it's very necessary.

No corporate buzzword elicits as many reactions - most of them negative - as "governance". Whether it's a Forum, Committee, or Tribe, anything governance-related is often perceived as something that gets in the way of progress, even if people acknowledge that it's necessary.

When it comes to AI, this dichotomy is particularly stark because of the nature of the technology. It’s exciting, it changes quickly, and people want to play around with it…but it’s also unpredictable and can put companies and their customers at risk.

Hence the need for governance - whether you like it or not.

But AI governance doesn’t have to be cumbersome and overbearing. In fact, there are some simple ways to integrate it into your existing governance processes so that you don’t need a standalone set of meetings, reports, or templates that nobody wants to attend or deal with.

First, start by understanding what governance is “for” at your company. Most teams think of governance as a system of controls and checks that ensure that goals are being achieved in the right way. This might include considerations such as:

Cost-Effectiveness - is the project on budget?

Quality - is the work product being created at a high level of quality?

Speed - are milestones being met as scheduled?

Risk - is the team or company exposed in some way?

These elements are usually discussed in a forum to confirm they're within tolerances; if they're not, corrective action is required.
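To make "within tolerances" concrete, here's a minimal sketch in Python of what those checks might look like written down. Everything in it - the dimensions, the field names, the limits - is a hypothetical stand-in for whatever your forum actually tracks.

```python
# Illustrative sketch only - the dimensions, field names, and limits below
# are hypothetical stand-ins for whatever your forum actually tracks.
from dataclasses import dataclass

@dataclass
class ProjectStatus:
    budget_variance: float    # fraction over budget, e.g. 0.05 = 5% over
    defect_rate: float        # defects per delivered work item
    schedule_slip_days: int   # days behind the agreed milestone plan
    open_risk_items: int      # unresolved entries on the risk register

# Hypothetical tolerances agreed by the forum for this class of project.
TOLERANCES = {
    "budget_variance": 0.10,
    "defect_rate": 0.05,
    "schedule_slip_days": 10,
    "open_risk_items": 3,
}

def out_of_tolerance(status: ProjectStatus) -> list[str]:
    """Return the dimensions that breach tolerance and need corrective action."""
    return [name for name, limit in TOLERANCES.items()
            if getattr(status, name) > limit]

if __name__ == "__main__":
    ai_pilot = ProjectStatus(budget_variance=0.18, defect_rate=0.02,
                             schedule_slip_days=4, open_risk_items=5)
    print(out_of_tolerance(ai_pilot))  # ['budget_variance', 'open_risk_items']
```

The point isn't the code - it's that tolerances are explicit and agreed in advance, so the forum's conversation is about breaches rather than definitions.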

AI projects can be included in these discussions, as cost, quality, speed, and risk all matter for them too. But the tolerances may need to differ because of the unique nature of the technology. For example, outputs from LLMs can be unpredictable, so holding a generative AI system to the same quality standard as an e-commerce website is unreasonable. Similarly, it can be easier to run over budget when building a chatbot, because the gap between technical capability and desired customer experience may be wider than expected, requiring more experimentation and trial and error. And the risk posed by unproven AI applications is well documented; suffice it to say that it needs to be a central topic of conversation at any governance forum addressing AI.

So you can use existing governance processes to manage AI projects - great. But you might be wondering: is there anything unique about AI that requires a new process or capability in order to govern it effectively?

I’m glad you asked. The answer is yes.

Testing is simultaneously the most important and least understood element of AI governance. Most organizations have things they'd like to improve about their current testing capabilities (to put it mildly). But asking test teams to take on the additional challenge of learning how to test AI applications - with their experimental nature and unpredictable outputs - can be quite daunting. Yet without a good testing framework that is specific to AI, it's difficult to get the inputs you need to participate in a governance forum. You need to be able to articulate the current state of quality in an AI solution in order to contribute to the conversation.

Unpacking the approach for testing AI applications is outside the scope of this week’s edition - we’ll cover that in a future post. Here are a few approaches to AI testing from a governance perspective to consider in the meantime:

  1. Rethink Requirements - requirements for AI applications may not be as binary as test teams are used to, and that's OK. Wanting a chatbot to give the correct answer 100% of the time isn't feasible with today's technology - but if you set the threshold at "provide a usable/acceptable response 80% of the time", that can be achieved and tested against with a sufficiently large sample set of users (there's a brief sketch of this after the list). When you're updating a governance forum on whether or not requirements have been met, it's easier to have the conversation if you can provide that type of context.
  2. Deconstruct Defects - AI applications "in the wild" will be used by lots of different people who may expect them to behave a certain way, and they might raise defects if they get an answer they don't like. It's important that testers are able to evaluate each one and determine which are true defects and which can be explained by the unpredictable nature of AI. Governance forums will need to be taken on the journey to understand the difference.
  3. Surprise Scripts - because you usually won't have straightforward requirements when working with AI, your test scripts won't always have a set of predefined steps and expected results. Exploratory testing - session-based, unscripted testing that exercises capabilities across a wide range of user journeys - is particularly well suited to AI because it lets teams explain application behavior more comprehensively, giving those responsible for governance confidence in the health of the solution.
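As promised in point 1, here's a minimal sketch in Python of how graded test results might be rolled up against that hypothetical 80% threshold. `GradedCase`, its fields, and the threshold value are assumptions for the example; how each response actually gets graded (human review, a rubric, an LLM-as-judge) is the harder problem and out of scope here.

```python
# Illustrative sketch only - GradedCase, its fields, and the 80% threshold
# are assumptions for the example; how each response gets graded (human
# review, a rubric, an LLM-as-judge) is a separate, harder problem.
from dataclasses import dataclass

@dataclass
class GradedCase:
    prompt: str
    response: str
    acceptable: bool  # the judgment from whatever grading process you use

THRESHOLD = 0.80  # "usable/acceptable response 80% of the time"

def acceptance_rate(cases: list[GradedCase]) -> float:
    """Fraction of sampled responses judged usable/acceptable."""
    return sum(c.acceptable for c in cases) / len(cases)

def governance_summary(cases: list[GradedCase]) -> str:
    """One-line status suitable for a governance forum update."""
    rate = acceptance_rate(cases)
    verdict = "within tolerance" if rate >= THRESHOLD else "corrective action needed"
    return (f"{len(cases)} cases sampled, {rate:.0%} acceptable "
            f"(threshold {THRESHOLD:.0%}): {verdict}")

if __name__ == "__main__":
    cases = [GradedCase("q1", "a1", True), GradedCase("q2", "a2", False),
             GradedCase("q3", "a3", True), GradedCase("q4", "a4", True),
             GradedCase("q5", "a5", True)]
    print(governance_summary(cases))
    # 5 cases sampled, 80% acceptable (threshold 80%): within tolerance
```

The arithmetic is trivial; the governance value comes from agreeing on the threshold and the sample size before the results come in.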

AI governance isn't something to be afraid of - in fact, it's necessary, and it needn't be invasive. The more teams can adapt their processes to the unique needs of AI projects, the more they'll be able to manage cost, quality, speed, and risk without sacrificing progress.
