Your Board Is Asking the Wrong AI Questions
Every board is asking "are we using AI?" It's the wrong question. CTOs who win this conversation reframe it: not adoption rate, but what the business can now do.
Every board in 2026 is asking the same question: are we using AI? I've been in that room enough times to know that both sides of the table feel good about the exchange. The board feels like it's exercising governance. The CTO feels like it's an easy win. And nothing of value gets said.
"Are we using AI?" is an activity metric. It tells you whether something is happening. It tells you nothing about whether that something is producing outcomes worth caring about. The board asks it because it's the question they know how to ask. CTOs answer it because it's easy to answer. Both sides are, without realising it, colluding to avoid the harder question.
The CTOs who are winning this conversation have made a simple reframe: not adoption rate, but what the business can now do that it couldn't before. That shift changes every downstream decision, from where to invest to how to evaluate return.
"Are We Using AI?" Measures Activity, Not Value
There is a reason this question feels satisfying even when the answer is hollow. Activity is visible. Tool licences show up in a budget line. Adoption dashboards show the percentage of engineers using Copilot or Claude. A CTO can walk into a board meeting with a slide that says "73% AI tool adoption across engineering" and everyone nods.
The number is real. The implication, that the business is getting meaningful value from AI, is not. Activity metrics tell you about inputs. They say nothing about outputs. A team where 73% of engineers use AI tools to move slightly faster on existing work is not in a meaningfully different position from a team where 30% use them. If the work being done is the same work, faster is incremental, not transformational.
The research bears this out. Only 29% of CTOs report being able to measure AI ROI with confidence. That gap is not a capability failure. It is a measurement failure: teams are tracking the wrong things and arriving at the board table with data that cannot answer the actual question.
Both Sides of the Table Are Avoiding the Harder Question
The harder question is: what business outcomes did AI enable this quarter that were not possible before?
It is a harder question because the answer requires real work to produce. You need to know what the team shipped, what the team could not have shipped without AI, and what the operational constraints looked like before and after. That is not a dashboard metric. That is a piece of thinking that takes time and honesty.
The board avoids asking it because most board members do not know how to frame AI in outcome terms. They are not technical. The "are we using AI?" question is a proxy for "are we not falling behind?" It is a risk-reduction question dressed up as a strategy question. Board members have read enough headlines to know AI matters. They do not have the technical vocabulary to ask a better question than the one they are asking. So they ask this one, get a satisfying answer, and move on.
The CTO avoids asking it because the honest answer is often uncomfortable. 56% of AI investments show zero measurable financial ROI. If the CTO is sitting with an AI spend line that has not moved business outcomes, surfacing the right question means surfacing a hard truth. It is much easier to talk about adoption rates. And the board, not knowing what else to ask, will accept it.
I've watched this dynamic play out across a number of technology organisations. The pattern is consistent: the AI budget grows, the adoption metrics improve, and the business outcomes do not move. Nobody raises it because nobody wants to be the person who questions the AI investment. By the time the ROI gap becomes impossible to ignore, the organisation has spent two or three years on the wrong conversation.
This is where I've seen the most capable technology leaders separate themselves: they go to the board with the uncomfortable analysis rather than the comfortable dashboard. That builds the kind of credibility that changes how boards engage with technology conversations. A board that trusts its CTO to surface problems, not just wins, will give that CTO more latitude and more investment, not less.
The Three Questions That Actually Measure AI Investment Returns
I've landed on three questions that actually predict whether an AI investment is generating return. They are not easy to answer, which is precisely what makes them useful.
First: what delivery constraints did AI remove? Every engineering organisation has a set of constraints that limit throughput. Context-switching between tasks. Review bottlenecks. The time it takes to bring a new engineer up to speed on a codebase. The slow cycle of writing a feature, writing the tests, writing the documentation, and shipping the change. If AI is working, it should be removing specific, named constraints. If you cannot name the constraint that was removed, the AI investment has not yet produced structural change.
Second: what did the team ship that it could not have shipped without AI? This is the hardest question to answer honestly. Teams often reach for velocity improvements: "we shipped 30% more features." That is not what the question is asking. The question is about capability expansion, not speed. Did you ship a feature that would have required a headcount addition you did not have budget for? Did you build something in three months that would previously have taken six? Did you enter a product area that was previously inaccessible because of engineering cost? If the answer is yes, you have an AI investment story worth telling. If the answer is "we moved a bit faster on existing work," the honest characterisation is efficiency, not transformation.
Third: what is the marginal cost of the next unit of output, and is it falling? This is the business-model question buried inside AI adoption. One of the genuinely transformational effects of AI in engineering is the potential for marginal cost reduction: each additional feature, each additional integration, each additional market served costs less to produce than it did before. If your engineering team is operating this way, the unit economics of your product development are changing. That is a board-level story. If the marginal cost is not falling, something in the system has not changed structurally.
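To make the marginal-cost question concrete, here is a minimal sketch with hypothetical quarterly figures. The cost basis, the feature counts, and the choice to divide fully loaded engineering cost by features shipped are illustrative assumptions, not a prescribed attribution model.

```python
# Hypothetical quarterly figures: fully loaded engineering cost and features shipped.
# The attribution of cost to features is deliberately crude; what matters is the trend.
quarters = [
    {"quarter": "Q1", "engineering_cost": 1_200_000, "features_shipped": 24},
    {"quarter": "Q2", "engineering_cost": 1_250_000, "features_shipped": 31},
    {"quarter": "Q3", "engineering_cost": 1_250_000, "features_shipped": 38},
]

previous = None
for q in quarters:
    cost_per_feature = q["engineering_cost"] / q["features_shipped"]
    trend = ""
    if previous is not None:
        change = (cost_per_feature - previous) / previous * 100
        trend = f" ({change:+.0f}% vs prior quarter)"
    print(f'{q["quarter"]}: ~£{cost_per_feature:,.0f} per feature{trend}')
    previous = cost_per_feature
```

If that per-feature number is flat quarter over quarter, the honest reading is efficiency at best, not structural change, however strong the adoption numbers look.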
These three questions cannot be answered with a dashboard. They require the CTO to know the business well enough to connect the engineering work to the outcomes. That is the capability that separates the CTOs who lead this conversation from the ones who respond to it. It also requires the intellectual honesty to answer them truthfully when the answer is "not yet." A CTO who can walk into a board meeting and say "our AI investment has improved efficiency but has not yet expanded capability, and here is what we are changing" is doing more for board confidence than one who presents adoption charts that nobody knows how to challenge.
How to Bring Your Board the Right Conversation
The framing shift is straightforward to describe and genuinely difficult to execute. Instead of walking in with adoption metrics, walk in with an outcomes narrative.
The structure I've seen work: start with the constraint that was removed. Name it specifically. "Our AI investment removed the context-switching bottleneck in our release pipeline. Engineers were losing 40% of their productive capacity to switching between tasks and the cognitive overhead of context recovery. That constraint is substantially reduced." That is a sentence a board can engage with. They understand bottlenecks. They understand cost.
Then move to the outcome. "We shipped four features in Q4 that required a level of personalisation we could not have built without AI-assisted development. Those features are now in the hands of X customers and driving Y." The board does not need the engineering detail. They need to see the chain from investment to outcome.
Then close with the forward-looking unit economics. "Our marginal cost per feature shipped has decreased 35% since Q2. Our roadmap for H1 is more ambitious than anything we would have planned at this time last year, with the same headcount." That is a business case. That is the conversation boards want to have and almost never get.
The format is: constraint removed, outcome produced, unit economics improving. Three things. No adoption dashboards.
What Good AI Progress Reporting Actually Looks Like
Most AI progress reporting fails because it is structured to answer the question that was asked rather than the question that matters. The board asked "are we using AI?" and the CTO built a report that answers it. The report is accurate and useless.
Here is what that report looks like: a slide with a tool adoption percentage, a chart showing lines of code generated, a note that the team has attended AI training. It is activity all the way down. A board that sees this report is no better equipped to govern AI investment than it was before the meeting.
Good AI progress reporting is built backwards from the business outcomes the technology organisation is accountable for. Start with the business metric: revenue per engineer, features delivered per quarter, time to market for a new capability. Then show how AI investment moved that metric. Then show the quality signal (incidents, change failure rate, review velocity) that confirms the movement is sustainable rather than borrowed from future stability.
Here is what that looks like in practice: "Our AI investment in automated test generation reduced our change failure rate by 18% while deployment frequency increased by 30%. The unit cost of a stable release is lower than it was six months ago. Here is what that means for our H1 roadmap commitments." That is a report a board can use. It connects investment to outcome, shows the quality signal, and bridges to the forward-looking plan.
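For readers who want to sanity-check that kind of claim, here is one back-of-the-envelope way to define the unit cost of a stable release: engineering cost for the period divided by deployments that did not trigger a change failure. The formula and every figure below are assumptions for illustration; substitute your own cost basis and delivery metrics.

```python
# Back-of-the-envelope: cost per stable release, before and after an AI investment.
# "Stable release" is defined here as a deployment that did not trigger a change failure.
# All figures are hypothetical; the 30% deployment increase and 18% change-failure-rate
# reduction mirror the illustrative numbers in the narrative above.

def cost_per_stable_release(period_cost: float, deployments: int, change_failure_rate: float) -> float:
    stable_releases = deployments * (1 - change_failure_rate)
    return period_cost / stable_releases

before = cost_per_stable_release(period_cost=400_000, deployments=60, change_failure_rate=0.22)
after = cost_per_stable_release(period_cost=400_000, deployments=78, change_failure_rate=0.18)

print(f"Before: ~£{before:,.0f} per stable release")
print(f"After:  ~£{after:,.0f} per stable release")
print(f"Change: {(after - before) / before * 100:+.0f}%")
```

The exact definition matters less than picking one, writing it down, and reporting it consistently quarter over quarter.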
The reporting rhythm matters too. Boards that get an AI update once a year are being asked to make capital allocation decisions on stale information. The best technology leaders I've seen have introduced a lightweight quarterly AI performance narrative: one page with three outcomes: the constraint removed this quarter, the capability gained, and the number that shows unit economics moving. It takes less time to prepare than a slide deck of adoption charts and it generates better conversations.
86% of leaders admit uncertainty about which AI tools are providing real benefit. That uncertainty exists because the measurement is pointed at tools rather than outcomes. Repointing the measurement at outcomes fixes the uncertainty and produces reporting that boards can actually use to govern and allocate.
The shift from "are we using AI?" to "what did it produce?" is not a communications reframe. It is a strategic and measurement discipline. The organisations that institutionalise it in their board reporting cadence will have a fundamentally different board relationship around technology investment. Not because the board becomes more technical, but because the CTO brings them the right analysis and the conversation becomes genuinely useful.
If you are walking into a board meeting in the next quarter with an AI adoption update, spend thirty minutes on these three questions before you build a single slide. What constraint did you remove? What could you now ship that you could not before? Is the marginal cost of output falling? If you can answer all three, you have a board AI conversation worth having.
I help engineering teams close the gap between "we use AI tools" and "AI actually changed how we deliver." Book a 20-minute call and I'll tell you where the leverage is.
Working on something similar?
I work with founders and engineering leaders who want to close the gap between what their technology can do and what it's actually delivering.