29 January 2026 · 11 min read

What Engineering Managers Actually Do Now

The EM job was written for a world where engineers were the bottleneck. AI changed that. Most EMs are still optimising for the role that no longer exists.

engineering-management · ai-native · engineering-leadership

The engineering manager job description was written for a world where engineers were the execution bottleneck. AI took over execution. Most EMs are still doing the old job, and the people above them are still measuring the old metrics. That gap is where a lot of engineering organisations are silently losing ground right now.

This is not a post about tools. It is about what the role actually requires now, why the transition is harder than it looks, and what VPs and EMs can do about it. If you manage engineers, or manage people who manage engineers, this is the job redesign conversation you probably have not had yet.

The EM Job Was Designed for a Bottleneck That No Longer Exists

The classic EM role has three core functions: unblock engineers, track delivery, manage relationships. All three rest on a single assumption: that engineers were the constraint in the system. If you could keep engineers productive, focused, and unblocked, the team delivered. That was mostly true for a long time.

The job codified around this assumption. EMs ran sprint ceremonies. They tracked velocity. They managed stakeholder expectations around delivery timelines, because delivery timelines were a function of engineer hours. They escalated technical decisions up when they were above the team's confidence level. The role had real leverage because the constraint it managed was real.

That constraint is no longer the dominant one. A single engineer with capable AI tooling can produce what a team of five produced before. That number will continue to move. The constraint has shifted from engineering throughput to something harder to measure and harder to manage. EMs whose mental model of the job is still centred on throughput are managing the wrong thing.

What AI Actually Changed About Engineering Work

The productivity gains from AI in engineering are real, but they are often misread. The common narrative is: engineers write more code faster. That is true, but it misses the more consequential change.

What AI actually changed is the cost of execution relative to the cost of judgment. When execution was expensive, the organisation tolerated a certain amount of vagueness upstream. If the spec was unclear, an engineer would spend two weeks building the wrong thing, and the feedback loop, while painful, was slow enough that bad requirements had time to be caught before they compounded. The system was inefficient but self-correcting.

When execution is cheap, vague context produces wrong output at speed. An AI tool working from an ambiguous brief does not slow down and ask clarifying questions the way a senior engineer might. It produces confident, coherent, incorrect output. The feedback loop compresses and the error surface expands. Speed without clarity is not a productivity gain; it is a liability.

The other change is volume. AI-assisted teams produce more outputs across the board: more code, more tests, more documentation, more pull requests. The review burden increases. Human attention becomes the bottleneck it never quite was before, because the humans in the loop are now selecting which of a larger set of outputs actually make it through. That selection process has always mattered. Now it is the primary constraint on output quality.

Both of these changes have direct implications for what the EM job needs to look like. Neither of them is about sprint velocity or team satisfaction.

The Three Things Engineering Managers Are Responsible for Now

The new EM job is not radically different in form. It is different in where the leverage lives. Based on what I have seen in teams navigating this well, it has three components.

Context stewardship. The quality of AI output is a direct function of the quality of context it works from. Requirements that are too vague, architectural decisions that are undocumented, product intent that lives in someone's head rather than in writing: all of these produce AI outputs that are technically plausible but wrong for the system. The EM who treats context as someone else's problem is creating a compounding quality debt that does not show up in sprint metrics until it surfaces as rework, incidents, or a product that does not behave the way anyone intended.

Context stewardship is not about writing better tickets, though that is part of it. It is about ensuring that the team's shared understanding of what they are building and why is explicit enough that an AI tool, or a new engineer, can work from it reliably. That requires deliberate investment. It requires EMs who know what good context looks like and can recognise when it is missing.
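One way to make "good context" concrete is to treat it as a structured artifact rather than tribal knowledge. The sketch below is illustrative, not a standard: the field names and the example values are assumptions about what an AI tool or a new engineer would need before starting work. The useful part is the `gaps()` check, which names what is missing instead of letting a tool silently guess.

```python
from dataclasses import dataclass, field

@dataclass
class ContextBrief:
    """A minimal, explicit context artifact for a piece of work.

    Every field is illustrative: the point is that intent,
    constraints, and non-goals are written down, not assumed.
    """
    intent: str                      # why this work exists, in product terms
    constraints: list[str]           # hard limits: SLAs, compliance, architecture
    non_goals: list[str]             # what this deliberately does not cover
    prior_decisions: list[str]       # links or summaries of relevant decisions
    open_questions: list[str] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Name the missing pieces a tool or new hire would guess at."""
        missing = []
        if not self.intent.strip():
            missing.append("intent")
        if not self.constraints:
            missing.append("constraints")
        if not self.non_goals:
            missing.append("non_goals")
        if not self.prior_decisions:
            missing.append("prior_decisions")
        return missing

# Hypothetical example: an empty list is itself a finding.
brief = ContextBrief(
    intent="Let support agents refund orders without engineering help",
    constraints=["refunds capped at original charge", "audit log required"],
    non_goals=[],
    prior_decisions=["ADR-014: payments service owns all money movement"],
)
print(brief.gaps())  # → ['non_goals']
```

The format matters less than the habit: a brief like this takes minutes to write and surfaces exactly the gaps that would otherwise turn into confident, wrong output.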

Quality judgment. As AI volume increases, review becomes more selective and more consequential. The EM who thinks their job is to ensure reviews happen is solving the wrong problem. The job is to know what to review: which outputs carry the most risk, which decisions are reversible, where the AI is most likely to be confidently wrong. That requires a different kind of attention than tracking whether the team is following the process.

I have managed EMs who thought rigorous review meant reviewing everything. What it actually means is having a calibrated view of where human judgment adds the most value, and spending that attention deliberately. In a high-AI-volume team, you cannot review everything. The EMs who understand this are building judgment frameworks. The ones who do not are either rubber-stamping output or creating bottlenecks.
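A judgment framework can be made explicit enough to argue about. The heuristic below is a toy sketch: the risk signals, weights, and thresholds are all invented, and any real version would be tuned to the team. The value is in forcing the team to name what makes a change worth deep human attention.

```python
from dataclasses import dataclass

@dataclass
class Change:
    touches_money_or_auth: bool   # blast radius if wrong
    reversible: bool              # can we roll back cheaply?
    ai_generated: bool            # was most of it machine-written?
    novel_pattern: bool           # first use of an approach in this codebase

def review_depth(c: Change) -> str:
    """Decide where human attention goes. Weights are illustrative."""
    score = 0
    score += 3 if c.touches_money_or_auth else 0
    score += 2 if not c.reversible else 0
    score += 1 if c.ai_generated else 0
    score += 1 if c.novel_pattern else 0
    if score >= 4:
        return "deep review"      # senior reviewer, design discussion
    if score >= 2:
        return "standard review"
    return "spot check"           # sample-based, trust the tests

print(review_depth(Change(True, False, True, False)))   # → deep review
print(review_depth(Change(False, True, True, False)))   # → spot check
```

Even a crude scheme like this beats the two failure modes above: it neither rubber-stamps everything nor queues everything behind one reviewer.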

Agent coordination. This is the part most EMs have not encountered yet, but it is arriving faster than most organisations expect. As teams move from AI-assisted workflows to agentic ones, where AI systems are running autonomously across parts of the development lifecycle, someone needs to own the orchestration layer. What agents are running, on what parts of the codebase, with what guardrails, with what escalation paths when something goes wrong: these are not purely technical decisions. They are operational decisions that sit at the intersection of product, engineering, and risk.

The EM is the natural owner of this layer. Not because they will configure the agents, but because they are responsible for the integrity of the team's output and the system the team is building. An agent that runs without clear guardrails in a production-adjacent context is an EM-level risk, not a developer-level one.
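What "clear guardrails" might look like can be sketched in a few lines. Everything here is hypothetical: the agent name, path conventions, limits, and escalation owner are invented for illustration. The point is that the limits exist in writing, are machine-checkable, and name a human who gets paged when they are hit.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentPolicy:
    """Guardrails for one autonomous agent. All values are illustrative."""
    name: str
    allowed_paths: tuple[str, ...]   # parts of the codebase it may touch
    max_changed_files: int           # beyond this, stop and escalate
    escalation_owner: str            # the human paged when a limit is hit

def check(policy: AgentPolicy, changed_files: list[str]) -> str:
    """Return 'proceed' or an escalation reason. A sketch, not a product."""
    if len(changed_files) > policy.max_changed_files:
        return f"escalate to {policy.escalation_owner}: too many files"
    for f in changed_files:
        if not any(f.startswith(p) for p in policy.allowed_paths):
            return f"escalate to {policy.escalation_owner}: {f} out of scope"
    return "proceed"

policy = AgentPolicy(
    name="test-backfill-agent",
    allowed_paths=("services/billing/tests/",),
    max_changed_files=20,
    escalation_owner="em-on-call",
)
print(check(policy, ["services/billing/tests/test_refunds.py"]))  # → proceed
print(check(policy, ["infra/prod.tf"]))  # → escalation, out of scope
```

Writing the policy is the EM-level work; wiring it into the pipeline is the part the team can delegate.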

Why Most EMs Do Not Know Their Job Changed

The honest answer is that no one told them. The role description did not update. The performance review criteria did not update. The expectations from their managers, in many cases, did not update.

The EMs who are thriving in AI-native teams are not the ones who got better at sprint ceremonies. They are the ones who reoriented toward output quality and context clarity, usually because they had a manager or an environment that made those things visible as levers. Most EMs do not have that. They are still being rewarded for the old version of the job, which means they have no rational reason to change.

There is also a competence problem that is worth naming directly. Context stewardship and quality judgment are harder to develop than sprint management. They require a deeper understanding of the product and architecture than many EMs have been expected to maintain. The EM who spent five years managing delivery without staying close to the technical and product substance of what the team is building is now in a difficult position, because those skills are no longer optional background knowledge. They are core to the job.

This is not a personal failure. It is a predictable outcome of a role that was defined narrowly and rewarded for narrow execution. But it does mean that many EMs are facing a genuine capability gap, not just a mindset shift.

The other thing that makes this hard is that the new version of the job is less visible. Unblocking engineers is visible. Running a good retro is visible. Ensuring the team's context documentation is accurate and current, calibrating which pull requests actually need deep review, owning the guardrails on an agent pipeline: these are harder to see from above. In organisations that measure what is legible, the EMs doing the right things in the new model often look like they are doing less than the EMs still optimising for sprint velocity. That is a leadership environment problem, and it is the VPs' problem to solve.

What to Do If You Are an EM Navigating This Shift

The transition is not a single move. It is a reorientation that happens gradually, and the EMs who do it well tend to follow a similar sequence.

Start with context. Pick one part of your team's work and ask: if an AI tool were handed this context cold, what would it get wrong? That question exposes the gaps faster than any audit. The answer is almost always the same: the intent behind a decision, the constraints that were not documented, the architectural context that exists only in the heads of two people who were there when the choice was made. Making that context explicit, even informally at first, is the highest-leverage thing most EMs can do in the first few weeks of reorientation.

Then get close to output quality, not just output volume. Spend time in the codebase. Not to manage it, but to understand what good looks like for this system, in this context, given the team's current trajectory. The EMs I have seen make this transition well all did something that looked like going backwards: they got more technical again, not to compete with their engineers, but to develop the judgment they needed to evaluate what was being produced. That judgment is not replaceable by process.

On agent coordination: if your team is not there yet, the question to ask is not "when will we be?" but "what are the governance conversations we need to have before we get there?" The teams that navigate the agentic transition cleanly are the ones whose EMs started those conversations six months before the first agent ran in production. The teams that struggle are the ones that got there first and had to retrofit the guardrails after something went wrong.

Finally, be explicit with your manager about what you are doing and why. The visibility problem is real. If your manager is measuring velocity and you are investing in context infrastructure, you will look less productive unless you name what you are building and why it matters. That conversation is uncomfortable. It is also part of the job.

The Job Redesign Most VPs Have Not Done

For VPs reading this: if your EMs are not operating in the way this post describes, that is partly a job design problem. The role they were hired for, the one they were trained in, the one they are still being measured against, does not match the leverage available in an AI-native team. Changing the people without changing the job is unlikely to produce different outcomes.

The redesign does not have to be radical. It starts with being explicit about what the role is now responsible for. Context quality, review calibration, and output integrity are all things that can be made visible in how EMs are evaluated and coached. They are currently invisible in most engineering organisations, which is why the transition is stalling.

The research on this is directional but clear. A 2023 McKinsey study on software developer productivity found that 40 percent of the productivity gains from AI tools were lost to downstream rework and quality issues, most of which traced back to insufficient context at the start of work rather than poor execution during it. That is a context problem. It is an EM problem. It is a job design problem.

The EMs who are going to matter most over the next few years are the ones who understand that their job is to make the system produce good outcomes, not to manage the people producing them. AI has made that distinction impossible to ignore. The leaders who act on it now are building something durable. The ones who do not are managing a role that is quietly becoming less relevant.


I help engineering teams close the gap between "we use AI tools" and "AI actually changed how we deliver." Book a 20-minute call and I'll tell you where the leverage is.
