Guaranteed outcome · 6 weeks · remote

AI Agents in Production

An autonomous agent closes a real ticket in your production codebase. Your engineer reviews the PR and merges it. That's the outcome: not a demo, not a proof of concept, not a strategy deck. Code in production, working on your actual backlog. If it doesn't happen by the end of the engagement, you pay nothing.

Book a call to start
The guarantee

One merged agent PR in your production repo. Or you pay nothing.

The outcome is specific: an autonomous agent generates a PR on a real ticket from your backlog, your engineer reviews it, and it merges into your production branch. Not a sandbox. Not a demo environment. Your codebase, your tickets, your engineer's approval. If that doesn't happen by the end of week six, the engagement is free.

This is for you if

You want an agent that closes actual tickets in your codebase: not one that assists an engineer, but one that works a ticket from intake to PR, with your engineer reviewing at the end.

You need an undeniable proof point for your board: code in production, generated by an agent, merged by your team.

You want someone working inside your actual repo, on your actual backlog, not a controlled demo environment.

You want the guarantee: either the agent PR ships and merges, or you pay nothing.

Your team already has consistent AI practices and shared context files in place, or you've completed the Consistent AI Engineering engagement.

What gets implemented
Agent workflow: start to PR

Agents working end-to-end on scoped tickets: from ticket intake to code generation to PR submission, with your engineer reviewing at defined checkpoints.

Automated quality gates

Quality checks embedded in the pipeline that catch the failure modes specific to AI-generated code, before an engineer ever sees the PR.

Agent-optimised test infrastructure

Your test suite rebuilt so agents can interpret its feedback, act on failures, and self-correct without human intervention on every run.

Handover documentation

Full documentation of how the agent workflow operates. Your team runs this independently after the engagement ends; no ongoing dependency on me.

Six-week structure
Wk 1
Foundations review and ticket scoping

Audit existing context infrastructure and test suite. Select and scope the specific tickets the agent will work on. Define the human-review checkpoints.

Wk 2
Agent workflow design

Design the end-to-end workflow: how the agent receives a ticket, generates code, runs tests, handles failures, and prepares the PR for engineer review.

Wk 3
Infrastructure and pipeline work

Build the automated quality gates. Optimise the test infrastructure for agent-readiness. Establish the human-in-the-loop review model.

Wk 4
First agent runs

Agents operating on real tickets. First PRs generated and reviewed. Measure against baseline. Iterate on failure modes discovered in production.

Wk 5
Calibration

Calibrate agent behaviour based on early results. Address edge cases. Track toward the guarantee milestone: the merged PR.

Wk 6
Merged PR and handover

The agent closes a ticket. Your engineer reviews and merges. Full handover with documentation. Guarantee confirmed.

Need to build the foundations first?

AI Agents in Production requires codebase context files, a test suite agents can run reliably, and consistent team-wide AI practices already in place. If those aren't there yet, the Consistent AI Engineering engagement is the right starting point. Not sure which applies? Book a call and I'll tell you.