Writing
Thinking in public on AI-native engineering, fractional leadership, and building at scale.
AI Engineering Team Structure: Smaller Teams Win in 2026
Running a 2020 org chart with 2026 AI tools is why delivery still feels slow. The fix: smaller teams, senior-weighted, fewer handoffs. Here is the blueprint.
Incremental vs. AI Transformation: What CTOs Actually Choose
Most CTOs frame this as a technology decision. It isn't. The choice between incremental AI and full transformation is determined by four business constraints, not technical preference.
The Forward Deployed Engineer
Palantir built a role in 2015 that most companies had never heard of. One engineer, full ownership, deployed directly into the client's problem. AI didn't create the role. It made it possible at scale.
Anthropic Built a Three-Person Team. They Called It a Harness.
Anthropic's engineering team published a technical deep-dive on multi-agent harness design. What they actually published was an org chart, and it explains exactly why most AI initiatives produce underwhelming results.
Your AI Tools Are Working. Your Ownership Model Is Not.
Teams frustrated with AI ROI share a common pattern. The tools are working. The work is getting done faster. The business outcomes have not changed. That is not a tools problem.
The AI Adoption J-Curve Repeats at Every Level
Most teams survive one AI productivity dip and call the transformation done. The AI adoption J-curve does not happen once. It happens at every level of maturity, and the teams that know this build differently.
Fractional CTO in Singapore: Technology Leadership for Growth-Stage Companies
Singapore's growth-stage companies are well-funded and technically ambitious. The technology leadership gap they face is not about capability in the market. It is about finding the right engagement at the right stage.
Claude Code vs Cursor vs GitHub Copilot: What I Actually Use and Why
Three different tools with three different design philosophies. Here is an honest comparison from someone who has used all three in production, and the framework for deciding which one belongs in your workflow.
Healthcare Teams Can't Copy Big Tech's AI Playbook
Healthcare engineering teams face constraints that make the standard AI adoption playbook dangerous. The maturity model still applies, but the sequence and risk tolerance are different.
Context Engineering Is the Skill That Separates AI-Native Engineers from Everyone Else
Prompt engineering is about what you say. Context engineering is about what you build. It is the emerging discipline that determines whether AI tools produce consistent, accurate output or confident-sounding nonsense.
How Engineering Teams Win in the Age of Agents
Agents made execution cheap. The bottleneck moved to context and verification. Here's the operating model engineering teams are using to win.
Fractional CTO in Malaysia: What Growth Companies Actually Need
Malaysia's growth companies face a specific technology leadership gap: too early for a full-time CTO, too complex for a part-time advisor. Here is what the right engagement looks like and how to evaluate it.
A Practitioner's Daily Workflow with Claude Code
The default way most engineers use Claude Code leaves significant value on the table. Here is the workflow pattern that consistently produces better output with less friction.
The Agentic Coding Workflow: How to Structure Development When AI Does the Work
Agentic coding is not vibe coding with a longer leash. It is a specific way of structuring development work so that AI agents can execute reliably within defined boundaries. Here is what it actually looks like.
AI Will Not Kill PM. It Will Expose Fake PM.
When building gets cheap, the bottleneck moves to problem discovery and outcome ownership. The PM who writes tickets gets exposed. The PM who owns the outcome gets stronger.
The Next SDLC Is Defined by Terrain, Not Stages
Most engineering teams are running AI on the wrong terrain. The Greenfield, Brownfield, WhiteField framework explains why results disappoint and what to change.
AI-Generated Code Broke Your Code Review Process. Here Is How to Fix It.
AI tools increased PR volume by 98% on some teams. Code review processes designed for human-paced output cannot handle that. The bottleneck is now review, not code production, and the fix is not reviewing faster.
AI Is Quietly Collapsing the Junior Engineer Pipeline
Entry-level engineering roles have dropped significantly since AI tool adoption accelerated. Most engineering leaders are not paying attention to this. They should be.
How Engineering Leaders Should Evaluate AI Coding Tools
Most AI coding tool evaluations are run by individual developers and optimised for developer experience. That produces a different answer than an evaluation optimised for team outcomes. Here is how to run the right evaluation.
AI Didn't Change Software Engineering. It Exposed It.
For 30 years, writing code was the bottleneck in software. AI removed it. Now teams can see which engineers had judgment, and which only had execution.
The Four Layers of Claude Code: A Mental Model for Engineering Teams
Claude Code is not just an AI chat window in your terminal. It has a four-layer architecture that determines how it behaves. Understanding it changes how you set it up and what you get out of it.
What Engineering Leadership Actually Means in the Age of AI
AI changed what execution costs. It did not change what leadership requires. But it did change what leadership looks like, and most engineering leaders have not yet made that adjustment.
AI-Native Engineering: The Complete Guide
AI-native engineering is not using AI tools. This guide covers what it means, the four capabilities required, and the L1-L4 maturity model that separates real transformation from tool adoption.
What Does a Fractional CTO Do? A Founder's Guide
A fractional CTO is not a part-time CTO. This guide explains what the role covers, when to hire one, and how to avoid an engagement that ends with advice and no real change.
AI Maturity Levels for Engineering Teams: L1 to L4 Explained
Most teams using AI tools are stuck at L2. Here are the four AI maturity levels, how to assess where your team sits, and what the move to L3 actually requires.
The Hidden Cost of Low AI Adoption on Engineering Teams
Teams not using AI tools consistently are not just missing productivity gains. They are accumulating a competitive disadvantage that compounds month by month.
Fractional CTO in Southeast Asia: What's Different
Fractional CTOs in Southeast Asia face a different context: a distinct talent market, different AI adoption stage, and a specific advantage most founders miss.
Fractional CTO: Most Engagements Are Just Expensive Advice
Most fractional CTO engagements end with a roadmap in a Google Doc. The ones that work install something durable. Here's how to tell the difference.
The Velocity Trap: AI Is Making Teams Faster and More Broken
PRs are up. So are incidents. The 2026 data shows a pattern I've watched play out across engineering teams, and the fix is not what most leaders expect.
Moving an Engineering Team from L2 to L3: A Playbook
L3 AI maturity is achievable in eight to twelve weeks. Most teams do not get there because they try to do everything at once. Here is the sequence that works.
The First 30 Days With a Fractional CTO: What Should Happen
By day 30, a good fractional CTO engagement produces something tangible, not a strategy document. Here is the week-by-week of what should actually happen.
Why Claude Code Breaks Down on Large Repositories (And How to Fix It)
Claude Code is excellent on small, well-structured codebases. On large repositories, it degrades in specific, predictable ways. Understanding why tells you exactly what to fix.
How to Structure a Fractional CTO Engagement
Most fractional CTO engagements fail not because of who you hire but because of how scope is set. Here is how to structure an engagement for real outcomes.
How to Measure AI Adoption ROI on Engineering Teams
Most teams measure AI adoption with velocity and tool licences. Both metrics miss the point. Here are the six metrics that actually show whether AI is working for your team.
Build App with AI Agents: What Three Days Taught Me
Build an app with AI agents and the bottleneck is no longer code. I shipped a production assessment tool in three days. Here's what that actually required.
Fractional CTO Pricing: What to Expect in 2026
The fractional CTO market rate ranges from $2,000 to $20,000 per month. Here is what drives the price, what each tier actually buys, and how to evaluate whether you are getting value.
CLAUDE.md: What It Is and Why Your Team Needs One
CLAUDE.md is the single highest-leverage thing an engineering team can do to make AI tools work reliably. Here is what to put in it, what to leave out, and how to keep it alive.
Fractional CTO vs Full-Time CTO: How to Decide
The question is not which is better. It is which one matches your current problem. Three criteria that tell you clearly whether to hire fractional or full-time.
Vibe Coding Is Over. Here Is What Comes Next.
Vibe coding was a useful first phase. Generate, accept, ship, repeat. It worked until it didn't. The teams that figured out what comes next are operating in a different gear entirely.
AI-Native vs AI-Assisted: How to Tell the Difference
Three diagnostic tests that reveal whether your engineering team is truly AI-native or just AI-assisted, and why the difference determines what you should do next.
Software Agency AI Disruption Is Already Here
Software agency AI disruption is reshaping who wins development work. A solo builder with the right stack out-ships agency teams on most standard projects.
Claude Code Hooks: The Safety Layer Most Engineers Skip
Claude Code Hooks are deterministic callbacks that run before and after every agent action. They are the difference between an AI tool you can trust in production and one that occasionally does something you did not ask for.
Hiring Engineers Won't Fix Your Delivery Problem
Slow delivery is a systems problem, not a staffing one. Adding engineers to a broken system makes it louder, not faster. Here's what actually fixes it.
Claude Code Skills: The Feature That Changes How Your Team Works
Claude Code Skills are reusable, auto-invoked knowledge packs that make AI tools work consistently across an entire team. Here is how they work and how to build them.
AI Agents Don't Fail in Dev. Your Repo Does.
AI agents fail in production codebases because the repo isn't built for them. Four layers separate agent-ready codebases from the rest.
What Engineering Managers Actually Do Now
The EM job was written for a world where engineers were the bottleneck. AI changed that. Most EMs are still optimising for the role that no longer exists.
The Org Chart Didn't Change. The Work Did.
Engineering teams adopted AI into org structures built for slow execution. Coding is no longer the bottleneck. The structure is. Here is where to start.
The Security Debt AI Is Quietly Creating
69% of organisations found security vulnerabilities in AI-generated code. PR volume doubled. Security review capacity did not. The debt is accumulating.
Your Board Is Asking the Wrong AI Questions
Every board is asking 'are we using AI?' It's the wrong question. CTOs who win this conversation reframe it: not adoption rate, but what the business can now do.
AI Makes Vague Requirements Expensive
AI requirements planning doesn't fix vague specs; it accelerates them. Teams now build the wrong thing faster, with more confidence, at a higher cost to undo.
DeepSeek Lowered the Cost. Not the Problem.
DeepSeek R1 made AI dramatically cheaper for engineering teams. Most leaders took the wrong lesson. Cost was never the bottleneck. Here is what actually is.
Replacing an Offshore Engineering Team Without Killing Velocity
The cost-arbitrage era of offshoring is ending. AI changes the math. Here's how to transition from a large offshore team to a smaller, faster onshore one.