AI · Claude Code · Build

Another Week That Reset How I Build

An annotated bookmark

This week I came across a blog post by Boris Tane on how they use Claude Code. I'm sharing the link — I like how it is structured, and it reset how I approach building: in Claude Code, in VS Code with Copilot, in pretty much any AI-assisted workflow I have just now.

The blog pushed me to be more structured. But more than that, it got me to name something I keep feeling. Every other week feels like going back to primary school and re-learning how to count. You pick up a new process, you start using it, and you can't believe you lived without it. The way I was working a week ago looks like first-year maths. Now I'm on third year. Next week the syllabus changes again.

The boundaries of what's possible keep shifting — but it's not just what these tools can do, it's how they are best used, which is still being figured out on the fly. I think back to getting good at keyword-based search engines and all the tricks you'd develop around query construction. Those skills were eventually swallowed by the tools themselves. Something similar is probably coming for a lot of what we're doing now. But in this transitional moment, there is a real difference between people who set these tools up well and people who don't.

The Biggest Variable: How You Set Up the Problem

Watching people across work and personal life use ChatGPT, Claude, and other copilots just now, the thing that jumps out most is how they set up the problem. The delta between a well-framed brief and a poorly-framed one is enormous. And the more I look at it, the more it resembles the same principles I've applied when scoping work with teams.

My initial mistake was giving the agent too much creative licence. When a tool knows a lot about everything, that is a very wide set of possible outcomes — and most of them are not what you wanted. The fix is the same one I'd use when briefing a team member: narrow the search space before they start.

The framework I've used for scoping work breaks into three parts: background, vision, and steps.

Background is context — what has been done before, what the analogues are, what best practice looks like. Vision is the end state: what does finished actually look like, and where does the value lie once you're there? Steps is the path — can you actually map the route from here to there? I used to ask junior team members: between your current state and the vision, what percentage of the steps do you already know how to solve? If they couldn't answer that, the probability of going off-track was high, so we'd start with a proof of concept and prototype rapidly.
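Put into a brief, the three parts might look like this — a hypothetical sketch, with invented task and headings rather than any prescribed format:

```markdown
# Brief: move report exports to background jobs   (hypothetical task)

## Background
- Exports run synchronously today and time out on large reports
- Prior art: the import pipeline already uses a job queue

## Vision
- Done = any export completes without timing out; user gets a download link
- Value: the steady stream of failed-export support tickets stops

## Steps
1. Extract export logic into a job handler        (known)
2. Wire up the queue and a status endpoint        (known)
3. Notify the user on completion                  (unknown — spike first)
```

Marking each step known or unknown is the percentage question made visible: if most steps are unknown, that's the signal to prototype before committing.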

There's a version of this that's more iterative — where the vision is deliberately open and discovery is part of the process. Those projects need a different approach: looser briefs, more frequent course-correction, a willingness to drop a whole direction when it's not working. But for many things — improving an existing app, optimising something already running, building something with a fixed spec — you want predictable outcomes because the vision is already clear, and those only come from a well-defined brief.

The structure Boris lays out — research, plan, execute — maps cleanly onto this. The research phase is about augmenting what the model surfaces, not just accepting it: adding caveats, flagging context it doesn't have, pointing it at things it's missed. The planning phase is where you discuss implementation, push back, and shape it. Only then do you execute. Without that structure, code goes off the rails — it does too much, or heads in the wrong direction entirely. I've lost hours having to restart from zero. It's really no different from giving a team member a brief that's too vague: they go off and build the wrong thing, and by the time you see it, the damage is done.

Documents as a Live Collaboration Interface

The planning document is not just a brief. It's a live collaboration interface.

In past roles I'd scope work in Google Docs or Jupyter notebooks and leave comments directly within the context of what was written — not in a separate email thread, not in Slack, but right there in the place where the idea lived. That specificity matters. It's one of the things that makes documents better than presentations for real collaborative work: the ability to go deep, inline, in the right place, and force a response.

I do the same thing now with AI agents. I get back a Markdown document from a research phase and I annotate it — flagging where I agree, where I don't, what's missing, pointing to links that were missed. During the planning phase I discuss the implementation rather than just accepting the first proposal. That back-and-forth, done properly, is where most of the value lives. It's also where you catch the things that would have cost you hours later. And it gives you a moment to ask: now that I have the research and the plan, is this even still the right direction?
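Concretely, an annotated research doc might look like this — the content is invented, and the `[me]` blockquote convention is just one way to keep annotations inline and distinguishable:

```markdown
## Recommendation: in-process LRU cache

Redis would also work, but adds infrastructure we don't need yet.

> [me] Agreed — we're single-instance for now, keep it in-process.
> [me] Missing: what happens on config reload? Invalidation needs a section.
> [me] You missed the earlier caching discussion — I've linked it below.
```

The point is that the objection sits directly under the claim it objects to, so the next pass of research has to respond to it in place.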

Running in Parallel

The big unlock from all of this is the ability to run things in parallel.

Because I'm planning upfront — and because I've built up a library of well-structured Markdown plans — my mornings now look quite different. I'll sit down with four or five pieces of work I want to move forward: an optimisation on something already running, a bug to fix, a thread from earlier in the week, an exploration project I've been sitting on. Rather than doing them sequentially, I'll spin up several instances of a copilot in parallel. Each one gets a brief: go research this, go plan this piece of work, come back with a Markdown plan. I get notified as each one comes through, I review and annotate, and they go off to execute while I'm already on the next one.
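Stripped of the tooling, the shape of this is a fan-out/review loop. A toy sketch in Python — `run_agent` is a stub standing in for a real copilot session, and none of this is a real agent API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_agent(brief: str) -> str:
    # Stub: a real implementation would drive a copilot session
    # through research and planning, returning a Markdown plan.
    return f"plan for: {brief}"

briefs = [
    "optimise the nightly batch job",
    "fix the login redirect bug",
    "explore that embeddings idea",
]

# Fan out: each brief becomes an independent workstream.
with ThreadPoolExecutor() as pool:
    futures = {pool.submit(run_agent, b): b for b in briefs}
    # Review each plan as it lands, in completion order —
    # not in the order the work was kicked off.
    for future in as_completed(futures):
        plan = future.result()
        print(plan)  # in practice: read, annotate, send back to execute
```

The detail that matters is `as_completed`: you deal with whichever workstream reports back first, which is exactly the review-as-they-arrive rhythm described above.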

It is genuinely similar to how I used to manage project teams — lots of parallel workstreams, each with their own brief, while you move between them providing direction and feedback. Except the feedback loop is much tighter. The cost of spinning up a new workstream is basically zero. And the iteration speed is unlike anything I've worked with before.

You can make real progress on five things in the time it used to take to make a dent in one.