No serious developer still expects AI to magically do their work for them. We've settled into a more pragmatic, albeit still slightly uncomfortable, consensus: AI makes a great intern, not a replacement for a senior developer. And yet, if that is true, the corollary is also true: If AI is the intern, that makes you the manager.
Unfortunately, most developers aren't great managers.
We see this every day in how developers interact with tools like GitHub Copilot, Cursor, or ChatGPT. We toss around vague, half-baked instructions like "make the button blue" or "fix the database connection" and then act shocked when the AI hallucinates a library that hasn't existed since 2019 or refactors a critical authentication flow into an open security vulnerability. We blame the model. We say it's not good enough yet.
But the problem usually isn't the model's intelligence. The problem is our lack of clarity. To get value out of these tools, we don't need better prompt engineering tricks. We need better specs. We need to treat AI interaction less like a magic spell and more like a formal delegation process.
We need to be better managers, in other words.
The missing skill: Specification
Google engineering manager Addy Osmani recently published a masterclass on this exact topic, titled simply "How to write a good spec for AI agents." It is one of the most practical blueprints I've seen for doing the job of AI manager well, and it's a great extension of some core principles I laid out recently.
Osmani isn't trying to sell you on the sci-fi future of autonomous coding. He's trying to keep your agent from wandering, forgetting, or drowning in context. His core point is simple but profound: Throwing a massive, monolithic spec at an agent often fails because context windows and the model's attention budget get in the way.
The answer is what he calls "good specs." These are written to be useful to the agent, durable across sessions, and structured so the model can track what matters most.
That is the missing skill in most "AI will 10x developers" discourse. The leverage doesn't come from the model. The leverage comes from the human who can translate intent into constraints and then translate output into working software. Generative AI raises the premium on being a senior engineer. It doesn't lower it.
From prompts to product management
If you've ever mentored a junior developer, you already know how this works. You don't simply say "Build authentication." You lay out the specifics: "Use OAuth, support Google and GitHub, keep session state server-side, don't touch payments, write integration tests, and document the endpoints." You provide examples. You call out landmines. You insist on a small pull request so you can check their work.
Osmani is translating that same management discipline into an agent workflow. He suggests starting with a high-level vision, letting the model expand it into a fuller spec, and then editing that spec until it becomes the shared source of truth.
This "spec-first" approach is quickly becoming mainstream, moving from blog posts into tools. GitHub's AI team has been advocating spec-driven development and released Spec Kit to gate agent work behind a spec, a plan, and tasks. JetBrains makes the same argument, suggesting that you need review checkpoints before the agent starts making code changes.
Even Thoughtworks' Birgitta Böckeler has weighed in, asking an uncomfortable question that many teams are quietly dodging. She notes that spec-driven demos tend to assume the developer will do a great deal of requirements analysis work, even when the problem is unclear or large enough that product and stakeholder processes typically dominate.
Translation: If your organization already struggles to communicate requirements to humans, agents will not save you. They will amplify the confusion, just at a higher token cost.
A spec template that actually works
A good AI spec isn't a request for comments (RFC). It's a tool that makes drift expensive and correctness cheap. Osmani's suggestion is to start with a concise product brief, let the agent draft a more detailed spec, and then correct it into a living reference you can reuse across sessions. That's great, but the real value stems from the specific components you include. Based on Osmani's work and my own observations of successful teams, a useful AI spec needs a few non-negotiable elements.
First, you need goals and non-goals. It isn't enough to write a paragraph about the goal. You must list what's explicitly out of scope. Non-goals prevent unintended rewrites and "helpful" scope creep where the AI decides to refactor your entire CSS framework while fixing a typo.
Second, you need context the model won't infer. This includes architecture constraints, domain rules, security requirements, and integration points. If it matters to the business logic, you have to say it. The AI cannot guess your compliance boundaries.
Third, and perhaps most importantly, you need boundaries. You need explicit "don't touch" lists. These are the guardrails that keep the intern from deleting the production database config, committing secrets, or modifying legacy vendor directories that hold the system together.
Finally, you need acceptance criteria. What does "done" mean? This should be expressed in checks: tests, invariants, and a few edge cases that tend to get missed. If you're thinking this sounds like good engineering (or even good management), you're right. It is. We're rediscovering the discipline we had been letting slide, dressed up in new tools.
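To make the last point concrete, here is a minimal sketch of acceptance criteria expressed as executable checks rather than prose. Everything in it is a hypothetical illustration (the function `clamp_session_ttl`, the `MAX_TTL` invariant, and the edge cases are not from Osmani's article); the point is that "done" becomes something the agent, or you, can run.

```python
# Hypothetical acceptance criteria for a "session timeout" line item in a
# spec. The names (clamp_session_ttl, MAX_TTL) are illustrative only.

MAX_TTL = 3600  # invariant from the spec: no session may outlive one hour


def clamp_session_ttl(requested_seconds: int) -> int:
    """Return a session TTL that always respects the MAX_TTL invariant."""
    if requested_seconds <= 0:  # edge case: nonsensical input falls back
        return MAX_TTL
    return min(requested_seconds, MAX_TTL)


# "Done" means all of these pass, not "the code looks plausible."
assert clamp_session_ttl(600) == 600          # normal request honored
assert clamp_session_ttl(86_400) == MAX_TTL   # invariant enforced
assert clamp_session_ttl(0) == MAX_TTL        # commonly missed edge case
```

A spec that ships with checks like these gives the agent an unambiguous finish line, and gives you something cheaper to review than a wall of generated code.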
Context is a product, not a prompt
One reason developers get frustrated with agents is that we treat prompting like a one-shot activity, and it's not. It's closer to setting up a work environment. Osmani points out that large prompts often fail not only because of raw context limits but because models perform worse when you pile on too many instructions at once. Anthropic describes this same discipline as "context engineering." You have to structure background, instructions, constraints, tools, and required output so the model can reliably track what matters most.
This shifts the developer's job description to something like "context architect." A developer's value isn't in knowing the syntax for a particular API call (the AI knows that better than we do), but rather in knowing which API call is relevant to the business problem and ensuring the AI knows it, too.
It's worth noting that Ethan Mollick's post "On-boarding your AI intern" puts this in plain language. He says you have to learn where the intern is useful, where it's annoying, and where you shouldn't delegate because the error rate is too costly. That's a fancy way of saying you need judgment. Which is another way of saying you need expertise.
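As a rough illustration of that structuring discipline, the sketch below assembles a prompt from labeled sections instead of one undifferentiated wall of text. The section names and example content are assumptions for illustration, not Anthropic's or Osmani's format.

```python
# A minimal sketch of "context engineering": build the prompt from
# explicit, ordered sections so constraints don't get buried mid-text.
# Section names and example content are invented for illustration.

def build_context(background: str, constraints: list[str],
                  task: str, output_format: str) -> str:
    """Assemble a structured prompt from labeled sections."""
    sections = [
        ("BACKGROUND", background),
        ("CONSTRAINTS", "\n".join(f"- {c}" for c in constraints)),
        ("TASK", task),
        ("REQUIRED OUTPUT", output_format),
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)


prompt = build_context(
    background="Billing service. Python 3.12, Postgres 15.",
    constraints=[
        "Do not modify anything under vendor/",
        "All new endpoints need integration tests",
    ],
    task="Add a refund endpoint to the billing API.",
    output_format="A unified diff plus a one-paragraph summary.",
)
print(prompt)
```

The payoff of treating context as a product is reuse: the same background and constraint sections persist across sessions, and only the task changes.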
The code ownership trap
There's a danger here, of course. If we offload the implementation to the AI and only focus on the spec, we risk losing touch with the reality of the software. Charity Majors, CTO of Honeycomb, has been sounding the alarm on this specific risk. She distinguishes between "code authorship" and "code ownership." AI makes authorship cheap, near zero. But ownership (the ability to debug, maintain, and understand that code in production) is becoming expensive.
Majors argues that when you overly rely on AI tools, when you supervise rather than do, your own expertise decays rather quickly. This creates a paradox for the "developer as manager" model. To write a good spec, as Osmani advises, you need deep technical understanding. If you spend all your time writing specs and letting the AI write the code, you might slowly lose that deep technical understanding. The solution is likely a hybrid approach.
Developer Sankalp Shubham calls this "driving in lower gears." Shubham uses the analogy of a manual transmission car. For simple, boilerplate tasks, you can shift into a high gear and let the AI drive fast (high automation, low control). But for complex, novel problems, you need to downshift. You might write the pseudocode yourself. You might write the rough algorithm by hand and ask the AI only to write the test cases.
You remain the driver. The AI is the engine, not the chauffeur.
The future is spec-driven
The irony in all this is that many developers chose their career specifically to avoid becoming managers. They like code because it's deterministic. Computers do what they're told (mostly). Humans (and by extension, interns) are messy, ambiguous, and require guidance.
Now, developers' primary tool has become messy and ambiguous.
To succeed in this new environment, developers need to develop soft skills that are actually quite hard. You must learn how to articulate a vision clearly. You must learn how to break complex problems into isolated, modular tasks that an AI can handle without losing context. The developers who thrive in this era won't necessarily be the ones who can type the fastest or memorize the most standard libraries. They will be the ones who can translate business requirements into technical constraints so clearly that even a stochastic parrot cannot mess it up.
