My AI minions just got another chance to prove themselves, and once again the variable that mattered most wasn’t their capability. It was mine. One step closer to world domination… but first, someone needs to define what world domination actually looks like as an output.
The single habit that has done more to improve how I use AI tools than any prompt trick or model upgrade is this: plan first, execute second. That sounds obvious. It isn’t, and it especially isn’t if you’re the kind of person who gets things done by jumping in. The plan-first AI workflow runs directly against how action-oriented people want to work. We want momentum. We want to type the thing and get the thing back. That instinct will get you something back. It rarely gets you what you actually needed.
Why Planners Get More Out of AI Than Doers
Large language models are extraordinarily good at executing well-defined tasks and surprisingly bad at figuring out what you wanted when your prompt was vague. This isn’t a flaw you can patch with a better model. It’s the nature of the technology. If what you asked was fuzzy, what you get back will be fuzzy shaped like confidence. It will look authoritative. It will be formatted correctly. It will be wrong in ways that take time to untangle.
The reason this catches people is that AI output looks finished. A bad Word document looks like a draft. A bad AI response looks like a polished answer. That polish costs you time because you have to read carefully enough to identify the gap between what you got and what you needed, and then figure out why the gap exists, and then try again.
Planning eliminates most of that cycle before it starts.
The AdaptoMeetings Lesson I Had to Learn the Hard Way
I learned this building AdaptoMeetings. AdaptoMeetings is my pipeline for turning Plaud meeting transcripts into structured notes, action items, and populated Airtable records. It sounds like a clear problem with a clear solution. It wasn’t, at first, because I kept treating it like one task.
I was feeding Plaud transcripts to Claude with prompts like “give me useful meeting notes.” The output was technically correct and completely useless. It summarized things I didn’t need summarized. It skipped the parts I actually needed. It formatted decisions as context and context as decisions. Nothing was wrong with the model. Everything was wrong with the ask.
When I sat down and wrote out explicitly what good output looked like, including the specific fields I needed populated, the format for action items, what counted as a decision versus a discussion point, and how to handle agenda items that went unresolved, the outputs changed immediately. Not because Claude got smarter overnight. Because I gave it a plan.
That’s the pattern. The model will execute what you describe. If you describe it well, you get something useful. If you describe it vaguely, you get a confident approximation of what you might have meant.
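To make the idea concrete, here is a minimal sketch of what "write the spec first" can look like in code. The field names and rules are illustrative, not the actual AdaptoMeetings spec: the point is that the output contract lives in one named place and gets rendered into a short execution prompt, so the model executes a plan instead of guessing at one.

```python
# Illustrative output spec: each section name maps to the rule that
# defines what belongs in it. These fields are hypothetical examples,
# not the real AdaptoMeetings schema.
MEETING_NOTES_SPEC = {
    "decisions": (
        "Choices the group committed to. A topic that was only "
        "discussed is NOT a decision."
    ),
    "action_items": (
        "One per line as 'owner - task - due date', with "
        "'none stated' when no date was given."
    ),
    "unresolved": (
        "Agenda items raised but not closed, so they can roll "
        "forward to the next meeting."
    ),
}

def build_execution_prompt(transcript: str) -> str:
    """Render the spec into a brief, boring execution prompt."""
    sections = "\n".join(
        f"- {name}: {rule}" for name, rule in MEETING_NOTES_SPEC.items()
    )
    return (
        "Convert the meeting transcript below into notes with exactly "
        "these sections, following each rule:\n"
        f"{sections}\n\n"
        f"Transcript:\n{transcript}"
    )
```

Changing what "good output" means is then an edit to the spec, not a rewrite of the prompt, which is exactly the separation the planning phase is buying you.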
The Two-Phase Workflow
Phase One: Planning
Before any complex AI task, I have a planning conversation. It’s not long. It doesn’t have to be formal. I’m asking myself four questions:
What does a good output look like, specifically? Not “a summary” but “a three-paragraph summary written for a non-technical audience that leads with business impact and closes with the remediation timeline.” The more concrete the success criteria, the less guesswork the model is doing.
What constraints does the model need to know? Audience, format, tone, length, what to include, what to leave out. These are not optional. If you don’t specify them, the model picks defaults, and its defaults may not match your context.
What failure modes am I most worried about? For a client communication, I’m worried about tone that reads as blaming the user. For a technical document, I’m worried about oversimplification. Naming the failure mode lets me add a guardrail in the execution prompt.
Is this one task or three tasks I’m pretending is one? That last question catches the most problems. “Analyze this vendor contract and tell me what to do” is three tasks: extract the key terms, compare them against our standards, and recommend a negotiating position. Running them as one produces mush. Breaking them into three sequential prompts produces something you can actually work with.
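The vendor-contract decomposition can be sketched as three sequential calls, each consuming the previous output. The `ask` function below is a placeholder for whatever model call you actually use (Claude, Copilot, an API client), and the prompt wording is illustrative; only the structure, extract, then compare, then recommend, is the point.

```python
def ask(prompt: str) -> str:
    """Placeholder for a real model call; echoes for demonstration."""
    return f"[model response to: {prompt[:40]}...]"

def analyze_contract(contract_text: str, standards: str) -> str:
    # Task 1: extraction only -- no judgment yet.
    terms = ask(
        "Extract the key terms (pricing, term length, liability caps, "
        f"termination clauses) from this contract:\n{contract_text}"
    )
    # Task 2: comparison against a known baseline, using task 1's output.
    gaps = ask(
        f"Compare these extracted terms:\n{terms}\n"
        f"against these standards:\n{standards}\n"
        "List every deviation."
    )
    # Task 3: recommendation, grounded in the comparison.
    return ask(
        f"Given these deviations:\n{gaps}\n"
        "Recommend a negotiating position, most important point first."
    )
```

Run as one mega-prompt, those three steps blur into each other. Run in sequence, each step has a single job and a concrete input, and you can inspect the intermediate outputs when the final answer looks off.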
Phase Two: Execution
Once I’ve done the planning work, the execution prompt practically writes itself. It’s brief, specific, and almost boring to write. That’s how it should feel. Boring execution prompts produce excellent outputs. If you’re writing a long, complicated execution prompt full of hedges and clarifications, that’s usually a sign the planning phase wasn’t finished.
The execution phase is also where AI shines. Once the task is well-defined, the model’s speed and consistency are genuine advantages. You’re not fighting the technology anymore. You’re directing it.
The ADHD Dimension
If you have ADHD, plan-first is a wall. Full stop.
The ADHD brain runs on urgency-based motivation. Planning doesn’t feel urgent. It doesn’t have a visible endpoint. It doesn’t produce the dopamine hit that comes from executing a task and watching something appear on the screen. Sitting down to “plan” before doing the actual work feels like being asked to warm up before a workout you’re already not sure you’ll finish.
Two things help me.
First: reframe planning as its own execution task. I don’t tell myself I’m going to “plan the blog post.” I tell myself I’m going to write the spec document. That has an end. It produces an artifact. When it’s done, it’s done, and I can move to execution. The framing matters because the ADHD brain needs to know what done looks like. “Planning” doesn’t have a done. “Write the spec doc” does.
Second: time-box it. Fifteen minutes. I set a timer. My job for those fifteen minutes is to answer the four planning questions and write them down somewhere. Not perfectly, not completely. Enough to give the execution prompt a foundation. One step closer to world domination… but first, the planning doc needs a second draft because I forgot to define what “done” looks like. Again.
What Happens When You Skip It
I watched an IT director use Microsoft 365 Copilot to draft a board update about a phishing incident. He typed “write a board update about the phishing attack” and sent the output with light edits. It was technically accurate. It was organizationally catastrophic.
The draft led with attack details instead of the response. It implied negligence by describing what wasn’t in place without context about why those controls were deprioritized. It omitted the remediation timeline entirely, which meant the board had no information about what was happening next. It read like a report from inside the IT team rather than a communication designed for executive governance.
A ten-minute planning conversation would have caught all of it. What is this board’s primary concern right now? What do they need to know versus what do they need to not know yet? What questions will they ask, and does this draft answer them? What tone communicates control without minimizing the severity? None of those questions are technically difficult. They’re just questions you have to actually ask.
The model can’t ask them for you. It doesn’t know your board. It doesn’t know what happened in the last board meeting or what the chair is already nervous about. You do. The planning phase is where you transfer that context into the prompt so the model can use it.
Building the Habit Without Burning Out
Start with one category of work where bad AI outputs have cost you the most time. One category. Not everything at once.
For me it was long-form content. I was writing blog posts with four or five revision passes because the first draft never quite landed the angle I had in my head. Once I started writing a brief before every post, including the argument I was making, the specific audience I was writing for, the examples I wanted to use, and the outcome I wanted the reader to walk away with, iteration dropped to one or two passes. The posts got better. The process got faster. Both things at once.
The right place to start is wherever the gap between what you’re getting from AI and what you actually need is largest. That gap is almost always a planning gap, not a model gap.
The Gap That’s Going to Widen
The models are improving. That’s true and it’s going to keep being true. Prompting techniques are getting shared, documented, and built into better tools. The floor is rising for everyone.
But the gap between people who plan and people who don’t is going to widen, not close, as the models get more capable. More capable models can execute more complex tasks. Complex tasks have more ways to go wrong without a clear plan. The upside of a well-planned prompt against a strong model is significant. The downside of a vague prompt against that same model is a more elaborate version of the wrong thing.
If you’re not getting the outputs you want from your AI tools right now, don’t upgrade your model first. Don’t learn a new prompting framework. Don’t add another tool to the stack. Write better plans. That’s where the leverage is.
If you want a place to start, grab any AI task you did in the last week that produced something you had to significantly revise. Write out the four questions from Phase One as if you were about to redo that task. See how different the execution prompt looks. That’s the habit, right there. Run it once and you’ll understand why it works.