My AI minions leveled up again, and now they’re making better decisions than some people I’ve worked with. One step closer to world domination… but first, too many meetings… and way too many emails.
That joke lands differently when you’ve spent six months watching executives use Claude or ChatGPT like a slightly smarter Google. They type a question. They get an answer. They move on. The tool is doing something, so it must be working. Except the ROI they expected never materializes, and eventually someone in a leadership meeting says “honestly, I’m not sure AI is living up to the hype.”
It’s not the AI. It’s the AI prompting techniques. Specifically, there are three habits separating the people getting 10x output from the ones getting slightly better search results. None of them require a prompt engineering course. They require a five-second mindset shift before you type.
The Gap Is Not the Tool. It’s the Gear.
I run AI through almost every piece of client work I do. Budget analysis, board decks, org structure reviews, security assessments, vendor evaluations. I also watch other operators and executives use it. The pattern is consistent: people who feel like AI is underperforming are treating it like a request-response machine. Ask a question, get an answer, move on.
That’s first gear. The engine is capable of much more. You’re just not telling it what kind of drive this is.
The three habits below are the gear shift. They are not complicated. They do require you to change how you open a session, how you frame a request, and how you sequence your work. Once you build them in, you will stop cleaning up AI messes and start directing AI output.
Habit One: Plan First, Execute Second
The single most expensive mistake I see people make with AI is asking it to just do the thing. “Write me the performance review process.” “Restructure this spreadsheet.” “Draft the talking points for the board.”
The AI complies. It produces something. And then you spend thirty minutes rewriting it, redirecting it, or explaining why that approach doesn’t work for your situation.
Here is the fix: before you let it touch anything, ask for the plan.
“Before you do anything, analyze the situation and give me a plan. List what you’ll change, in what order, and what could go wrong.”
Then you read the plan. You catch the wrong assumptions before they compound. Then you say “execute.”
For a finance team running a cash flow model, this is critical. You do not want AI to start recalculating cells and restructuring formulas before you’ve confirmed it understands which accounts roll up to which categories, what the timing assumptions are, and what the board is actually trying to see. Ask for the plan. Confirm the logic. Then let it build.
For a CPO redesigning a performance review process, the same principle holds. “Give me a plan for how you’d approach this before you write anything.” You’ll catch immediately if the AI is assuming a 360-degree framework when your culture runs on manager-only reviews. You catch it in thirty seconds instead of after you’ve already shared a draft with your HRBP.
This habit alone cuts the rework cycle by more than half. The model is not slower for doing it. Your session is faster overall because you stopped handing a scalpel to someone who hadn’t seen the patient yet.
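If your team drives these sessions through an API or a saved snippet rather than typing fresh each time, the plan-first habit can be baked into a reusable wrapper. This is a minimal sketch; the function name and exact wording are my own illustration, not a fixed formula:

```python
def plan_first(task: str) -> str:
    """Wrap a task so the model proposes a plan before doing anything."""
    return (
        "Before you do anything, analyze the situation and give me a plan. "
        "List what you'll change, in what order, and what could go wrong. "
        "Do not execute until I reply 'execute'.\n\n"
        f"Task: {task}"
    )
```

Paste the result in as your opening message, read the plan it produces, then reply “execute” once the logic checks out.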
Habit Two: Label the Risk Before You Start
This one is almost embarrassingly simple, and it changes AI behavior in ways most people don’t realize are available to them.
Tag your requests explicitly. Two flavors:
“This is a LOW risk change.”
“This is a HIGH risk situation.”
That’s it. Those words shift how the AI responds. On LOW risk, it moves efficiently. On HIGH risk, it asks more clarifying questions, offers more caveats, surfaces more edge cases, and is more conservative about what it recommends.
It was capable of that calibration all along. You just weren’t activating it. The AI defaults to something in the middle, where it tries to be helpful without knowing whether helpful means thorough or fast. You’re the one who knows the stakes. Tell it.
A CEO walking into an acquisition evaluation should open with “This is a high-risk strategic decision. We are evaluating whether to acquire a company in a space we don’t fully understand yet. I need you to push back, surface risks I’m not thinking about, and assume I’ll see this differently in six months than I do today.” That framing produces a completely different analysis than “help me evaluate this acquisition target.”
A finance analyst reconciling a formatting discrepancy in a low-stakes internal report can say “This is a low-risk formatting cleanup. Move fast, don’t over-explain.” And the AI does exactly that.
The risk label is not magic. It’s communication. It tells the AI what mode to operate in, and the AI uses that information. Most people leave that signal on the table because they assume the AI will figure out the stakes from context. It won’t. Not reliably. You have to say it.
Habit Three: Give the AI the Story, Not Just the Task
Every time you open a new session, you’re talking to someone with no memory of yesterday. No context about your company, your constraints, your political landmines, or the six months of decisions that led to this moment.
Most people respond to that by just… typing the task. “Draft talking points for the board presentation on our Q2 results.”
The AI will do something. It will not do something informed.
The habit is to open with narrative context. Two to four sentences about the situation, history, and what actually matters here. Not a full brief. A few orienting facts.
“We’ve been rebuilding our services margin for three quarters. The board is nervous. Our Q2 numbers show progress but the story is fragile. I need talking points that acknowledge the challenge without triggering a vote of no confidence.”
That’s under forty words. The output shifts dramatically. Now the AI is making judgment calls based on institutional knowledge instead of just pattern-matching on “Q2 board presentation.”
For a CPO handling a retention crisis: “We lost four senior engineers in ninety days. Two went to a competitor, two just burned out. Leadership doesn’t fully believe it’s a culture problem yet. I need to build a retention strategy that addresses the root cause without making the CEO defensive.”
The AI now knows the political constraint. It knows what outcome you’re actually trying to produce. That’s not something it could infer from the task alone.
For a finance leader doing budget forecasting: “We’re a nonprofit with one major government contract that renews every eighteen months. Our board gets anxious when we talk about deficit spending even when cash flow is fine. I need a forecast model that tells the truth and doesn’t panic anyone.”
Same spreadsheet task. Completely different output once the AI understands the audience and the stakes.
Context is the cheapest of these AI prompting techniques. It costs you thirty seconds and it returns clarity you’d otherwise spend an hour chasing.
The Habit Stack in Practice
These three AI prompting techniques work independently. They work better together.
Open with the story. Label the risk. Ask for the plan. Review it. Execute.
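The full stack can live in one session-opener template so nobody on your team has to remember the sequence. A minimal sketch, with names and phrasing of my own invention rather than anything prescribed:

```python
def briefing(story: str, risk: str, task: str) -> str:
    """Assemble a session opener: story first, then the risk label,
    then the task with a plan-first instruction."""
    level = risk.upper()
    if level not in ("LOW", "HIGH"):
        raise ValueError("risk must be 'low' or 'high'")
    return (
        f"Context: {story}\n\n"
        f"This is a {level} risk situation.\n\n"
        f"Task: {task}\n\n"
        "Before you do anything, give me a plan: what you'll change, "
        "in what order, and what could go wrong. Wait for 'execute'."
    )
```

Drop the result in as your first message, review the plan that comes back, then say “execute.”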
That sequence is how I run client work, how I’ve built AdaptoMeetings (going into production soon!) around meeting-to-execution pipelines, and how I get AI output I can actually hand to a client without a full rewrite. Not because I found a magic prompt. Because I stopped treating AI like a search bar and started treating it like a capable colleague who needs a briefing before they start.
The race car was always there. You were just never leaving first gear.
If you want to build these habits into your team’s AI workflow, start with one session this week. Pick one task you do regularly, apply all three habits, and compare the output to what you’ve been getting. The difference will be obvious on the first try.