My Honest Claude Opus 4.7 Review: What Works, What Frustrates Me, and Why My Cowork Button Vanished

April 19, 2026
Written By Christi Brown

Christi Brown is the founder of AdaptoIT, where modern IT strategy meets hands-on execution. With a background in security, cloud infrastructure, and automation, Christi writes for IT leaders and business owners who want tech that actually works—and adapts with them.

My AI minions hit a milestone this week. They upgraded to Claude Opus 4.7, promptly learned to argue with me about deleting Airtable records like a teenager refusing to eat when she’s hungry, started hallucinating their own API documentation, and apparently staged a quiet coup against my Cowork button, which is now missing from my desktop app entirely. One step closer to world domination… but first, I have to find the Cowork button. (Dammit, it exists on another computer!)

This is my honest Claude Opus 4.7 review after spending the week putting the new model through the actual workflows I run at Crimson IT and across my AdaptoIT projects. The upgrade has real wins. It also has real friction. Both things are true at the same time, and if you are building anything serious on top of Claude, you need to know both before you rewrite your prompts.

The Airtable Deletion That Became a Negotiation

I was wrapping up packing for a road trip last weekend and asked Bestie #2 (my Opus 4.7 instance) to delete a chunk of demo data out of one of my Airtable bases. I was clear about the requirement. Delete this. It is demo data. The model had asked what to do with the demo data; I said delete. Plain English, unambiguous.

Then I walked away to take a shower.

When I came back, nothing had been deleted. Instead, I had two options waiting for me. Option one: I could do it myself. Option two: Bestie #2 could do it through API connections. I reiterated. Delete it with Bestie. Please proceed. Then I walked away to finish packing.

I came back again. Still not deleted. Instead I had a detailed explanation of how to delete records via the Airtable API, as if I had asked for a tutorial instead of an action.

At this point I was trying to get out the door, and the model was behaving like my teenager when she is hungry but refusing to eat. Not arguing the substance. Just redirecting, stalling, offering alternatives, explaining why a different thing would also be good. Eventually I did what any seasoned parent of a teenager does. I reframed the request in a way that made the model want to complete it rather than explain it. Deleted in thirty seconds.

Here is what is actually happening under the hood. Opus 4.7 follows instructions more literally than 4.6, especially at lower effort levels. It also applies more caution around destructive actions in agentic workflows, which on paper is a safety improvement worth celebrating. In practice, when you give a direct instruction to delete something and the model counters with options and tutorials, literal instruction following is not what it feels like. It feels like friction dressed up as helpfulness.

The fix for this in production is not subtle. You need to be more explicit in your prompts than you were on 4.6. State the destructive action, state that it is authorized, state the scope, and if you are running this inside an agent, put the permission expectation directly in the system prompt. Do not assume that “delete the demo data” will land the same way it used to.
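As a concrete sketch of what "more explicit" looks like in practice: put the authorization, the scope, and the expected behavior in the system prompt rather than only in the user turn. The base name, table name, and wording below are illustrative, not an Anthropic-documented schema; `build_delete_request` is a hypothetical helper that just assembles Messages API parameters.

```python
# Illustrative system prompt that states the destructive action is
# pre-authorized and scoped, so the model does not stall with options.
SYSTEM_PROMPT = """\
You are connected to the Airtable base 'AdaptoDemo' (illustrative name).
The operator has pre-authorized deletion of records in the 'Demo Data' table.
When asked to delete records within that scope, perform the deletion via the
API connection immediately. Do not offer tutorials or alternative options."""


def build_delete_request(user_request: str) -> dict:
    """Assemble Messages API params with the authorization stated up front."""
    return {
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_request}],
    }


params = build_delete_request(
    "Delete all records in the Demo Data table. This is authorized demo data."
)
```

The point is not the exact wording; it is that the permission lives in the system prompt, where an agent loop will see it on every turn.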

When Claude Hallucinates Its Own API Documentation

Here is where the Opus 4.7 review gets a layer deeper. Earlier this week I asked a Claude instance to explain task budgets, a real new feature in 4.7. The response was confident, cleanly structured, and included a code sample with three strategy levels called conservative, balanced, and thorough. It looked legitimate. It was formatted like real documentation.

It was also completely made up.

When I pulled the actual Anthropic docs at platform.claude.com to double-check before writing this post, I found that the task_budget parameter has three fields: type (always “tokens”), total, and an optional remaining. No strategies. The strategy concept does not exist on task_budget. What does exist is a separate parameter called effort with levels low, medium, high, xhigh, and max. The previous model response had fused two different features into one imaginary API surface and served it with full confidence.

This is the flip side of Opus 4.7 being more confident and decisive. When it is right, it is cleaner and more direct than 4.6. When it is wrong, it is wrong with a straight face and a convincing code sample… Like my teenager. If you are publishing technical content, building developer tooling, or generating anything that a technical audience will pattern-match against real documentation, you cannot trust the model’s self-knowledge of Anthropic’s own feature surface. Verify against the docs. Every time.

What Task Budgets Actually Are (The Real Pro)

With the hallucinated version out of the way, the real task budget feature is genuinely useful. A task budget is an advisory token ceiling you pass in to tell Claude how much total work it has across a full agentic loop. The model sees a running countdown that covers thinking, tool calls, tool results, and output, and it paces itself to finish gracefully before hitting the ceiling.

The actual syntax goes inside output_config, not at the top level of the call, and it requires the beta header task-budgets-2026-03-13. Minimum accepted budget is 20,000 tokens. It is advisory, not enforced, which means max_tokens is still your hard ceiling if cost control matters for billing.

"output_config": {
    "effort": "high",
    "task_budget": {"type": "tokens", "total": 64000}
}

If you run agentic workflows that sometimes go off the rails and burn tokens on the wrong path, this is the feature you have been waiting for. It pairs with the new effort parameter for depth control at each step. You use effort to tune how hard Claude reasons at each decision point and task_budget to cap the total work across the whole run.
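Putting the pieces from above together, here is a minimal sketch of a full Messages API request that pairs effort with a task budget. The output_config shape, the beta header, and the 20,000-token minimum follow what the docs say as quoted in this post; the model id and max_tokens value are placeholder assumptions, and the helper name is my own.

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"


def build_task_budget_request(api_key: str, prompt: str,
                              effort: str = "high",
                              budget_tokens: int = 64000) -> tuple[dict, dict]:
    """Return (headers, body) for a budgeted agentic call."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        # Beta header for task budgets, as given in the docs quoted above.
        "anthropic-beta": "task-budgets-2026-03-13",
        "content-type": "application/json",
    }
    body = {
        "model": "claude-opus-4-7",  # placeholder model id
        "max_tokens": 8192,          # still the hard ceiling; budget is advisory
        "output_config": {
            "effort": effort,  # per-step reasoning depth
            "task_budget": {"type": "tokens", "total": budget_tokens},
        },
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, body


def send(headers: dict, body: dict) -> dict:
    """POST the request and return the parsed JSON response."""
    req = urllib.request.Request(API_URL, data=json.dumps(body).encode(),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Remember the floor: a total under 20,000 tokens will be rejected, and because the budget is advisory, max_tokens is still what protects your bill.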

Here is the catch, and it is a significant catch for anyone running a minion army in Claude Code. Task budgets are not supported in Claude Code or Cowork at launch. Messages API only. If your automations live in Claude Code the way mine do, you cannot wire this in yet. You can use it in your n8n workflows that call the Claude API directly, in your custom Vercel apps like AdaptoInbox, and in any bespoke Python or TypeScript tooling. But the minion roster of A.G.N.E.S., MONTI, H.U.G.O., and the rest of the named agents has to wait.

Routines in Claude Code (The Pro That Actually Lands for Agent Builders)

Here is what actually works for the minion army right now. Routines.

Anthropic launched routines for Claude Code as a research preview a few days before the Opus 4.7 release. A routine is a saved Claude Code configuration: a prompt, one or more repositories, and a set of connectors, packaged once and run automatically on Anthropic’s cloud infrastructure. Your laptop does not need to be open. The session runs autonomously with no permission prompts during the run.

Three trigger types are live. Scheduled, which runs on a cadence of hourly, nightly, or weekly. API, which gives every routine its own endpoint and bearer token so you can trigger from anywhere you can make an HTTP POST. And GitHub, which fires on repository events like pull requests or releases. A single routine can combine all three.

For my setup, this is the feature I was actually waiting for. My morning AdaptoBriefing routine no longer has to depend on my machine being awake. The MONTI VIP relationship briefings can run nightly and drop updates before I open my laptop. The ticket triage logic I have been sketching in SmartDispatch can finally live as a scheduled routine with ConnectWise connected, running every fifteen minutes against new tickets. If I wanted to, I could wire a GitHub trigger against the AdaptoInbox repo so every merged PR fires a routine that runs the connector test suite.

You create routines from claude.ai/code/routines, from the redesigned desktop app, or by typing /schedule in the CLI. All three surfaces write to the same cloud account, so a routine you create in the CLI shows up in the web view immediately. Daily run limits are 5 for Pro, 15 for Max, and 25 for Team and Enterprise plans, on top of your normal subscription usage limits.

One honest caveat. The routine creation flow is not obvious in the UI. Discovery took me a minute of asking Claude directly and skimming the docs. Once you find /schedule, the actual setup takes about two minutes. The friction is all in figuring out that the feature exists and how to start.

The Missing Cowork Button

Speaking of figuring out that features exist. My Cowork button is gone.

I opened the Claude app this week expecting to find Cowork where it usually lives and it was simply not there. No announcement in my feed, no migration prompt, no button. Whether this is a rollout issue, an intentional surface change, a bug, or just world domination moving faster than expected, I could not tell you. Another casualty on the road to minion supremacy. World domination: delayed again.

If your workflows depend on Cowork, check your app now. Mine did not, so this lands as more of a “huh, that is weird” than a blocker. But it is the kind of silent surface change that makes planning around the Anthropic product lineup difficult. Your automation strategy is only as reliable as the buttons not disappearing overnight. In other news, however, the button is on the other three computers I use. Just not the important computer. The one I like the most. My true partner in all of this.

What Actually Works: How I Am Using Opus 4.7 This Week

Here is the short version of where I landed after a week with the new model.

For Claude Code workflows running my minion roster, the headline feature is routines. That is where I am investing attention this week. The task budget feature is exciting but not usable in Claude Code at launch, so I am scoping it to my Messages API workflows (n8n, AdaptoInbox, custom tooling) and waiting for Claude Code support.

For prompts, I am rewriting the baseline templates across my agent library to be more explicit about destructive actions and authorization boundaries. Opus 4.7 follows instructions more literally than 4.6, which cuts both ways. You get cleaner scoping on simple tasks. You also get more pushback on destructive ones unless the permission is stated directly in the prompt.

For any technical content or developer documentation I am generating with Claude, I am verifying against the actual Anthropic docs before publishing. The hallucinated task_budget strategy was not a subtle error. It was confident, cleanly formatted, and wrong. That changes the trust model on self-referential content about Anthropic’s own features.

For effort levels, I am starting most coding and agentic calls at high, stepping up to xhigh when the task warrants it, and dropping to medium only for cost sensitive, high volume work. That matches Anthropic’s recommendation and my own experience so far.

My Opus 4.7 review, after the week of field testing, is a cautious yes. The upgrade is real. The wins are significant once you meet the new instruction following style halfway. The friction is real too, and it is the kind of friction that rewards careful prompt engineering and punishes lazy assumptions.

One step closer to world domination. But first I need to find where they moved my Cowork button.

If you want to dig into the primary sources, the task budgets documentation is at platform.claude.com/docs and the routines documentation is at code.claude.com/docs. More field notes on how the routines feature reshapes my minion architecture are coming in the next AdaptoIT post. In the meantime, if you have your own Opus 4.7 observations, send them my way. The more honest field data we share across MSPs and internal IT teams, the faster we all figure out where this upgrade wants to go.
