Rules-based automation hits a ceiling. AI can push past it – but only if your team trusts it enough to use it. Here’s the realistic path from “AI suggests” to “AI acts.”
We’ve been trying to automate ticket dispatch for years. Every MSP tool promises intelligent routing, smart assignment, AI-powered triage. Most of it is marketing over substance. We tried the off-the-shelf solutions and they never quite fit how we actually work.
So we built our own. Rules that match skills to ticket types, load balancing across technicians, escalation paths for SLA breaches. It finally works. It’s predictable. And now that we have something that fits, we can see exactly where it hits a wall – the moment a situation requires judgment instead of matching.
A ticket comes in: “Print jobs disappearing.” The rules see “print” and route it to the tech with printer skills. But someone reading the description would catch that this is a print server issue, not a desktop printer jam. The client called twice this week about related problems. The tech who handled yesterday’s ticket already has context that would cut resolution time in half.
Rules don’t know any of that. They match patterns. They don’t understand context.
This is where AI agents enter the conversation – not as chatbots answering questions, but as systems that can observe, reason, and act. The promise is obvious. The implementation is where things get complicated.
The Trust Gap Nobody Talks About
Here’s the honest version: my team isn’t ready for autonomous AI dispatch. Neither is yours, probably.
It’s not a technology problem. The AI capability exists. You can absolutely build a system that reads ticket descriptions, analyzes historical resolution patterns, considers technician workload and client history, and makes assignment decisions. The models are good enough. The APIs are there.
The problem is organizational. Dispatchers have years of institutional knowledge. They know that Sarah handles this client’s network issues because she built the original design. They know that Mike is technically skilled but struggles with difficult personalities, so certain clients get routed elsewhere. They know that Friday afternoon tickets from a specific contact are usually user error, not emergencies.
That knowledge isn’t in any system. It lives in people’s heads. And when you tell those people “AI is going to do your job now,” they don’t hear efficiency. They hear replacement. They disengage. The implementation fails not because the technology didn’t work, but because the humans stopped cooperating with it.
I’ve watched this happen. I’m not going to pretend my shop is different.
The Realistic Path: Supervised Learning Through Daily Use
The gap between “AI can do this” and “my team will let AI do this” is where most implementations stall. Here’s the bridge: supervised learning in production.
Instead of flipping a switch from human dispatch to AI dispatch, you build a feedback loop:
AI suggests – The system analyzes the ticket, considers all the factors a human dispatcher would (and some they wouldn’t have time to check), and recommends an assignment.
Human decides – The dispatcher sees the recommendation. They can accept it, modify it, or reject it entirely.
System learns – Every decision becomes training data. Accept means the AI got it right. Modify means it was close but missed something. Reject means it needs to understand why.
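The loop above can be sketched as a thin data-capture layer. This is a minimal illustration, not any particular PSA tool's API – the names (`FeedbackLog`, `DispatchDecision`, the `rejected` flag) are all hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    ACCEPT = "accept"    # AI recommendation used as-is
    MODIFY = "modify"    # close, but the dispatcher changed something
    REJECT = "reject"    # recommendation dismissed outright (explicit UI action)

@dataclass
class DispatchDecision:
    ticket_id: str
    recommended_tech: str
    final_tech: str
    outcome: Outcome
    reason: str = ""     # free-text explanation, captured when the dispatcher offers one

class FeedbackLog:
    """Every dispatcher decision becomes a labeled training example."""

    def __init__(self):
        self.decisions: list[DispatchDecision] = []

    def record(self, ticket_id, recommended, final, reason="", rejected=False):
        if recommended == final:
            outcome = Outcome.ACCEPT
        elif rejected:
            outcome = Outcome.REJECT
        else:
            outcome = Outcome.MODIFY
        self.decisions.append(
            DispatchDecision(ticket_id, recommended, final, outcome, reason))

    def acceptance_rate(self) -> float:
        """One visible trust metric: how often the AI's pick survives review."""
        if not self.decisions:
            return 0.0
        accepted = sum(d.outcome is Outcome.ACCEPT for d in self.decisions)
        return accepted / len(self.decisions)
```

The acceptance rate is the number worth watching: it is the measurable version of trust, and the signal for when a ticket category is ready for more autonomy.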
This isn’t a temporary training phase before “real” autonomy. This is the model. The dispatcher isn’t being replaced – they’re teaching. Their judgment gets encoded into the system over time. Six months in, the AI isn’t just following rules you wrote. It’s following patterns your best dispatcher demonstrated.
Beyond Buttons: Teaching Through Conversation
Accept/modify/reject buttons capture the decision. They don’t capture the reasoning.
When a dispatcher overrides an AI recommendation, the system needs to understand why. Not for logging purposes – for learning. “I assigned this to Sarah instead of Mike” is data. “Sarah handled this client’s firewall issue last week and already has the context” is insight.
This is where the interaction model matters. If teaching the AI requires filing a ticket, updating a configuration, or waiting for a developer to adjust the rules, it won’t happen. Dispatchers are busy. They’ll override, move on, and the system never learns.
But if the AI can ask “help me understand – was it the tech choice, the priority, or something else?” and the dispatcher can respond in natural language, in the moment, you get real-time domain expertise transfer.
“Sarah talked to this client yesterday about the same issue.”
Now the system knows: recent client interaction is a routing factor. It didn’t need a rule update. It didn’t need a developer. The domain expert taught it directly.
That’s not just feedback. That’s institutional knowledge becoming system knowledge, one interaction at a time.
What AI-Assisted Dispatch Actually Looks Like
We already run rules-based auto-assignment. Every 15 minutes, the system evaluates unassigned tickets, matches them against routing rules based on board, ticket type, priority, and technician skills, and assigns them using configurable strategies – lowest workload, round robin, or highest skill match.
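The core of that rules engine fits in a few lines. This is a simplified sketch of the three strategies, not our production code – the `Tech` shape and `Dispatcher` class are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Tech:
    name: str
    skills: dict          # skill name -> proficiency (1-5)
    open_tickets: int

class Dispatcher:
    """Match a ticket's required skill against available techs, then pick one
    using a configurable strategy."""

    def __init__(self, techs):
        self.techs = techs
        self._rr_index = 0    # shared counter for round robin

    def assign(self, required_skill, strategy="lowest_workload"):
        pool = [t for t in self.techs if required_skill in t.skills]
        if not pool:
            return None       # no match: ticket falls to manual triage
        if strategy == "lowest_workload":
            return min(pool, key=lambda t: t.open_tickets)
        if strategy == "highest_skill":
            return max(pool, key=lambda t: t.skills[required_skill])
        if strategy == "round_robin":
            tech = pool[self._rr_index % len(pool)]
            self._rr_index += 1
            return tech
        raise ValueError(f"unknown strategy: {strategy}")
```

Every branch is deterministic, which is exactly why it is predictable – and exactly why it can't weigh anything that isn't a field on the ticket.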
It works. But it’s pattern matching, not reasoning. Here’s what an AI layer would add:
Outcome-aware routing. Not just “who has the skill” but “who actually fixes this stuff.” A tech might have a networking skill tag but consistently escalate firewall tickets. Another tech might not have the skill formally assigned but closes similar tickets in half the time with better customer satisfaction scores. Historical resolution data reveals performance patterns that skill tags miss.
Context continuity. The client called twice this week. The tech who handled the first call already has context – they’ve seen the environment, talked to the user, maybe even identified a root cause they haven’t had time to address. Rules don’t know that. AI reading ticket history does.
Skill gap analysis. If a tech keeps successfully closing tickets for a skill they’re not assigned, that’s a training signal. Either add the skill, or recognize they’ve grown beyond where you had them categorized. Flip side – if someone has a skill but tickets keep bouncing back or taking twice as long as peers, maybe that proficiency level needs adjustment. The system can surface these patterns instead of waiting for annual reviews to catch them.
Related ticket identification. Three tickets came in this week from the same client about performance issues. Different users reported them. They look unrelated on the surface. But AI pattern matching might catch that they’re all pointing to the same underlying infrastructure problem. Route them to the same tech, flag them as potentially related, and you’ve turned three separate troubleshooting sessions into one root cause investigation.
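Even a crude version of related-ticket detection catches some of these. The sketch below uses keyword overlap within a client as a stand-in for real semantic matching – a deliberate simplification, and the `min_shared` threshold is an assumption:

```python
def related_tickets(tickets, min_shared=2):
    """Group tickets from the same client that share symptom keywords.

    tickets: list of dicts with "id", "client", "summary".
    Returns lists of ticket ids that look like one underlying problem.
    """
    stop = {"the", "a", "is", "are", "on", "in", "not", "to", "and", "this", "when"}

    def words(t):
        return {w for w in t["summary"].lower().split() if w not in stop}

    groups, seen = [], set()
    for i, a in enumerate(tickets):
        if i in seen:
            continue
        group = [a["id"]]
        for j in range(i + 1, len(tickets)):
            b = tickets[j]
            if b["client"] == a["client"] and len(words(a) & words(b)) >= min_shared:
                group.append(b["id"])
                seen.add(j)
        if len(group) > 1:
            groups.append(group)
    return groups
```

In practice you would lean on embeddings rather than raw word overlap, but the workflow is the same: flag the cluster, route it to one tech, investigate one root cause.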
The Autonomy Spectrum
Not every decision needs the same level of oversight. There’s a spectrum:
AI informs. System surfaces insights in a dashboard. “Consider assigning to Mike – he’s 3-for-3 on this client’s network issues this quarter.” Human makes all decisions. Lowest risk, lowest efficiency gain.
AI suggests with review. System makes a recommendation, human has a window to override before it executes. Good balance for most scenarios. Catches obvious mistakes while reducing decision load.
AI acts with exceptions. System handles routine assignments autonomously, flags unusual situations for human review. High-volume, low-complexity tickets go straight through. Anything outside normal patterns gets human eyes.
Full autonomy. System acts, humans monitor outcomes. Only appropriate after significant trust-building and for well-understood ticket categories.
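The spectrum can be encoded as a policy. The thresholds below are placeholders to be set from observed accept/modify/reject rates, not recommended defaults – and notice that this sketch never returns `FULL` on its own; promoting a category to full autonomy stays a human decision:

```python
from enum import Enum, auto

class Autonomy(Enum):
    INFORM = auto()               # surface the insight, human decides everything
    SUGGEST = auto()              # recommend; human approves before execution
    ACT_WITH_EXCEPTIONS = auto()  # execute routine work, escalate outliers
    FULL = auto()                 # execute and report; granted, never inferred

def autonomy_for(ticket_category, model_confidence, trusted_categories):
    """Map a ticket to a rung on the spectrum.

    trusted_categories: categories that have earned autonomous handling
    through demonstrated accept rates. Thresholds are illustrative.
    """
    if ticket_category in trusted_categories and model_confidence >= 0.95:
        return Autonomy.ACT_WITH_EXCEPTIONS
    if model_confidence >= 0.75:
        return Autonomy.SUGGEST
    return Autonomy.INFORM
```

Expanding scope then means adding categories to the trusted set as their acceptance rates hold up – a reviewable, reversible decision rather than a switch-flip.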
Most organizations jump straight to debating full autonomy. That’s the wrong conversation. The right conversation is: what’s the smallest scope where we’d be comfortable letting AI act, and how do we expand from there based on demonstrated performance?
The Team Dynamics Reality
Here’s the thing about our current system: it only knows what I know, because I coded it. The rules reflect my understanding of how tickets should route. But I’m not the one dispatching tickets every day.
My dispatcher has knowledge I don’t have. She knows which techs work well with which clients. She knows that certain ticket descriptions mean something different than they appear on the surface. She understands the nuances that come from doing this work day in and day out – nuances I’ve never captured in rules because I don’t even know to ask about them.
And she’s busy. Crazy busy, like any dispatcher in the MSP world. The solution has to fit into her workflow, not add to it. If teaching the AI feels like extra work on top of an already packed day, it won’t happen.
Dispatching Is More Than Initial Assignment
Here’s what the automation vendors miss: dispatching isn’t just putting tickets in a queue and walking away. That’s the easy part.
Real dispatching is ongoing board management. It’s looking at a tech’s schedule and realizing they’re underwater for the afternoon, so something needs to move. It’s balancing the priorities of tickets against the demands of end users who are calling to check status. It’s knowing that a P3 ticket for a difficult client might actually need more urgency than the priority suggests, while a P1 from another client is probably user error and can wait for verification.
It’s a constant rebalancing act across every tech’s board, all day long. And it requires understanding three things simultaneously: what the tickets actually need, what the end users actually expect, and what each tech is actually capable of handling right now.
No rules engine captures that. It’s too dynamic. Too contextual. Too human.
The Vision: A Conversational Partner
What we’re exploring is a chatbot interface – not a dashboard she checks, but a conversation she has while working through the daily workload.
“Sarah’s board is stacked for this afternoon and the Contoso ticket just escalated. What can move?”
The AI looks at Sarah’s queue, evaluates which tickets could shift to other techs based on skills and availability, considers which ones are time-sensitive versus flexible, and suggests options. Not just “move ticket 4521 to Mike” but “Mike handled Contoso’s network issues last week and has a gap at 2pm – want me to move the firewall ticket to him and push Sarah’s desktop refresh to tomorrow?”
The dispatcher can accept, modify, or push back. “No, keep Sarah on Contoso – she’s been working that relationship. Move the desktop refresh instead.”
And now the AI knows something it didn’t before: client relationship continuity matters more than pure schedule optimization for this dispatcher. That’s not a rule I would have written. It’s institutional knowledge captured in a natural conversation.
The same interaction works for teaching. “Why did you assign that to James instead of the recommended tech?” She explains. The AI learns. Next time, it factors that in.
Over time, the AI becomes a partner that understands how she thinks about dispatching – not because someone documented it in a rules engine, but because she taught it through daily work.
The path forward with a skeptical team isn’t convincing them they’re wrong. It’s showing them a model where their expertise matters more, not less. Where the AI makes them more effective instead of making them redundant. Where they’re teaching, not being replaced.
That means:
Transparency about what the AI is doing and why.

Easy override capabilities that don’t require justification.

Visible learning – when they teach the system something, they can see it stick.

Metrics that show the system getting smarter over time, because of their input.
The dispatcher who teaches the AI to handle routine tickets isn’t automating themselves out of a job. They’re elevating their role from “assign tickets all day” to “handle the complex stuff and train the system on everything else.”
That’s a better job. But they have to believe it to engage with it.
Where This Is Going
We’re not there yet. This is the direction, not the destination.
Phase 1 is building the feedback mechanism into our existing dispatch system. AI recommendations alongside current assignments, accept/modify/reject tracking, basic learning from decisions.
Phase 2 is the conversational layer. When someone overrides, let them explain why in natural language. Build the context capture that turns individual decisions into systemic learning.
Phase 3 is selective autonomy. Pick a ticket category – probably high-volume, low-complexity, low-risk – and let the AI run. Monitor closely. Expand scope based on performance.
Phase 4 is the skill analysis layer. Use resolution data to surface training needs, skill gaps, and misaligned proficiency ratings. Turn dispatch data into workforce development insights.
The timeline is measured in quarters, not weeks. This isn’t a deployment. It’s a culture shift with technology attached.
The Bigger Picture
Episode 1 of this series was about connectors – recognizing when native integrations can replace custom builds, and meeting users in the tools they already use.
This episode is about agency – when AI moves from answering questions to taking actions. The connector question was “where does data need to flow?” The agency question is “where can AI make decisions?”
The answer isn’t “everywhere, as fast as possible.” The answer is “wherever you can build the trust to support it, at the pace your organization can absorb.”
That’s less exciting than the AI hype cycle suggests. It’s also more honest. And it’s the difference between implementations that stick and implementations that get quietly disabled six months later because nobody trusted them.
AI agents will replace workflows. Not all at once. Not without resistance. And not without the hard work of building trust through demonstrated competence.
The organizations that get this right won’t be the ones with the most advanced AI. They’ll be the ones that figured out how to bring their teams along for the transition.
That’s the work that matters.
This is part of the “Going Past Chatbots” series, exploring what comes after the initial AI deployment. Next up: What Copilot Can’t Do (Yet) – understanding the boundaries of native AI tools and where custom solutions still earn their keep. To read the previous post, check out: Going Past Chatbots: The Connector Strategy