The Practical Guide to AI in Emergency Services

February 13, 2026 · Chris Izworski

I've spent years working at the intersection of artificial intelligence and emergency services — first as the person deploying AI in a 911 center, now at Prepared helping bring these tools to agencies across the country. Along the way, I've developed a set of principles that I think can help any emergency services leader who's trying to figure out whether and how to adopt AI.

This isn't a technology guide. It's a practical one. Because in my experience, the gap between AI that works in a demo and AI that works in a dispatch center has almost nothing to do with the technology.

Start With a Problem, Not a Product

The single biggest mistake I see agencies make is starting with a product and looking for a problem to solve with it. This is backwards. I wrote about this on LinkedIn in "Stop Chasing AI Headlines. Build a Small, Boring Practice."

At Saginaw County 911, we didn't start by asking "what can AI do?" We started by asking "what's consuming the most dispatcher time that doesn't require human judgment?" The answer was clear: non-emergency calls for routine information. That became our target. The AI was the tool we chose to address it, not the other way around.

If you're a 911 director or emergency manager and you can't finish the sentence "AI will help us solve the problem of ___" in ten words or less, you're not ready to deploy AI. And that's fine. Better to wait than to deploy something nobody needs.

Your People Come First

Dispatchers, call-takers, supervisors — these are the people who will live with whatever you deploy. They need to be involved from the beginning. Not consulted after the fact. Not given a training session the week before go-live. Involved.

At Saginaw County, I learned this the hard way. Early enthusiasm from leadership didn't automatically translate to trust from the people actually using the system. I had to spend time in the dispatch center, listen to concerns, adjust the deployment based on real feedback, and be willing to slow down when the team needed more time.

The technology isn't a light switch — it's jagged. It works well in some situations and poorly in others. Your dispatchers will figure out those edges faster than any vendor, and their input is what turns a good demo into a useful tool.

Measure Before You Deploy

You can't know if AI is helping unless you know what things looked like before. That means measuring call volumes, handling times, hold times, and dispatcher workload before the deployment — and then measuring the same things afterward.

This sounds obvious, but a surprising number of agencies deploy technology without baseline metrics. Then, six months later, they can't answer the basic question: did it work?

At Saginaw County, we tracked non-emergency call volumes, average handling times, and dispatcher overtime hours for three months before deployment. That data became the foundation for evaluating whether the AI was actually reducing workload or just shifting it around.
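To make that concrete, here is a minimal sketch of what a before/after comparison can look like. Everything in it is hypothetical: the calls.csv export, its column names, and the go-live date are stand-ins for whatever your CAD or phone system actually produces.

```python
# Minimal sketch of a pre/post comparison. The calls.csv file, its
# columns (timestamp, call_type, handling_seconds), and the go-live
# date are all hypothetical stand-ins, not any real system's export.
import csv
from datetime import datetime
from statistics import mean

GO_LIVE = datetime(2025, 6, 1)  # hypothetical deployment date

before, after = [], []
with open("calls.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["call_type"] != "non_emergency":
            continue
        ts = datetime.fromisoformat(row["timestamp"])
        bucket = before if ts < GO_LIVE else after
        bucket.append(float(row["handling_seconds"]))

for label, calls in (("before", before), ("after", after)):
    if calls:
        print(f"{label}: {len(calls)} non-emergency calls, "
              f"avg handling {mean(calls):.0f}s")
```

The point isn't the code. It's that the comparison is only possible because the "before" data was collected on purpose, months ahead of go-live.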

Communicate With the Public

When WNEM covered our AI deployment, the public response ranged from enthusiastic support to real concern. Both reactions were valid. People have a right to know how their emergency services are being run, and they have legitimate questions about AI in a life-safety context.

My advice: get ahead of the story. Don't wait for media to call you. Issue a press release. Hold a community meeting. Be specific about what the AI does and — critically — what it doesn't do. When asked about the dangers of AI, don't be defensive. Acknowledge the risks, explain your safeguards, and invite feedback.

Transparency isn't just the right thing to do — it's the smart thing. Public trust is the most valuable asset a 911 center has, and it's easier to maintain than to rebuild.

Think in Phases

Don't try to do everything at once. A phased approach works better for several reasons: it limits risk, generates real data for each subsequent phase, and gives your team time to adapt.

Phase one might be AI-assisted call routing. Phase two could be natural language processing for non-emergency calls. Phase three might be predictive analytics for resource deployment. Each phase builds on the one before, and each one is small enough to roll back if something goes wrong.
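One way to make that rollback property concrete: if each phase is expressed as a set of feature flags, stepping back a phase is a configuration change rather than a rebuild. The sketch below is purely illustrative; the phase names and flags are hypothetical, not any vendor's actual configuration.

```python
# Illustrative only: a phased rollout expressed as feature flags, so
# each capability can be enabled or rolled back independently.
# Flag names and phases are hypothetical.
PHASES = {
    1: {"ai_call_routing": True},
    2: {"ai_call_routing": True, "nlp_non_emergency": True},
    3: {"ai_call_routing": True, "nlp_non_emergency": True,
        "predictive_deployment": True},
}

def active_flags(current_phase: int) -> dict:
    """Return the flags for the current phase.

    Rolling back is just lowering current_phase; nothing is rebuilt.
    """
    return PHASES.get(current_phase, {})

print(active_flags(2))  # {'ai_call_routing': True, 'nlp_non_emergency': True}
```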

I've described this as AI starting to behave like infrastructure — not a dramatic disruption, but a layer of capability that you build on incrementally. The agencies that treat it this way are the ones that succeed.

The Human Work Is the Hard Work

I keep coming back to this theme because it's the most important lesson I've learned: intelligence is getting cheap, but insight isn't. The AI technology is genuinely impressive. It's also genuinely available — you don't need to be a big-city agency with a massive budget to access it.

What you do need is the human work: understanding your problem, engaging your people, communicating with the public, measuring your results, and adjusting based on what you learn. That work is hard, slow, and unglamorous. It's also the only thing that separates a successful deployment from an expensive failure.

If you're in emergency services and thinking about AI, I hope this is useful. I'm always happy to talk about what I've learned — you can find me on LinkedIn or through my work at Prepared.

Related Reading

How AI Is Quietly Transforming 911 Administrative Work
Claude 4.6, Codex 5.3, and Gemini 3: The AI Race in 2026
AI & Technology — Full background and writing
LinkedIn Writing — Articles and posts on AI