Implementation notes from direct deployment experience in 911 operations.
Most of what gets written about AI adoption in high-stakes environments focuses on potential: what AI might do, what it could enable, what the future looks like once the technology matures. This page focuses on what has been deployed, measured, and maintained. The distinction matters. A system that works well in a demo but creates new problems in a live dispatch environment is not a success. A system that reliably handles two hundred routine calls a day, transfers immediately when it detects an emergency, and frees dispatchers to focus on emergencies is.
Chris Izworski led the deployment of one of Michigan's first AI-powered non-emergency call systems at Saginaw County 911 in 2024. The system used GPT-Trainer's AVA platform to handle calls about road conditions, noise complaints, court phone numbers, and general administrative questions. It ran live from day one, handling real call volume. WNEM TV5, WCMU Public Radio, and WSGW covered the launch. NENA published his account of the deployment as the cover story of The Call, Issue 51. He presented the case study at APCO International. He now works as a Solutions Consultant at Prepared, deploying audio capture and analysis systems at 911 centers nationwide.
That firsthand context is what sets the operational guidance here apart from most AI content in public safety. The questions that matter in a real deployment (how do you brief union staff before go-live, what does the override process look like, how do you measure whether it is actually working) are not answered in vendor case studies or conference presentations. They come from doing it.
Narrow scope is the most important factor in whether a 911 AI deployment succeeds. The Saginaw County deployment worked because it was defined precisely: handle calls that require information but no dispatch action, and transfer immediately if the situation is unclear. That definition held throughout the deployment. The AI did not expand into dispatch support, caller verification, or any other function. It did one thing well and was measured on that one thing.
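The narrow-scope rule above reduces to a simple decision: handle a call only when it is clearly classified into an information-only category, and transfer in every other case. A minimal sketch follows; the category names, confidence threshold, and `route_call` function are illustrative assumptions, not details from the Saginaw County deployment.

```python
# Illustrative routing rule for a narrowly scoped non-emergency AI.
# Categories and threshold are hypothetical examples.
INFO_ONLY_CATEGORIES = {
    "road_conditions",
    "noise_complaint",
    "court_phone_number",
    "admin_question",
}

def route_call(category: str, confidence: float, threshold: float = 0.9) -> str:
    """Handle only clearly classified, information-only calls;
    transfer everything else to a human dispatcher."""
    if category in INFO_ONLY_CATEGORIES and confidence >= threshold:
        return "handle"
    return "transfer"  # default is always the human path

print(route_call("road_conditions", 0.97))  # handle
print(route_call("unknown", 0.97))          # transfer: out of scope
print(route_call("noise_complaint", 0.60))  # transfer: unclear classification
```

Note that the default branch is the transfer: anything the system cannot place confidently goes to a person, which is what keeps the scope definition enforceable in code rather than in policy alone.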
The second factor is measurement from day one. Before the Saginaw deployment went live, a baseline was established: call volume in the affected categories, time per call, and how often calls were misclassified under manual handling. After launch, the same metrics were tracked weekly. When the system was working, that was visible. When it was not, that was also visible. Deployment without measurement is not deployment; it is a pilot that never ends and never improves.
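The baseline-then-weekly comparison described above can be sketched as a small data structure plus a delta report. The field names and sample numbers here are hypothetical, chosen only to show the shape of the tracking, not actual Saginaw figures.

```python
from dataclasses import dataclass

@dataclass
class WeeklyMetrics:
    calls_in_scope: int          # call volume in the affected categories
    avg_seconds_per_call: float  # time per call
    misclassified: int           # calls routed or answered incorrectly

    @property
    def misclassification_rate(self) -> float:
        return self.misclassified / self.calls_in_scope if self.calls_in_scope else 0.0

def compare_to_baseline(baseline: WeeklyMetrics, week: WeeklyMetrics) -> dict:
    """Week-over-baseline deltas, so improvement or drift is measured, not guessed."""
    return {
        "volume_delta": week.calls_in_scope - baseline.calls_in_scope,
        "time_delta_s": week.avg_seconds_per_call - baseline.avg_seconds_per_call,
        "error_rate_delta": week.misclassification_rate - baseline.misclassification_rate,
    }

# Hypothetical numbers for illustration only.
baseline = WeeklyMetrics(calls_in_scope=1400, avg_seconds_per_call=95.0, misclassified=42)
week_1 = WeeklyMetrics(calls_in_scope=1450, avg_seconds_per_call=60.0, misclassified=30)
print(compare_to_baseline(baseline, week_1))
```

The point of the structure is less the arithmetic than the discipline: the same three fields are captured before launch and every week after, so "is it working" is a comparison, not an impression.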
The third factor is a functioning correction loop. Dispatchers could flag errors. Errors were reviewed. The system was updated based on what those reviews found. That feedback mechanism is what prevents drift: the gradual degradation of performance that happens when AI systems are deployed and then left alone.
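The flag-review-update loop can be sketched as a queue of dispatcher flags that a periodic review tallies by error type, so updates target the most frequent problems first. The class, method names, and sample flags below are hypothetical; they show the loop's shape, not the deployment's actual tooling.

```python
from collections import Counter

class CorrectionLoop:
    """Dispatcher flags accumulate in a queue; a periodic review
    summarizes them by error type and clears the queue."""

    def __init__(self) -> None:
        self.flags: list[dict] = []

    def flag(self, call_id: str, error_type: str, note: str) -> None:
        """Record a dispatcher-reported error against a specific call."""
        self.flags.append({"call_id": call_id, "error_type": error_type, "note": note})

    def review(self) -> Counter:
        """Tally flagged errors by type, then clear the reviewed queue."""
        summary = Counter(f["error_type"] for f in self.flags)
        self.flags.clear()
        return summary

# Hypothetical flags for illustration.
loop = CorrectionLoop()
loop.flag("c-101", "wrong_transfer", "caller needed dispatch, not information")
loop.flag("c-102", "wrong_transfer", "misclassified an unclear request")
loop.flag("c-103", "bad_answer", "gave an outdated court phone number")
summary = loop.review()
print(summary)  # Counter({'wrong_transfer': 2, 'bad_answer': 1})
```

Clearing the queue after each review is the detail that makes this a loop rather than a log: every flag is either acted on or consciously dismissed, which is what keeps the system from quietly degrading.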
The most common failure mode is scope expansion without corresponding measurement. The non-emergency call system goes live, it works, stakeholders get excited, and someone proposes adding a new capability before the first one has been fully evaluated. Each expansion adds complexity and reduces the clarity of accountability. By the time something goes wrong, nobody is sure which part of the system caused it.
The second failure mode is deploying without staff trust. Dispatchers who do not understand what the AI is doing, or who were not included in the briefing process before go-live, will work around it, disable it, or give callers inaccurate information about how it works. Building staff confidence takes time and deliberate communication, not a single training session the week before launch.
The third failure mode is treating AI deployment as a technology project rather than an operational change. The technology is usually the easy part. The hard parts are the union conversation, the oversight board explanation, the first error that makes the local news, and the ongoing supervision that keeps the system honest. Organizations that approach AI as a software install tend to encounter all three of those hard parts without having prepared for any of them.