
The Jagged Frontier of AI, Explained

Why AI can write a legal brief but can’t count the letters in “strawberry”

By Chris Izworski · February 2026 · 7 min read

There’s a pattern I’ve seen play out dozens of times. Someone tries an AI tool for the first time, gives it a hard task, and it produces something astonishingly good. They’re amazed. They start using it for everything. Then it makes a mistake so basic, so obviously wrong, that they lose all confidence in it overnight.

Both reactions are understandable, and both are wrong. The mistake is assuming that AI capability is uniform — that if it can do one hard thing, it can do all easy things. It can’t. And understanding why is essential for anyone who wants to use these tools without getting burned.

What “Jagged” Means

Researchers at Harvard Business School, studying how consultants at BCG used AI, coined the term “jagged technological frontier.” The idea is simple: if you mapped AI capability across all possible tasks, the boundary line wouldn’t be smooth. It would be jagged — shooting up to superhuman performance in some areas, then dropping below amateur level in others, with no obvious logic connecting the peaks and valleys.

A model can summarize a dense legal document better than most lawyers. But it might confidently tell you that 7 × 8 = 54. It can write working code for a complex algorithm, then fail to sort a list of ten numbers correctly. It can produce a beautiful, nuanced essay on grief, then hallucinate a citation that doesn’t exist.

The frontier is jagged because these systems don’t “understand” tasks the way humans do. They process patterns in language. Some tasks align naturally with that pattern-matching ability. Others don’t, for reasons that are often opaque even to the people who build the models.
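
You can see the mechanics of this in miniature with the letter-counting example from the title. Here is a minimal sketch, assuming the open-source tiktoken package, which exposes the tokenizers used by several OpenAI models (other model families differ in detail but not in principle): the model never receives "strawberry" as a sequence of letters at all, only as a handful of opaque token IDs, while a single line of ordinary code counts the letters exactly.

```python
# Minimal sketch of why character-level tasks sit in a valley.
# Assumes the open-source tiktoken package (pip install tiktoken), which
# exposes the byte-pair tokenizers used by several OpenAI models. Other
# model families tokenize differently, but the principle is the same:
# the model sees token IDs, not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(f"What the model sees: {pieces}")              # a few multi-letter chunks
print(f"Letters, counted by code: {len(word)}")      # 10
print(f"'r's, counted by code: {word.count('r')}")   # 3
```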

Why This Catches People Off Guard

Humans have a deeply ingrained mental model of competence: if someone can do something hard, they can probably do all the easier things too. A surgeon can probably also put on a bandage. A concert pianist can probably play “Happy Birthday.”

AI violates this expectation constantly. It can do the surgeon-level task and fumble the bandage. And because it presents everything with the same fluent confidence — no hesitation, no uncertainty in its tone — there’s no reliable signal to tell you when it’s in a valley rather than on a peak.

This is why people oscillate between over-trust and complete distrust. Neither is the right response. The right response is calibration — building an accurate internal map of where the frontier is high and where it drops.

Where AI Tends to Be Strong

Pattern recognition across large volumes of text is the home territory. Summarization, translation, extracting structure from unstructured data, rewriting text in a different style or tone, generating first drafts, explaining complex concepts in simpler language, brainstorming options — these all play to the fundamental strengths of how language models work.
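
To make one of those strengths concrete, here is a minimal sketch of structure extraction, assuming the OpenAI Python SDK and an API key in your environment. The model name is just what happens to be current as I write this, and the prompt is mine rather than a canonical recipe; any comparable model API would serve the same purpose.

```python
# Minimal sketch of "extracting structure from unstructured data," a task
# squarely on the strong side of the frontier. Assumes the OpenAI Python
# SDK (pip install openai) and an OPENAI_API_KEY in the environment; the
# model name is an assumption -- substitute whatever you have access to.
from openai import OpenAI

client = OpenAI()

note = (
    "Met with Dana from Riverside Clinic on Tuesday; she wants the pilot "
    "extended to March and asked for pricing on 50 more seats."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Extract JSON with keys contact, organization, request, "
                   "and deadline from this note:\n" + note,
    }],
)

print(response.choices[0].message.content)  # the model's best-guess JSON
```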

Tasks that have well-established patterns in the training data also tend to go well. Writing code in popular programming languages, answering frequently asked factual questions, producing standard document formats — the model has seen millions of examples, and it draws on that experience effectively.

Where It Tends to Be Weak

Precise computation is famously unreliable. Spatial reasoning — anything involving physical relationships, counting, or geometric logic — is often poor. Tasks requiring sustained logical chains, where each step depends on the previous one being exactly right, tend to accumulate errors.
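
The chain problem is worth a quick back-of-the-envelope calculation. If each step is independently right 95% of the time (a number I am assuming for illustration, not a measured property of any model), a twenty-step chain comes out fully correct only about a third of the time:

```python
# Back-of-the-envelope illustration of error accumulation in long chains.
# The 95% per-step figure is an assumption for the sake of the example,
# not a benchmark result for any particular model.
per_step = 0.95

for steps in (1, 5, 10, 20, 50):
    whole_chain = per_step ** steps
    print(f"{steps:>2} steps -> {whole_chain:.0%} chance the whole chain is right")
```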

And then there are the hallucinations: confident assertions about things that aren’t true. Made-up citations, invented statistics, fabricated quotes. The model isn’t lying in any intentional sense. It’s generating plausible-sounding language, and sometimes plausible-sounding language describes things that don’t exist.
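
Hallucinated citations, at least, are cheap to check. Here is a minimal sketch, assuming network access, the requests package, and the public Crossref REST API, which indexes DOIs for most published work. Matching by title alone is crude, so treat a miss as a cue to look closer rather than proof of fabrication.

```python
# Minimal sketch: check whether a citation a model produced actually exists,
# using the public Crossref REST API (api.crossref.org). Assumes the
# requests package and network access. Title matching is crude -- a miss
# means "look closer," not necessarily "fabricated."
import requests

def crossref_candidates(cited_title: str, rows: int = 3) -> list[str]:
    """Return the closest titles Crossref knows for a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# Example: paste in the title of a paper the model cited.
for candidate in crossref_candidates("Navigating the Jagged Technological Frontier"):
    print(candidate)
```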

The tricky part is that the weak spots don’t stay fixed. Each new model generation pushes some valleys up into peaks while occasionally introducing new weaknesses. The frontier shifts, which means your calibration needs to be a living thing, not a one-time assessment.

How to Work With a Jagged Frontier

The practical implication is straightforward: never fully trust AI output in any domain without verification, but don’t discard it either. Use it as a starting point, a first draft, a thinking partner — and always apply your own judgment before the output leaves your hands.

For tasks that fall on the strong side of the frontier, you can lean in more heavily and edit lightly. For tasks near the weak side, use AI to accelerate your process but plan to do more of the substantive work yourself. And for tasks where you can’t tell which side you’re on, default to skepticism.

The most dangerous position is uncalibrated confidence — assuming the model is right because it sounds right. The second most dangerous position is blanket rejection — refusing to use these tools because they sometimes fail. The territory between those two extremes is where the value lives.

The Frontier Will Keep Moving

Every few months, a new model closes some of the gaps. Math gets better. Hallucinations become less frequent. Spatial reasoning improves. But new jagged edges appear too, often in unexpected places. And the fundamental dynamic — uneven capability that defies human intuition about difficulty — isn’t going away anytime soon.

The people who thrive with AI are the ones who hold this complexity comfortably. They don’t need AI to be uniformly good. They don’t expect it to be. They’ve mapped enough of the frontier through their own experience to know where to lean in, where to push back, and where to do the work themselves.

That map isn’t something you can read in a blog post. You build it by using the tools daily, paying attention to what works, and being honest about what doesn’t. Which, it turns out, is the same way you build competence with any tool that matters.

I first wrote about this idea in a LinkedIn post called AI Isn’t a Light Switch — It’s Jagged, which resonated with people who’d been burned by exactly this dynamic. The concept builds on research from Harvard Business School and the experience of thousands of people trying to figure out what these tools are actually good at. I’ve continued exploring related ideas on Medium and in my LinkedIn writing.

If you want to understand how AI is shifting from a novelty to something more fundamental, read AI Is Starting to Behave Like Infrastructure — it’s the companion piece to this guide.

More on practical AI thinking:

Building a Daily AI Practice →  ·  LinkedIn writing →
Chris Izworski
Writer, gardener, and technologist in Bay City, Michigan. Writes about AI, Great Lakes living, and what it means to pay attention.

Related — AI & Emergency Services

Press CoverageAI in 911 DispatchAI in 911 FAQEmergency Tech FAQAI in 911 Admin WorkPractical AI for Emergency ServicesAI as InfrastructureThe AI Jagged FrontierBuilding a Daily AI PracticeProfessional BackgroundCertificationsCareer TimelineSpeaking