
When Not to Use AI: A Checklist from People Who Ship It

Neil Simpson
methodology · consulting

We build AI systems professionally. It's how we make a living. And at least once a month, we tell a prospective client not to use AI for their project.

This isn't false modesty. It's pattern recognition. After shipping dozens of production AI systems, we've developed a clear sense of where AI creates genuine value — and where it creates expensive, fragile complexity that would have been better solved with a database query and an if-statement.

The Decision Framework

Before any AI project, we run through a simple set of questions. If the answers point away from AI, we say so. The client's trust matters more than the engagement revenue.

Can you write the rules?

If the logic can be expressed as a decision tree, a lookup table, or a set of business rules, you don't need a model. You need code.

AI is powerful because it handles ambiguity — natural language, fuzzy matching, pattern recognition across unstructured data. If there's no ambiguity in your problem, a deterministic system will be faster, cheaper, more reliable, and easier to debug.

Tax calculation? Rules. Invoice routing based on department codes? Rules. Scheduling based on availability windows? Rules. The fact that these processes are "boring" doesn't make them AI candidates. It makes them engineering candidates.
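To make the contrast concrete, here's a minimal sketch of the invoice-routing case as plain rules. The department codes, team names, and threshold are all illustrative assumptions, not taken from any real system:

```python
# Hypothetical invoice routing as a lookup table plus one business rule.
# Codes and destinations are made up for illustration.
ROUTING_TABLE = {
    "FIN": "accounts-payable",
    "ENG": "engineering-ops",
    "MKT": "marketing-admin",
}

def route_invoice(department_code: str, amount: float) -> str:
    """Deterministic routing: same input, same output, trivially testable."""
    if amount >= 10_000:
        return "manual-review"  # rule: large invoices always get a human
    # Unknown codes fall through to a triage queue rather than failing silently.
    return ROUTING_TABLE.get(department_code, "triage")
```

A dozen lines, no model, no training data, and every output can be explained and regression-tested. That's the bar an AI system has to beat before it earns its complexity.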

Is the cost of being wrong acceptable?

AI systems are probabilistic. They're wrong sometimes. For many use cases, that's fine — a product recommendation engine that's occasionally irrelevant isn't dangerous. A medical diagnostic tool that's occasionally wrong is.

Before reaching for AI, quantify the cost of an error. If a wrong output means a minor inconvenience, AI is probably fine. If a wrong output means regulatory penalties, safety risks, or irrecoverable financial loss, you need either a deterministic system or an AI system with heavyweight human oversight — which often eliminates the efficiency gains that justified the project.
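The quantification doesn't need to be sophisticated. A back-of-envelope expected-cost calculation is usually enough to settle the question; all the numbers below are illustrative assumptions, not benchmarks:

```python
# Rough check: does automation still win once error costs are counted?
# Every figure here is a made-up assumption for illustration.

def expected_cost_per_decision(error_rate: float, cost_per_error: float,
                               cost_per_run: float) -> float:
    """Expected cost of one automated decision: run cost plus error risk."""
    return cost_per_run + error_rate * cost_per_error

# Product recommendations: errors are cheap annoyances.
recs = expected_cost_per_decision(error_rate=0.10, cost_per_error=0.50,
                                  cost_per_run=0.01)

# Compliance decision: errors carry penalty risk.
compliance = expected_cost_per_decision(error_rate=0.02, cost_per_error=50_000.0,
                                        cost_per_run=0.01)

human_review_cost = 20.0  # assumed cost of a person making the same call

print(recs < human_review_cost)        # True: automation clearly wins
print(compliance < human_review_cost)  # False: automation loses despite a
                                       # lower error rate, because each error
                                       # is ruinously expensive
```

The point of the exercise is the asymmetry: a system that's wrong ten times as often can still be the better bet if each error costs pennies instead of penalties.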

Do you have the data?

AI systems need data — training data, evaluation data, context data. Not hypothetical data you plan to collect. Data you have now, in sufficient quality and volume.

The conversation usually goes like this: "We want an AI that analyses our customer interactions to identify churn risk." Great — where are the labeled examples of customers who churned? "We don't have those yet, but we could start collecting..." Stop. You're describing a data project, not an AI project. Do the data project first.

We've seen teams spend months building AI systems that underperform because the training data was noisy, biased, or simply too small. The model was fine. The data wasn't there.

Is the process stable?

AI systems learn patterns from historical data and apply them to new inputs. If your process changes frequently — new regulations, shifting business rules, reorganising categories — the model will constantly lag behind reality.

This doesn't mean AI can't work in dynamic environments. But it means the maintenance cost is high. Every process change requires re-evaluation, potentially new training data, and regression testing. If the process changes quarterly, you'll spend more time maintaining the AI than you saved by using it.

Will anyone trust the output?

This is the question that kills more AI projects than technical failure. If the end users don't trust the system's outputs, they'll check every result manually, defeating the purpose.

Trust requires explainability. If a loan officer can't understand why the AI flagged an application, they'll ignore the flag. If a doctor can't see the reasoning behind a diagnostic suggestion, they'll order the tests anyway. Build the AI without solving for trust and you've built expensive shelfware.

Situations Where AI Consistently Loses

Based on our experience, these are the patterns where we actively steer clients away:

Low-volume, high-stakes decisions. If you make this decision 50 times a year and each one matters enormously, a human with good tooling will outperform an AI. The model doesn't have enough examples to learn the nuance, and the cost of getting one wrong outweighs any efficiency gain.

Replacing a process that isn't broken. "We want to add AI to our onboarding flow." Why? "Because competitors are." That's not a business case. If your onboarding works, has good completion rates, and users aren't complaining, adding AI introduces risk with no clear upside.

Simple classification with clean data. If you're classifying structured data into known categories with clear boundaries, traditional machine learning or even rule-based systems will often outperform LLMs at a fraction of the cost. You don't need a billion-parameter model to sort invoices into 12 categories based on vendor codes.

Problems that need to be solved once. If it's a one-time migration, analysis, or transformation, the effort to build an AI system exceeds the effort to just do it. AI's value compounds over time through repeated use. A system that runs once and is discarded has negative ROI.

Heavily regulated processes requiring audit trails. Some regulatory environments require deterministic, reproducible decision-making with complete audit trails. Probabilistic AI systems are fundamentally incompatible with "explain exactly why this output was produced and guarantee it will be identical for the same input." Regulatory requirements should drive architecture decisions, not the other way around.

What to Do Instead

When we advise against AI, we don't leave a vacuum. The alternatives are usually:

  • Better tooling. Most "AI" requests are really requests for better software. A well-designed dashboard, a streamlined workflow, or an automated pipeline solves the underlying problem without model complexity.
  • Process improvement. Sometimes the problem is the process, not the technology. Mapping the current workflow, eliminating bottlenecks, and standardising inputs delivers more value than any model.
  • Traditional automation. Scheduled jobs, webhooks, rules engines, and workflow automation tools handle 80% of the "we need AI to automate this" requests. Build-or-buy analysis usually reveals that the automation layer, not the intelligence layer, is what's missing.
  • Doing nothing. Seriously. If the current process works and the projected ROI doesn't justify the investment, the responsible advice is to wait. The technology improves every quarter. A project that doesn't make sense today might be obvious in twelve months.

The Honest Conversation

The AI industry has an incentive problem. Every vendor, consultancy, and platform benefits from you believing that AI is the answer. Very few people in the room are incentivised to say "this isn't the right tool for this problem."

That's why we think the ability to say no is the most valuable thing a consulting partner can offer. Anyone can build you an AI system. The harder skill is knowing when not to.

If your problem has clear rules, your data isn't ready, your process is unstable, or the cost of being wrong is unacceptable — AI isn't the answer yet. And "yet" is an important word. The answer changes as models improve, costs drop, and your data matures.

But right now, today, for this project? Sometimes the right answer is a well-written function and a reliable database.