
    Explainable or Unusable: A Standard for AI in Planning


    Published on 5 February 2026


    Across professional services, AI is being sold as faster, smarter, and more transformative than any tool that came before it. In planning, law, regulation, and policy, the promise is familiar: reduce workload, accelerate decisions, surface insights hidden in complexity. And yet, for many practitioners, the lived experience of AI tools is far less convincing.

    The primary failure is not that these systems are inaccurate. Nor that they lack ambition, computational power, or technical sophistication. The real failure is simpler and more fundamental: most AI tools cannot explain themselves in a way that aligns with professional accountability.

    This is not a critique of planning systems, planning authorities, or professional practice. Planning already operates under clear standards of reasoning, documentation, and defensibility. The critique is of how AI is often applied to such systems: imported wholesale from consumer or experimental contexts, without adapting to the requirements of professional use.

    If AI cannot show its workings, it cannot function as decision support. At best, it becomes an interesting side tool. At worst, it introduces new risks into workflows that depend on traceability and trust.

    Impressive outputs are not the same as usable tools

    Modern AI systems are exceptionally good at producing outputs that look authoritative. They generate fluent text, confident summaries, and persuasive conclusions. In a demo, this can be compelling. In a professional workflow, it is often useless.

    A planning professional does not need an answer that merely sounds correct. They need to know:

    • where that answer came from
    • what information was considered
    • what was excluded or abstracted away
    • how the conclusion could be defended if challenged

    An AI tool that produces a polished paragraph without exposing its sources or reasoning is closer to a presentation engine than a professional instrument. It may save seconds in drafting, but it costs minutes or hours when the user has to reverse-engineer the logic to check whether it can be relied upon.
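    To make that concrete, a usable output is less a paragraph than a record. The sketch below (Python, with field names that are illustrative assumptions rather than any product’s actual schema) shows the minimum structure that would let a professional answer the four questions above:

        from dataclasses import dataclass, field

        @dataclass
        class SourceCitation:
            """One document or policy the answer relies on."""
            document: str   # e.g. a development plan or guideline title
            section: str    # the clause or paragraph actually consulted
            excerpt: str    # the text the conclusion rests on

        @dataclass
        class ExplainableAnswer:
            """An output a professional can verify, not merely read."""
            conclusion: str
            sources: list[SourceCitation] = field(default_factory=list)  # where it came from
            considered: list[str] = field(default_factory=list)          # what was weighed
            excluded: list[str] = field(default_factory=list)            # what was left out, and why
            reasoning: str = ""                                          # how the conclusion was derived

    A bare string answers none of those questions; a structure like this at least makes each of them inspectable, and checkable against the cited text.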

    In practice, many practitioners learn quickly that they cannot cite an AI output, cannot stand over it in correspondence, and cannot use it as evidence in a report. The result is predictable: the tool is sidelined, regardless of how impressive it looked on launch day.

    Why “trust the model” is not acceptable

    In consumer contexts, “trust the model” is often treated as an acceptable abstraction. In planning, law, or regulation, it is not.

    Planning decisions must be defensible. They are subject to appeal, judicial review, and public scrutiny. Even preparatory work, such as feasibility assessments, planning statements, and screening analyses, exists within an accountability chain. The professional signing off on a document remains responsible for its content, regardless of what software assisted in its preparation.

    An AI system that asks users to accept its outputs on faith breaks that chain. It asks professionals to outsource judgement without outsourcing responsibility, an impossible bargain.

    This is why black box AI creates risk rather than reducing it. If a conclusion cannot be explained, it cannot be defended. If it cannot be defended, it cannot be safely used.

    In many cases, practitioners respond rationally. They treat AI suggestions as informal prompts, not as inputs they can rely on. The system becomes a brainstorming tool, not a decision support system. Its value collapses accordingly.

    The hidden cost of black box AI

    Proponents of opaque models often argue that explainability is technically difficult, or that requiring it would limit model performance. In professional contexts, this misses the point.

    The cost of opacity is not theoretical. It shows up as:

    • duplicated work, as users manually verify AI outputs
    • increased cognitive load, as professionals reconcile AI suggestions with known policy constraints
    • institutional resistance, as organisations decline to integrate tools they cannot audit
    • reputational risk, when outputs cannot be traced back to authoritative sources

    In planning, these costs compound existing pressures. The system already involves complex legislation, layered policy hierarchies, and extensive documentation. AI that obscures its reasoning does not simplify this complexity. It adds another layer on top of it.

    What explainability actually means in practice

    Explainability is often discussed in abstract terms: transparency, interpretability, ethical AI. In professional workflows, it is far more concrete.

    At a minimum, usable AI systems must be able to show:

    1. Which documents were used
    2. Which policies or sources were abstracted
    3. How conclusions were derived

    This does not require exposing model weights or internal embeddings. It requires exposing the logic flow: how inputs were filtered, compared, and synthesised into an output.
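    As an illustration of what “exposing the logic flow” could mean in practice (the stage names and fields below are assumptions made for the sketch, not a description of any particular system), it can be as simple as recording each stage the pipeline ran and rendering that record as an audit trail:

        from dataclasses import dataclass

        @dataclass
        class TraceStep:
            """One recorded pipeline stage: what ran, on what, producing what."""
            stage: str          # e.g. "filter", "compare", "synthesise"
            inputs: list[str]   # document or policy identifiers consumed
            outputs: list[str]  # identifiers or findings produced
            note: str           # a human-readable reason for the result

        def render_audit_trail(trace: list[TraceStep]) -> str:
            """Turn the recorded trace into a reviewable, citable log."""
            return "\n".join(
                f"{i}. [{step.stage}] {', '.join(step.inputs)} -> "
                f"{', '.join(step.outputs)}: {step.note}"
                for i, step in enumerate(trace, start=1)
            )

    None of this touches model weights; it records decisions at the level a reviewer actually works at.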

    This mirrors how professionals already work. Planning reports do not merely state conclusions. They set out policy context, assess compliance, weigh considerations, and explain judgement calls. An AI system that cannot do the same is misaligned with the domain it claims to support.

    Explainability is not optional in regulated environments

    There is a tendency to frame explainability as a “nice to have”, something to add once models are more mature. In regulated environments, this framing is backwards.

    Explainability is the baseline requirement. Accuracy without traceability is insufficient. Speed without defensibility is irrelevant.

    Crucially, this is not a technical limitation. It is a design choice.

    Many AI products are built by optimising for demo impact: fast answers, confident language, minimal friction. Professional tools require a different optimisation target: clarity, auditability, and alignment with existing accountability structures.

    Choosing explainability means accepting that some outputs will be more structured, more conditional, and less rhetorically polished. That is not a weakness. It is what makes them usable.

    Planning as a revealing case study

    Planning is a particularly clear example because its standards are explicit. Decisions must be reasoned. Evidence must be cited. Judgement must be explainable to third parties, whether inspectors, courts, clients, or the public.

    Planners already navigate complex information environments. They are accustomed to balancing policy objectives, interpreting precedent, and documenting rationale. The problem is not complexity. It is opacity.

    AI, when applied well, should clarify this complexity: surface relevant policies, highlight constraints, and organise information in ways that support professional judgement. When applied poorly, it obscures the very reasoning it claims to accelerate.

    This is why many planning professionals are sceptical of AI hype but open to practical tools. They do not need an oracle. They need a colleague who does the legwork and shows how they got there.

    A standard worth insisting on

    If AI is to play a meaningful role in planning and other regulated fields, a clear standard should apply: explainable or unusable.

    This is not a call to slow innovation or constrain ambition. It is a call to align AI systems with the realities of professional work. Systems that expose their reasoning will be trusted, adopted, and improved. Systems that do not will remain peripheral, regardless of how advanced they appear.

    Useful AI behaves like a good colleague. It shows its workings. It cites its sources. It makes its reasoning open to challenge, and therefore fit for use in accountable decision making.

    Anything less is not decision support. It is just output.

    Ready to start using EirePlan?

    Create your account and start working in a planning-native workspace for Irish planning teams.