Why We’re Building a Planning Knowledge Engine (Not Just Another AI Chatbot)
Eireplan is building a planning knowledge engine, not just another AI chatbot. Learn why context matters.
This blog post was published on 20 January 2026.

Planning professionals are rightly sceptical of artificial intelligence.
In recent years, generic AI tools have been presented, often loudly, as universal problem solvers: systems that can “answer any question”, “draft any document”, or “replace professional judgement”. For those working in planning, development, and infrastructure, such claims are not merely unconvincing; they are fundamentally misaligned with the realities of the planning system.
Planning is not a domain of generic answers. It is a domain of hierarchy, precedent, geography, time, and accountability. It is governed by statute, shaped by policy layers, and ultimately decided by human actors who are accountable to the law, the courts, and the public.
At Eireplan, we are not building an AI chatbot and hoping it will somehow understand planning. We are building a planning knowledge engine: a structured, verifiable intelligence layer designed to support planners, not replace them, within the constraints and responsibilities of the Irish planning system.
This distinction matters.
The Limits of Generic AI in Planning
Large language models are impressive at producing fluent text. They are far less reliable at answering questions where correctness depends on contextual validity rather than linguistic plausibility.
In planning, this problem is acute.
A generic AI system does not inherently understand:
- the legal hierarchy between primary legislation, ministerial guidelines, regional strategies, development plans, and local area plans;
- the fact that policy applicability varies by local authority, zoning, and site context;
- that policy interpretation is time-bound, with superseded plans and transitional provisions;
- or that a “reasonable-sounding” answer is professionally useless if it cannot be traced to a specific policy, section, or decision.
For a planner, an answer without provenance is not an answer. It is a risk.
This is why general-purpose AI tools, however capable they may appear, are ill-suited to professional planning work when used in isolation. They are not designed to reason over policy hierarchies, nor to distinguish between current and historical frameworks, nor to surface uncertainty where interpretation is contested.
Most importantly, they cannot carry accountability. Planners do.
Planning Is a Structured Knowledge System, Not a Textual One
The Irish planning system is often described as complex, but complexity is not the core challenge. The deeper issue is structure.
Planning knowledge is inherently layered:
- Law establishes the statutory basis and procedural requirements.
- National policy sets strategic objectives and mandatory considerations.
- Regional strategies translate those objectives spatially.
- Local development plans and LAPs apply policy at site and area level.
- Decisions and appeals interpret and test these layers in practice.
Each layer constrains the one below it. Each operates within defined temporal boundaries. None can be treated as interchangeable text.
A system that treats planning documents as a flat corpus to be summarised or paraphrased will inevitably collapse these distinctions. The result may look coherent, but it is epistemically unsound.
A planning knowledge engine, by contrast, must preserve structure first and generate language second.
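To make the idea of "structure first, language second" concrete, here is a minimal sketch of how policy layers might be represented as structured records rather than flat text. The class and field names below are illustrative assumptions for this post, not a description of Eireplan's actual schema.

```python
# Illustrative sketch only: planning policy represented as structured records,
# with hierarchy and period of effect kept explicit. Names are hypothetical.
from dataclasses import dataclass
from datetime import date
from enum import IntEnum
from typing import Optional


class PolicyLayer(IntEnum):
    """Hierarchy levels, ordered so that lower values constrain higher ones."""
    PRIMARY_LEGISLATION = 1
    NATIONAL_POLICY = 2
    REGIONAL_STRATEGY = 3
    DEVELOPMENT_PLAN = 4
    LOCAL_AREA_PLAN = 5


@dataclass
class PolicyProvision:
    """A single provision, kept with its layer, source and period of effect."""
    reference: str                        # e.g. a section or objective number
    layer: PolicyLayer
    source_document: str                  # the plan, guideline or act it comes from
    effective_from: date
    effective_to: Optional[date] = None   # None means still in force

    def constrains(self, other: "PolicyProvision") -> bool:
        """A provision higher in the hierarchy constrains one lower down."""
        return self.layer < other.layer
```

Even a toy model like this preserves distinctions that a flat corpus of text throws away: which layer a provision sits in, what it constrains, and when it applies.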
Why Verifiability Is Non-Negotiable
In regulated professions, confidence does not come from eloquence. It comes from traceability.
Any tool used in planning must be able to answer a simple follow-up question:
“Where does that come from?”
Verifiability is not an optional feature. It is the basis on which professional advice is defended, whether to a client, a local authority, An Bord Pleanála, or a court.
This is where generic AI systems struggle most. Even when they produce correct answers, they often cannot reliably explain why those answers are correct in a way that maps cleanly onto statutory or policy sources.
Eireplan is designed on the opposite premise: that every substantive output must be anchored to identifiable sources (specific policies, sections, decisions, or datasets) within their proper temporal and geographic scope. Where uncertainty exists, it must be surfaced, not smoothed over.
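As a rough illustration of what "anchored" means in practice, an answer can be treated as a structured object that carries its citations and open questions with it, rather than as free-standing prose. The sketch below is an assumption made for this post, not Eireplan's actual API.

```python
# Illustrative sketch only: an answer that carries its provenance explicitly.
# Class and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class SourceCitation:
    document: str          # e.g. a development plan or ministerial guideline
    section: str           # the specific policy, section or condition cited
    local_authority: str   # geographic scope of the source
    as_of: date            # the date for which the source was checked


@dataclass
class AnchoredAnswer:
    text: str
    citations: List[SourceCitation]
    open_questions: List[str] = field(default_factory=list)  # surfaced uncertainty

    def is_verifiable(self) -> bool:
        """An answer without at least one citation is treated as unusable."""
        return len(self.citations) > 0
```

The point is not the code itself but the discipline it encodes: if an output cannot say where it comes from, it does not leave the system as an answer.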
This approach reflects the realities highlighted in official analyses of the Irish planning system, which show rising complexity, increasing workloads, and significant invalidation rates linked to procedural and documentation issues.
Temporal and Geographic Context Are Not Edge Cases
One of the most common failure modes of generic AI tools in planning is their treatment of time and place as secondary details.
In practice, they are central.
A development plan that expired last year is not “mostly relevant”. A policy that applies in one county but not another cannot be treated as a general principle. Transitional provisions matter. Material contraventions matter. Ministerial guidelines issued after plan adoption matter.
Planning decisions are not made in the abstract; they are made on specific sites, under specific plans, at specific moments in time.
A planning knowledge engine must therefore be capable of:
- distinguishing current policy from superseded policy;
- understanding spatial applicability down to zoning and site context;
- and reflecting how interpretation evolves through decisions and appeals.
This is not something that can be reliably inferred from language alone. It requires structured data, explicit modelling of policy relationships, and constant maintenance as the planning framework evolves.
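One way to picture what that structured data enables is a simple applicability filter: before any language is generated, provisions that are superseded, outside the relevant local authority, or irrelevant to the site's zoning are excluded. The function and field names below are hypothetical, offered only to show the shape of the check.

```python
# Illustrative sketch only: filtering provisions by date and location before
# any text generation happens. Names are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class Provision:
    reference: str
    local_authority: str
    zoning: Optional[str]          # None if the provision is not zoning-specific
    effective_from: date
    superseded_on: Optional[date]  # None while the provision remains current


def applicable(provisions: List[Provision],
               local_authority: str,
               zoning: str,
               on: date) -> List[Provision]:
    """Keep only provisions in force on the given date for the given site context."""
    return [
        p for p in provisions
        if p.local_authority == local_authority
        and (p.zoning is None or p.zoning == zoning)
        and p.effective_from <= on
        and (p.superseded_on is None or on < p.superseded_on)
    ]
```

A check like this is trivial to state and impossible to guarantee from language alone, which is precisely why time and place have to be modelled explicitly rather than inferred.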
AI Does Not Replace Planners, and Should Not Try To
It is worth stating this plainly: AI does not replace planners.
Nor should it.
Planning is not a clerical exercise. It involves professional judgement, negotiation, ethical responsibility, and public accountability. These are not deficiencies to be automated away; they are the core of the profession.
The appropriate role of AI in planning is supportive, not substitutive.
Used well, AI can:
- reduce repetitive research across legislation, policy, and precedent;
- assist with structuring and cross-checking documentation;
- surface relevant considerations that might otherwise be missed under time pressure;
- and help professionals focus their expertise where it matters most.
Used poorly, AI can obscure responsibility, encourage over-confidence, and introduce subtle but serious errors into regulated processes.
Eireplan is built on the assumption that human accountability remains central. Any output generated within the platform is intended to be reviewed, edited, and ultimately owned by the planner. The system supports reasoning; it does not stand in for it.
From Chat Interfaces to Knowledge Infrastructure
Much of the current discourse around AI focuses on conversational interfaces. While dialogue is a useful mode of interaction, it is not a substitute for underlying intelligence.
A chat interface without a robust knowledge architecture behind it is, at best, a convenience layer. At worst, it is a confidence amplifier for unverified information.
Eireplan’s focus is therefore not on conversation for its own sake, but on building a planning-native intelligence layer that can be queried, explored, and cited. Natural language is simply one way of accessing that layer, not the defining feature.
This distinction is subtle but important. We are less interested in whether a system can “sound right” and more concerned with whether it can be right in context.
Human Accountability as a Design Principle
Every planning application, report, or submission ultimately bears a human name. That name carries professional and legal responsibility.
A planning tool that blurs this line, by presenting AI outputs as authoritative or self-sufficient, undermines professional practice rather than supporting it.
For this reason, Eireplan is designed to reinforce, not dilute, accountability. It makes sources explicit, assumptions visible, and gaps identifiable. It does not hide complexity behind fluent prose.
In doing so, it aligns with how experienced planners already work: assembling evidence, interpreting policy, exercising judgement, and standing over their conclusions.
A Measured View of What Comes Next
There is genuine potential for AI to improve how planning knowledge is accessed and applied. The pressures facing the Irish planning system (volume, complexity, time constraints) are real and well-documented. Tools that reduce friction without compromising rigour are not a luxury; they are increasingly necessary.
But progress in this space will be incremental, not spectacular. It will come from careful integration into professional workflows, respect for regulatory realities, and a clear understanding of where automation ends and judgement begins.
Eireplan is being built with that restraint in mind.
We are not trying to replace planners, override decision-makers, or short-circuit due process. We are building infrastructure: a knowledge engine that reflects how planning actually works, and that supports professionals in doing their work with greater clarity, confidence, and accountability.
In a system as consequential as planning, that is ambition enough.
