
If AI Writes Code, Where Do Guardrails Live? A Conjecture


Guardrails in software engineering are often misunderstood.

Most engineers associate guardrails with:

  • null checks

  • validations

  • defensive coding

That is only a small part of the story.

A more accurate definition is:

Guardrails are constraints placed at trust boundaries to ensure correctness is preserved.

This definition becomes far more important when we move from human-written systems to AI-assisted systems.


Guardrails Today (Human-written Systems)

In traditional systems, we place guardrails at edges:

  • HTTP requests

  • Kafka events

  • third-party APIs

  • legacy systems

Why?

Because these are places where:

we lose control over correctness

So we:

  • validate inputs

  • enforce invariants

  • fail fast

Once data crosses this boundary and is accepted:

  • we treat it as valid

  • we avoid re-validating deep inside business logic

If you see null checks everywhere inside your core logic:

  • your boundaries are weak, or

  • your contracts are unclear
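A minimal sketch of this edge-validation pattern in Java, assuming a hypothetical `OrderRequest` arriving over HTTP or Kafka; the type names and fields are invented for illustration:

```java
// Edge guardrail: validate an incoming payload once, at the boundary,
// then hand a trusted domain object to the core logic.
public final class OrderBoundary {

    // Raw shape as it arrives at the edge (HTTP body, Kafka event, ...).
    record OrderRequest(String customerId, int quantity) {}

    // Trusted domain object: constructing one enforces the invariants,
    // so code that holds an Order never re-validates these fields.
    record Order(String customerId, int quantity) {
        Order {
            if (customerId == null || customerId.isBlank()) {
                throw new IllegalArgumentException("customerId is required");
            }
            if (quantity <= 0) {
                throw new IllegalArgumentException("quantity must be positive");
            }
        }
    }

    // Fail fast at the edge; invalid data never crosses the boundary.
    static Order accept(OrderRequest req) {
        return new Order(req.customerId(), req.quantity());
    }

    public static void main(String[] args) {
        Order ok = accept(new OrderRequest("c-42", 3));
        System.out.println("accepted=" + ok.customerId());
        try {
            accept(new OrderRequest(null, 3));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected=" + e.getMessage());
        }
    }
}
```

The point is where the check lives: once `accept` succeeds, the rest of the system works with `Order`, not `OrderRequest`, and no null checks are needed downstream.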


Trust Boundaries (The Core Idea)

The key concept is:

Trust boundaries often coincide with system edges, but they really exist wherever you cannot fully trust the data or behavior.

Even inside your codebase:

  • legacy systems

  • unclear modules

  • unsafe dependencies

…are also trust boundaries.

They should be:

  • isolated

  • validated at entry

  • normalized into a clean domain model

Sometimes this isolation is done via an adapter (anti-corruption layer), especially when:

  • models differ

  • contracts are unstable

  • you want to protect your domain

Not always — only when needed.
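One way such an adapter can look in Java, assuming a hypothetical legacy customer record with a stringly typed status field (the "A"/"S" encoding is invented for illustration):

```java
// Anti-corruption layer sketch: a legacy record is validated at entry
// and normalized into a clean domain model. Legacy quirks stop here.
public final class LegacyCustomerAdapter {

    // Shape as the legacy system exposes it: loose, nullable, stringly typed.
    record LegacyCustomerRecord(String id, String status) {}

    enum CustomerStatus { ACTIVE, SUSPENDED }

    // Clean domain model the rest of the system depends on.
    record Customer(String id, CustomerStatus status) {}

    static Customer toDomain(LegacyCustomerRecord rec) {
        if (rec.id() == null || rec.id().isBlank()) {
            throw new IllegalStateException("legacy record missing id");
        }
        // The (hypothetical) legacy encoding: "A" = active, "S" = suspended.
        CustomerStatus status = switch (rec.status()) {
            case "A" -> CustomerStatus.ACTIVE;
            case "S" -> CustomerStatus.SUSPENDED;
            default -> throw new IllegalStateException("unknown status: " + rec.status());
        };
        return new Customer(rec.id(), status);
    }

    public static void main(String[] args) {
        System.out.println(toDomain(new LegacyCustomerRecord("c-7", "A")));
    }
}
```

The domain model never learns that "A" ever meant active; if the legacy contract changes, only the adapter changes.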


Enter AI: What Changes?

When AI starts contributing to software development, something fundamental shifts.

AI is not just generating code.
It is increasingly used to:

  • suggest designs

  • shape abstractions

  • influence system structure

So the question is no longer just:

“Is the code correct?”

It becomes:

“Is what we designed and built aligned with what we actually intended?”


The New Trust Boundary

Previously, the primary trust boundaries were:

  • external inputs

  • external systems

Now, another boundary emerges:

The gap between what we want to build (intent) and what actually gets designed and built (AI output).

Even if everything “works”, that does not guarantee:

  • correct abstractions

  • appropriate boundaries

  • the right level of generalization

This is where subtle but important issues appear.


Validation in AI-driven Systems

Correctness is no longer just about code.

We now validate across layers:

  • Design against intent

  • Implementation against design (and intent)

Each layer is checked against the one above it.

This is not entirely new — but it becomes much more critical when:

  • design is partially generated

  • implementation is automated

  • intent is not explicitly captured


Guardrails in AI-driven Development

Guardrails are no longer just about code-level checks.

They exist at multiple levels:


1. Specification Guardrails

Clear definitions of:

  • requirements

  • acceptance criteria

  • invariants

  • non-goals

Without this:

AI produces plausible but incorrect systems.
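A sketch of turning one such invariant into an executable check, assuming a hypothetical pricing rule ("a discounted price is never negative and never exceeds the list price"); the names and numbers are illustrative:

```java
// Specification guardrail sketch: an invariant written down as an
// executable check rather than prose, so generated code that violates
// it fails loudly instead of shipping a plausible-looking bug.
public final class PricingInvariants {

    static long applyDiscount(long listPriceCents, int discountPercent) {
        long result = listPriceCents - (listPriceCents * discountPercent / 100);
        // The invariant from the spec, enforced where the value is produced.
        if (result < 0 || result > listPriceCents) {
            throw new IllegalStateException("discount invariant violated: " + result);
        }
        return result;
    }

    public static void main(String[] args) {
        // 10% off a 10.00 price, in cents.
        System.out.println(applyDiscount(1000, 10));
    }
}
```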


2. Design Guardrails

Humans define:

  • system boundaries

  • data ownership

  • API contracts

  • domain models

AI can assist, but:

it does not reliably choose the right abstractions for your context.


3. Verification Guardrails

Tests and validation become critical:

  • unit tests

  • integration tests

  • contract tests

But beyond tests:

ensuring alignment between intent, design, and implementation becomes essential.
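A framework-free sketch of a contract test in Java: the contract for a hypothetical `KeyValueStore` is written once and run against any implementation, human-written or generated. (A real project would express this with JUnit; the shape is the same.)

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Contract-test sketch: the interface's behavioral contract is encoded
// as checks that every implementation must pass.
public final class KeyValueContractTest {

    interface KeyValueStore {
        void put(String key, String value);
        Optional<String> get(String key);
    }

    static final class InMemoryStore implements KeyValueStore {
        private final Map<String, String> data = new HashMap<>();
        public void put(String key, String value) { data.put(key, value); }
        public Optional<String> get(String key) { return Optional.ofNullable(data.get(key)); }
    }

    // The contract, expressed once, run against any implementation.
    static void verifyContract(KeyValueStore store) {
        store.put("k", "v");
        if (!store.get("k").equals(Optional.of("v"))) {
            throw new AssertionError("get after put must return the value");
        }
        if (!store.get("missing").isEmpty()) {
            throw new AssertionError("get of an absent key must be empty");
        }
    }

    public static void main(String[] args) {
        verifyContract(new InMemoryStore());
        System.out.println("contract holds");
    }
}
```

If an AI regenerates `InMemoryStore`, or swaps it for a Redis-backed one, the same `verifyContract` still decides whether the implementation matches the intended contract.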


4. Operational Guardrails

At runtime:

  • monitoring

  • rate limiting

  • circuit breakers

  • feature flags

  • rollback mechanisms

Because:

failures can propagate faster when systems are generated quickly.
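A minimal circuit-breaker sketch to illustrate one such runtime guardrail; a production system would typically reach for a library such as Resilience4j rather than hand-rolling this:

```java
import java.util.function.Supplier;

// Operational guardrail sketch: after a threshold of consecutive
// failures, stop calling the failing dependency and return a fallback,
// containing the blast radius instead of letting failures propagate.
public final class SimpleCircuitBreaker {

    private final int failureThreshold;
    private int consecutiveFailures = 0;

    SimpleCircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    boolean isOpen() {
        return consecutiveFailures >= failureThreshold;
    }

    // Run the call unless the breaker is open; track failures.
    <T> T call(Supplier<T> action, T fallback) {
        if (isOpen()) {
            return fallback; // fail fast instead of hammering the dependency
        }
        try {
            T result = action.get();
            consecutiveFailures = 0; // success resets the count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("dependency down"); };
        breaker.call(failing, "fallback");
        breaker.call(failing, "fallback");
        System.out.println("open=" + breaker.isOpen()); // prints open=true after two failures
    }
}
```

(Real breakers also reopen probes after a cooldown; that half-open state is omitted here to keep the sketch short.)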


The Role of Humans

There is a common narrative:

“AI will replace software engineers.”

A more grounded view is:

The role of engineers shifts toward defining and enforcing guardrails.

Even today, engineers:

  • define problems

  • design systems

  • decide trade-offs

AI changes how much we implement directly.

It does not remove the need to:

  • define intent

  • ensure alignment

  • validate outcomes


The Deeper Shift

Today:

  • code is the primary artifact

  • design is often implicit

With AI:

  • design and intent need to be explicit

  • code becomes generated detail


Final Thought

Guardrails are not about who writes code.
They are about where you cannot afford to be wrong.

AI does not remove trust boundaries.

It introduces a new one:

Intent ↔ what gets designed and built

And because of that:

Guardrails become more important, not less.


One-line takeaway

In AI-driven systems, correctness is enforced by validating design and implementation against intent across trust boundaries.

Backend Software Engineering with Krishna Kumar Mahto


Backend Software Engineer, focused on Java-based backend applications, PostgreSQL for databases, and Kafka/ActiveMQ/RabbitMQ for messaging.