
From Determinism to Ambiguity: An Engineer’s Unease with LLMs

Priyak Dey

Provenance & Disclosure

Epistemic Boundary

This post is the result of a long, uninterrupted conversation between me and an AI language model (ChatGPT 5.2), conducted in a single chat session with shared context. The model then summarized that conversation into this blog post. This is a second-draft synthesis; further iterations would alter the original conversational context.

  • The questions, objections, framing, and discomfort are mine.
  • The responses, expansions, and synthesis are generated by the model.
  • I edited only for structure and flow — not to change arguments or conclusions.

The medium is intentional.
This post is not just about AI — it is with AI.


A Familiar Pattern in Software Evolution

For decades, software engineering evolved along a remarkably consistent axis.

Classical software evolution:

  1. ASM → C → Java → Rust
  2. Manual memory → GC
  3. Hand-written SQL → ORM

What changed?

  • Abstraction level ↑
  • Productivity ↑
  • Error surface ↓ (usually)

What did not change?

  • Determinism
  • Referential transparency (given same inputs → same outputs)

This matters more than we usually admit.


Determinism Was the Contract

A compiler is a function: C(program, flags, environment) → binary

Compile the same program, with the same flags, on the same machine, and you expect the same output.
If you don’t get it, something is wrong — and crucially, that wrongness is diagnosable.
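
That contract can be checked directly. Here is a minimal sketch, assuming a source file main.c and a C compiler available as cc (real reproducible-builds work also pins flags, environments, and embedded timestamps):

    import hashlib
    import subprocess

    def build_hash(src: str, out: str) -> str:
        # Compile, then fingerprint the resulting binary.
        subprocess.run(["cc", src, "-o", out], check=True)
        with open(out, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Same program, same flags, same machine: the hashes should match.
    # If they don't, something is wrong, and that wrongness is diagnosable.
    assert build_hash("main.c", "a.out") == build_hash("main.c", "b.out")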

This property underpins:

  • debugging
  • testing
  • reasoning
  • reproducibility
  • trust in tools

We built entire engineering cultures around this assumption.


LLMs Are Not Deterministic Systems (by Design)

Large Language Models break this contract.

An LLM is:

  • A probabilistic sequence model
  • Sampling from a distribution: P(token | context)
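
As a toy illustration (the distribution below is invented, not taken from any real model), sampling the same context twice can yield different tokens, because drawing from the distribution is the mechanism:

    import random

    # Invented next-token distribution for one fixed context.
    dist = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

    def next_token(dist: dict[str, float]) -> str:
        # Draw a single token with probability proportional to its weight.
        return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

    # Same context, two draws: the outputs can differ.
    print(next_token(dist), next_token(dist))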

Even with temperature → 0 (greedy decoding), the output still depends on:

  • Floating-point arithmetic, which is non-associative
  • GPU kernels whose reduction order varies between runs
  • Tie-breaking artifacts in beam search and sampling
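
The first point is easy to demonstrate (a minimal sketch; real models hit this inside large parallel reductions, not three-element sums):

    # Floating-point addition is not associative: regrouping the same
    # numbers changes the result. GPU reductions regroup sums differently
    # between runs, so logits can drift even with greedy decoding.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0 (the 1.0 is absorbed before a cancels b)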

An LLM is not a function. It’s a stochastic process.

Ask the same question, in the same chat window, with the same context — and you can get different answers.

Not because something external changed.
But because probability is the mechanism.

To an engineer trained on deterministic systems, this feels like a regression.


“But Humans Are Non-Deterministic Too”

This is the most common response.

Code reviews vary.
Design opinions differ.
Two senior engineers can disagree on the same PR.

True — but incomplete.

Humans are stochastic.
Institutions are not supposed to be.

Organizations exist to reduce variance through:

  • style guides
  • checklists
  • standards
  • ownership
  • accountability

The real discomfort is not that LLMs are probabilistic.

The claim is not:

“LLMs are stochastic”

The claim is:

“We are injecting randomness into institutional knowledge systems
and calling it progress.”

That’s dangerous.


Automation on Top of Uncertainty

This concern becomes concrete when LLMs move from chat to automation.

Today they are used for:

  • PR reviews
  • code generation
  • CI automation
  • architectural suggestions

Now consider this:

The same PR, reviewed twice by the same model, can produce:

  • different comments
  • in different places
  • focusing on different concerns

This is not “human-like helpfulness”.

It is institutional incoherence.

A deterministic linter may be blunt, but it is predictable.
A stochastic reviewer feels intelligent — until you rely on it.
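
The contrast fits in a few lines. A toy sketch, with both “reviewers” below as invented stand-ins:

    import random

    PR_DIFF = "def f(x):\n    return x+1\n"

    def deterministic_lint(diff: str) -> list[str]:
        # Same diff in, same findings out, every run.
        return ["E225: missing whitespace around operator"] if "x+1" in diff else []

    def stochastic_review(diff: str) -> str:
        # Stand-in for an LLM reviewer: plausible comments, but sampled.
        return random.choice([
            "nit: add a docstring",
            "consider a more descriptive name than f",
            "LGTM",
        ])

    print(deterministic_lint(PR_DIFF) == deterministic_lint(PR_DIFF))  # always True
    print(stochastic_review(PR_DIFF), "|", stochastic_review(PR_DIFF))  # may disagree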

Where LLMs Actually Make Sense

Here is the real answer:

LLMs are valuable precisely where determinism was never achievable in the first place, and where speed and breadth are what matter.

These include:

  • exploration
  • brainstorming
  • hypothesis generation
  • explanation
  • summarization
  • pattern recall
  • “what should I look at next?”

In these domains, humans were already the bottleneck — slow, inconsistent, expensive.

LLMs compress heuristics, not truth.


The Authority Boundary Problem

The core architectural mistake is subtle but critical.

Bad design: LLM → decision

Correct design:

  • LLM → suggestions
  • Deterministic systems → verification
  • Humans → judgment

The moment a stochastic system crosses the authority boundary — deciding what matters, what passes, what is approved — randomness becomes policy.

And policy should not be sampled.
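
As a sketch of that division of labor (the function names are hypothetical), the LLM’s output is advisory input to deterministic gates and a human decision, never the decision itself:

    def llm_suggest(diff: str) -> list[str]:
        # Stochastic layer: ideas, not verdicts (placeholder output).
        return ["Consider extracting the parsing logic into a helper."]

    def deterministic_gates(diff: str) -> bool:
        # Deterministic layer: linters, type checks, tests (stubbed here).
        return "TODO" not in diff

    def review(diff: str) -> str:
        suggestions = llm_suggest(diff)    # advisory only
        if not deterministic_gates(diff):  # hard, reproducible gate
            return "blocked by checks"
        print("for the human reviewer:", suggestions)
        return "awaiting human approval"   # the model never approves

    print(review("def parse(s):\n    return s.split(',')\n"))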


The Forgery Analogy

Defenders often say:

“The model doesn’t copy. It learns style and structure, like humans do.”

Taken seriously, this has consequences.

Forgery law already solved this problem.

A forged painting is illegal even if:

  • the canvas is new
  • the paint is new
  • no pixels are copied
  • the work is technically “original”

Forgery is illegal not because of copying, but because of substitution without provenance.

You may imitate a style.
You may not replace the source of value.

LLMs sit in a legal and ethical gap largely because the law has not caught up to industrial-scale imitation.

Scale changes ethics. It always has.


The Quiet Cost: Abstraction Atrophy

There is a deeper risk that does not show up in benchmarks.

Abstractions are born from friction:

  • writing forces thinking
  • debugging forces understanding
  • constraints force creativity

When LLMs reduce cognitive friction too aggressively (auto-designing, auto-reviewing, auto-summarizing), they also reduce the pressure that creates new abstractions.

The danger is not wrong answers.

The danger is shallow fluency replacing internal models.

We’ve seen this before:

  • calculators weakened number sense
  • GPS weakened spatial reasoning

LLMs operate at a much higher cognitive layer.


This Is Not an Anti-AI Argument

This is not a rejection of LLMs.

It is a demand for correct placement.

LLMs are powerful tools — but terrible authorities.

They should:

  • assist
  • suggest
  • explore
  • accelerate

They should not:

  • decide
  • approve
  • enforce
  • replace skill-forming work

Progress is not speed alone.
Progress is speed without losing the ability to reason about what we built.


Why This Post Exists

This post was written with an AI system because critiquing a tool without using it would be unserious — and treating it as an oracle would be dishonest.

The conversation itself demonstrates both:

  • the power of LLMs as thinking amplifiers
  • and the discomfort around determinism, authorship, and abstraction

That tension is the point.


The Real Question

The question isn’t:

“Can AI write code?”

It’s:

“Who is allowed to decide what matters?”

Until that boundary is drawn clearly, skepticism is not obstructionism.

It is professionalism.

