This article was originally published on LinkedIn.

As software developers, we've spent decades building systems that forced us to be precise. SQL queries. API parameters. Configuration files. If you wanted something from a machine, you had to speak its language.

That friction was expensive. But it also served a purpose we rarely talked about: it made us slow down. Think. Validate our own intent before we could even express it.

That era is ending.

Large language models have done something remarkable. They've collapsed the gap between what we mean and what we can say. Natural language is now a valid interface. You don't need to translate your intent into syntax. You just... describe it.

And that sounds like progress. In many ways, it is.

But it's created an imbalance that most of us haven't fully reckoned with yet.

Here's the shift: intent expression has become trivially easy. Intent validation has not.

I can ask an LLM to write code that updates customer records based on a set of conditions. I can describe those conditions in plain English. The model will produce something that looks reasonable. Often, it runs without errors.

But does it do what I actually meant? Does it handle the edge cases I didn't think to mention? Does it respect the business rules I assumed were obvious?
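To make that concrete, here's a minimal sketch of the kind of thing that comes back - the function name, schema, and business rule are all invented for illustration:

```python
def upgrade_customers(customers, spend_threshold=1000):
    """Mark customers who spent over the threshold as 'preferred'."""
    for customer in customers:
        if customer["total_spend"] > spend_threshold:
            customer["tier"] = "preferred"
    return customers
```

It runs. It even reads cleanly. But is total_spend lifetime or trailing twelve months? Should a suspended account be upgraded at all? The code doesn't ask, because the prompt never said.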

Those questions are often just as hard to answer as they ever were. Maybe harder, because now I'm reviewing code I didn't write, against intent I didn't fully articulate.

The old friction wasn't just an obstacle. It was a forcing function.

When you had to translate your intent into precise syntax, you discovered the gaps in your own thinking. The compiler didn't just check your code. It checked your clarity. The process of expression was also a process of validation.

That coupling is now broken.

This is the fault line that LLMs have introduced. Not abstraction itself - we've had abstraction for decades. Not natural language interfaces - those have been around too. The fault line is the asymmetry between how easy it is to say what you want and how hard it is to verify that you got it.

And that asymmetry scales: the more we generate, the more we have to verify.

A developer who manually writes a function understands every line. They can trace the logic. They know what they assumed.

A developer who prompts an LLM to generate that function is now a reviewer. They have to validate code against intent they expressed in a form that was never rigorous to begin with.

That's a fundamentally different cognitive task. And most of us aren't trained for it.

I've seen this play out in practice.

Someone asks an LLM to generate a SQL query. The output looks right. The syntax is valid. The results come back. But there's a subtle filter that doesn't match what they actually needed - and they don't catch it because they were never forced to articulate that filter precisely in the first place.

The system worked. The intent was lost somewhere between expression and execution.
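Here's a hypothetical version of that kind of miss, with the table and column names invented for illustration:

```python
# What I meant: lapsed customers - no order in the last 90 days,
# including customers who have never ordered at all.
intended_query = """
SELECT customer_id, email
FROM customers
WHERE last_order_date < CURRENT_DATE - INTERVAL '90 days'
   OR last_order_date IS NULL
"""

# What came back: valid SQL, plausible results. But it silently drops
# the never-ordered customers, because NULL fails every comparison.
# Whether "lapsed" includes them is exactly the part I never said.
generated_query = """
SELECT customer_id, email
FROM customers
WHERE last_order_date < CURRENT_DATE - INTERVAL '90 days'
"""
```

The difference is one line. It's also the one piece of intent that was never written down.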

This isn't an argument against using LLMs. I use them constantly. They're genuinely useful.

But I've started thinking differently about where the work actually happens now.

The work used to be in expression. Getting your intent into a form the system could accept. Now the work is in validation. Verifying that what came back matches what you meant - including the parts you never said out loud.

That's a different skill. And it requires a different discipline.

What does that discipline look like?

It starts with treating generated output as a draft, not a deliverable. Every result needs review against the original intent - and that review has to be more rigorous than the prompt was.

It means articulating assumptions explicitly, even when the interface doesn't require it. The LLM will fill in gaps. You need to know what those gaps were.

It means designing prompts that do more than request output. Professional prompts ask the model to document its assumptions, trace data lineage, flag areas of uncertainty, run verification tests, and provide confidence assessments. The prompt itself becomes a specification - not just for what you want, but for the validation artifacts you need alongside it.
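As a rough sketch of what that can look like - the wording here is mine, not a template with any special standing:

```python
# Appended to a code-generation prompt so the output arrives with its
# own validation artifacts, not just the code.
VALIDATION_SUFFIX = """
Along with the code:
1. List every assumption you made that I did not state.
2. Identify which tables, columns, or fields the result depends on.
3. Flag any logic you are uncertain about, and say why.
4. Include tests for the cases you think are most likely to be wrong.
5. Rate your confidence that this matches the request, and justify it.
"""
```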

It means testing behavior, not just syntax. Validation isn't "does this run?" It's "does this do what I meant in the cases I care about?"
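Sticking with the hypothetical upgrade_customers sketch from earlier, behavioral validation looks less like "it ran" and more like this:

```python
def test_suspended_accounts_keep_their_tier():
    # Encodes an assumption the original prompt never stated: a suspended
    # account should not be upgraded, no matter how much it has spent.
    customers = [{"total_spend": 5000, "tier": "standard", "status": "suspended"}]
    result = upgrade_customers(customers)
    assert result[0]["tier"] == "standard"
```

Against the sketch above, this test fails - which is the point. The gap was always there; the test is just what forces it into the open.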

And it means accepting that faster expression doesn't mean faster delivery. The time you saved on the front end often shows up on the back end - in debugging, in rework, in edge cases you didn't anticipate.

The real risk isn't that LLMs abstract too much. Abstraction is fine. We've always built layers of abstraction to hide complexity.

The risk is that we mistake ease of expression for completeness of thought. That we assume the output is right because the input felt clear. That we let the fluency of the interface obscure the rigor that's still required.

Intent validation has always been the hard part. We just used to have systems that forced us to do it upfront.

Now we have a choice. And choices require discipline.

I'm not nostalgic for the old friction. I don't miss debugging syntax errors at midnight.

But I've learned to respect what that friction gave us: a built-in checkpoint between what we thought we wanted and what we actually asked for.

LLMs have removed that checkpoint. Which means we have to rebuild it ourselves - in our processes, in our habits, in the way we think about what "done" actually means.

That's the real work now. Not learning to prompt better. Learning to validate better.

And that's a skill that doesn't come for free.