"Why Do We Even Need You?" — What Founders Miss About AI and Engineering

"Why Do We Even Need You?" — What Founders Miss About AI and Engineering

Wow, We're Not Going to Need You Soon

AI will write code. Humans will own what happens after.


Recently I was sitting in a meeting with a startup founder building an app for teachers. I was there as the "technical resource" — a phrase that secretly means "person who tempers optimism with reality." We'd been using AI coding tools to move fast, and they were working beautifully. The founder watched Claude generate a working feature in minutes and said, half-joking: "Wow, we're not going to need you pretty soon."

I laughed. Then I thought: "You're going to get in so much trouble."

It's a joke I hear a lot lately — and it makes sense. It's just also wrong in a very specific way.

The Part AI Actually Gets Right

Look, I'm not here to pretend AI coding tools aren't impressive. They are. GitHub Copilot, ChatGPT, Cursor, and the rest have legitimately changed how we build software. They excel at boilerplate, scaffolding, syntax recall, and turning clear requirements into functional code faster than I could type it myself.

This matters. When you're prototyping, experimenting, or building something new from scratch, AI can feel like magic. Research backs this up — studies show these tools genuinely boost productivity on routine tasks like code completion, bug detection, and optimization. I use them constantly. They're terrific.

For some things.

AI Loves a Clean Slate

AI coding shines in environments without history, context, or legacy assumptions. Give it a fresh whiteboard — a new demo, a prototype, a greenfield project with no constraints — and it's phenomenal. The problem is that what feels like magic in those ideal conditions often fails to capture what actually matters once reality shows up.

Research from MIT and collaborators confirms this: routine coding and basic generation tasks are increasingly achievable by AI, but high-level design thinking remains firmly in the human domain. AI can generate solutions. It cannot yet understand which solutions will survive contact with the real world.

This works great right up until the moment you learn something new.

The Nanny-Share App

Let me show you what I mean.

I was building an app for coordinating nanny-share arrangements. The data model seemed straightforward: Users belong to Families, Families belong to Pods (groups that share a nanny). Simple hierarchy. One-to-many relationships all the way down.

AI generated beautiful onboarding flows. Clean models. Elegant code. I felt productive.

Then we learned something: families often participate in multiple pods simultaneously. Different days, different arrangements, different nannies. The actual requirement was many-to-many, not one-to-many.

Suddenly the entire data model was wrong.
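
Here's the shape of that shift, as a minimal sketch in Convex's schema syntax. The table and field names are illustrative, not the app's actual schema. The original design hangs a single podId off each family; the corrected design pulls pods out on their own and records each arrangement in a join table.

```typescript
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  // Original assumption, one-to-many: each family points at exactly one pod.
  //
  //   families: defineTable({
  //     name: v.string(),
  //     podId: v.id("pods"),
  //   }),

  // Actual requirement, many-to-many: families and pods are independent,
  // and a membership table records each arrangement.
  families: defineTable({
    name: v.string(),
  }),
  pods: defineTable({
    name: v.string(),
  }),
  podMemberships: defineTable({
    familyId: v.id("families"),
    podId: v.id("pods"),
    // Real arrangements carry their own details: which days, which nanny.
    daysOfWeek: v.array(v.string()),
  })
    .index("by_family", ["familyId"])
    .index("by_pod", ["podId"]),
});
```

Swapping a field for a join table is the mechanical part. Finding every query that silently assumed one pod per family is the part that needs a human.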

This is where things get interesting. AI suggestions for the refactor missed edge cases constantly. They'd update one part of the schema but forget to handle backward compatibility. They'd fix the obvious problem while introducing subtle inconsistencies elsewhere. Every suggested change required me to audit the entire codebase manually to catch what got broken.

Independent analysis shows this pattern isn't unique to my project — AI-generated code tends to be more repetitive, structurally simpler, and more likely to introduce maintainability issues than human-authored solutions. The code works. It just doesn't think ahead.

This is the part of software development that only exists after you've already made mistakes.

Engineers Aren't Typists

Here's what that founder didn't understand: software engineers were never primarily valued for typing speed. The job has always been about architectural thinking, long-term maintainability strategy, and decision-making under uncertainty.

AI accelerates output. It doesn't reason about boundaries, constraints, trade-offs, or what the system will look like in six months when requirements change again.

As one analysis puts it: "AI coding tools can translate human requirements into functional code … but leaders will differ in how comfortable they are relying on AI over the insights of skilled developers." Translation: the output is easy. The judgment about which output to pursue is hard.

The role is shifting from task execution to system design and oversight. That's not a demotion — it's a clarification of what the job actually was all along.

AI Adds, Owners Subtract

Here's another example: I asked AI to build several screens with similar layouts. It generated working code for each one. The spacing was slightly different on every screen. Each view had its own layout logic, duplicated three times with minor variations.

A junior developer might ship that. A seasoned developer immediately thinks: "This is begging for a reusable component."
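
Here's what "subtract" looks like, as a minimal sketch assuming a React codebase. The component name and spacing values are mine, not the project's:

```tsx
import React from "react";

// One shared layout owns the spacing decisions the AI had re-implemented,
// slightly differently, on every screen.
type ScreenLayoutProps = {
  title: string;
  children: React.ReactNode;
};

export function ScreenLayout({ title, children }: ScreenLayoutProps) {
  return (
    <div style={{ padding: 24, display: "flex", flexDirection: "column", gap: 16 }}>
      <h1>{title}</h1>
      {children}
    </div>
  );
}

// Screens now declare only what's unique to them; a spacing change
// happens once, not three times with three chances to drift.
export function InvoicesScreen() {
  return (
    <ScreenLayout title="Invoices">
      {/* screen-specific content */}
    </ScreenLayout>
  );
}
```

The value isn't the component itself. It's noticing that the component should exist.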

Technical debt research confirms what every experienced engineer already knows — AI code frequently violates basic DRY principles and established design patterns, accelerating maintenance burden rather than reducing it. The tools generate solutions quickly. They don't naturally abstract, refactor, or consolidate.

AI adds. Owners subtract.

This isn't about AI being "bad" — it's about recognizing that shipping working code and maintaining coherent systems are fundamentally different activities.

The Convex Problem

I use Convex, a real-time database. It's reactive: when data changes, subscribed components update automatically. No polling required. That's the entire point of the tool.

AI kept suggesting polling implementations.

I'd correct it. It would apologize. Generate new code. Still include polling logic.

I'd remind it again: "Convex is reactive. We subscribe, not poll."

It would acknowledge this. Then generate code that... checked for updates on an interval.

This happened repeatedly across different sessions, different prompts, different approaches. The pattern-matching was stuck on what databases usually do, not what this specific database actually requires.
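
To make the gap concrete, here's a minimal sketch using Convex's React client. The messages query is a placeholder, not my actual code, and the AI's interval-based version appears as comments for contrast.

```tsx
import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";

// What the AI kept generating, in spirit: fetch again on a timer.
//
//   useEffect(() => {
//     const id = setInterval(fetchLatest, 5000);
//     return () => clearInterval(id);
//   }, []);

// What Convex expects: useQuery subscribes to the query, and the
// component re-renders automatically whenever the underlying data changes.
export function MessageList() {
  const messages = useQuery(api.messages.list); // undefined while loading
  if (messages === undefined) return <p>Loading...</p>;
  return (
    <ul>
      {messages.map((message) => (
        <li key={message._id}>{message.body}</li>
      ))}
    </ul>
  );
}
```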

Research shows developers frequently reject or heavily modify AI suggestions because they fail to meet functional or non-functional requirements. The AI recognizes patterns. It doesn't understand paradigms.

Humans don't just recall information — they weight it.

Technical Debt at Scale

AI doesn't inherently create technical debt. But it can supercharge it.

Every duplicated layout, every missed abstraction, every context-unaware suggestion that gets merged because it "works" compounds over time. Analysis from GitClear found significant increases in code duplication and "reverted code churn" in codebases using AI assistance — metrics that directly correlate with maintenance burden.

The code functions. The system degrades.

One study puts it bluntly: "AI-generated code can lead to violations of fundamental engineering principles like DRY, increasing both duplication and long-term maintenance cost." The short-term velocity gain becomes a long-term tax on every future change.

AI doesn't create debt — it accelerates it.

What Engineers Actually Do Now

The job hasn't disappeared. It's crystallized.

Modern engineering work increasingly focuses on:

  • Architectural decisions that AI can't evaluate
  • Trade-off analysis between competing approaches
  • Long-term system thinking beyond immediate requirements
  • Code review and quality enforcement at scale

Multiple studies show that while AI suggestions may boost initial productivity, they often impose heavier review and maintenance burdens on experienced developers. The work shifts from writing to judging, from generating to curating, from building to stewarding.

This isn't less valuable. If anything, it's more essential — because the volume of generated code requiring adult supervision has exploded.

Fear vs. Reality

Will some roles change? Yes. Junior positions focused purely on routine implementation may face pressure. Mundane tasks will continue to be automated.

But core engineering thinking isn't going away. The ability to understand system boundaries, anticipate edge cases, design for change, and make architectural decisions under uncertainty remains exclusively human territory.

AI doesn't replace the engineer. It reveals what real engineering work actually is.

Still Employed

That founder was half-right. We probably won't need engineers to type boilerplate much longer. We'll need them to decide what gets built, how it fits together, and whether it will still make sense when assumptions change.

Which they always do.

The joke assumes typing code was ever the valuable part. It wasn't. We're not being replaced — we're being reassigned to the work that actually mattered the entire time.

AI writes code. Humans own consequences.