AI Changed How I Write Code—Then It Changed How I Think About Backends

Or: How I became the person I used to silently judge in architecture reviews

Engineers are like hoarders, but for patterns. We pick up mental models that served us well once, and then we clutch them like a security blanket through every job change, every framework migration, every paradigm shift. It takes actual conscious effort to put them down. Sometimes it takes getting slapped in the face with evidence that we've become the problem.

I know this because I recently caught myself doing it. Again. And this time, I had an AI assistant helping me do it faster.

The DoorDash Years (Or: GraphQL As a Very Expensive HTTP Wrapper)

Back at DoorDash, I watched a fascinating pattern emerge in real time. GraphQL had just been adopted across the organization, and engineers were absolutely determined to make it behave exactly like REST.

Queries were designed to mirror existing service boundaries. Resolvers were basically one-to-one mappings to the endpoints they replaced. The schema looked like someone had taken the Swagger docs, removed the HTTP verbs, and called it a day. GraphQL's whole compositional model—the thing that makes it GraphQL and not just "REST with extra steps"—was being actively suppressed.

This is like buying a chainsaw and using it exclusively as a paperweight. Technically valid. Spiritually devastating.

The result was predictable: GraphQL became a stricter transport layer instead of a compositional one. All the overhead of maintaining a schema, none of the benefits of flexible queries. The infrastructure team eventually started exploring alternatives—not because GraphQL was ineffective, but because it was being used incorrectly at scale.

Here's the uncomfortable part: teaching people how to unlearn REST-style thinking turned out to be more expensive than just replacing the tool entirely. Sometimes teams discard a more powerful abstraction rather than adapt how they use it. I watched this happen and felt very smart and superior about it, like an anthropologist observing a fascinating but doomed tribe.

And then, years later, I became one of them.

The Mirror Moment (Character Growth, Delayed)

I've been working with Convex lately—one of those modern backends-as-a-service platforms that handles all the infrastructure stuff so you can focus on building features. Convex gives you strongly typed queries and mutations, real-time subscriptions out of the box, automatic caching, and server-side composition. It's genuinely good.

And I immediately tried to make it behave like a traditional three-tier architecture.

I designed rigid service layers. I treated backend functions like formal APIs with strict input/output contracts. I was optimizing for hypothetical future consumers who might want to call these endpoints. I was building interfaces for clients that didn't exist and probably never would.

The kicker? I was using Cursor to do all this, and the AI kept suggesting simpler, more direct approaches. "Just return what the component needs," it would essentially say. And I kept overriding it, adding abstraction layers, making things "proper." I felt disciplined doing it. I felt like I was being a Serious Engineer Who Doesn't Cut Corners. I was applying lessons I'd learned the hard way over years of building systems that scaled.

Except I wasn't building systems that scaled. I was building prototypes. I was building MVPs. I was building things where the only consumer was me, right now, and the primary risk wasn't that my service layer wouldn't accommodate the needs of ten different clients—it was that I'd run out of runway before proving the product worked at all.

I was being the GraphQL-as-REST guy. I had become the thing I once observed with detached academic interest. And I was mass-producing bad architecture at 10x speed thanks to AI assistance.

The Actual Insight (Finally)

Here's what took me embarrassingly long to internalize: for startups, infrastructure has become reliable, scalable, and largely turnkey. The hard problems have moved. And this changes where discipline is most valuable.

The traditional mental model treats durability as the priority everywhere. Design your database for correctness and extensibility. Design your service layers with the same rigidity. Build generic endpoints meant to satisfy all possible consumers. Everything should be robust, everything should be abstracted, everything should be Future-Proof™.

This made sense when infrastructure was hard. When spinning up a database was expensive. When changing a schema meant a deployment that could go sideways. When service boundaries were load-bearing decisions you'd live with for years.

But if you're using something like Convex—or Supabase, or Firebase, or any of the modern backends that handle scaling and reliability for you—the cost structure has shifted. You're no longer paying a huge price for flexibility at the infrastructure level. You're paying for premature abstraction instead.

AI-assisted coding makes this even more true. When Cursor can generate a new endpoint in 30 seconds, the economics of "build only what you need right now" change dramatically. The old calculus—spend extra time now to avoid rework later—breaks down when "extra time" shrinks to nearly zero and "rework" becomes trivially cheap.

The split I eventually landed on:

Data models should still be thoughtfully structured and extensible. This is where discipline pays dividends. Clear entity boundaries. Room for growth. Consistent invariants. Your database schema is a long-lived asset. Treat it that way.

Service layers should be flexible, targeted, and client-aware. This is where discipline becomes friction. Backend functions are changeable interfaces. They exist to serve actual clients, not theoretical ones. Build what you need right now. Add new ones when the need changes.

Durability at the data layer. Flexibility at the service layer. Stop conflating them.
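
To make the durable half concrete, here's a minimal sketch of what that discipline can look like in a Convex schema. The data model is hypothetical (a users/todos app I'm inventing for illustration); the table names, fields, and indexes aren't from any real project.

```typescript
// convex/schema.ts — the durable layer: structured, indexed, extensible.
// Table names and fields are hypothetical, for illustration only.
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  users: defineTable({
    name: v.string(),
    email: v.string(),
  }).index("by_email", ["email"]),

  todos: defineTable({
    userId: v.id("users"), // clear entity boundary: every todo belongs to a user
    text: v.string(),
    completed: v.boolean(),
    // room for growth: optional fields can be added later without breaking existing rows
    dueDate: v.optional(v.number()),
  }).index("by_user", ["userId"]),
});
```

Changes to this file are the ones worth slowing down for. The functions built on top of it are not, as the next section shows.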

What This Actually Looks Like

In practice, this means something like the backend-for-frontend pattern, but without the guilt.

When I know who my client is—which, at the early stages, is always just me and my one frontend—I can design backend functions that return exactly what that client needs. No overfetching, no underfetching, no elaborate transformation logic on the frontend. The endpoint exists to serve a specific screen. When the screen changes, the endpoint changes. It's fine. Endpoints are cheap.

This is terrifying if you come from a world where creating a new endpoint meant coordinating across teams, updating documentation, maintaining backwards compatibility, and probably sitting through a design review. In that world, you learn to make endpoints generic because changing them is expensive.

But that's not the world anymore. Or at least, it doesn't have to be. Especially when you can tell Cursor "make me an endpoint that returns exactly what this component needs" and have working code in under a minute.
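
Built on a schema like the one above, a screen-specific Convex query might look like the sketch below. The function name, fields, and return shape are hypothetical; the point is that it returns exactly what one screen renders and nothing else.

```typescript
// convex/todoScreen.ts — a flexible-layer function shaped for exactly one screen.
// Hypothetical names; the shape is the point, not the specifics.
import { query } from "./_generated/server";
import { v } from "convex/values";

export const getTodoScreen = query({
  args: { userId: v.id("users") },
  handler: async (ctx, { userId }) => {
    const user = await ctx.db.get(userId);
    const todos = await ctx.db
      .query("todos")
      .withIndex("by_user", (q) => q.eq("userId", userId))
      .collect();

    // Return exactly what the screen renders: no generic "Todo resource",
    // no fields the UI doesn't use. If the screen changes, this changes.
    return {
      greeting: user ? `Hi, ${user.name}` : "Hi there",
      openCount: todos.filter((t) => !t.completed).length,
      todos: todos.map((t) => ({ id: t._id, text: t.text, completed: t.completed })),
    };
  },
});
```

If a second screen needs a different slice of the same data, it gets its own small function rather than a parameter-laden generic one.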

Benefits I've actually experienced from loosening up:

  1. Frontend complexity drops dramatically: When your backend returns exactly what your UI needs, you stop writing elaborate data transformation code in your React components. The component receives props. The component renders props. The component doesn't need to understand the relationship between six different entities.
  2. Iteration speed increases: When changing an endpoint is as easy as changing a function—and AI can help you do it in seconds—you stop treating endpoint design as a load-bearing decision. You try things. You adjust. You discover what the interface should be through use rather than speculation.
  3. Speculative abstraction disappears: You stop building for consumers that don't exist. This feels irresponsible at first—aren't we supposed to plan for the future?—but it turns out most speculative abstractions are wrong anyway. You're not saving future-you from rework. You're creating present-you extra work and future-you rework when the speculation doesn't pan out.

The useState Trap (A Brief Detour for React Developers)

If you're a fullstack dev living primarily in React-land, there's a related mental model worth examining: the tendency to think of "state" as something that lives inside components.

useState is great. useReducer is great. Context is fine when used appropriately. But here's the thing: by the time you're managing complex state in your React components, you've often already lost.

The question isn't just "where does this state live in my component tree?" The question is "where does this state live in my system?"

This is worth calling out because AI coding assistants tend to reach for useState by default. It's what most of their training data shows. If you prompt Cursor or Copilot to build a feature, you'll often get local state management even when a better architecture exists. The AI reflects the patterns it learned from—and most codebases it learned from weren't using real-time backends. Teaching yourself to think about state end-to-end means you can guide the AI toward better solutions instead of accepting the default.

Consider: you have a list of items that a user can create, edit, and delete. The traditional React-brained approach is to fetch the data on mount, store it in local state, optimistically update that state when the user takes actions, and maybe sync it back to the server. You end up with elaborate state management—loading flags, error states, optimistic update logic, cache invalidation ceremonies.

But if your backend supports real-time subscriptions (Convex does, and so do others), that state doesn't need to live in your component. The database is your state. The component subscribes to it. When the data changes—whether from this client, another client, or a server-side process—the component re-renders with the new truth.
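
As a sketch of what that looks like with Convex's React hooks, reusing the hypothetical getTodoScreen query from earlier: there's no fetch-on-mount effect, no manual cache to invalidate, and the only "loading state" left is the query's initial undefined.

```tsx
// TodoScreen.tsx — the component subscribes; the database is the state.
// Reuses the hypothetical getTodoScreen query sketched above.
import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";
import type { Id } from "../convex/_generated/dataModel";

export function TodoScreen({ userId }: { userId: Id<"users"> }) {
  // A live subscription: re-renders whenever the underlying data changes,
  // whether this client, another client, or a server-side process changed it.
  const data = useQuery(api.todoScreen.getTodoScreen, { userId });

  if (data === undefined) return <p>Loading…</p>;

  return (
    <div>
      <h1>{data.greeting}</h1>
      <p>{data.openCount} open</p>
      <ul>
        {data.todos.map((t) => (
          <li key={t.id}>{t.completed ? "✓ " : ""}{t.text}</li>
        ))}
      </ul>
    </div>
  );
}
```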

This is what I mean by thinking about state end-to-end. The traditional split is: "database is for persistence, component state is for UI." But modern infrastructure blurs this. When your backend can push updates in real-time with negligible latency, the database becomes the single source of truth for way more than it used to be.

The implications ripple through your architecture:

Local state shrinks dramatically. If the server can tell you what's true and update you instantly when it changes, you stop caching server state locally. useState becomes what it was always supposed to be: UI state. Is the modal open? Is the user hovering? What's the current form input before submission? That's local state. The list of todos is not local state. It's server state that you happen to be displaying.

Optimistic updates become optional, not mandatory. This is most intuitive in web applications built with frameworks like Next.js, but regardless of where your UI is rendered, when server round-trips are fast enough you often don't need to pretend the action succeeded before confirming it. This eliminates entire categories of state management complexity—no more reconciling optimistic state with server responses, no more rollback logic when things fail.

Your mental model shifts from "data flows down, events flow up" to "data flows from the database, events flow to the database." The component tree becomes a projection of database state, not a parallel copy of it.

This doesn't mean local state disappears entirely. Form inputs in progress, UI chrome, animation states—these still live in components. But the application state, the stuff your users actually care about persisting, lives in the database. And if your infrastructure lets you subscribe to that state in real-time, you get reactivity without the state management overhead.
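
For example, a small add-todo form might keep only the in-progress input in useState and let the subscribed list above update itself once the mutation commits, so there's no optimistic copy to reconcile and no rollback path. The api.todos.add mutation is a hypothetical name I'm assuming exists on the backend.

```tsx
// AddTodo.tsx — useState for genuine UI state only; application state stays in the database.
// Assumes a hypothetical `api.todos.add` mutation on the backend.
import { useState, type FormEvent } from "react";
import { useMutation } from "convex/react";
import { api } from "../convex/_generated/api";
import type { Id } from "../convex/_generated/dataModel";

export function AddTodo({ userId }: { userId: Id<"users"> }) {
  const [text, setText] = useState("");       // UI state: the draft input before submission
  const addTodo = useMutation(api.todos.add); // hypothetical mutation

  async function handleSubmit(e: FormEvent<HTMLFormElement>) {
    e.preventDefault();
    await addTodo({ userId, text }); // no optimistic update, no rollback logic:
    setText("");                     // the subscribed list re-renders on its own
  }

  return (
    <form onSubmit={handleSubmit}>
      <input value={text} onChange={(e) => setText(e.target.value)} />
      <button type="submit">Add</button>
    </form>
  );
}
```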

I spent years building elaborate Redux stores to manage server state on the client. TanStack Query was a revelation because it made caching and synchronization automatic. But the real paradigm shift is realizing that with the right backend, you might not need to cache server state at all. You just... subscribe to it.

Think about your state end-to-end. Not "where in my component tree" but "where in my system." The answer might simplify things more than you expect.

When Generalization Actually Makes Sense

I'm not arguing for chaos. There are real cases where shared, generalized services earn their keep.

When you have multiple distinct client types that genuinely need the same functionality. When you're exposing endpoints to external consumers who can't just ask you to change things. When you need to provide platform-level guarantees about behavior consistency.

The key distinction: these emerge from reality, not anticipation.

When you find yourself copy-pasting the same query logic into your third backend function, that's a signal. When an external partner asks for API access and you realize your current functions assume internal context, that's a signal. When your mobile app and web app need subtly different transformations of the same underlying data, that's a signal.

Generalization becomes an optimization step, not a starting point. You earn the right to abstract by proving the abstraction is real.
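
In Convex, that optimization step can be as small as extracting a plain helper once the same read shows up in a third function, while each caller keeps its own return shape. A sketch, with hypothetical names:

```typescript
// convex/todoHelpers.ts — a helper extracted only after the same query logic
// appeared in a third function. Names are hypothetical.
import { query, type QueryCtx } from "./_generated/server";
import { v } from "convex/values";
import type { Id } from "./_generated/dataModel";

export async function todosForUser(ctx: QueryCtx, userId: Id<"users">) {
  return ctx.db
    .query("todos")
    .withIndex("by_user", (q) => q.eq("userId", userId))
    .collect();
}

// Consumers share the underlying read but return different shapes,
// each tailored to the screen that actually calls it.
export const weeklyStats = query({
  args: { userId: v.id("users") },
  handler: async (ctx, { userId }) => {
    const todos = await todosForUser(ctx, userId);
    return { total: todos.length, done: todos.filter((t) => t.completed).length };
  },
});
```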

The Convex-Specific Stuff (Since I Mentioned It)

Convex happens to be the system I'm using, so I'll be specific about how this plays out there.

Convex gives you strongly typed schemas with runtime validation. Your data model has structure and constraints. Changes to the schema are intentional, reviewed, and versioned. This is the durable layer—exactly where you want discipline.

Convex also gives you server-side functions that run next to your data. Creating a new query or mutation is literally writing a TypeScript function. There's no deployment ceremony, no infrastructure to provision, no coordination overhead. This is the flexible layer—exactly where you want speed.

My initial instinct was to treat those server-side functions like microservice endpoints. Design them generically. Document their contracts. Build them to last.

My adjusted approach: design them for the client that exists. Usually that's a specific React component on a specific screen. When that screen changes, the function can change. When I need to share logic, I extract it into a helper. When I need a new consumer, I can build a new function that shares underlying queries but returns a different shape.

Convex doesn't mandate this philosophy—you can absolutely build rigid service layers if you want. But the system rewards flexibility. The cost of creating or changing a function is so low that you're only hurting yourself by over-engineering.

Why This Matters More in the AI Era

Here's the thing about AI-assisted coding that doesn't get discussed enough: AI amplifies whatever mental model you bring to it.

If you're thinking in old patterns—rigid service layers, speculative abstractions, premature generalization—AI helps you build the wrong thing faster. You can generate elaborate boilerplate at superhuman speed. You can create beautifully documented interfaces for clients that will never exist. You can architect yourself into a corner with unprecedented efficiency.

But if you've internalized the right patterns—durable data, flexible services, build for the client that exists—AI becomes a genuine multiplier. You can iterate on endpoints as fast as you can describe what you need. You can experiment with different data shapes without the friction that used to make experimentation expensive. You can treat your service layer as the malleable interface it should be.

The teams who figure this out will ship circles around the teams who don't. Not because they're using AI and others aren't—everyone's using AI now—but because they're using AI to build the right things instead of the wrong things faster.

Why Early-Stage Teams Should Care

If you're at an early-stage company, you are in the business of learning what works. Every day you spend building infrastructure for scale you haven't achieved is a day you're not validating the thing that determines whether you'll achieve scale at all.

This isn't an argument against good architecture. Thoughtful schemas pay dividends. Type safety catches bugs. Real-time subscriptions improve UX. All of that matters.

But over-optimizing your service layers delays learning. Every hour spent designing a generic endpoint is an hour not spent discovering whether customers want the feature the endpoint serves.

Thoughtful schemas plus flexible endpoints: this keeps your options open, preserves correctness, and maximizes speed. You can always add abstractions later when you understand what needs abstracting. You can't easily recover the time you spent building abstractions you didn't need.

The Uncomfortable Epilogue

Look, I'm not proud of how long it took me to adjust my mental model. I literally watched this play out with GraphQL at DoorDash. I saw what happens when engineers force old patterns onto new tools. I understood, intellectually, that the problem was cognitive inertia.

And then I did the exact same thing. With an AI assistant making me more productive at doing it wrong.

The difference between knowledge and wisdom, apparently, is that knowledge is understanding the pattern and wisdom is noticing when you're in it. I had the knowledge. The wisdom came later, after I'd already burned a bunch of time over-engineering service layers for an app that might never have more than a hundred users.

So here's the takeaway, stated plainly:

Structure your data with care. Let your service layer respond to reality. Modern infrastructure makes this separation viable—probably more viable than it's ever been. And AI-assisted development makes the flexible approach not just viable but obviously correct, because the cost of "just build what you need now" has dropped to nearly nothing.

The teams who embrace this ship faster, refactor with confidence, and avoid throwing away good tools out of frustration.

And if you catch yourself designing elaborate abstractions for clients that don't exist—or prompting your AI to generate them—maybe take a breath. Ask if you're being disciplined or just performing discipline.

I'm still working on the difference myself.
