
My Code Has a Co-Author Now

What does it actually feel like to write code alongside AI every day? Not the productivity stats — the strange, honest, sometimes unsettling experience of building software with something that thinks differently than you do.

Tags: ai · coding · pair-programming · developer-experience · engineering · reflection


Last week I was building live WebSocket connections for oracle price feeds. The kind of thing that usually takes me a full afternoon — setting up the socket, handling reconnection logic, parsing the incoming data, debouncing the UI updates. I described what I needed, and the AI wrote the whole thing. Not a rough draft I had to fix. The actual implementation. Clean, correct, edge cases handled. I read through it twice looking for something to change. There was nothing.

I pushed it to staging and it worked on the first try.
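For anyone curious what "reconnection logic" and "debouncing the UI updates" look like in miniature, here's a rough sketch in TypeScript. To be clear, this is my illustration, not the AI's output or the actual LimeChain code; the function names and timing parameters are hypothetical.

```typescript
// Hypothetical sketch of the two pieces mentioned above:
// exponential reconnection backoff and a debounced UI update.
// All names and parameters are illustrative, not the real implementation.

/** Exponential backoff with a cap: 1s, 2s, 4s, ... up to maxMs. */
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

/** Coalesce a burst of price ticks into a single UI update. */
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// On disconnect, you'd schedule the next attempt with backoffDelay(attempt)
// and reset attempt to 0 once the socket reopens; incoming price messages
// go through the debounced handler so the chart doesn't repaint per tick.
```

None of this is hard, which is exactly the point: it's the kind of well-trodden plumbing the AI nailed on the first pass.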

Two days later, same project, I needed a price chart component. The AI had mock data sitting right there in the file. It had the backend API documented in a neighboring module. And yet it invented an endpoint that didn't exist, called a method that wasn't in the library, and confidently wired it all up as if it had been reading a different codebase entirely. The code looked beautiful. It was completely wrong.

I've been sitting with that contrast ever since. Not the technical details — those are boring. The feeling. That whiplash between "this thing understands me" and "this thing understands nothing." I keep thinking about what it means for me, personally, as someone who builds software for a living.

The Workflow That Snuck Up on Me

I never made a decision to start coding with AI. There was no moment where I sat down and said, "Okay, from now on, I'm doing this differently." It just happened. Autocomplete got smarter. Then the suggestions got longer. Then I started having conversations with it about how to structure a component before writing a single line.

Now my mornings look different. I work with React, TypeScript, Next.js — the usual stack. A year ago, I'd open VS Code, stare at a ticket, and start typing. Build the skeleton, wire up the hooks, write the styles. Now I open my editor and I start talking. I describe what I want. I react to what it gives me. I shape it. A lot of the time, I'm not writing code at all — I'm reading code that someone else wrote. Except that someone isn't a person.

The speed is real. Things that took an afternoon genuinely take twenty minutes now. But here's the part nobody talks about: what fills the rest of that afternoon? For me, it's not more coding. It's more thinking. More time staring at architecture diagrams. More time asking whether this feature should exist at all. The AI handles the how. I'm left with the why.

I'm not sure that's a bad thing. But it's definitely a different thing.

When It Gets It Right, It's Uncanny

The WebSocket moment wasn't the first time it happened, but it was the most vivid. What got me wasn't just that the code worked — it's that it reflected decisions I would have made. The reconnection backoff strategy. The way it structured the data flow. Even the variable names felt like mine.

There's something disorienting about that. You spend years developing taste as an engineer — opinions about how code should read, how data should flow, when to abstract and when to keep it simple. And then a machine produces something that matches your taste so closely you almost can't tell the difference.

I don't know what to do with that feeling. It's not pride — I didn't write it. It's not jealousy — that doesn't make sense. It's something closer to recognition. Like looking at a photograph someone else took and thinking, "That's exactly how I see it."

And then you remember it doesn't see anything at all.

When It Gets It Wrong, It's Revealing

The price chart failure was almost funny in hindsight. The mock data was right there. The API was documented three files over. And the AI just... made something up. Not randomly — it made up something plausible. Something that looked like it belonged. A junior developer would have shipped it without a second thought.

I've started paying attention to when it fails, because the failures are more interesting than the successes. They show you exactly where the boundary is between pattern-matching and understanding. The AI knew what a price chart API call usually looks like. It didn't know what my price chart API call looks like. That gap — between the general and the specific — is where all the interesting problems live.

The unexpected part: catching its mistakes has made me a sharper developer. When the AI gets it wrong, I can't just feel that it's wrong. I have to explain why. I have to articulate assumptions I'd normally leave implicit. That's a kind of thinking I was doing less and less of in the autocomplete era, and getting it back has been — honestly — one of the better side effects of this whole experiment.

The Question I Can't Shake

I've noticed something shifting in how I work, and I'm not sure yet whether it's growth or loss.

I used to write code the way you write prose — word by word, building momentum, discovering what I meant in the act of typing it out. There was thinking that happened in the fingers. Ideas that only emerged because I was deep in the mechanics of implementation.

That's mostly gone now. I'm more of a director than a writer. I describe, review, approve, redirect. I see more of the system and less of the syntax. I'm faster and I ship more — there's no question about that.

But some days I miss the old way. The flow state of building something character by character. The satisfaction of a function you wrestled into existence yourself. I know that sounds romantic, maybe even a little silly. But I think something real was happening in that struggle, and I'm not convinced that reading and approving code exercises the same muscle.

Maybe it exercises a better one. I don't know yet.

Building at the Seam

We're building software at a seam in history — between the era where humans wrote code and the era where humans direct something else to write it. I don't think either side of that seam has fully arrived yet. We're in the messy middle, where the tool is brilliant one afternoon and hallucinating the next, where you feel like you've leveled up and lost a skill in the same week.

I don't have a conclusion. I'm not sure anyone does yet. But I want to keep paying attention to how this feels, not just how it performs. The benchmarks and productivity stats will take care of themselves. What interests me is the quieter thing — what it's doing to the way we think, the way we build, the kind of developers we're becoming.

If you're building alongside AI too, I'd love to know: does any of this resonate? Or does your experience look completely different?

Either way, I think the conversation is worth having.

George Petroff

Full-stack software engineer focused on React, TypeScript, and AI-powered tooling. Building Web3 frontends at LimeChain. Based in Sofia, Bulgaria.
