Coding with Code: My Week Pair-Programming with an AI Agent

Over the years I’ve tried plenty of “AI assistants” for coding. Most of them feel like autocomplete on steroids—useful for snippets, but not something I’d call a partner. This past week, though, I tried something different: Claude Code (or simply Code, as I like to call him).

Unlike traditional copilots, Code doesn’t just spit out guesses at the cursor. He reads the actual codebase, understands its structure, and can directly modify files. That distinction turned out to be huge.

The Library in Question

I’ve been working on a .NET library of mine for a while now. It’s a mixed bag of functionality, but the big piece is data access. SQL Server is the primary target, with ODBC support as a fallback for “run-of-the-mill” use cases.

The library leans on SqlBulkCopy for heavy lifting—bulk insert, update, delete. That’s SQL Server only, of course. ODBC users get the standard toolkit, but not bulk operations. A limitation, yes, but an honest one.
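For readers who haven’t used it, here’s a minimal sketch of what a SqlBulkCopy-based insert looks like. The `dbo.People` table, its columns, and the `BulkInsert` helper are hypothetical stand-ins, not my library’s actual API, and the sketch assumes the Microsoft.Data.SqlClient package:

```csharp
using System;
using System.Data;
using Microsoft.Data.SqlClient; // NuGet package, not part of the base framework

// Stage rows in memory first; SqlBulkCopy streams a DataTable
// (or any IDataReader) to the server in a single bulk operation.
static DataTable BuildSampleTable()
{
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Name", typeof(string));
    table.Rows.Add(1, "Ada");
    table.Rows.Add(2, "Grace");
    return table;
}

// Hypothetical helper: bulk-insert staged rows into dbo.People.
// SQL Server only -- there is no ODBC equivalent, hence the fallback gap.
static void BulkInsert(string connectionString, DataTable rows)
{
    using var connection = new SqlConnection(connectionString);
    connection.Open();
    using var bulk = new SqlBulkCopy(connection) { DestinationTableName = "dbo.People" };
    foreach (DataColumn column in rows.Columns)
        bulk.ColumnMappings.Add(column.ColumnName, column.ColumnName);
    bulk.WriteToServer(rows);
}

DataTable staged = BuildSampleTable();
Console.WriteLine($"Staged {staged.Rows.Count} rows for bulk copy.");
// BulkInsert(connectionString, staged); // requires a live SQL Server
```

Bulk update and delete are typically layered on the same mechanism: bulk-copy into a staging table, then run a set-based UPDATE or DELETE against it on the server.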

That’s the environment Code and I stepped into.

Block by Block

Instead of trying to test the whole library at once, we tackled it in blocks. Easy functions first: pure logic, no dependencies. That gave us a foundation and boosted coverage quickly.
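To make the “easy functions first” step concrete: pure helpers can be tested with nothing but assertions, no database and no mocks. A sketch, where `QuoteIdentifier` is a hypothetical stand-in for the library’s real pure helpers:

```csharp
using System;

// Hypothetical pure helper: bracket-quote a T-SQL identifier, doubling
// any embedded closing brackets (]] is the escape in T-SQL).
static string QuoteIdentifier(string name) =>
    "[" + name.Replace("]", "]]") + "]";

// No framework needed for pure logic: call, compare, throw on mismatch.
static void AssertEqual(string expected, string actual)
{
    if (expected != actual)
        throw new Exception($"Expected {expected} but got {actual}");
}

AssertEqual("[Orders]", QuoteIdentifier("Orders"));
AssertEqual("[Weird]]Name]", QuoteIdentifier("Weird]Name"));
Console.WriteLine("Pure-logic tests passed.");
```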

Then we dove into the data access layer—the backbone of the library. Together we wrote unit tests that:

  • Ran against SQL Server.

  • Verified ODBC behavior against a Microsoft Access database (perfect for shaking out driver quirks).

  • Covered multiple queries, parameter handling, and transaction cases.

  • Tested bulk import, upsert, and delete (SQL Server only).
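Even the SQL Server-only operations have a database-free half worth testing: generating the SQL. Here’s a sketch with a hypothetical `BuildMergeSql` helper (my library’s real upsert generator differs); since it only builds a MERGE statement, it runs without a server:

```csharp
using System;
using System.Linq;

// Hypothetical generator: build a T-SQL MERGE (upsert) statement from a
// table name, a key column, and the remaining data columns. The @staging
// source name is illustrative only.
static string BuildMergeSql(string table, string keyColumn, string[] dataColumns)
{
    string allColumns = string.Join(", ", new[] { keyColumn }.Concat(dataColumns));
    string sourceColumns = string.Join(", ",
        new[] { keyColumn }.Concat(dataColumns).Select(c => "source." + c));
    string updates = string.Join(", ", dataColumns.Select(c => $"target.{c} = source.{c}"));
    return $"MERGE {table} AS target " +
           $"USING @staging AS source ON target.{keyColumn} = source.{keyColumn} " +
           $"WHEN MATCHED THEN UPDATE SET {updates} " +
           $"WHEN NOT MATCHED THEN INSERT ({allColumns}) VALUES ({sourceColumns});";
}

string sql = BuildMergeSql("dbo.People", "Id", new[] { "Name", "Email" });
Console.WriteLine(sql);
```

Tests like this pin down the statement shape cheaply; the SQL Server-backed tests then only have to prove the round trip works.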

The key was that Code could “see” what I had already written. He didn’t just hallucinate method signatures; he pulled them in, wired up realistic test scaffolding, and refactored when things didn’t compile.

The Results

In just a few days, we hit about 60% coverage of the entire library. That’s not padding the numbers, either: it includes meaningful tests on the heaviest part of the codebase.

I’ve written tests on my own before. It’s slow, sometimes tedious work. With Code, it felt like pair programming with a junior developer who:

  • Never gets tired of writing boilerplate.

  • Catches missing cases before I do.

  • Stays consistent with naming and structure.

Sure, I still had to guide, review, and sometimes correct him. But the sheer throughput was unmatched.

Reflections on the Experience

The biggest difference is that Code wasn’t just a “suggestion engine.” He was an agent inside my project, reading and modifying code with context.

That changed the dynamic completely. Instead of thinking “how do I prompt this tool,” I thought “how do I explain this to a teammate.”

And the results speak for themselves: in one week, a library that had virtually no coverage now has a strong testing backbone, real-world database checks, and a clear path to 100%.

Final Thoughts

We’re still early in this era of AI coding. Tools like GitHub Copilot are useful, but they’re autocomplete with flair. What I got from Code this week felt more like collaboration.

If this is a glimpse of where we’re headed—AI agents that can read, reason about, and reshape entire projects—then coding as we know it is going to change. And I, for one, don’t mind having a tireless partner at my side.
