A Tesla Supercharger, Two AI Agents, and a Real App: What I Learned as a Non-Developer
Key Takeaways
- Agentic coding is turning into infrastructure — one estimate puts Claude Code at ~4% of public GitHub commits today, projecting 20%+ by end-2026
- The unlock wasn’t “better prompting.” It was acting like a Product Owner: define “done,” assign roles, review hard, iterate fast
- A shared project notebook (specs, decisions, backlog, release notes) kept humans + agents aligned and prevented context reset
- Treat it like a real program: define the problem and success criteria, assign accountable owners, pressure-test the output, then ship in tight loops. AI multiplies throughput; process protects quality
Over the holidays I built a concert calendar + trip-planning app. The first working demo came together in about 25 minutes—sitting at a Tesla Supercharger on a hotspot while my car charged. The polished version took a few evenings, because “real” meant auth, a database, deployments, and fixing what breaks when you actually use something.
The twist: the “team” was two AI agents with defined roles—and me acting like a Product Owner.
Quick backstory: every year I build a concert-planning spreadsheet of tours I like, shows I'm committed to, and what else fits. It works, but it's pretty manual. I wanted something that pulls shows automatically and layers in trip planning. At that Supercharger stop, I described the product I wanted—and got a working demo before the charge finished.
That demo was enough to prove the concept. But once I started using it, the boring stuff mattered: authentication, a stable data model, APIs, and a dev → stage → prod path that didn’t fall apart.
I’m not a developer; the last time I wrote code was probably VBScript 20 years ago. But I learned how Cognito, Lambda, and DynamoDB fit together the same way most of us learn anything: build a first version, review outputs, ask pointed questions, try to break it, and iterate.
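To make the "how they fit together" concrete, here is a minimal sketch of the pattern I learned, not the app's actual code: an API Gateway endpoint with a Cognito authorizer injects the signed-in user's identity claims into the Lambda event, and the handler uses that identity to scope a DynamoDB query. The table layout (`pk`/`sk` key names, `USER#`/`SHOW#` prefixes) is illustrative.

```python
# Sketch only: how Cognito identity flows through API Gateway into a
# Lambda handler and scopes a DynamoDB query. Assumes a REST endpoint
# with a Cognito user-pool authorizer; the table schema is hypothetical.

def user_id_from_event(event: dict) -> str:
    """Pull the Cognito user id (the "sub" claim) out of an
    API Gateway proxy event that passed through a Cognito authorizer."""
    return event["requestContext"]["authorizer"]["claims"]["sub"]

def shows_query_key(event: dict) -> dict:
    """Build the DynamoDB key condition for this user's saved shows.
    Single-table layout (pk = USER#<sub>, sk = SHOW#<id>) is my own
    illustrative convention, not the only way to model it."""
    sub = user_id_from_event(event)
    return {
        "KeyConditionExpression": "pk = :pk AND begins_with(sk, :sk)",
        "ExpressionAttributeValues": {
            ":pk": {"S": f"USER#{sub}"},
            ":sk": {"S": "SHOW#"},
        },
    }
```

The useful insight for a non-developer is that the user never appears in the URL or request body: identity comes from the verified token, so one user cannot query another user's rows.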
To get from “works” to “starting to feel professional,” I stopped trying to be a prompt engineer and started acting like a Product Owner. The unlock was clear roles + a loop that prevents drift.
Here’s the workflow:
- Designer (Gemini): UI flows, feature ideation, wireframes, and build-ready prompts optimized for the coding agent
- Builder (Claude Code): architecture, coding, deployment, defect remediation
- Auditor (Gemini CLI): QA the build against the original design intent
- Product Owner (Me): UAT — what’s right, what’s wrong, what’s missing
“I stopped trying to be a prompt engineer and started acting like a Product Owner.”
Underneath it all is a shared project notebook (backlog, specs, decision log, release notes) so nothing resets between sessions. I tried managing this with a Kanban board at first, then realized the agents did a better job when the notebook stayed authoritative.
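The notebook mechanics are deliberately boring. A sketch of the idea, with a file name and entry format that are my own conventions rather than any tool's requirement:

```python
# Minimal sketch of the "shared notebook": one authoritative file that
# both agents and I read and append to between sessions. The file name
# and entry format below are illustrative conventions.
from datetime import date
from pathlib import Path

def log_decision(notebook: Path, decision: str, why: str) -> None:
    """Append a dated entry to the decision log so the next session
    starts with shared context instead of a blank slate."""
    entry = f"- {date.today().isoformat()}: {decision} (why: {why})\n"
    with notebook.open("a", encoding="utf-8") as f:
        f.write(entry)
```

Because the file is append-only and dated, an agent that loses its conversation context can re-read the log and pick up where the last session left off.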
I didn’t get faster by “prompting harder.” I got faster by building a system:
- The design agent generates prompts with full context, so I don’t write giant prompts myself.
- I direct outcomes (what “done” looks like), not implementation details.
- The notebook carries context forward, so sessions stay coherent even when models compress or forget.
What replaced ad-hoc tinkering was a repeatable loop:
ideate → design spec → build in dev → test in stage → ship to prod
And I didn't let the agents freestyle in production: anything touching auth, permissions, secrets, or cost got human review.
The app pulls live data from five external APIs: Ticketmaster, Setlist.fm, Google Places, Unsplash, and Open-Meteo.
And the punchline: it didn’t just ship code. It shipped a usable product. It even wrote my “New Features” modal and a walkthrough so a few friends could understand what changed. The distance between what I envisioned and what shipped was smaller than anything I’ve experienced, and I didn’t write a line of code.
This is why I think agentic AI is moving from novelty to infrastructure: it’s becoming a layer between intent and execution. SemiAnalysis estimates ~4% of public GitHub commits are already done by Claude Code, projecting 20%+ by end-2026.
If you’ve spent your career turning ambiguity into execution—defining “done,” pressuring assumptions, sequencing work, and running tight feedback loops—you’ve been training for this. That skill set is the bottleneck remover.
My takeaway isn’t “AI writes code now.” It’s that a lightweight delivery system—clear roles, a shared notebook, and ruthless feedback loops—turns agents from a parlor trick into a team. The differentiator won’t be access to models. It’ll be who can define “done,” run a clean release loop, and compound learning week over week.
These are my personal observations from tinkering on a side project—they don’t necessarily reflect the views of my firm. That said, we have some great AI tools and solutions, and I’d love to tell you about them.