Retrospective: Taking a second run at Robotbrains
Before diving back in, I’m taking time to review what happened last time. Retrospectives are a common practice for a reason—they turn “that didn’t work” into “here’s what I’ll do differently.” I’m not just starting over; I’m starting over with the benefit of hindsight.
The story
I had an idea that needed a backend, frontend, and LLM integration. I used Gemini to create a long list of tasks, downloaded a boilerplate from Google, and used vibe coding to update Terraform and add infrastructure for the front end.
I learnt how GCP works the hard way—the Terraform docs are sparse, and it was a grind to get everything talking.
UX Design
I tried a few wireframe services to design the front end, and honestly, they were terrible. Figma required sign-up and payment and still gave poor results; another service just churned out templates that weren’t quite right.
I asked Gemini for a design and it produced a nice color palette, which I used as a blueprint (Material Design, of course). I remembered we used Storybook for front-end components at Xero, so I installed it and had the AI generate components. I fed it a screenshot of the Gemini design and it reproduced the look well.
Storybook was the big win from this round of work.
Frontend
I’m not a frontend developer, but I can find my way around TypeScript and React. Based on AI recommendations, I used Next.js with server-side rendering.
Scaffolding the site and incorporating Storybook components worked quite well, but it wasn’t perfect. The AI often forgot I was using the new SSR model and would generate old-style client fetch calls. Auth via headers was also painful to work out.
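For anyone unfamiliar with why the old client-fetch pattern clashes with SSR, here is a minimal sketch of the split. The endpoint, token variable, and component names are illustrative, not my actual code; the pattern itself is standard Next.js App Router.

```typescript
// Sketch of the App Router SSR pattern the AI kept forgetting.
// In a server component, data loading is an async call awaited during
// render on the server -- not a useEffect + fetch after hydration.

// Building the auth header in one place keeps the server/client split explicit:
export function authHeaders(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// Server-component style (runs only on the server, so it can hold the token):
//
//   export default async function ChatPage() {
//     const res = await fetch(`${API_URL}/history`, {
//       headers: authHeaders(process.env.API_TOKEN!), // server-only secret
//       cache: "no-store",                            // live data, skip caching
//     });
//     const history = await res.json();
//     return <ChatView history={history} />;
//   }
//
// The old pattern (useEffect + fetch in the browser) exposes the token to
// the client and fights the SSR model -- exactly what the generated code did.
```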
Beyond that, the frontend work was fast but felt like a bit of a black box. That might scare some people, but it got the job done.
Backend
The boilerplate came with a demo already built, so I added a new endpoint and set up security. Integrating with Google authentication took forever and was a total grind.
Eventually, the chat endpoint worked. Sending the prompt and chat history into the LLM turned out to be quite simple. I then vibe coded file uploads—Cursor wrote all the code for me and it worked.
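The "quite simple" part really is simple: the request body is just the stored history with the new prompt appended. A sketch, using the `contents`/`parts` shape from the public Gemini REST API (the `Turn` type is my own assumption):

```typescript
// Turn a stored chat history plus a new prompt into a Gemini-style request
// body. Roles "user" and "model" follow the Gemini REST API.

type Turn = { role: "user" | "model"; text: string };

export function buildChatRequest(history: Turn[], prompt: string) {
  return {
    contents: [
      // Replay prior turns so the model has the conversation context...
      ...history.map((t) => ({ role: t.role, parts: [{ text: t.text }] })),
      // ...then append the new user prompt as the final turn.
      { role: "user", parts: [{ text: prompt }] },
    ],
  };
}
```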
Passing uploaded files to the LLM was great. Multimodal Gemini could extract text from images, which suited my use case perfectly.
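Attaching an uploaded file to the request is a matter of adding one more part. This sketch uses the `inlineData`/`mimeType` shape from the Gemini REST API; reading the bytes out of the bucket is elided, so here they are just raw input:

```typescript
// Build a Gemini multimodal part from uploaded file bytes.
// The API expects the payload base64-encoded.

export function imagePart(bytes: Uint8Array, mimeType: string) {
  return {
    inlineData: {
      mimeType,
      data: Buffer.from(bytes).toString("base64"),
    },
  };
}
```

This part then sits alongside the text parts in the same `contents` turn, which is what lets the model extract text from an image in one call.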
Compute & Infrastructure
For compute, I used Docker containers with Docker Compose. For local dev, I just had everything running in different terminals and relaxed security to keep things moving.
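For flavour, a hypothetical compose file for that local setup. Service names, ports, and the auth flag are illustrative, not my actual config:

```yaml
# Local-dev sketch: both services on one network, security relaxed.
services:
  backend:
    build: ./backend
    ports: ["8080:8080"]
    environment:
      - AUTH_DISABLED=true        # local only -- never in a deployed env
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
    environment:
      - API_URL=http://backend:8080   # service name doubles as hostname
```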
Terraform was the hardest part. Each change required a full deploy to test, and most of my time went into configuration. Service-to-service auth was particularly painful—it felt like GCP was changing how it worked right while I was building.
Database
I vibe coded Terraform to set up Firestore and a data model. In hindsight, it was over-engineered; I really just needed to store and retrieve everything each time. I also tried to harden security too early, which caused enough pain that I eventually stripped it back for the MVP.
Problems
- GCP Tax: The documentation isn’t great, APIs change, and new features are often undocumented.
- SSR Hallucinations: When vibe coding the frontend, the AI didn’t always follow the Next.js SSR patterns.
- Model Drift: Switching LLM models mid-work produced inconsistent output.
Wins
- Full-Flow AI: When I needed file uploads, the AI knew the full flow—Terraform for a GCS bucket, then code for signed upload URLs.
- The Mono-repo: Keeping frontend, backend, and Terraform in one repo gave the AI enough context to handle the integration.
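To make the signed-URL half of that flow concrete, here is a sketch. `getSignedUrl` and its option names come from the `@google-cloud/storage` library; the bucket name and TTL are made up, and the wiring to a real bucket (which needs GCP credentials) is shown only in the comment:

```typescript
// Options for a GCS v4 signed upload URL: short-lived, single PUT,
// locked to a content type.

export function signedUploadOptions(contentType: string, ttlMinutes = 15) {
  return {
    version: "v4" as const,
    action: "write" as const,                  // permits one authenticated PUT
    expires: Date.now() + ttlMinutes * 60_000, // short-lived by design
    contentType,
  };
}

// Usage (requires GCP credentials; bucket name is hypothetical):
//   const [url] = await new Storage()
//     .bucket("my-uploads-bucket")
//     .file(objectName)
//     .getSignedUrl(signedUploadOptions("image/png"));
```

The client then PUTs the file straight to the URL, so the backend never proxies the bytes.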
Strategy for Take Two
The focus for the second take is simple: skip deployment until I have a solid working demo locally. Deploy, compute, and security all move into a big “Phase 2.” The goal for the Phase 1 MVP is to prove the idea by getting the chat loop, verification, and client error handling working smoothly on my machine, not to have it running in the cloud.
So, did this retrospective help?
Getting this all down on paper helps me actually see the ground I’m standing on so I can make better decisions for Take Two. There’s a certain beauty in the honesty of the grind—just learning, failing, and iterating until the thing finally holds its own weight.
— Brad
Disclaimer: This project is conducted in my personal time. All thoughts and opinions expressed here are solely my own and do not represent any current or former employers.