I Built a SaaS in a Weekend With AI -- Here's What Actually Happened
Every "I built X with AI" post glosses over the hard parts. This one won't.
Here's what it actually looks like to build a working SaaS application in a weekend using an AI agent on real infrastructure. The good, the broken, and the parts where I had to step in.
The Idea
A receipt manager. Upload photos of receipts, extract the data with OCR, categorize expenses, and export reports. Simple enough to build in a weekend, complex enough to be genuinely useful.
The tech stack: Python/FastAPI backend, PostgreSQL database, local file storage for receipt images, and a clean HTML frontend. No React, no build step, no npm. Server-rendered templates with a bit of JavaScript where needed.
Hours 1-2: Project Setup
Created a YokeDev project. Within 60 seconds I had a VM with Docker, Git, and a live URL. Connected Claude to the MCP server and started talking.
First instruction: "Build a receipt management app. FastAPI backend, PostgreSQL, server-rendered HTML. Users can upload receipt photos, the app extracts date, vendor, amount, and category."
The AI scaffolded the project: Dockerfile, docker-compose.yml with PostgreSQL, FastAPI app structure, SQLAlchemy models, Alembic for migrations. It deployed on the first try.
What worked: The initial scaffold was solid. Standard patterns, well-organized code, proper separation of concerns.
What surprised me: It set up Alembic migrations correctly on the first pass. Database migration setup is usually where tutorials lose people.
Hours 3-6: Core Features
This is where the AI earned its keep. I described features one at a time:
"Add user registration and login with session cookies."
"Add a receipt upload endpoint. Store the image in the uploads directory. Extract text with Tesseract OCR and parse out vendor, date, total, and category."
"Add a dashboard page showing all receipts in a table, sorted by date. Include a running total at the top."
Each feature took 10-20 minutes. The AI would write the code, run the tests, deploy, and verify. When OCR extraction was messy (it always is), it iterated on the parsing logic until common receipt formats worked.
What broke: The first version of OCR parsing was too aggressive with regex. It found "dates" in serial numbers and "totals" in phone numbers. I told the AI the specific problems I was seeing, and it rewrote the parser with smarter heuristics. Three iterations to get it solid.
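The post doesn't include the final parser, but the kind of heuristic that fixed those false positives can be sketched in plain Python. This is an illustration under my own assumptions (function names, formats, and thresholds are mine, not the app's actual code): validate date candidates by actually parsing them, and only accept amounts from lines that don't look like phone numbers, preferring lines that mention "total".

```python
import re
from datetime import datetime

# Each date format is paired with a regex that finds candidates for it.
DATE_PATTERNS = [
    ("%m/%d/%Y", re.compile(r"\b(\d{2}/\d{2}/\d{4})\b")),
    ("%Y-%m-%d", re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")),
]

def extract_date(text: str):
    """Return the first candidate that actually parses as a calendar date.

    Validating with strptime is what filters out serial numbers that
    merely *look* like dates to a bare regex.
    """
    for fmt, pattern in DATE_PATTERNS:
        for candidate in pattern.findall(text):
            try:
                return datetime.strptime(candidate, fmt).date()
            except ValueError:
                continue  # e.g. "99/99/2024" matches the regex but isn't a date
    return None

def extract_total(text: str):
    """Prefer amounts on lines mentioning 'total'; skip phone-number-like lines."""
    amount = re.compile(r"\$?(\d{1,5}\.\d{2})\b")
    candidates = []
    for line in text.splitlines():
        if re.search(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", line):
            continue  # looks like a phone number, not a price
        for match in amount.findall(line):
            # Lines containing "total" outrank bare amounts elsewhere.
            candidates.append(("total" in line.lower(), float(match)))
    if not candidates:
        return None
    # Keyword match wins; among keyword matches, take the largest amount.
    candidates.sort(reverse=True)
    return candidates[0][1]
```

The key idea in both functions is the same: a regex proposes, a semantic check disposes.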
Hours 7-10: The Tedious Stuff
This is the part nobody talks about in "I built X with AI" posts. The boring infrastructure work that makes or breaks a real application.
"Add proper error handling. If OCR fails, show a friendly message and let the user manually enter the data."
"Add CSRF protection to all forms."
"Set up proper file upload limits. Max 10MB per receipt."
"Add pagination to the dashboard. 25 receipts per page."
"Add a CSV export endpoint for all receipts in a date range."
Each of these is 5-15 minutes of AI work. Individually boring. Collectively, they're the difference between a demo and something you'd actually use.
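To make one of these concrete, here's roughly what the pagination arithmetic behind "25 receipts per page" looks like. A minimal sketch, assuming the endpoint translates this into a SQL `OFFSET`/`LIMIT`; the function name and signature are my invention, not the app's actual code:

```python
def paginate(total_items: int, page: int, per_page: int = 25):
    """Return (offset, limit, page, page_count) with the page clamped to range.

    Clamping matters: a user who bookmarks page 9 and then deletes
    receipts should land on the last real page, not an empty one.
    """
    page_count = max(1, -(-total_items // per_page))  # ceiling division
    page = min(max(1, page), page_count)              # clamp out-of-range pages
    offset = (page - 1) * per_page
    return offset, per_page, page, page_count
```

Tiny, boring, and exactly the kind of thing an AI agent writes correctly in one pass.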
Hours 11-14: Making It Look Good
The initial HTML was functional but ugly. I spent more time on this than anything else, not because the AI couldn't write CSS, but because design is subjective.
"Make the dashboard look professional. Clean layout, good spacing, readable typography. Dark theme."
The first version was okay. The second was better. I gave specific feedback: "The upload button is too small. The table columns are too wide for the data. The pagination links should be centered."
Lesson learned: AI is excellent at implementing specific visual feedback. It's mediocre at making aesthetic choices from scratch. The fastest workflow is to let it generate something, then give targeted feedback.
Hours 15-18: The Features That Make It Real
"Add receipt categories: food, transport, office, entertainment, other. Let users recategorize with a dropdown on the dashboard."
"Add a monthly summary view with a bar chart showing spending by category."
"Add email reports. Every Monday, email the user a summary of last week's receipts." (This one needed a mail service credential, which I added through the YokeDev dashboard.)
"Add a search bar to the dashboard. Search by vendor name, category, or amount range."
At this point, the app was genuinely useful. Not a toy. Not a demo. Something I'd actually use to track business expenses.
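The monthly summary view boils down to one aggregation: spending totals per category for a given month. In the real app this is more likely a single SQL `GROUP BY`, but the shape of the data Chart.js needs is the same either way. A sketch, with the function name and tuple layout as my own assumptions:

```python
from collections import defaultdict
from datetime import date

def spending_by_category(receipts, year: int, month: int):
    """Sum receipt amounts per category for one month.

    `receipts` is any iterable of (date, category, amount) tuples.
    Returns a category -> total dict, sorted by category name so the
    bar chart renders in a stable order.
    """
    totals = defaultdict(float)
    for when, category, amount in receipts:
        if when.year == year and when.month == month:
            totals[category] += amount
    return dict(sorted(totals.items()))
```

The sorted dict keeps the chart's bar order stable across reloads, which is one of those small details users notice only when it's wrong.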
What I Learned
AI is great at the boring parts. CRUD endpoints, form validation, error handling, database migrations, Docker configuration. These are solved problems with standard patterns. AI handles them faster and more consistently than I would.
AI needs direction on architecture. Left to its own devices, the AI will build whatever seems simplest. If you want a specific pattern (say, repository pattern for database access, or a specific folder structure), say so upfront.
The "last 20%" takes 80% of the time -- but AI compresses it. Pagination, error handling, CSRF, file upload limits, proper logging. These small details are what separate a demo from a product. AI handles them in minutes each instead of hours.
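One of those details, CSRF protection, fits in a dozen lines of stdlib Python. This is a generic session-bound token scheme, not necessarily what the app ships (the secret would live in config, and FastAPI middleware would call these on each form round-trip):

```python
import hashlib
import hmac
import secrets

# In a real app this comes from configuration, never hardcoded.
SECRET = b"server-side-secret"

def issue_csrf_token(session_id: str) -> str:
    """Token = nonce + HMAC(secret, session_id + nonce); embed it in each form."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SECRET, (session_id + nonce).encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Recompute the HMAC for this session and compare in constant time."""
    try:
        nonce, sig = token.split(".", 1)
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, (session_id + nonce).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Binding the token to the session ID means a token stolen from one user's page is useless in another user's session, and `hmac.compare_digest` avoids timing leaks during verification.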
Real infrastructure matters. The app uses PostgreSQL, file storage, background email sending, and OCR processing. None of these work in a browser-based sandbox. Having a real VM with Docker meant I could install Tesseract, configure PostgreSQL, and send emails without hitting platform limitations.
Total time to production app: about 18 hours spread over a weekend.
The Stack
For anyone who wants to replicate this:
- Backend: Python 3.12, FastAPI, SQLAlchemy, Alembic
- Database: PostgreSQL 16 (Docker container)
- OCR: Tesseract via pytesseract
- Frontend: Jinja2 templates, vanilla CSS, Chart.js for the spending chart
- Deployment: Docker Compose on a YokeDev VM
- Email: Resend API
Everything is standard. Nothing is proprietary. The entire project exports as a Docker Compose setup that runs anywhere.
Try It Yourself
Start a free YokeDev trial and see how far you get in 48 hours. You might be surprised.