Conclusion
What we built, what we learned, and what comes next
The finished application
- A walk through the complete app: data loading, browsing, searching, analysis, visualization, and tests
- What 100 prompts produced
What LLMs are good at
- Scaffolding, boilerplate, and explaining unfamiliar APIs
- Translating error messages into plain language
- Concrete examples from the tutorial
What LLMs are not good at
- Hallucinated library APIs and invented function signatures
- Subtle security flaws (missing CSRF tokens, SQL injection vulnerabilities)
- Understanding your specific requirements without very precise prompts
- Concrete examples from the tutorial
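To make the SQL injection point concrete, here is a minimal, self-contained sketch of the vulnerable pattern that generated code often contains, next to the parameterized fix. The table and column names are illustrative, not taken from the tutorial app.

```python
import sqlite3

def search_unsafe(conn, term):
    # Vulnerable pattern: the user-supplied term is interpolated
    # directly into the SQL string, so input such as
    # "'; DROP TABLE items; --" becomes part of the query itself.
    return conn.execute(
        f"SELECT name FROM items WHERE name LIKE '%{term}%'"
    ).fetchall()

def search_safe(conn, term):
    # Safe pattern: a "?" placeholder makes the driver treat the
    # term strictly as data, never as SQL.
    return conn.execute(
        "SELECT name FROM items WHERE name LIKE ?", (f"%{term}%",)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.execute("INSERT INTO items VALUES ('teapot')")
print(search_safe(conn, "tea"))  # [('teapot',)]
```

The two functions return the same results for benign input, which is exactly why the flaw is easy to miss in a code review of generated output.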
Concerns about LLM tools
- Environmental cost: energy and water consumption of training and inference
- Deskilling: what happens when you can generate code you cannot read
- Labor displacement: an honest acknowledgment, without pretending there are easy answers
Where to go from here
- Deployment: running the app on a server others can reach
- Database migration: changing the schema without losing data
- Making the app collaborative: multiple users, roles, and audit logs
- Reading list
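As a taste of the schema-migration follow-up, here is one common lightweight approach: an ordered list of migration statements plus a stored schema version, so existing data survives each change. This is a sketch under assumptions — it presumes a SQLite database, and the `items` table and `notes` column are hypothetical, not the tutorial app's actual schema.

```python
import sqlite3

# Hypothetical migrations, applied in order; each entry bumps the
# schema one version without touching existing rows.
MIGRATIONS = [
    # version 1: initial schema
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)",
    # version 2: add a column; existing rows get the default value
    "ALTER TABLE items ADD COLUMN notes TEXT DEFAULT ''",
]

def migrate(conn):
    # SQLite exposes a user-settable integer via PRAGMA user_version;
    # we use it to record which migrations have already run.
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, stmt in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(stmt)
        conn.execute(f"PRAGMA user_version = {version}")
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # running again is a no-op: the version is up to date
conn.execute("INSERT INTO items (name) VALUES ('teapot')")
print(conn.execute("PRAGMA user_version").fetchone()[0])  # 2
```

Production apps usually reach for a migration tool (Alembic for SQLAlchemy, Django's built-in migrations) rather than hand-rolling this, but the version-number-plus-ordered-steps idea is the same.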