Decentralized Team Collaboration in AI Agent Development
AI Agents Hackathon — LabLab × MindsDB
196 teams, 77 MVPs, blind judging — remote and hybrid teams competing with onsite groups in San Francisco.
Case study
Overview
The AI Agents Hackathon, hosted by LabLab in partnership with MindsDB, stress-tested autonomous agent development and asked whether remote-first and hybrid teams could match or beat onsite execution.
Objectives
Ship functional AI agents for real workflows
Measure the effectiveness of decentralized collaboration
Use cloud tooling and frameworks for distributed work
Validate production-style MVPs on a short clock
Format — September 2024
Hybrid: onsite in San Francisco and global remote
4,000+ registered; 196 teams; 926 remote and 188 onsite participants
77 final MVPs; 20+ expert judges
Top three (blind evaluation)
1st — Aquinas (fully remote)
Social media engagement agent — a team distributed across time zones, high async throughput, strong API usage.
2nd — DEV AI AGENT (onsite)
App-building agent built on Llama 3.1 — fast iteration and low-latency local work.
3rd — TEMO (hybrid)
Emotional-support AI for children with autism — built with Composio and Upstage Solar Pro, coordinated across time zones.
Partners and stack
Upstage — credits, mentors, workshops
Composio — 100+ tools, GitHub automation
Together AI — credits, GPU compute
AI/ML API — broad model access
Meta (Llama 3.1) — judges, mentors, Llama Impact Grants
| Metric | Value |
|---|---|
| Teams | 196 |
| MVPs delivered | 77 |
| Remote/hybrid teams in top 3 | 2 |
| Onsite teams in top 3 | 1 |
| Judging | Blind to team location |
Takeaways
Remote-first delivery worked even for advanced agent systems
Hybrid teams need strong coordination but can win
Judges could not infer location from output quality
The event set a precedent for remote innovation — AI agents weren’t just built; they were built better, faster, and together.