Europe's Biggest AI Agent Hackathon, OpenClaw

Europe's biggest student AI hackathon - when will humans be in the loop?

Flocker Team · @Harry Martin · March 9, 2026 · 5 min read

OpenClaw was the theme of Europe’s largest student AI hackathon this past week. Hundreds of student builders at Imperial College London spent seven days trying to figure out what to actually build with it.

The UK AI Agent Hackathon Ep4 x OpenClaw ran from March 1st to 7th and brought together over 1,200 registered participants to tackle the next big-ticket AI problem: multi-agent orchestration (with $13,000 in prizes!).

What is OpenClaw?#

If you’ve been avoiding everyone in the AI space this year, you might have missed 2026’s biggest tech story so far.

OpenClaw is an open-source AI agent framework that runs locally, connects to LLMs (Claude, DeepSeek, GPT), and is accessed through messaging apps you already use, like WhatsApp, Telegram, Signal and Discord. It stores context locally, learns your preferences, and has become the personal assistant framework of choice.

Three days after the project picked up its final name in January, it was pulling 710 GitHub stars per hour. By early March, it had 247,000 stars and nearly 48,000 forks.

For context, that's a faster star-growth rate than React or VS Code ever managed; no major open-source project has had a comparable trajectory.

This is what the hackathon was built around.

Who is Peter Steinberger?#

Peter Steinberger spent 13 years bootstrapping PSPDFKit, sold it for over €100 million, and then spent three years in post-exit burnout. He came back to coding with AI tools and built 43 projects that went nowhere. Project 44, a weekend hack originally called “WhatsApp Relay”, became OpenClaw after a couple of rebrands (Moltbot, Clawd).

Steinberger joined OpenAI in February 2026 to “drive the next generation of personal agents.” The project moved to an open-source foundation. The tooling lives on. The community is enormous.

Hackathon Results#

The UK AI Agent Hackathon has been running for four episodes now, organized by the Imperial Blockchain Society and Imperial AI Group. Previous editions explored AI and Web3, DeFi applications, and decentralized infrastructure. Ep4 brought it back to something more immediately practical: building production-ready AI agents.

The format was hybrid and intensive: an opening conference on 1 Mar, workshops on 3–4 Mar, a build sprint on 5–6 Mar, then demo day on 7 Mar in front of judges and investors.

Speakers included Steinberger himself, still riding the wave from OpenClaw’s launch before his OpenAI move, alongside Thomas Wolf (Hugging Face Co-Founder), Emad Mostaque (former Stability AI CEO), and Simon Squibb. Partners from Ada Ventures, EWOR, and Fabric Ventures rounded out the investor side.

Gold sponsors included FLock.io (the federated learning platform, not to be confused with Flocker), Sierra.ai, Z.ai, and Cantor8. Participants got free OpenAI API credits, Lovable subscriptions, and FLock API access to build with.

Top teams won from a $13,000 prize pool and got fast-tracked to post-hackathon incubation with Animoca Brands, X Ventures, and Delphi Ventures supporting.

What this means if you’re building with agents now#

The hackathon was for students. But the questions it surfaces apply to anyone running agents in production, or trying to.

OpenClaw’s architecture points to where personal agents are going. Local execution, persistent memory, messaging-native interfaces, self-improving skill sets. These aren’t hypothetical features. They’re in production today, being used by developers and companies across the world. If you’re not watching this space, you’re behind.
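The "persistent memory" piece of that architecture is easy to underestimate. As a minimal sketch of the idea, not OpenClaw's actual storage format or API, preferences can simply live in a local JSON file that survives restarts:

```python
import json
import os

class LocalMemory:
    """Toy illustration of persistent local context: key/value
    preferences written to a plain JSON file on the user's machine.
    Class and method names here are hypothetical."""

    def __init__(self, path="agent_memory.json"):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def remember(self, key, value):
        # Persist immediately so a crash or restart loses nothing.
        self.data[key] = value
        with open(self.path, "w") as f:
            json.dump(self.data, f)

    def recall(self, key, default=None):
        return self.data.get(key, default)
```

A real framework layers retrieval and summarization on top, but the core contract is the same: the agent's context belongs to you, on your disk, not to a hosted session.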

The skill ecosystem problem is real, and it’s yours to solve. Whether you’re using OpenClaw or building custom agents, the same question applies: how do you know what your agent is actually doing? What it’s allowed to do? What it did last Tuesday? Vibe-coded demos are fine. Production deployments need answers.
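One answer to "what is my agent allowed to do, and what did it do?" is to route every tool call through an allowlist plus an append-only audit log. The sketch below is a generic pattern under our own naming, not an OpenClaw interface:

```python
import datetime
import json

class AuditedAgent:
    """Hypothetical wrapper: every tool call is checked against an
    allowlist and recorded to a local JSONL audit log, so 'what did
    it do last Tuesday?' has a grep-able answer."""

    def __init__(self, allowed_tools, log_path="agent_audit.jsonl"):
        self.allowed_tools = set(allowed_tools)
        self.log_path = log_path

    def call_tool(self, tool_name, tool_fn, **kwargs):
        allowed = tool_name in self.allowed_tools
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": tool_name,
            "args": kwargs,
            "allowed": allowed,
        }
        # Log before executing, so denied attempts are visible too.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        if not allowed:
            raise PermissionError(f"tool '{tool_name}' is not allowlisted")
        return tool_fn(**kwargs)
```

The design choice that matters is logging the attempt before execution: a denied call is often more interesting in a post-incident review than a successful one.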

The human-vs-agent framing is a useful forcing function. When you’re designing an agent workflow, the right question isn’t “can it run autonomously?” It’s “at what point does a human need to review this?” That decision changes your architecture completely, and getting it wrong in either direction has real costs.
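Making "at what point does a human review this?" explicit can be as simple as a risk policy plus a checkpoint before execution. A minimal sketch, with an invented policy and function names:

```python
def requires_review(action):
    """Illustrative policy: actions that move money or mutate external
    state get a human checkpoint; read-only actions run autonomously."""
    risky = {"send_payment", "delete_data", "send_email"}
    return action["name"] in risky

def run_action(action, execute, ask_human):
    # ask_human can be any channel you already have: a CLI prompt,
    # a Slack ping, a review queue. It returns True to approve.
    if requires_review(action) and not ask_human(action):
        return {"status": "rejected", "action": action["name"]}
    return {"status": "done", "result": execute(action)}
```

The architectural cost lives in `ask_human`: once a workflow can pause for approval, every step needs to be resumable, which is exactly the design decision the paragraph above says you should get right early.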

The talent pool is here. A thousand-person student hackathon focused entirely on AI agents, at one of Europe’s top engineering schools, producing teams good enough for VC fast-tracking. This is the builder community taking agents seriously. The tooling is available, the talent is engaged, and the experiments are running.

What’s next#

Demo day results from Ep4 are expected around 14 Mar. We’ll follow the top projects. Shipping agent-based products with OpenClaw in seven days is a useful real-world test of what the tooling can actually support.

For the OpenClaw project itself, the open-source foundation structure means the community continues regardless of Steinberger’s OpenAI move. The Chinese government’s draft support policy for OpenClaw (published 8 Mar 2026) suggests the adoption curve is only steepening.

And for builders thinking about their own agent infrastructure: the questions raised at Ep4 (autonomy vs. oversight, skill vetting, production readiness) are the ones worth spending time on now, before your demo becomes a deployment.

Running multiple agents on a project? Flocker orchestrates parallel AI agents with worktree isolation, pre-authorization controls, and a real-time dashboard. Join early access.
