
Simone Vellei

👨 Senior Backend Developer at Cybus | ☁️ Cloud Adept | 🐧 Linux/IoT Expert | 🏝️ Full-Remote Addict

The supervisor-blackboard pattern: coordinating multi-agent AI workflows

Most multi-agent examples keep agents isolated. Each one gets a prompt, produces output, and hands it to the next step. That works when the data flows in one direction. But some workflows need agents to build on each other’s work incrementally, reading and writing to a shared context. This is the blackboard pattern.

The idea comes from AI research in the 1970s. Multiple knowledge sources (agents) read from and write to a shared data structure (the blackboard). A control component (the supervisor) decides which agent to activate next. Each agent contributes partial results that other agents can use. The blackboard accumulates context over time.
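The structure described above maps naturally onto Go types. Here is a minimal sketch: the blackboard is a shared map, each agent is an interface that reads it and contributes when it can, and the supervisor keeps activating agents until no one makes progress. The `researcher` and `writer` agents are hypothetical stand-ins for what would be LLM-backed knowledge sources in a real system.

```go
package main

import "fmt"

// Blackboard is the shared context all agents read from and write to.
type Blackboard map[string]string

// Agent is a knowledge source: it inspects the blackboard and may add a
// partial result. It returns false when it has nothing to contribute.
type Agent interface {
	Name() string
	Contribute(bb Blackboard) bool
}

// researcher posts raw notes once; a stand-in for an LLM-backed agent.
type researcher struct{}

func (researcher) Name() string { return "researcher" }
func (researcher) Contribute(bb Blackboard) bool {
	if _, done := bb["notes"]; done {
		return false
	}
	bb["notes"] = "raw findings"
	return true
}

// writer waits until notes exist, then drafts a summary from them.
type writer struct{}

func (writer) Name() string { return "writer" }
func (writer) Contribute(bb Blackboard) bool {
	notes, ok := bb["notes"]
	if !ok || bb["draft"] != "" {
		return false
	}
	bb["draft"] = "summary of: " + notes
	return true
}

// Supervise is the control component: it keeps cycling through the agents
// until a full pass produces no new contribution.
func Supervise(bb Blackboard, agents []Agent) {
	for progress := true; progress; {
		progress = false
		for _, a := range agents {
			if a.Contribute(bb) {
				fmt.Printf("%s wrote to the blackboard\n", a.Name())
				progress = true
			}
		}
	}
}

func main() {
	bb := Blackboard{}
	// Order doesn't matter: the writer simply waits for the researcher.
	Supervise(bb, []Agent{writer{}, researcher{}})
	fmt.Println(bb["draft"])
}
```

Note that the writer is listed first and still works: it skips its turn until the researcher has posted notes, which is exactly the "accumulating context" behavior the pattern is about.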

The evaluator-optimizer pattern in Go: iterate until good enough

Ask an LLM to write something once and you get a first draft. Ask it to revise based on specific feedback and the second draft is measurably better. This isn’t surprising. It’s how human writing works too. What’s interesting is that you can automate both sides: one agent writes, another evaluates, and a Go loop connects them.

This is the evaluator-optimizer pattern, described in Anthropic’s Building effective agents guide. A generator produces output. An evaluator scores it and gives feedback. If the score is below a threshold, the generator revises. The loop continues until the output is good enough or you run out of attempts.
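The loop itself is plain Go. A sketch, with the generator and evaluator stubbed as function values (in practice both would be LLM calls; the stubs here are purely illustrative):

```go
package main

import "fmt"

// Generator produces a draft for a task, optionally revising from feedback.
type Generator func(task, feedback string) string

// Evaluator scores a draft and explains what to improve.
type Evaluator func(draft string) (score int, feedback string)

// Refine runs the generate -> evaluate loop until a draft scores at least
// threshold, or maxAttempts is reached. It returns the best draft seen.
func Refine(gen Generator, eval Evaluator, task string, threshold, maxAttempts int) string {
	feedback := ""
	best, bestScore := "", -1
	for i := 0; i < maxAttempts; i++ {
		draft := gen(task, feedback)
		score, fb := eval(draft)
		if score > bestScore {
			best, bestScore = draft, score
		}
		if score >= threshold {
			break
		}
		feedback = fb // feed the critique into the next revision
	}
	return best
}

func main() {
	// Stub generator: a revision just appends the feedback it received.
	gen := func(task, feedback string) string {
		if feedback == "" {
			return task + ": draft"
		}
		return task + ": draft, " + feedback
	}
	// Stub evaluator: rewards longer drafts, asks for detail otherwise.
	eval := func(draft string) (int, string) {
		if len(draft) > 20 {
			return 9, ""
		}
		return 4, "add more detail"
	}
	fmt.Println(Refine(gen, eval, "intro", 8, 3))
}
```

Keeping track of the best draft matters: an LLM evaluator can score a later revision lower than an earlier one, and you don't want to return the worst attempt just because it came last.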

Building a multi-agent debate committee in Go

A single LLM call gives you one perspective. Ask the same question twice with different system prompts and you’ll get meaningfully different answers: different assumptions, different blind spots, different strengths. This isn’t a bug. It’s the foundation of a useful multi-agent pattern.

The idea is old. Juries deliberate. Academic peer review works because reviewers disagree. Design reviews surface risks that the original author missed. The mechanism is always the same: independent reasoning followed by structured synthesis. LLMs are well-suited to both steps.

Building a conversational agent in Go

Large Language Models are stateless by design. Each API call starts from scratch. The model has no idea what you said thirty seconds ago unless you explicitly pass the conversation history back in. This is a fundamental constraint of the request/response paradigm: the model is a function, not a process.

But real conversations need memory. The user says “my name is Alice” in turn one, and expects the assistant to remember it in turn ten. They build on previous answers, refer back to earlier context, and assume continuity. Bridging this gap between stateless inference and stateful dialogue is one of the first problems you hit when building any conversational system.

Build a support agent that routes itself

A lot of support bots fail for the same reason. They try to do everything with one prompt.

Billing questions, outage reports, refund requests, API errors, onboarding questions. It all goes into one agent, and the prompt turns into a long list of rules, exceptions, and fallback behavior. At first it feels convenient. A few weeks later it feels fragile.

This is where multi-agent systems stop being a buzzword and start being useful.
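The alternative to one sprawling prompt is a cheap classifier in front of small, focused specialists. A sketch of that routing layer, with the classifier stubbed on keywords (in practice it would be a short LLM call, and each specialist an agent with its own narrow prompt; all names here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// Specialist handles one category of ticket with its own focused prompt.
type Specialist func(ticket string) string

// Route classifies the ticket and dispatches it to the matching
// specialist, falling back to escalation for anything unrecognized.
func Route(ticket string, specialists map[string]Specialist, classify func(string) string) string {
	category := classify(ticket)
	agent, ok := specialists[category]
	if !ok {
		return "escalated to a human: " + ticket
	}
	return agent(ticket)
}

func main() {
	specialists := map[string]Specialist{
		"billing": func(t string) string { return "billing agent handling: " + t },
		"outage":  func(t string) string { return "outage agent handling: " + t },
	}
	// Stub classifier: keyword matching in place of an LLM call.
	classify := func(t string) string {
		switch {
		case strings.Contains(t, "invoice"):
			return "billing"
		case strings.Contains(t, "down"):
			return "outage"
		}
		return "unknown"
	}
	fmt.Println(Route("my invoice is wrong", specialists, classify))
}
```

Each specialist's prompt stays short because it only ever sees its own category, and adding a new category is a map entry plus one new prompt rather than another exception in a monolith.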

Go is the right language for production AI agents

AI agents are graduating from demos to production, and the infrastructure choices you make today will shape your system’s reliability, cost, and scalability for years. The language you build on matters.

Go was designed for exactly the kind of systems that AI agents are: networked, concurrent, long-running services that need to be fast, small, and maintainable at scale. Yet most AI tooling still defaults to languages built for interactive data work, not production services.