
Simone Vellei

👨 Senior Backend Developer at Cybus | ☁️ Cloud Adept | 🐧 Linux/IoT Expert | 🏝️ Full-Remote Addict

Build a support agent that routes itself

A lot of support bots fail for the same reason. They try to do everything with one prompt.

Billing questions, outage reports, refund requests, API errors, onboarding questions. It all goes into one agent, and the prompt turns into a long list of rules, exceptions, and fallback behavior. At first it feels convenient. A few weeks later it feels fragile.

This is where multi-agent systems stop being a buzzword and start being useful.
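The core move is to split the one giant prompt into a router plus specialists. The sketch below shows the dispatch pattern with a hypothetical keyword-based classifier; in a real system that classification step would itself be an LLM call, but the routing structure is the same.

```go
package main

import (
	"fmt"
	"strings"
)

// routeQuery is an illustrative first-pass router. It maps a user query
// to the name of a specialist agent instead of handing everything to one
// do-it-all prompt.
func routeQuery(query string) string {
	q := strings.ToLower(query)
	switch {
	case strings.Contains(q, "refund") || strings.Contains(q, "invoice"):
		return "billing"
	case strings.Contains(q, "outage") || strings.Contains(q, "down"):
		return "incidents"
	case strings.Contains(q, "api") || strings.Contains(q, "error"):
		return "engineering"
	default:
		return "general"
	}
}

func main() {
	for _, q := range []string{
		"I was charged twice, I need a refund",
		"Your API returns a 500 error on POST /users",
	} {
		fmt.Printf("%q -> %s agent\n", q, routeQuery(q))
	}
}
```

Each specialist then gets a short, focused prompt and only the tools it needs, instead of the long list of rules and exceptions described above.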

Go is the right language for production AI agents

AI agents are graduating from demos to production, and the infrastructure choices you make today will shape your system’s reliability, cost, and scalability for years. The language you build on matters.

Go was designed for exactly the kind of systems that AI agents are: networked, concurrent, long-running services that need to be fast, small, and maintainable at scale. Yet most AI tooling still defaults to languages built for interactive data work, not production services.
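That concurrency fit is concrete, not rhetorical: serving many in-flight agent requests in Go is one goroutine per request plus a `sync.WaitGroup`. A minimal sketch (the `handle` function stands in for an LLM call plus tool execution):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// handle simulates serving one agent request; in production this would
// wrap a model call and any tool invocations.
func handle(id int) string {
	time.Sleep(10 * time.Millisecond) // stand-in for network latency
	return fmt.Sprintf("request %d done", id)
}

func main() {
	var wg sync.WaitGroup
	results := make([]string, 5)
	for i := range results {
		wg.Add(1)
		go func(i int) { // one lightweight goroutine per in-flight request
			defer wg.Done()
			results[i] = handle(i)
		}(i)
	}
	wg.Wait() // block until every request has finished
	fmt.Println(len(results), "requests served concurrently")
}
```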

From LinGoose to Phero: why I rebuilt my Go AI framework from scratch

There’s a moment every developer knows. You’re staring at a codebase you built with your own hands, something you poured months into, something that works, and you realize it’s fighting you. Not because it’s broken, but because the world around it moved somewhere your original design never anticipated.

That moment came for me with LinGoose.


A bit of history

I built LinGoose in 2023 as a Go framework for LLM-powered applications. It grew, people used it, and I was proud of it. But after a while I hit a wall I couldn’t design my way around: LinGoose was built around pipelines. A single flow, a single thread, steps executing in sequence.

Designing multi-agent architectures in Go

In the previous article, we built a capable agent with multiple tools offering real capabilities (Python, HTTP, time). It worked.

But there’s a problem with that design.

As you add more tools, the agent needs to:

  • Track all tool definitions
  • Reason about which tool to use
  • Handle all execution contexts
  • Maintain one giant conversation history

This doesn’t scale well.
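One way to avoid that is to give each specialist its own small tool set and route between them. The types and registry below are illustrative, not from any specific framework:

```go
package main

import "fmt"

// Tool and Agent are illustrative types for the sketch.
type Tool struct{ Name string }

type Agent struct {
	Name  string
	Tools []Tool // each specialist only sees its own tools
}

// registry maps an intent to the agent that owns it, so no single agent
// has to track every tool definition or carry one giant conversation.
var registry = map[string]Agent{
	"billing": {Name: "billing", Tools: []Tool{{Name: "lookup_invoice"}, {Name: "issue_refund"}}},
	"devops":  {Name: "devops", Tools: []Tool{{Name: "query_status_page"}}},
}

func dispatch(intent string) (Agent, bool) {
	a, ok := registry[intent]
	return a, ok
}

func main() {
	if a, ok := dispatch("billing"); ok {
		fmt.Printf("%s agent with %d tools\n", a.Name, len(a.Tools))
	}
}
```

Adding a new capability then means adding a new entry to the registry, not growing one agent's prompt and tool list.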

In this article, we’re going to refactor it into something much more powerful.

Building a stateful multi-tool agent in Go

In the previous article, we built a minimal agent:

  • One tool
  • In-memory conversation
  • A clean reasoning loop

Now we’re going to level it up.

This version introduces two major upgrades:

  1. Persistent memory across sessions
  2. Multiple tools with different capabilities

We are still using go-openai as our client library, compatible with OpenAI and Ollama.
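For reference, pointing the go-openai client at a local Ollama server is just a matter of overriding the base URL (this assumes the `sashabaranov/go-openai` package and Ollama's default port):

```go
package main

import openai "github.com/sashabaranov/go-openai"

func newClient() *openai.Client {
	// Ollama exposes an OpenAI-compatible API; the token is ignored locally.
	config := openai.DefaultConfig("ollama")
	config.BaseURL = "http://localhost:11434/v1" // Ollama's default endpoint
	return openai.NewClientWithConfig(config)
}

func main() {
	_ = newClient()
}
```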

But now the agent feels much closer to something you’d actually deploy.

What’s new in this version?

Compared to the previous article, this main.go adds:

How to build an AI agent from scratch in Go

Hi, I’m Simone Vellei. You might remember me from such Go-and-AI adventures as “Leveraging Go and Redis for Efficient Retrieval Augmented Generation” and “Empowering Go: unveiling the synergy of AI and Q&A pipelines”.

I’m the creator of LinGoose, an open-source framework built to make developing AI-powered applications in Go clean, modular, and production-friendly. I built it because I love Go’s simplicity and performance, and I wanted the same elegance when working with large language models.