What it does
The agent package provides a minimal chat-loop orchestration layer.
An Agent is configured with a name and a system prompt (description). When you call Run, it:
- Builds a session (system prompt + optional memory + user input)
- Calls the LLM
- Executes any requested tool calls
- Appends tool results and repeats until the model returns a final message
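The loop above can be sketched in plain Go. The types and the fakeLLM stub below are simplified stand-ins for illustration only, not the package's real API:

```go
package main

import "fmt"

// Simplified stand-ins; the real package's types differ.
type Message struct {
	Role    string
	Content string
}

type toolCall struct {
	Name string
	Args string
}

type reply struct {
	Content   string
	ToolCalls []toolCall
}

// fakeLLM requests a tool call first, then returns a final message
// once it sees a tool result in the history.
func fakeLLM(history []Message) reply {
	for _, m := range history {
		if m.Role == "tool" {
			return reply{Content: "The answer is 8."}
		}
	}
	return reply{ToolCalls: []toolCall{{Name: "calculator", Args: `{"operation":"add","a":5,"b":3}`}}}
}

func run(input string) string {
	// Build the session: system prompt + user input.
	history := []Message{
		{Role: "system", Content: "You are a math assistant."},
		{Role: "user", Content: input},
	}
	for i := 0; i < 10; i++ { // iteration cap guards against tool-call loops
		r := fakeLLM(history)
		if len(r.ToolCalls) == 0 {
			return r.Content // final message: the loop ends
		}
		for range r.ToolCalls {
			// Execute the tool and append its result as a tool-role message.
			history = append(history, Message{Role: "tool", Content: `{"result":8}`})
		}
	}
	return ""
}

func main() {
	fmt.Println(run("What is 5 + 3?")) // The answer is 8.
}
```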
Typical workflow
Most examples follow the same flow:
- Create an llm.LLM client (any backend that implements the interface)
- Create one or more tools with llm.NewTool
- Create an agent with a clear system prompt
- Add tools (optional) and memory (optional)
- Call Run for each user input
Example: minimal agent + tool
This is the essence of the Simple Agent example: define a Go function, wrap it as an LLM tool, add it to the agent, then run.
```go
package main

import (
	"context"
	"fmt"

	"github.com/henomis/phero/agent"
	"github.com/henomis/phero/llm"
)

type CalculatorInput struct {
	Operation string  `json:"operation"`
	A         float64 `json:"a"`
	B         float64 `json:"b"`
}

type CalculatorOutput struct {
	Result float64 `json:"result"`
	Error  string  `json:"error,omitempty"`
}

func calculate(_ context.Context, input *CalculatorInput) (*CalculatorOutput, error) {
	switch input.Operation {
	case "add":
		return &CalculatorOutput{Result: input.A + input.B}, nil
	default:
		return &CalculatorOutput{Error: "unknown operation"}, nil
	}
}

func main() {
	ctx := context.Background()

	// Any llm.LLM works here (OpenAI-compatible, local, etc.)
	var llmClient llm.LLM // initialize with your provider's client

	calcTool, err := llm.NewTool(
		"calculator",
		"Performs basic arithmetic operations",
		calculate,
	)
	if err != nil {
		panic(err)
	}

	a, err := agent.New(
		llmClient,
		"Math Assistant",
		"You are a helpful math assistant. Use the calculator tool to perform calculations accurately.",
	)
	if err != nil {
		panic(err)
	}

	if err := a.AddTool(calcTool); err != nil {
		panic(err)
	}

	result, err := a.Run(ctx, "If I have 15 apples and give away 7, then buy 23 more, how many do I have?")
	if err != nil {
		panic(err)
	}
	fmt.Println(result.Content)
}
```
Agent handoffs
An agent can hand work to another agent at runtime. When the model decides a handoff is needed,
Run returns with Result.HandoffAgent set to the target agent — the caller can then
invoke it with the same or a derived input.
Register handoff targets with AddHandoff. Phero automatically creates a synthetic tool
named handoff_to_<agent> whose description is the target agent's system prompt,
so the model knows when and why to route work there.
```go
// Two specialist agents
researcher, _ := agent.New(llmClient, "researcher", "You retrieve and summarize facts.")
writer, _ := agent.New(llmClient, "writer", "You write polished prose from notes.")

orchestrator, _ := agent.New(llmClient, "orchestrator", "Decide whether to research or write.")
_ = orchestrator.AddHandoff(researcher)
_ = orchestrator.AddHandoff(writer)

result, err := orchestrator.Run(ctx, "Write a short bio of Marie Curie.")
if err != nil {
	panic(err)
}

if result.HandoffAgent != nil {
	// The orchestrator decided to hand off; continue with the target agent
	result, err = result.HandoffAgent.Run(ctx, result.Content)
	if err != nil {
		panic(err)
	}
}
fmt.Println(result.Content)
```
See examples/handoff for a full working example.
Run result
Agent.Run returns a *Result with three fields:
- Content string: the model's final text response
- HandoffAgent *Agent: set when the model triggered a handoff to another agent; nil otherwise
- Summary *trace.RunSummary: aggregated observability data for the run (token usage, latency, tool call counts, etc.)
```go
result, err := a.Run(ctx, input)
if err != nil {
	panic(err)
}
fmt.Println(result.Content)

if result.Summary != nil {
	fmt.Printf("tokens used: in=%d out=%d\n",
		result.Summary.Usage.InputTokens,
		result.Summary.Usage.OutputTokens)
	fmt.Printf("llm latency: %s\n", result.Summary.Latency.LLM)
}
```
Tracing
Agents support opt-in observability via SetTracer. When a tracer is attached, the agent emits typed events for
start and end, each loop iteration, tool calls and results, plus memory retrieval and persistence.
```go
import (
	"os"

	"github.com/henomis/phero/trace/text"
)

a.SetTracer(text.New(os.Stderr))
```
The built-in text tracer is intended for local development and debugging. For raw LLM tracing outside the agent loop, see trace.NewLLM.
Memory
If you attach a memory.Memory, the agent will retrieve messages before each run and will save the
conversation at the end of the call.
```go
// From examples/conversational-agent (edited for brevity)
conversationMemory := memory.New(20)
a.SetMemory(conversationMemory)

// Guardrail against tool-call loops
a.SetMaxIterations(10)
```
Tools
A tool is a Go function wrapped as an *llm.Tool. When the LLM requests a tool call, the agent:
- Looks up the tool by name
- Calls its handler with the JSON arguments
- Appends the tool result as a tool-role message
Tool handlers can return any Go value; non-string results are JSON-marshaled before being added to the chat.
Common errors
- ErrUndefinedLLM: creating an agent with a nil LLM client
- ErrNameRequired: empty agent name
- ErrDescriptionRequired: empty system prompt / description
- ToolAlreadyExistsError: tool name collision on AddTool or AddHandoff
- ToolUnknownError: model requested a tool that is not registered
- ErrMaxIterationsReached: iteration cap hit (often due to repeated tool calls)
Run an example
Try the conversational agent example (REPL with memory). Follow the example’s README for provider setup.
```sh
# from repo root
go run ./examples/simple-agent
go run ./examples/conversational-agent

# Agent handoff pattern
go run ./examples/handoff
```