## What it does

The `agent` package provides a minimal chat-loop orchestration layer.

An `Agent` is configured with a name and a system prompt (description). When you call `Run`, it:
- Builds a session (system prompt + optional memory + user input)
- Calls the LLM
- Executes any requested tool calls
- Appends tool results and repeats until the model returns a final message
## Typical workflow

Most examples follow the same flow:

- Create an `llm.LLM` client (any backend that implements the interface)
- Create one or more tools with `llm.NewTool`
- Create an agent with a clear system prompt
- Add tools (optional) and memory (optional)
- Call `Run` for each user input
## Example: minimal agent + tool

This is the essence of the Simple Agent example: define a Go function, wrap it as an LLM tool, add it to the agent, then run.
```go
package main

import (
	"context"
	"fmt"

	"github.com/henomis/phero/agent"
	"github.com/henomis/phero/llm"
)

type CalculatorInput struct {
	Operation string  `json:"operation"`
	A         float64 `json:"a"`
	B         float64 `json:"b"`
}

type CalculatorOutput struct {
	Result float64 `json:"result"`
	Error  string  `json:"error,omitempty"`
}

func calculate(_ context.Context, input *CalculatorInput) (*CalculatorOutput, error) {
	switch input.Operation {
	case "add":
		return &CalculatorOutput{Result: input.A + input.B}, nil
	default:
		return &CalculatorOutput{Error: "unknown operation"}, nil
	}
}

func main() {
	ctx := context.Background()

	// Any llm.LLM works here (OpenAI-compatible, local, etc.)
	llmClient := /* ... */

	calcTool, err := llm.NewTool(
		"calculator",
		"Performs basic arithmetic operations",
		calculate,
	)
	if err != nil {
		panic(err)
	}

	a, err := agent.New(
		llmClient,
		"Math Assistant",
		"You are a helpful math assistant. Use the calculator tool to perform calculations accurately.",
	)
	if err != nil {
		panic(err)
	}

	if err := a.AddTool(calcTool); err != nil {
		panic(err)
	}

	answer, err := a.Run(ctx, "If I have 15 apples and give away 7, then buy 23 more, how many do I have?")
	if err != nil {
		panic(err)
	}
	fmt.Println(answer)
}
```
## Memory

If you attach a `memory.Memory`, the agent retrieves stored messages before each run and saves the conversation at the end of the call.
```go
// From examples/conversational-agent (edited for brevity)
conversationMemory := memory.New(20)
a.SetMemory(conversationMemory)

// Guardrail against tool-call loops
a.SetMaxIterations(10)
```
## Tools

A tool is a Go function wrapped as an `*llm.Tool`. When the LLM requests a tool call, the agent:

- Looks up the tool by name
- Calls its handler with the JSON arguments
- Appends the tool result as a `tool`-role message

Tool handlers can return any Go value; non-string results are JSON-marshaled before being added to the chat.
## Common errors

- `ErrUndefinedLLM`: creating an agent with a nil LLM client
- `ErrNameRequired`: empty agent name
- `ErrDescriptionRequired`: empty system prompt / description
- `ToolAlreadyExistsError`: tool name collision on `AddTool`
- `ToolUnknownError`: model requested a tool that is not registered
- `ErrMaxIterationsReached`: iteration cap hit (often due to repeated tool calls)
## Run an example

Try the conversational agent example (a REPL with memory). Follow the example’s README for provider setup.
```shell
# from repo root
go run ./examples/simple-agent
go run ./examples/conversational-agent
```