What it is
The llm package is the thin waist of Phero: a minimal interface for chat models plus a small tool system.
Higher-level packages (like agent) build on this to orchestrate multi-turn loops and tool execution.
For convenience, Phero re-exports a few OpenAI chat types (messages, tool calls, and role constants), so most of the framework can depend on a single import.
Using an LLM backend
Any backend that implements llm.LLM can power agents and tools.
Phero includes an OpenAI-compatible client at llm/openai and an Anthropic Messages API client at llm/anthropic.
- Choose a backend (e.g. llm/openai, llm/anthropic, or your own implementation)
- Pass that client into agent.New
- Optionally attach tools created via llm.NewTool
Anthropic backend
llm/anthropic implements llm.LLM using Anthropic's Messages API.
It keeps the same OpenAI-shaped message/tool types at the boundary, so the rest of the framework (agents, tools, memory)
doesn't need to change.
import (
	"os"

	"github.com/henomis/phero/llm/anthropic"
)

llmClient := anthropic.New(
	os.Getenv("ANTHROPIC_API_KEY"),
	anthropic.WithModel("claude-sonnet-4-6"),
	anthropic.WithMaxTokens(2048),
)
If you pass an empty API key, the underlying Anthropic SDK will fall back to its environment variable configuration.
Function tools
The main way you integrate capabilities is via function tools.
In examples/conversational-agent,
a get_current_time tool is exposed to the agent.
type TimeInput struct{}

type TimeOutput struct {
	CurrentTime string `json:"current_time" jsonschema:"description=The current local time in RFC3339 format"`
}

func getCurrentTime(_ context.Context, _ *TimeInput) (*TimeOutput, error) {
	return &TimeOutput{CurrentTime: time.Now().Format(time.RFC3339)}, nil
}

tool, err := llm.NewTool(
	"get_current_time",
	"Get the current local time",
	getCurrentTime,
)
if err != nil {
	panic(err)
}
Tools are added to an agent with AddTool, and the agent will run them when the model requests a tool call.
Tool middleware
Tools support middleware via tool.Use(...).
This is the place to add validation, permission checks, logging, or other cross-cutting behavior without baking it into each tool handler.
timeTool, err := llm.NewTool(
	"get_current_time",
	"Get the current local time",
	getCurrentTime,
)
if err != nil {
	panic(err)
}

timeTool.Use(func(tool *llm.Tool, next llm.ToolHandler) llm.ToolHandler {
	return func(ctx context.Context, arguments string) (any, error) {
		fmt.Printf("running %s with args %s\n", tool.Name(), arguments)
		return next(ctx, arguments)
	}
})
Middleware order is preserved: if you call tool.Use(m1, m2), then m1 runs before m2.
This replaces older per-tool validation helpers and keeps approval logic at wiring time.
Tracing raw LLM calls
The trace package can wrap any llm.LLM with trace.NewLLM. This is useful when you want
observability around direct Execute calls without going through an agent.
import (
	"github.com/henomis/phero/trace"
	"github.com/henomis/phero/trace/text"
)
traced := trace.NewLLM(llmClient, text.New(os.Stderr))
result, err := traced.Execute(ctx, messages, tools)
When called inside an agent, request and response events are automatically annotated with the agent name and iteration number.
Putting it together
The minimal loop is: create an LLM client, register one or more tools, then run an agent.
This is the core pattern used throughout examples/.
# from repo root
go run ./examples/simple-agent
go run ./examples/conversational-agent