## What it is
The llm package is the thin waist of Phero: a minimal interface for chat models plus a small tool system.
Higher-level packages (like agent) build on this to orchestrate multi-turn loops and tool execution.
For convenience, Phero re-exports a few OpenAI chat types (messages, tool calls, and role constants), so most of the framework can depend on a single import.
## Using an LLM backend
Any backend that implements llm.LLM can power agents and tools.
Phero includes an OpenAI-compatible client at llm/openai.
- Choose a backend (e.g. `llm/openai` or your own implementation)
- Pass that client into `agent.New`
- Optionally attach tools created via `llm.NewTool`
## Function tools
The main way to integrate capabilities is via function tools. In `examples/conversational-agent`, a `get_current_time` tool is exposed to the agent.
```go
type TimeInput struct{}

type TimeOutput struct {
	CurrentTime string `json:"current_time" jsonschema:"description=The current local time in RFC3339 format"`
}

func getCurrentTime(_ context.Context, _ *TimeInput) (*TimeOutput, error) {
	return &TimeOutput{CurrentTime: time.Now().Format(time.RFC3339)}, nil
}

tool, err := llm.NewTool(
	"get_current_time",
	"Get the current local time",
	getCurrentTime,
)
if err != nil {
	panic(err)
}
```
Tools are added to an agent with `AddTool`, and the agent runs them when the model requests a tool call.
## Putting it together
The minimal loop is: create an LLM client, register one or more tools, then run an agent.
This is the core pattern used throughout examples/.
```sh
# from repo root
go run ./examples/simple-agent
go run ./examples/conversational-agent
```