Environment is the unified class for defining tools, connecting to services, and formatting those tools for any LLM provider.

Environment

from hud import Environment

env = Environment("my-env")

Constructor

| Parameter | Type | Description | Default |
|---|---|---|---|
| `name` | `str` | Environment name | `"environment"` |
| `instructions` | `str \| None` | Description/instructions | `None` |
| `conflict_resolution` | `ConflictResolution` | How to handle tool name conflicts | `PREFIX` |
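
A minimal construction using the parameters above (only `name` and `instructions` are shown; `conflict_resolution` takes the `ConflictResolution` enum):
env = Environment(
    "my-env",
    instructions="Answer questions using the registered tools.",
)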

Context Manager

Environment must be used as an async context manager to connect:
async with env:
    tools = env.as_openai_chat_tools()
    result = await env.call_tool("my_tool", arg="value")

Defining Tools

@env.tool()

Register functions as callable tools:
@env.tool()
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter in text."""
    return text.lower().count(letter.lower())

import httpx

@env.tool()
async def fetch_data(url: str) -> dict:
    """Fetch JSON data from URL."""
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
Tools are automatically documented from type hints and docstrings.
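
Once registered, tools can be invoked through the environment inside its context (via call_tool, covered below):
async with env:
    # Schema comes from the type hints, description from the docstring
    count = await env.call_tool("count_letter", text="strawberry", letter="r")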

Scenarios

Scenarios define evaluation logic with two yields:
@env.scenario("checkout")
async def checkout_flow(product: str):
    # First yield: send prompt, receive answer
    answer = yield f"Add '{product}' to cart and checkout"
    
    # Second yield: return reward based on result
    order_exists = await check_order(product)
    yield 1.0 if order_exists else 0.0
Create Tasks from Scenarios:
task = env("checkout", product="laptop")

async with hud.eval(task) as ctx:
    await agent.run(ctx.prompt)
    await ctx.submit(agent.response)

Connectors

Connect to external services as tool sources.

connect_hub()

Connect to a deployed HUD environment:
env.connect_hub("browser", prefix="browser")
# Tools available as browser_navigate, browser_click, etc.

connect_fastapi()

Import FastAPI routes as tools:
from fastapi import FastAPI

api = FastAPI()

@api.get("/users/{user_id}", operation_id="get_user")
def get_user(user_id: int):
    return {"id": user_id, "name": "Alice"}

env.connect_fastapi(api)
# Tool available as get_user

| Parameter | Type | Description | Default |
|---|---|---|---|
| `app` | `FastAPI` | FastAPI application | Required |
| `name` | `str \| None` | Server name | `app.title` |
| `prefix` | `str \| None` | Tool name prefix | `None` |
| `include_hidden` | `bool` | Include routes with `include_in_schema=False` | `True` |
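
The imported route then behaves like any other tool inside the environment context:
async with env:
    user = await env.call_tool("get_user", user_id=1)
    # {"id": 1, "name": "Alice"}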

connect_openapi()

Import from OpenAPI spec:
env.connect_openapi("https://api.example.com/openapi.json")
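
Each operation becomes a tool named after its operationId. For example, assuming the spec defines a hypothetical list_pets operation:
async with env:
    result = await env.call_tool("list_pets", limit=10)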

connect_server()

Mount an MCPServer or FastMCP directly:
from fastmcp import FastMCP

tools = FastMCP("tools")

@tools.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

env.connect_server(tools)

connect_mcp_config()

Connect via MCP config dict:
env.connect_mcp_config({
    "my-server": {
        "command": "uvx",
        "args": ["some-mcp-server"]
    }
})

connect_image()

Connect to a Docker image via stdio:
env.connect_image("mcp/fetch")
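
The reference mcp/fetch image exposes a fetch tool, so after connecting you might call (a sketch; the tool name and arguments follow the upstream server):
async with env:
    page = await env.call_tool("fetch", url="https://example.com")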

Tool Formatting

Convert tools to provider-specific formats.

OpenAI

# Chat Completions API
tools = env.as_openai_chat_tools()
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# Responses API
tools = env.as_openai_responses_tools()

# Agents SDK (requires openai-agents)
tools = env.as_openai_agent_tools()

Anthropic/Claude

tools = env.as_claude_tools()
response = await client.messages.create(
    model="claude-sonnet-4-5",
    messages=messages,
    tools=tools,
)

Gemini

tools = env.as_gemini_tools()
config = env.as_gemini_tool_config()
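
A sketch of passing both into the google-genai client (assuming the formatted values map to GenerateContentConfig's tools and tool_config fields; verify against your SDK version):
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Count the r's in strawberry",
    config=types.GenerateContentConfig(
        tools=env.as_gemini_tools(),
        tool_config=env.as_gemini_tool_config(),
    ),
)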

LangChain

# Requires langchain-core
tools = env.as_langchain_tools()
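
These are standard LangChain tools, so they bind to any chat model with tool-calling support (the model choice here is illustrative):
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o").bind_tools(env.as_langchain_tools())
response = await model.ainvoke("Count the r's in strawberry")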

LlamaIndex

# Requires llama-index-core
tools = env.as_llamaindex_tools()
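
A sketch with a LlamaIndex agent (the FunctionAgent import path reflects recent llama-index-core releases and is an assumption):
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

agent = FunctionAgent(
    tools=env.as_llamaindex_tools(),
    llm=OpenAI(model="gpt-4o"),
)
result = await agent.run("Count the r's in strawberry")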

Google ADK

# Requires google-adk
tools = env.as_adk_tools()
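
A sketch wiring the tools into an ADK agent (the agent name, model, and instruction are illustrative):
from google.adk.agents import Agent

agent = Agent(
    name="env_agent",
    model="gemini-2.0-flash",
    instruction="Use the available tools to answer.",
    tools=env.as_adk_tools(),
)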

Calling Tools

call_tool()

Execute tools with auto-format detection:
# Simple call
result = await env.call_tool("my_tool", arg="value")

# From OpenAI tool call
result = await env.call_tool(response.choices[0].message.tool_calls[0])

# From Claude tool use
result = await env.call_tool(response.content[0])  # tool_use block
Returns result in matching format (OpenAI tool call → OpenAI tool message, etc.).
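
Because results come back in the caller's format, a tool-use loop stays short. A sketch with the Chat Completions API:
messages = [{"role": "user", "content": "Count the r's in strawberry"}]
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=env.as_openai_chat_tools(),
)
message = response.choices[0].message
if message.tool_calls:
    messages.append(message)
    for tool_call in message.tool_calls:
        # call_tool returns an OpenAI-format tool message here
        messages.append(await env.call_tool(tool_call))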

Mock Mode

Test without real connections:
env.mock()  # Enable mock mode

# Set specific mock outputs
env.mock_tool("navigate", "Navigation successful")
env.mock_tool("screenshot", b"fake_image_data")

async with env:
    result = await env.call_tool("navigate", url="https://example.com")
    # Returns "Navigation successful" instead of actually navigating

env.unmock()  # Disable mock mode

| Method | Description |
|---|---|
| `mock(enable=True)` | Enable/disable mock mode |
| `unmock()` | Disable mock mode |
| `mock_tool(name, output)` | Set specific mock output |
| `is_mock` | Check if mock mode is enabled |

Serving as MCP Server

Environment can serve its tools over MCP protocols, either standalone or mounted on an existing server.

serve()

Start a standalone MCP server:
from hud import Environment

env = Environment("my-env")

@env.tool()
def greet(name: str) -> str:
    return f"Hello, {name}!"

# Run as MCP server (blocking)
env.serve()

| Parameter | Type | Description | Default |
|---|---|---|---|
| `transport` | `Literal["stdio", "sse", "streamable-http"]` | Transport protocol | `"streamable-http"` |
| `host` | `str` | Host address to bind | `"0.0.0.0"` |
| `port` | `int` | Port to bind | `8000` |

# Serve over stdio (for CLI tools)
env.serve(transport="stdio")

# Serve over HTTP on custom port
env.serve(transport="streamable-http", host="0.0.0.0", port=8765)

http_app()

Get a Starlette/ASGI app to mount on an existing FastAPI server. This is inherited from FastMCP and enables deployment on platforms like Railway, Fly.io, or Vercel.
from contextlib import asynccontextmanager
from fastapi import FastAPI
from hud import Environment

env = Environment("my-env")

@env.tool()
def my_tool(arg: str) -> str:
    return f"Got: {arg}"

# Create the MCP app with stateless_http=True for multi-replica deployments
mcp_app = env.http_app(path="/", stateless_http=True)

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Enter BOTH the environment context AND the MCP app's lifespan
    async with env, mcp_app.router.lifespan_context(mcp_app):
        yield

app = FastAPI(lifespan=lifespan, redirect_slashes=False)

# Mount the MCP app
app.mount("/mcp", mcp_app)

# Your other FastAPI routes work normally
@app.get("/health")
def health():
    return {"status": "ok"}

| Parameter | Type | Description | Default |
|---|---|---|---|
| `path` | `str \| None` | Internal path for the MCP endpoint | `"/"` |
| `stateless_http` | `bool` | Stateless mode for multi-replica deployments | `False` |
| `middleware` | `list[ASGIMiddleware] \| None` | Starlette middleware | `None` |

Lifespan is critical. You must enter both env (the Environment context) and mcp_app.router.lifespan_context(mcp_app) (the MCP session manager). Missing either will cause tools to fail or sessions to not initialize.

Stateless HTTP Mode

Enable stateless_http=True when deploying to platforms with multiple replicas (Railway, Fly.io, etc.). This ensures each request creates a fresh transport context, eliminating session affinity requirements:
# For single-replica or sticky sessions
mcp_app = env.http_app(path="/")

# For multi-replica deployments (Railway, Fly.io, Vercel)
mcp_app = env.http_app(path="/", stateless_http=True)

Authentication via Headers

For authenticated tools, use FastMCP’s get_http_headers() to extract the API key:
from fastmcp.server.dependencies import get_http_headers

@env.tool()
async def protected_tool(query: str) -> dict:
    """A tool that requires authentication."""
    headers = get_http_headers()
    auth_header = headers.get("authorization", "")
    
    if not auth_header.startswith("Bearer "):
        return {"error": "Missing API key"}
    
    api_key = auth_header[7:]  # Remove "Bearer " prefix
    # Validate api_key and proceed...
    return {"result": "authenticated"}
MCP clients can then connect at http://your-server/mcp:
# Client connecting to mounted environment
env.connect_url("http://localhost:8000/mcp")

Properties

| Property | Type | Description |
|---|---|---|
| `name` | `str` | Environment name |
| `prompt` | `str \| None` | Default prompt (set by scenarios or agent code) |
| `is_connected` | `bool` | `True` if in context |
| `connections` | `dict[str, Connector]` | Active connections |
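
A quick sketch inspecting these at runtime:
async with env:
    assert env.is_connected
    print(env.name, list(env.connections))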

Creating Tasks

Call the environment to create a Task:
# With scenario
task = env("checkout", product="laptop")

# Without scenario (just the environment)
task = env()
Then run with hud.eval():
async with hud.eval(task, variants={"model": ["gpt-4o"]}) as ctx:
    ...
