Integrations

FastAgentic integrates with best-of-breed tools for observability, guardrails, memory, evaluation, and more. Rather than building everything from scratch, we provide hooks and providers to connect your agents with specialized solutions.

Philosophy

FastAgentic owns the deployment layer. Specialized tools handle their domains better.

We build hooks. You choose the tools.

Integration Categories

Observability

Track LLM calls, token usage, costs, and trace agent execution.

| Integration | What It Does | Status |
|---|---|---|
| Langfuse | LLM tracing, prompt analytics, cost tracking | v0.3 |
| Logfire | PydanticAI-native observability, structured logging | v0.3 |
| Datadog | APM integration, dashboards, alerting | v0.3 |
| OTEL (built-in) | OpenTelemetry span export | v0.1 |

Guardrails & Security

Protect against prompt injection, validate outputs, enforce content policies.

| Integration | What It Does | Status |
|---|---|---|
| Lakera | Prompt injection detection, content moderation | v0.3 |
| Guardrails AI | Output validation with RAIL specs | v0.3 |
| NeMo Guardrails | Conversational guardrails, topic control | v0.3 |

Memory

Persistent user memory, session context, and conversation history.

| Integration | What It Does | Status |
|---|---|---|
| Mem0 | Persistent user memory across sessions | v0.3 |
| Zep | Session memory with auto-summarization | v0.3 |
| Redis (built-in) | Simple key-value memory | v0.2 |

LLM Gateway

Rate limiting, fallbacks, caching, and multi-provider routing.

| Integration | What It Does | Status |
|---|---|---|
| Portkey | Gateway with fallbacks, caching, load balancing | v0.3 |
| LiteLLM | Multi-provider routing, unified API | v0.3 |
| Simple limiter (built-in) | Basic RPM/TPM rate limiting | v0.3 |
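To make the built-in limiter's behavior concrete: an RPM budget is typically enforced with a sliding window over recent request timestamps. The sketch below is an illustration of that idea only, not FastAgentic's actual implementation; the class name and API are hypothetical.

```python
import time
from collections import deque

class SimpleRateLimiter:
    """Sliding-window limiter: allow at most `rpm` requests per 60 seconds.

    Illustrative sketch only; FastAgentic's built-in limiter may differ.
    """

    def __init__(self, rpm: int):
        self.rpm = rpm
        self.timestamps = deque()  # monotonic times of recently allowed requests

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Evict timestamps that fell out of the 60-second window
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) < self.rpm:
            self.timestamps.append(now)
            return True
        return False

limiter = SimpleRateLimiter(rpm=2)
print(limiter.allow(0.0), limiter.allow(1.0), limiter.allow(2.0))  # True True False
```

A TPM (tokens-per-minute) variant works the same way, except each entry carries a token count and the window sums counts instead of counting requests.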

Evaluation

Score agent outputs, track experiments, measure quality.

| Integration | What It Does | Status |
|---|---|---|
| Braintrust | Experiment tracking, scoring, datasets | v0.4 |
| LangSmith | Trace-based evaluation, feedback | v0.4 |
| Maxim | Production eval pipelines | v0.4 |

Human-in-the-Loop

Approval workflows, human escalation, interactive sessions.

| Integration | What It Does | Status |
|---|---|---|
| HumanLayer | Multi-channel approval (Slack, Email) | v0.4 |
| Webhooks (built-in) | Custom approval endpoints | v0.4 |

Prompt Management

Version control, A/B testing, and prompt deployment.

| Integration | What It Does | Status |
|---|---|---|
| PromptLayer | Versioning, A/B testing, analytics | v0.4 |
| Latitude | Prompt CMS, publishing workflow | v0.4 |
| Agenta | Prompt + eval workflow | v0.4 |

Quick Start

Install Integration

```shell
# Observability
pip install fastagentic[langfuse]
pip install fastagentic[logfire]

# Guardrails
pip install fastagentic[lakera]
pip install fastagentic[guardrails]

# Memory
pip install fastagentic[mem0]
pip install fastagentic[zep]

# All first-class integrations
pip install fastagentic[integrations]
```

Configure Hooks

```python
from fastagentic import App
from fastagentic.integrations.langfuse import LangfuseHook
from fastagentic.integrations.lakera import LakeraHook
from fastagentic.integrations.mem0 import Mem0Provider

app = App(
    title="My Agent",
    version="1.0.0",

    # Global hooks (apply to all endpoints)
    hooks=[
        LangfuseHook(
            public_key="pk-...",
            secret_key="sk-...",
        ),
        LakeraHook(
            api_key="...",
            on_failure="warn",  # fail-open
        ),
    ],

    # Memory provider
    memory=Mem0Provider(api_key="..."),
)
```

Per-Endpoint Hooks

```python
from fastagentic import agent_endpoint
from fastagentic.integrations.guardrails import GuardrailsAIHook
from fastagentic.integrations.braintrust import BraintrustHook

@agent_endpoint(
    path="/triage",
    runnable=...,
    post_hooks=[
        GuardrailsAIHook(rail="triage_output.rail"),
    ],
    eval_hooks=[
        BraintrustHook(project="support-triage"),
    ],
)
async def triage(ticket: TicketIn) -> TicketOut:
    ...
```

Configuration Patterns

Environment Variables

All integrations support environment variable configuration:

```shell
# Langfuse
export LANGFUSE_PUBLIC_KEY="pk-..."
export LANGFUSE_SECRET_KEY="sk-..."

# Lakera
export LAKERA_API_KEY="..."

# Mem0
export MEM0_API_KEY="..."

# Portkey
export PORTKEY_API_KEY="..."
```

```python
from fastagentic import App
from fastagentic.integrations.langfuse import LangfuseHook

# Automatically reads from environment
app = App(hooks=[LangfuseHook()])
```

Configuration File

```yaml
# config/settings.yaml
integrations:
  langfuse:
    public_key: ${LANGFUSE_PUBLIC_KEY}
    secret_key: ${LANGFUSE_SECRET_KEY}
    host: https://cloud.langfuse.com  # or self-hosted

  lakera:
    api_key: ${LAKERA_API_KEY}
    on_failure: warn  # warn | reject

  mem0:
    api_key: ${MEM0_API_KEY}
```

```python
from fastagentic import App
from fastagentic.config import load_config

config = load_config("config/settings.yaml")
app = App.from_config(config)
```

Hook Execution Model

Blocking vs Non-Blocking

| Hook Type | Execution | Use Case |
|---|---|---|
| pre_hooks | Blocking | Input validation, guardrails |
| post_hooks | Blocking | Output filtering, transformation |
| eval_hooks | Non-blocking | Async scoring, doesn't delay response |
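The execution model above can be sketched as a dispatch loop: pre- and post-hooks run inline on the request path, while eval hooks are scheduled as fire-and-forget tasks. This is a minimal illustration with hypothetical names, not FastAgentic's internals:

```python
import asyncio

async def run_endpoint(handler, payload, pre_hooks=(), post_hooks=(), eval_hooks=()):
    # Blocking: every pre-hook runs (and may reject) before the handler
    for hook in pre_hooks:
        payload = await hook(payload)
    result = await handler(payload)
    # Blocking: post-hooks may filter or transform the output
    for hook in post_hooks:
        result = await hook(result)
    # Non-blocking: eval hooks are scheduled; the response returns immediately
    for hook in eval_hooks:
        asyncio.ensure_future(hook(result))
    return result

async def main():
    calls = []

    async def pre(x):
        calls.append("pre")
        return x

    async def handler(x):
        calls.append("handler")
        return x * 2

    async def post(x):
        calls.append("post")
        return x

    async def evaluate(x):
        calls.append("eval")

    result = await run_endpoint(handler, 21, [pre], [post], [evaluate])
    before = list(calls)    # at response time, eval has not run yet
    await asyncio.sleep(0)  # yield once so the scheduled eval task can run
    return result, before, calls

result, before, after = asyncio.run(main())
```

Note that the eval hook only runs after the response has already been returned, which is why eval scoring adds no latency to the request path.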

Error Handling

Configure fail-open or fail-closed per integration:

```python
# Fail-closed: Block if Lakera fails
LakeraHook(api_key="...", on_failure="reject")

# Fail-open: Log warning, continue if Lakera fails
LakeraHook(api_key="...", on_failure="warn")
```
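The distinction comes down to how a hook's *own* errors are handled. As a hypothetical sketch of that dispatch logic (not Lakera's or FastAgentic's actual code):

```python
class HookRejection(Exception):
    """Raised when a fail-closed hook blocks the request."""

def apply_guardrail(check, payload, on_failure="reject"):
    """Run a guardrail check; `on_failure` decides what a check *error* means.

    - "reject" (fail-closed): any error in the check blocks the request.
    - "warn"   (fail-open):   errors are logged and the request proceeds.
    """
    try:
        check(payload)
    except Exception as exc:
        if on_failure == "reject":
            raise HookRejection(f"guardrail unavailable: {exc}") from exc
        print(f"warning: guardrail failed open: {exc}")
    return payload

def flaky_check(payload):
    # Simulates the guardrail service being unreachable
    raise ConnectionError("guardrail service unreachable")

# Fail-open: request passes through despite the check error
assert apply_guardrail(flaky_check, {"q": "hi"}, on_failure="warn") == {"q": "hi"}
```

Fail-closed is the safer default for security-critical checks; fail-open trades some protection for availability when the guardrail service itself is down.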

Building Custom Integrations

Hook Interface

```python
from fastagentic.hooks import BaseHook, HookContext

class MyCustomHook(BaseHook):
    # Lifecycle events this hook subscribes to
    hooks = ["on_llm_start", "on_llm_end"]

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.client = MyClient(api_key)  # MyClient: stand-in for your SDK client

    async def on_llm_start(self, ctx: HookContext):
        # Pre-LLM logic
        pass

    async def on_llm_end(self, ctx: HookContext):
        # Post-LLM logic
        await self.client.track(
            model=ctx.model,
            tokens=ctx.usage.total_tokens,
        )
```
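The `hooks` attribute lets the framework invoke only the lifecycle methods a hook declares. A standalone sketch of that dispatch pattern (the dispatcher here is hypothetical, not FastAgentic's internals):

```python
import asyncio

class BaseHook:
    hooks = []  # lifecycle events this hook subscribes to

class TokenCounterHook(BaseHook):
    """Toy hook that tallies token usage from each LLM call."""
    hooks = ["on_llm_end"]

    def __init__(self):
        self.total_tokens = 0

    async def on_llm_end(self, ctx: dict):
        self.total_tokens += ctx["total_tokens"]

async def dispatch(event: str, ctx: dict, registered: list):
    # Invoke `event` only on hooks that declared it in their `hooks` list
    for hook in registered:
        if event in hook.hooks:
            await getattr(hook, event)(ctx)

counter = TokenCounterHook()
asyncio.run(dispatch("on_llm_end", {"total_tokens": 120}, [counter]))
asyncio.run(dispatch("on_llm_start", {}, [counter]))  # ignored: not declared
```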

Provider Interface

```python
from typing import Any

from fastagentic.memory import MemoryProvider

class MyMemoryProvider(MemoryProvider):
    async def get(self, user_id: str, key: str) -> Any:
        ...

    async def set(self, user_id: str, key: str, value: Any) -> None:
        ...

    async def search(self, user_id: str, query: str) -> list[dict]:
        ...
```
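To illustrate the contract, here is a dict-backed in-memory provider written as a standalone sketch (it does not subclass the real `MemoryProvider` base so it can run on its own, and the naive substring search is an assumption for illustration):

```python
import asyncio
from typing import Any

class InMemoryProvider:
    """Per-user key-value store with naive substring search.

    Sketch of the MemoryProvider contract; not suitable for production.
    """

    def __init__(self):
        self._store = {}  # user_id -> {key: value}

    async def get(self, user_id: str, key: str) -> Any:
        return self._store.get(user_id, {}).get(key)

    async def set(self, user_id: str, key: str, value: Any) -> None:
        self._store.setdefault(user_id, {})[key] = value

    async def search(self, user_id: str, query: str) -> list[dict]:
        # Naive search: match the query as a substring of stored string values
        return [
            {"key": k, "value": v}
            for k, v in self._store.get(user_id, {}).items()
            if isinstance(v, str) and query.lower() in v.lower()
        ]

async def demo():
    memory = InMemoryProvider()
    await memory.set("u1", "preference", "Prefers dark mode")
    return await memory.get("u1", "preference"), await memory.search("u1", "dark")

value, hits = asyncio.run(demo())
```

A real provider would typically replace the dict with a database or vector store and implement `search` as a semantic query.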

Publishing

Community integrations can be published as separate packages:

```text
fastagentic-myintegration/
├── fastagentic_myintegration/
│   ├── __init__.py
│   └── hook.py
├── pyproject.toml
└── README.md
```

Register in pyproject.toml:

```toml
[project.entry-points."fastagentic.integrations"]
myintegration = "fastagentic_myintegration:MyHook"
```

Integration Matrix

| Integration | on_request | on_llm_* | on_tool_* | on_response | on_error | Memory |
|---|---|---|---|---|---|---|
| Langfuse | | | | | | |
| Logfire | | | | | | |
| Datadog | | | | | | |
| Lakera | | | | | | |
| Guardrails AI | | | | | | |
| NeMo | | | | | | |
| Mem0 | | | | | | |
| Zep | | | | | | |
| Portkey | | | | | | |
| Braintrust | | | | | | |

Next Steps

Choose an integration to get started:

  • Langfuse — Most popular for LLM observability
  • Lakera — Essential for production security
  • Mem0 — Best for persistent user memory

Or read the Hooks Architecture to understand how integrations work under the hood.