
Guides

Decision-focused guides to help you choose the right tools and patterns for your use case. These guides explain why you should use specific features and when they apply.


Quick Decision Trees

What adapter should I use?

Start Here
    ├─ "I use PydanticAI"
    │       → PydanticAIAdapter (native streaming, Logfire support)
    ├─ "I have complex state machines"
    │       → LangGraphAdapter (node checkpoints, conditional routing)
    ├─ "I need multiple specialized agents"
    │       → CrewAIAdapter (role-based, task delegation)
    ├─ "I use LangChain chains/agents"
    │       → LangChainAdapter (LCEL support, tool binding)
    └─ "I have custom agent logic"
            → BaseAdapter (full control, minimal overhead)
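The tree above can be encoded as a small lookup. The adapter class names come from the tree itself; the requirement keys are illustrative labels, not FastAgentic API:

```python
def choose_adapter(requirement: str) -> str:
    """Map a project requirement to the suggested adapter (sketch only)."""
    choices = {
        "pydantic-ai": "PydanticAIAdapter",    # native streaming, Logfire support
        "state-machines": "LangGraphAdapter",  # node checkpoints, conditional routing
        "multi-agent": "CrewAIAdapter",        # role-based, task delegation
        "langchain": "LangChainAdapter",       # LCEL support, tool binding
    }
    # Anything custom falls through to BaseAdapter.
    return choices.get(requirement, "BaseAdapter")
```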

Full Adapter Guide →


What integrations do I need?

Production Deployment
    ├─ "I need to see what my agents are doing"
    │       → Observability: Langfuse, Logfire, or Datadog
    ├─ "I'm worried about prompt injection"
    │       → Guardrails: Lakera (recommended) or Guardrails AI
    ├─ "My agents need to remember users"
    │       → Memory: Mem0 (long-term) or Zep (session)
    ├─ "I need fallbacks when OpenAI is down"
    │       → LLM Gateway: Portkey or LiteLLM
    └─ "I need to measure agent quality"
            → Evaluation: Braintrust or LangSmith

Full Integration Guide →


Am I ready for production?

Production Readiness
    ├─ [ ] Can I see what's happening? (Observability)
    ├─ [ ] Is my agent secure? (Guardrails)
    ├─ [ ] What happens when the LLM fails? (Reliability)
    ├─ [ ] How do I control costs? (Rate limiting, budgets)
    ├─ [ ] Can I recover from crashes? (Durability)
    └─ [ ] How do I deploy updates safely? (DevOps)

Full Production Checklist →


Guide Index

Getting Started

| Guide | Description | When to Read |
| --- | --- | --- |
| Getting Started | First steps with FastAgentic | New to FastAgentic |
| Why FastAgentic | Problems we solve | Evaluating tools |
| Comparison | How we compare to alternatives | Making decisions |

Architecture & Concepts

| Guide | Description | When to Read |
| --- | --- | --- |
| Architecture | System design and layers | Understanding the framework |
| Decorators | @tool, @resource, @prompt, @agent_endpoint | Writing agent code |
| Hooks | Lifecycle hooks for customization | Adding integrations |

Choosing the Right Tools

| Guide | Description | When to Read |
| --- | --- | --- |
| Choosing an Adapter | Which agent framework adapter | Starting a new project |
| Choosing Integrations | Which tools to integrate | Planning production |
| Production Checklist | What you need before go-live | Before deployment |

Deep Dives

| Guide | Description | When to Read |
| --- | --- | --- |
| Reliability | Retries, circuit breakers, timeouts | Building resilient agents |
| Memory | Session and long-term memory | Adding personalization |
| Protocols | MCP and A2A protocols | Agent interoperability |

The FastAgentic Philosophy

Build vs Integrate

FastAgentic follows a clear philosophy: we own the deployment layer, not the entire stack.

| We Build | We Integrate |
| --- | --- |
| Protocol hosting (REST + MCP + A2A) | Observability (Langfuse, Datadog) |
| Schema fusion (Pydantic → OpenAPI/MCP/A2A) | Guardrails (Lakera, Guardrails AI) |
| Framework adapters | Memory (Mem0, Zep) |
| Durability (checkpoints, resume) | Evaluation (Braintrust, LangSmith) |
| Auth (OIDC bridge) | LLM Gateway (Portkey, LiteLLM) |

Why this matters:

  1. Best-of-breed tools: Langfuse does observability better than we could. Lakera does prompt injection detection better. We integrate with them.

  2. Your choice: Don't like Langfuse? Use Datadog. Don't need memory? Skip it. You're not locked in.

  3. Focused excellence: We're experts at deployment, not at building 50 different features poorly.


Common Questions

"Do I need all the integrations?"

No. Start minimal:

# Minimum viable production
from fastagentic import App
from fastagentic.integrations.langfuse import LangfuseHook

app = App(
    title="My Agent",
    hooks=[LangfuseHook()],  # Just observability
)

Add integrations as you need them. Most teams start with:

  1. Observability (always — you need to see what's happening)
  2. Guardrails (if user-facing)
  3. Reliability (if production traffic)
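That progression can be sketched as a hook list that grows with your needs. Only LangfuseHook appears in the docs above; GuardrailsHook and RetryHook are hypothetical stand-ins (stubbed here) for whatever guardrail and reliability hooks you wire in:

```python
# Stand-in stubs; in a real app these come from fastagentic.integrations.
class LangfuseHook: ...
class GuardrailsHook: ...
class RetryHook: ...

def build_hooks(user_facing: bool, production_traffic: bool) -> list:
    hooks = [LangfuseHook()]            # 1. Observability: always on
    if user_facing:
        hooks.append(GuardrailsHook())  # 2. Guardrails for user-facing agents
    if production_traffic:
        hooks.append(RetryHook())       # 3. Reliability under real traffic
    return hooks
```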

"Which adapter is best?"

There's no "best" — each fits different use cases:

  • PydanticAI: Best for Pydantic-native apps, type safety
  • LangGraph: Best for complex workflows, state machines
  • CrewAI: Best for multi-agent collaboration
  • LangChain: Best if you already use LangChain

See Choosing an Adapter for details.

"Should I use built-in reliability or Portkey?"

| Use Case | Recommendation |
| --- | --- |
| Simple retry/timeout | Built-in RetryPolicy, Timeout |
| Multi-provider fallback | Portkey or LiteLLM |
| Semantic caching | Portkey |
| Cross-provider load balancing | Portkey |
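The "simple retry/timeout" row can be illustrated with a standard-library sketch. This RetryPolicy is a conceptual stand-in, not FastAgentic's actual class or signature:

```python
import random
import time

class RetryPolicy:
    """Retry a callable with exponential backoff and jitter (sketch)."""

    def __init__(self, max_attempts: int = 3, base_delay: float = 0.5):
        self.max_attempts = max_attempts
        self.base_delay = base_delay

    def run(self, fn, *args, **kwargs):
        last_exc = None
        for attempt in range(self.max_attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:  # in practice, catch provider-specific errors
                last_exc = exc
                # Exponential backoff with full jitter before the next attempt.
                time.sleep(self.base_delay * (2 ** attempt) * random.random())
        raise last_exc
```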

See Reliability for the full breakdown.

"Mem0 vs Zep vs Redis?"

| Use Case | Recommendation |
| --- | --- |
| Long-term user personalization | Mem0 |
| Session memory with auto-summarization | Zep |
| Simple key-value, no semantic search | Redis |
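The "simple key-value, no semantic search" case is easy to picture: per-session message lists, much like what you'd store in Redis, modeled here with an in-process dict (class and method names are illustrative):

```python
from collections import defaultdict

class SessionStore:
    """Per-session message history, no semantic search (sketch)."""

    def __init__(self):
        self._sessions = defaultdict(list)

    def append(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str, limit: int = 20) -> list:
        # Return only the most recent messages, akin to a Redis LRANGE
        # with a negative start index.
        return self._sessions[session_id][-limit:]
```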

See Memory for details.


Learning Paths

Path 1: New to AI Agents

  1. Getting Started
  2. Decorators
  3. Choosing an Adapter
  4. Your first template

Path 2: Going to Production

  1. Production Checklist
  2. Choosing Integrations
  3. Reliability
  4. Deployment

Path 3: Advanced Patterns

  1. Architecture
  2. Hooks
  3. Protocols
  4. Custom Adapters

Next Steps