This guide helps you decide which integrations to add for your FastAgentic deployment. We'll explain why you need each category and which tool to choose.
```python
from fastagentic import App
from fastagentic.integrations.langfuse import LangfuseHook
from fastagentic.integrations.lakera import LakeraHook
from fastagentic.integrations.portkey import PortkeyGateway
from fastagentic.integrations.mem0 import Mem0Provider

app = App(
    hooks=[
        LangfuseHook(),
        LakeraHook(),
    ],
    llm_gateway=PortkeyGateway(...),  # Fallbacks, caching
    memory=Mem0Provider(...),         # User memory
)
```
Without observability, your agent is a black box:

- You can't debug issues
- You don't know what's slow
- You can't track costs
- You can't improve quality
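To make the black-box problem concrete, here is a minimal plain-Python sketch of what an observability hook records per call: latency, a token count, and an estimated cost. The timing and cost logic is illustrative only, not Langfuse's or FastAgentic's actual API, and the token estimate is a deliberately crude word count.

```python
import time

def traced_call(fn, prompt, cost_per_token=0.00001):
    """Wrap one LLM call and return (reply, trace record)."""
    start = time.perf_counter()
    reply = fn(prompt)
    latency = time.perf_counter() - start
    # Crude stand-in for real tokenization: count whitespace-separated words
    tokens = len(prompt.split()) + len(reply.split())
    return reply, {
        "latency_s": round(latency, 4),
        "tokens": tokens,
        "cost_usd": round(tokens * cost_per_token, 6),
    }

def fake_llm(prompt):
    return "echo: " + prompt

reply, trace = traced_call(fake_llm, "hello world")
print(trace["tokens"])  # 5 whitespace tokens across prompt + reply
```

A real hook such as `LangfuseHook()` captures these same fields automatically for every request, which is what makes debugging, latency analysis, and cost tracking possible.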
Users will try to:

- Inject prompts ("Ignore your instructions and...")
- Extract system prompts
- Make your agent do harmful things
- Bypass safety measures
If your agent is user-facing, you need guardrails.
```
What do you need to protect against?
│
├─► "Prompt injection attacks"
│   └─► Lakera — Best-in-class prompt injection detection
│
├─► "Output format validation"
│   └─► Guardrails AI — RAIL specs, structured validation
│
├─► "Conversational safety"
│   └─► NeMo Guardrails — Dialog-level policies
│
└─► "All of the above"
    └─► Lakera + Guardrails AI — Layer them
```
```python
# Lakera (recommended for user-facing)
from fastagentic.integrations.lakera import LakeraHook

LakeraHook(
    categories=["prompt_injection", "jailbreak", "pii"],
    on_detection="reject",  # Block attacks
    on_failure="reject",    # Fail-closed in production
)

# Guardrails AI (for output validation)
from fastagentic.integrations.guardrails import GuardrailsAIHook

GuardrailsAIHook(
    rail_spec="...",
    on_failure="retry",  # Try again with valid output
)
```
Without memory, your agent:

- Can't remember user preferences
- Treats every conversation as new
- Can't provide personalized responses
- Has no long-term context
```
What type of memory do you need?
│
├─► "Remember users across sessions"
│   └─► Mem0 — Long-term memory with semantic search
│
├─► "Summarize long conversations"
│   └─► Zep — Session memory with auto-summarization
│
├─► "Simple key-value storage"
│   └─► Redis — Fast, no semantic search
│
└─► "All user data in my control"
    └─► Mem0 self-hosted or Postgres
```
```python
# Mem0 (recommended for personalization)
from fastagentic.integrations.mem0 import Mem0Provider

Mem0Provider(
    api_key="...",
    # Automatically extracts and stores user context
)

# Zep (recommended for sessions)
from fastagentic.integrations.zep import ZepProvider

ZepProvider(
    api_key="...",
    auto_summarize=True,  # Handle long conversations
)

# Redis (simple option)
from fastagentic.memory import RedisProvider

RedisProvider(
    url="redis://localhost:6379",
    ttl_seconds=3600,  # Expire after 1 hour
)
```
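To see what the Redis option buys (and what it doesn't), here is a toy in-process key-value store with per-entry TTL expiry. It mimics the `ttl_seconds` behavior but, like the Redis option, offers only exact-key lookup, no semantic search. All names here are hypothetical, not FastAgentic's API.

```python
import time

class TTLStore:
    """Toy key-value memory with per-entry expiry, like the Redis option."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.data = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self.data[key] = (value, time.monotonic())

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self.data[key]  # expired, like a Redis TTL
            return None
        return value

store = TTLStore(ttl_seconds=3600)
store.set("user:42:tone", "concise")
print(store.get("user:42:tone"))  # "concise"
print(store.get("user:99:tone"))  # None — no entry, and no semantic lookup
```

If you need "find memories related to this topic" rather than "fetch this exact key", that gap is exactly what Mem0's semantic search covers.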
Direct LLM calls have risks:

- OpenAI goes down → your app breaks
- Rate limits hit → requests fail
- No caching → redundant costs
- Single provider → no negotiating leverage
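The gateway pattern that addresses these risks can be sketched in plain Python: try providers in order, fall back on failure, and cache responses so repeated prompts cost nothing. The provider callables below are stubs; this is an illustration of the pattern, not Portkey's API.

```python
class GatewaySketch:
    """Toy LLM gateway: ordered fallbacks plus a response cache."""

    def __init__(self, providers):
        self.providers = providers  # list of (name, callable) in priority order
        self.cache = {}

    def complete(self, prompt):
        if prompt in self.cache:  # cached → no redundant cost
            return self.cache[prompt]
        errors = []
        for name, call in self.providers:  # fall through on outage/rate limit
            try:
                reply = call(prompt)
                self.cache[prompt] = reply
                return reply
            except Exception as exc:
                errors.append(f"{name}: {exc}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))

def flaky_primary(prompt):
    raise TimeoutError("rate limited")

def backup(prompt):
    return "backup says: " + prompt

gw = GatewaySketch([("primary", flaky_primary), ("backup", backup)])
print(gw.complete("hi"))  # primary fails, falls back → "backup says: hi"
```

A production gateway adds retry budgets, streaming, and per-provider keys on top of this core loop, which is why delegating it to `PortkeyGateway(...)` beats hand-rolling it.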
Without evaluation:

- You don't know if your agent is improving
- No data for prompt optimization
- Can't catch regressions
- No way to compare approaches
```
What do you need to evaluate?
│
├─► "Track experiments, A/B test prompts"
│   └─► Braintrust — Experiment tracking, scoring
│
├─► "Evaluate based on traces"
│   └─► LangSmith — Trace-based evaluation
│
├─► "Inline quality checks"
│   └─► Custom LLMJudge — Real-time scoring
│
└─► "Production monitoring"
    └─► Any of the above with sampling
```
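The "inline quality checks" and "production monitoring" branches combine naturally: score replies in real time, but only on a sample of traffic. The sketch below uses a rule-based scorer as a stand-in for an LLM judge; every name and threshold here is hypothetical.

```python
import random

def judge(prompt, reply):
    """Stand-in for an LLM judge: score a reply from 0.0 to 1.0."""
    if not reply.strip():
        return 0.0  # empty answers always fail
    if len(reply) < 10:
        return 0.5  # suspiciously short
    return 1.0

def maybe_evaluate(prompt, reply, sample_rate=0.1, rng=random.random):
    """Evaluate only a fraction of traffic, as in production monitoring."""
    if rng() < sample_rate:
        return judge(prompt, reply)
    return None  # skipped — no evaluation cost for this request

print(judge("q", "A complete, helpful answer."))  # 1.0
print(judge("q", ""))                             # 0.0
print(maybe_evaluate("q", "answer text here!", sample_rate=1.0))  # 1.0
```

Swapping `judge` for an LLM call (or a Braintrust/LangSmith scorer) keeps the same shape; `sample_rate` is the knob that bounds evaluation cost in production.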