Docs update/add truefoundry (#3245)

* Add TrueFoundry observability integration documentation

- Added comprehensive TrueFoundry integration guide for CrewAI
- Included AI Gateway overview with key features
- Added technical architecture details for Traceloop SDK integration
- Provided step-by-step setup instructions
- Added advanced configuration examples
- Included tracing dashboard screenshot
- Added support contact and documentation links

* Update TrueFoundry integration documentation

Major improvements and fixes:
- Fixed integration pattern to follow LLM provider approach (base_url + api_key)
- Added technical architecture details showing LLM provider and observability flows
- Updated model names to use correct TrueFoundry format (openai-main/gpt-4o, anthropic/claude-3.5-sonnet)
- Added unified-code-tfy.png image for visual code example
- Reorganized document structure with better section placement
- Moved Additional Tracing section to better position
- Added link to TrueFoundry quick start guide
- Added comprehensive observability details and dashboard explanation
- Removed complex tracing setup in favor of simpler LLM provider integration

* Finalize TrueFoundry integration documentation

Key improvements:
- Updated base_url references to use placeholder from code snippet
- Added gateway-metrics.png image for observability dashboard
- Formatted metrics description with proper bullet points and bold headers
- Added link to TrueFoundry tracing overview documentation
- Improved readability and consistency throughout the documentation
- Updated Portuguese translation (pt-BR) version

* added truefoundry.mdx

* updated tfy mdx

* Update docs/en/observability/truefoundry.mdx

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>

* Update truefoundry.mdx

* Update truefoundry.mdx

-minor updates

* Update truefoundry.mdx

* updated truefoundry.mdx PT-BR

---------

Co-authored-by: Nikhil Popli <97437109+nikp1172@users.noreply.github.com>
Author: rishiraj
Date: 2025-08-13 19:38:15 +05:30
Committed by: GitHub
Parent: a0eadf783b
Commit: 57c787f919
6 changed files with 295 additions and 2 deletions

docs.json

@@ -238,7 +238,8 @@
       "en/observability/opik",
       "en/observability/patronus-evaluation",
       "en/observability/portkey",
-      "en/observability/weave"
+      "en/observability/weave",
+      "en/observability/truefoundry"
     ]
   },
   {
@@ -575,7 +576,8 @@
       "pt-BR/observability/opik",
       "pt-BR/observability/patronus-evaluation",
       "pt-BR/observability/portkey",
-      "pt-BR/observability/weave"
+      "pt-BR/observability/weave",
+      "pt-BR/observability/truefoundry"
     ]
   },
   {

docs/en/observability/truefoundry.mdx

@@ -0,0 +1,146 @@
---
title: TrueFoundry Integration
icon: chart-line
---
TrueFoundry provides an enterprise-ready [AI Gateway](https://www.truefoundry.com/ai-gateway) that integrates with agentic frameworks like CrewAI and adds governance and observability to your AI applications. The TrueFoundry AI Gateway serves as a unified interface for LLM access, providing:

- **Unified API Access**: Connect to 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) through one API
- **Low Latency**: Sub-3ms internal latency with intelligent routing and load balancing
- **Enterprise Security**: SOC 2, HIPAA, GDPR compliance with RBAC and audit logging
- **Quota and cost management**: Token-based quotas, rate limiting, and comprehensive usage tracking
- **Observability**: Full request/response logging, metrics, and traces with customizable retention
## How TrueFoundry Integrates with CrewAI
### Installation & Setup
<Steps>
<Step title="Install CrewAI">
```bash
pip install crewai
```
</Step>
<Step title="Get TrueFoundry Access Token">
1. Sign up for a [TrueFoundry account](https://www.truefoundry.com/register)
2. Follow the steps in the [Quick start](https://docs.truefoundry.com/gateway/quick-start) guide to get your gateway base URL and API key
</Step>
<Step title="Configure CrewAI with TrueFoundry">
![TrueFoundry Code Configuration](/images/new-code-snippet.png)
```python
from crewai import Agent, LLM
from crewai.project import agent  # decorator used inside a @CrewBase crew class

# Create an LLM instance that routes through the TrueFoundry AI Gateway
truefoundry_llm = LLM(
    model="openai-main/gpt-4o",  # similarly, you can call any model from any provider enabled on your gateway
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Use it in your CrewAI agents (as a method of a @CrewBase crew class)
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        llm=truefoundry_llm,
        verbose=True
    )
```
</Step>
</Steps>
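Rather than hard-coding credentials, you can read the gateway URL and API key from environment variables. A minimal sketch, using hypothetical variable names (`TFY_BASE_URL`, `TFY_API_KEY`) that you would set to the values from the quick-start step:
```python
import os

from crewai import LLM

# Hypothetical environment variable names; set them to your TrueFoundry gateway values
truefoundry_llm = LLM(
    model="openai-main/gpt-4o",
    base_url=os.environ["TFY_BASE_URL"],
    api_key=os.environ["TFY_API_KEY"],
)
```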
### Complete CrewAI Example
```python
from crewai import Agent, Task, Crew, LLM

# Configure LLM with TrueFoundry
llm = LLM(
    model="openai-main/gpt-4o",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Create agents
researcher = Agent(
    role='Research Analyst',
    goal='Conduct detailed market research',
    backstory='Expert market analyst with attention to detail',
    llm=llm,
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Create comprehensive reports',
    backstory='Experienced technical writer',
    llm=llm,
    verbose=True
)

# Create tasks
research_task = Task(
    description='Research AI market trends for 2024',
    agent=researcher,
    expected_output='Comprehensive research summary'
)

writing_task = Task(
    description='Create a market research report',
    agent=writer,
    expected_output='Well-structured report with insights',
    context=[research_task]
)

# Create and execute crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
```
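Because the gateway exposes every provider behind the same API, pointing an agent at a different provider is only a change to the `model` string. A minimal sketch, assuming a Claude deployment named `anthropic/claude-3.5-sonnet` is enabled on your gateway (the exact model ID depends on how the provider is configured in your TrueFoundry account):
```python
from crewai import LLM

# Assumed model ID; use the provider/model name configured on your gateway
claude_llm = LLM(
    model="anthropic/claude-3.5-sonnet",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)
```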
### Observability and Governance
Monitor your CrewAI agents through TrueFoundry's metrics tab:
![TrueFoundry metrics](/images/gateway-metrics.png)
With TrueFoundry's AI Gateway, you can monitor and analyze:
- **Performance Metrics**: Track key latency metrics like Request Latency, Time to First Token (TTFT), and Inter-Token Latency (ITL) with P99, P90, and P50 percentiles
- **Cost and Token Usage**: Gain visibility into your application's costs with detailed breakdowns of input/output tokens and the associated expenses for each model
- **Usage Patterns**: Understand how your application is being used with detailed analytics on user activity, model distribution, and team-based usage
- **Rate Limiting and Load Balancing**: Set up rate limiting, load balancing, and fallbacks for your models
## Tracing
For a more detailed overview of tracing, see [getting started with tracing](https://docs.truefoundry.com/docs/tracing/tracing-getting-started). To capture traces, add the Traceloop SDK:
```bash
pip install traceloop-sdk
```
```python
from traceloop.sdk import Traceloop

# Personal access token generated from your TrueFoundry account (placeholder value)
your_truefoundry_pat_token = "your_truefoundry_pat_token"

# Initialize enhanced tracing against the TrueFoundry tracing endpoint
Traceloop.init(
    api_endpoint="https://your-truefoundry-endpoint/api/tracing",
    headers={
        "Authorization": f"Bearer {your_truefoundry_pat_token}",
        "TFY-Tracing-Project": "your_project_name",
    },
)
```
This provides additional trace correlation across your entire CrewAI workflow.
![TrueFoundry CrewAI Tracing](/images/tracing_crewai.png)
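If you want an entire crew run grouped under one named trace, the Traceloop decorators can wrap the kickoff call. A minimal sketch, assuming the `traceloop.sdk.decorators` API and the `crew` object from the example above:
```python
from traceloop.sdk.decorators import workflow

@workflow(name="market_research_crew")
def run_crew():
    # kickoff() is recorded as a single workflow, with agent and LLM spans nested underneath
    return crew.kickoff()

result = run_crew()
```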

3 binary image files added (not shown): 530 KiB, 554 KiB, and 128 KiB.

docs/pt-BR/observability/truefoundry.mdx

@@ -0,0 +1,145 @@
---
title: Integração com a TrueFoundry
icon: chart-line
---
A TrueFoundry fornece um [AI Gateway](https://www.truefoundry.com/ai-gateway) pronto para uso empresarial, que pode ser usado para governança e observabilidade em frameworks agentivos como o CrewAI. O AI Gateway da TrueFoundry funciona como uma interface unificada para acesso a LLMs, oferecendo:
- **Acesso unificado à API**: Conecte-se a 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) por meio de uma única API
- **Baixa latência**: Latência interna abaixo de 3 ms com roteamento inteligente e balanceamento de carga
- **Segurança corporativa**: Conformidade com SOC 2, HIPAA e GDPR, com RBAC e auditoria de logs
- **Gestão de cotas e custos**: Cotas baseadas em tokens, rate limiting e rastreamento abrangente de uso
- **Observabilidade**: Registro completo de requisições/respostas, métricas e traces com retenção personalizável
## Como a TrueFoundry se integra ao CrewAI
### Instalação e configuração
<Steps>
<Step title="Instalar o CrewAI">
```bash
pip install crewai
```
</Step>
<Step title="Obter o token de acesso da TrueFoundry">
1. Crie uma conta na [TrueFoundry](https://www.truefoundry.com/register)
2. Siga os passos do [Início rápido](https://docs.truefoundry.com/gateway/quick-start)
</Step>
<Step title="Configurar o CrewAI com a TrueFoundry">
![Configuração de código da TrueFoundry](/images/new-code-snippet.png)
```python
from crewai import Agent, LLM
from crewai.project import agent  # decorador usado dentro de uma classe de crew com @CrewBase

# Criar uma instância de LLM que roteia pelo AI Gateway da TrueFoundry
truefoundry_llm = LLM(
    model="openai-main/gpt-4o",  # da mesma forma, você pode chamar qualquer modelo de qualquer provedor habilitado no seu gateway
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Usar nos seus agentes do CrewAI (como método de uma classe de crew com @CrewBase)
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        llm=truefoundry_llm,
        verbose=True
    )
```
</Step>
</Steps>
### Exemplo completo do CrewAI
```python
from crewai import Agent, Task, Crew, LLM

# Configurar o LLM com a TrueFoundry
llm = LLM(
    model="openai-main/gpt-4o",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Criar agentes
researcher = Agent(
    role='Analista de Pesquisa',
    goal='Conduzir pesquisa de mercado detalhada',
    backstory='Analista de mercado especialista com atenção aos detalhes',
    llm=llm,
    verbose=True
)

writer = Agent(
    role='Redator de Conteúdo',
    goal='Criar relatórios abrangentes',
    backstory='Redator técnico experiente',
    llm=llm,
    verbose=True
)

# Criar tarefas
research_task = Task(
    description='Pesquisar tendências do mercado de IA para 2024',
    agent=researcher,
    expected_output='Resumo de pesquisa abrangente'
)

writing_task = Task(
    description='Criar um relatório de pesquisa de mercado',
    agent=writer,
    expected_output='Relatório bem estruturado com insights',
    context=[research_task]
)

# Criar e executar a crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
```
### Observabilidade e governança
Monitore seus agentes do CrewAI pela aba de métricas da TrueFoundry:
![Métricas da TrueFoundry](/images/gateway-metrics.png)
Com o AI Gateway da TrueFoundry, você pode monitorar e analisar:
- **Métricas de desempenho**: Acompanhe métricas-chave de latência como Latência da Requisição, Tempo até o Primeiro Token (TTFT) e Latência entre Tokens (ITL), com percentis P99, P90 e P50
- **Custos e uso de tokens**: Tenha visibilidade dos custos da sua aplicação com detalhamento de tokens de entrada/saída e das despesas associadas a cada modelo
- **Padrões de uso**: Entenda como sua aplicação está sendo utilizada com análises detalhadas sobre atividade de usuários, distribuição de modelos e uso por equipe
- **Limite de taxa e balanceamento de carga**: Você pode configurar rate limiting, balanceamento de carga e fallback para seus modelos
## Rastreamento
Para uma compreensão mais detalhada sobre rastreamento, consulte [getting-started-tracing](https://docs.truefoundry.com/docs/tracing/tracing-getting-started). Para rastreamento, você pode adicionar o SDK do Traceloop:
```bash
pip install traceloop-sdk
```
```python
from traceloop.sdk import Traceloop

# Token de acesso pessoal gerado na sua conta TrueFoundry (valor de exemplo)
your_truefoundry_pat_token = "your_truefoundry_pat_token"

# Inicializar rastreamento avançado apontando para o endpoint de tracing da TrueFoundry
Traceloop.init(
    api_endpoint="https://your-truefoundry-endpoint/api/tracing",
    headers={
        "Authorization": f"Bearer {your_truefoundry_pat_token}",
        "TFY-Tracing-Project": "your_project_name",
    },
)
```
Isso oferece correlação adicional de rastreamentos em todo o seu fluxo de trabalho com o CrewAI.
![Rastreamento do CrewAI na TrueFoundry](/images/tracing_crewai.png)