chore(docs): bring AMP doc refresh from release/v1.0.0 into main (#3637)

* WIP: v1 docs (#3626)

(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)

* docs: parity for all translations

* docs: full name of acronym AMP

* docs: fix lingering unused code

* docs: expand contextual options in docs.json

* docs: add contextual action to request feature on GitHub

* chore: tidy docs formatting
Tony Kipkemboi authored 2025-10-02 11:36:04 -04:00, committed by GitHub
parent f47e0c82c4, commit bf9e0423f2
242 changed files with 8999 additions and 3637 deletions


@@ -85,10 +85,9 @@ GitHub에 따르면, Neatlogs는:
### 🔍 전체 데모 (4분)
<iframe
width="100%"
height="315"
className="w-full aspect-video rounded-xl"
src="https://www.youtube.com/embed/8KDme9T2I7Q?si=b8oHteaBwFNs_Duk"
title="YouTube video player"
title="NeatLogs 개요"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
@@ -126,5 +125,5 @@ You can now capture, understand, share, and act on your CrewAI agent runs in sec
No setup overhead. Full trace transparency. Full team collaboration.
```
이제 몇 초 만에 CrewAI agent 실행을 캡처, 이해, 공유하고 바로 조치할 수 있습니다.
별도의 설정이 필요하지 않습니다. 완전한 트레이스 투명성. 전체 팀 협업 지원.


@@ -0,0 +1,147 @@
---
title: TrueFoundry Integration
icon: chart-line
mode: "wide"
---
TrueFoundry provides an enterprise-ready [AI Gateway](https://www.truefoundry.com/ai-gateway) that integrates with agentic frameworks like CrewAI and adds governance and observability to your AI applications. The TrueFoundry AI Gateway serves as a unified interface for LLM access, providing:
- **Unified API Access**: Connect to 250+ LLMs (OpenAI, Claude, Gemini, Groq, Mistral) through one API, as sketched after this list
- **Low Latency**: Sub-3ms internal latency with intelligent routing and load balancing
- **Enterprise Security**: SOC 2, HIPAA, GDPR compliance with RBAC and audit logging
- **Quota and cost management**: Token-based quotas, rate limiting, and comprehensive usage tracking
- **Observability**: Full request/response logging, metrics, and traces with customizable retention
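
Because every provider sits behind the same gateway endpoint, switching models from CrewAI usually amounts to changing the model string passed to `LLM`. A minimal sketch, assuming the base URL and API key from your TrueFoundry account; the `anthropic-main/claude-sonnet-4` identifier is illustrative and depends on the providers configured in your gateway:

```python
from crewai import LLM

# Both models are addressed through the same gateway endpoint and credentials;
# only the model identifier changes (identifiers below are illustrative).
gpt_llm = LLM(
    model="openai-main/gpt-4o",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key",
)
claude_llm = LLM(
    model="anthropic-main/claude-sonnet-4",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key",
)
```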
## How TrueFoundry Integrates with CrewAI
### Installation & Setup
<Steps>
<Step title="Install CrewAI">
```bash
pip install crewai
```
</Step>
<Step title="Get TrueFoundry Access Token">
1. Sign up for a [TrueFoundry account](https://www.truefoundry.com/register)
2. Follow the steps in the [Quick start](https://docs.truefoundry.com/gateway/quick-start) guide
</Step>
<Step title="Configure CrewAI with TrueFoundry">
![TrueFoundry Code Configuration](/images/new-code-snippet.png)
```python
from crewai import LLM

# Create an LLM instance that routes through the TrueFoundry AI Gateway
truefoundry_llm = LLM(
    model="openai-main/gpt-4o",  # similarly, you can call any model from any provider
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Use it in your CrewAI agents (the @agent decorator is used inside a @CrewBase class)
from crewai import Agent
from crewai.project import agent

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        llm=truefoundry_llm,
        verbose=True
    )
```
</Step>
</Steps>
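
The decorator-based snippet in the last step assumes CrewAI's project scaffolding. A minimal sketch of how it might sit inside a `@CrewBase` class, assuming a `config/agents.yaml` file that defines a `researcher` entry:

```python
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew:
    """Agents in this crew route through the TrueFoundry AI Gateway."""

    # Standard CrewAI project config path; assumed to define a 'researcher' entry
    agents_config = "config/agents.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=truefoundry_llm,  # the gateway-backed LLM defined in the step above
            verbose=True
        )
```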
### Complete CrewAI Example
```python
from crewai import Agent, Task, Crew, LLM

# Configure the LLM to route through TrueFoundry
llm = LLM(
    model="openai-main/gpt-4o",
    base_url="your_truefoundry_gateway_base_url",
    api_key="your_truefoundry_api_key"
)

# Create agents
researcher = Agent(
    role='Research Analyst',
    goal='Conduct detailed market research',
    backstory='Expert market analyst with attention to detail',
    llm=llm,
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Create comprehensive reports',
    backstory='Experienced technical writer',
    llm=llm,
    verbose=True
)

# Create tasks
research_task = Task(
    description='Research AI market trends for 2024',
    agent=researcher,
    expected_output='Comprehensive research summary'
)

writing_task = Task(
    description='Create a market research report',
    agent=writer,
    expected_output='Well-structured report with insights',
    context=[research_task]
)

# Create and execute the crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff()
```
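
`kickoff()` returns a `CrewOutput` object. A quick way to inspect the run, assuming a recent CrewAI release:

```python
# Print the crew's final output, then each task's individual output
print(result.raw)
for task_output in result.tasks_output:
    print(task_output.description, "->", task_output.raw)
```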
### Observability and Governance
Monitor your CrewAI agents through TrueFoundry's metrics tab:
![TrueFoundry metrics](/images/gateway-metrics.png)
With TrueFoundry's AI Gateway, you can monitor and analyze:
- **Performance Metrics**: Track key latency metrics such as request latency, Time to First Token (TTFT), and Inter-Token Latency (ITL) at the P99, P90, and P50 percentiles
- **Cost and Token Usage**: Gain visibility into your application's costs with detailed breakdowns of input/output tokens and the associated expenses for each model
- **Usage Patterns**: Understand how your application is being used with detailed analytics on user activity, model distribution, and team-based usage
- **Rate Limiting and Load Balancing**: Set up rate limits, load balancing, and fallbacks for your models
## Tracing
For a more detailed understanding of tracing, see [getting-started-tracing](https://docs.truefoundry.com/docs/tracing/tracing-getting-started). To enable tracing, add the Traceloop SDK:
```bash
pip install traceloop-sdk
```
```python
from traceloop.sdk import Traceloop

# Initialize enhanced tracing against your TrueFoundry tracing endpoint
Traceloop.init(
    api_endpoint="https://your-truefoundry-endpoint/api/tracing",
    headers={
        "Authorization": "Bearer your_truefoundry_pat_token",
        "TFY-Tracing-Project": "your_project_name",
    },
)
```
This provides additional trace correlation across your entire CrewAI workflow.
![TrueFoundry CrewAI Tracing](/images/tracing_crewai.png)
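
If you want related spans grouped under one named trace, one option is Traceloop's generic `workflow` decorator around the crew run. This is a sketch using the standard Traceloop API, not a TrueFoundry-specific feature:

```python
from traceloop.sdk.decorators import workflow

# Wrap the crew run so every span emitted during kickoff() correlates under one trace
@workflow(name="market_research_crew")
def run_crew():
    return crew.kickoff()  # 'crew' is the Crew assembled in the complete example above

result = run_crew()
```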