Fix markdown table rendering

Author: DarshanDeshpande
Date: 2024-12-30 20:38:15 -05:00
parent 0e40983c77
commit b76f271adf
3 changed files with 2 additions and 5 deletions


@@ -1,4 +1,3 @@
---
title: Patronus Eval Tool
description: The `PatronusEvalTool` is designed to evaluate agent inputs, outputs and context with contextually selected criteria and log results to app.patronus.ai


@@ -1,4 +1,3 @@
---
title: Patronus Local Evaluator Tool
description: The `PatronusLocalEvaluatorTool` is designed to evaluate agent inputs, outputs and context based on a user-defined function and log evaluation results to [app.patronus.ai](http://app.patronus.ai)


@@ -1,6 +1,5 @@
---
-title: Patronus Eval Tool
+title: Patronus Predefined Criteria Eval Tool
description: The `PatronusPredefinedCriteriaEvalTool` is designed to evaluate agent outputs for a specific criteria on the Patronus platform. The evaluation results for this are logged to [app.patronus.ai](https://app.patronus.ai)
icon: shield
---
@@ -61,5 +60,5 @@ crew.kickoff()
## Conclusion
Using `PatronusPredefinedCriteriaEvalTool`, users can conveniently evaluate the inputs, outputs and context provided to the agent.
-Using patronus.ai, agents can choose from several of the pre-defined or custom defined criteria from the platform and evaluate their outputs, making it easier to debug agentic pipelines.
+Using patronus.ai, agents can choose from several of the pre-defined or custom defined criteria from the Patronus platform and evaluate their outputs, making it easier to debug agentic pipelines.
In the case where the user wants the agent to contextually select the criteria from the list available at [app.patronus.ai](https://app.patronus.ai) or if a local evaluation function is preferred (guide [here](https://docs.patronus.ai/docs/experiment-evaluators)), it is encouraged to use the `PatronusEvalTool` and `PatronusLocalEvaluatorTool` respectively.
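The conclusion above distinguishes the tools by how a criterion is chosen: predefined on the platform, contextually selected, or supplied as a local function. As a rough illustration of the predefined-criteria idea only, the sketch below looks up a named criterion and evaluates an output against it. This is not the Patronus or CrewAI API; `CRITERIA` and `evaluate_output` are hypothetical names, and a real tool would log the result to app.patronus.ai instead of returning it.

```python
# Minimal sketch of criteria-based output evaluation.
# All names here are hypothetical, not the real Patronus API.
CRITERIA = {
    # Each predefined criterion maps a name to a pass/fail check.
    "contains-code": lambda text: "def " in text or "import " in text,
    "non-empty": lambda text: bool(text.strip()),
}

def evaluate_output(output: str, criteria_name: str) -> dict:
    """Evaluate an agent's output against one predefined criterion."""
    check = CRITERIA[criteria_name]
    # A real evaluator tool would also log this result to the platform.
    return {"criteria": criteria_name, "pass": check(output)}

result = evaluate_output("def add(a, b):\n    return a + b", "contains-code")
```

A contextually selecting tool (like `PatronusEvalTool`) would pick `criteria_name` itself, while a local-evaluator tool would accept the check function directly from the user.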