Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-02-12 17:08:14 +00:00)

Compare commits: devin/1770...codex/nl2s (17 commits)
| SHA1 |
|---|
| fb0a8c4549 |
| 6c0fb7f970 |
| db0c8ff468 |
| cde33fd981 |
| 1f8cfe3282 |
| 2ed0c2c043 |
| 0341e5aee7 |
| 397d14c772 |
| fc3e86e9a3 |
| 2882df5daf |
| 3a22e80764 |
| 9b585a934d |
| 46e1b02154 |
| 87675b49fd |
| a3bee66be8 |
| f6fa04528a |
| 7d498b29be |
.cursorrules (1429 changes): file diff suppressed because it is too large
.github/codeql/codeql-config.yml (5 changes, vendored)
@@ -14,13 +14,18 @@ paths-ignore:
  - "lib/crewai/src/crewai/experimental/a2a/**"

paths:
  # Include GitHub Actions workflows/composite actions for CodeQL actions analysis
  - ".github/workflows/**"
  - ".github/actions/**"
  # Include all Python source code from workspace packages
  - "lib/crewai/src/**"
  - "lib/crewai-tools/src/**"
  - "lib/crewai-files/src/**"
  - "lib/devtools/src/**"
  # Include tests (but exclude cassettes via paths-ignore)
  - "lib/crewai/tests/**"
  - "lib/crewai-tools/tests/**"
  - "lib/crewai-files/tests/**"
  - "lib/devtools/tests/**"

# Configure specific queries or packs if needed
.github/workflows/codeql.yml (4 changes, vendored)
@@ -69,7 +69,7 @@ jobs:
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
-       uses: github/codeql-action/init@v3
+       uses: github/codeql-action/init@v4
        with:
          languages: ${{ matrix.language }}
          build-mode: ${{ matrix.build-mode }}

@@ -98,6 +98,6 @@ jobs:
          exit 1

      - name: Perform CodeQL Analysis
-       uses: github/codeql-action/analyze@v3
+       uses: github/codeql-action/analyze@v4
        with:
          category: "/language:${{matrix.language}}"
@@ -111,6 +111,13 @@
            "en/guides/flows/mastering-flow-state"
          ]
        },
+       {
+         "group": "Coding Tools",
+         "icon": "terminal",
+         "pages": [
+           "en/guides/coding-tools/agents-md"
+         ]
+       },
        {
          "group": "Advanced",
          "icon": "gear",
@@ -1571,4 +1578,4 @@
        "reddit": "https://www.reddit.com/r/crewAIInc/"
      }
    }
  }
}
@@ -46,7 +46,7 @@ crew = Crew(
## Task Attributes

| Attribute | Parameters | Type | Description |
-| :------- | :------- | :------- | :------- | ------- |
+| :------- | :------- | :------- | :------- |
| **Description** | `description` | `str` | A clear, concise statement of what the task entails. |
| **Expected Output** | `expected_output` | `str` | A detailed description of what the task's completion looks like. |
| **Name** _(optional)_ | `name` | `Optional[str]` | A name identifier for the task. |
@@ -63,7 +63,7 @@ crew = Crew(
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output. |
| **Callback** _(optional)_ | `callback` | `Optional[Any]` | Function/object to be executed after task completion. |
| **Guardrail** _(optional)_ | `guardrail` | `Optional[Callable]` | Function to validate task output before proceeding to next task. |
-| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable] \| List[str]]` | List of guardrails to validate task output before proceeding to next task. |
+| **Guardrails** _(optional)_ | `guardrails` | `Optional[List[Callable]]` | List of guardrails to validate task output before proceeding to next task. |
| **Guardrail Max Retries** _(optional)_ | `guardrail_max_retries` | `Optional[int]` | Maximum number of retries when guardrail validation fails. Defaults to 3. |
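Put together, the guardrail attributes above combine like this minimal sketch (the agent, task, and word-count check are illustrative, not taken from the docs):

```python
from crewai import Agent, Task

def has_enough_words(output):
    """Illustrative guardrail: return (success, data). On failure the task
    is retried, up to guardrail_max_retries times (default 3)."""
    if len(output.raw.split()) >= 50:
        return (True, output.raw)
    return (False, "Output too short; expand it to at least 50 words.")

writer = Agent(role="Writer", goal="Draft summaries", backstory="A careful editor.")

summary_task = Task(
    description="Summarize the quarterly report.",
    expected_output="A concise summary of at least 50 words.",
    agent=writer,
    guardrail=has_enough_words,
    guardrail_max_retries=3,
)
```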
<Note type="warning" title="Deprecated: max_retries">
@@ -224,6 +224,60 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount

</Accordion>

<Accordion title="google_contacts/get_contact_group">
**Description:** Get a specific contact group by resource name.

**Parameters:**
- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup')
- `maxMembers` (integer, optional): Maximum number of members to include. Minimum: 0, Maximum: 20000
- `groupFields` (string, optional): Fields to include (e.g., 'name,memberCount,clientData'). Default: name,memberCount

</Accordion>

<Accordion title="google_contacts/create_contact_group">
**Description:** Create a new contact group (label).

**Parameters:**
- `name` (string, required): The name of the contact group
- `clientData` (array, optional): Client-specific data
  ```json
  [
    {
      "key": "data_key",
      "value": "data_value"
    }
  ]
  ```

</Accordion>

<Accordion title="google_contacts/update_contact_group">
**Description:** Update a contact group's information.

**Parameters:**
- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup')
- `name` (string, required): The name of the contact group
- `clientData` (array, optional): Client-specific data
  ```json
  [
    {
      "key": "data_key",
      "value": "data_value"
    }
  ]
  ```

</Accordion>

<Accordion title="google_contacts/delete_contact_group">
**Description:** Delete a contact group.

**Parameters:**
- `resourceName` (string, required): The resource name of the contact group to delete (e.g., 'contactGroups/myContactGroup')
- `deleteContacts` (boolean, optional): Whether to delete contacts in the group as well. Default: false

</Accordion>
</AccordionGroup>

## Usage Examples
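As a rough sketch of the usual wiring for these actions (assuming `CrewaiEnterpriseTools` from `crewai_tools`; the action name, agent, and task are illustrative):

```python
from crewai import Agent, Crew, Task
from crewai_tools import CrewaiEnterpriseTools

# Illustrative action name; check your platform's action catalog.
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["google_contacts_create_contact_group"],
)

organizer = Agent(
    role="Contacts Organizer",
    goal="Keep contact groups tidy",
    backstory="Manages the team's shared address book.",
    tools=enterprise_tools,
)

task = Task(
    description="Create a contact group named 'Q3 Leads'.",
    expected_output="Confirmation that the group was created.",
    agent=organizer,
)

Crew(agents=[organizer], tasks=[task]).kickoff()
```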
@@ -132,6 +132,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `endIndex` (integer, required): The end index of the range.

</Accordion>

<Accordion title="google_docs/create_document_with_content">
**Description:** Create a new Google Document with content in one action.

**Parameters:**
- `title` (string, required): The title for the new document. Appears at the top of the document and in Google Drive.
- `content` (string, optional): The text content to insert into the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/append_text">
**Description:** Append text to the end of a Google Document. Automatically inserts at the document end without needing to specify an index.

**Parameters:**
- `documentId` (string, required): The document ID from create_document response or URL.
- `text` (string, required): Text to append at the end of the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/set_text_bold">
**Description:** Make text bold or remove bold formatting in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `bold` (boolean, required): Set `true` to make bold, `false` to remove bold.

</Accordion>

<Accordion title="google_docs/set_text_italic">
**Description:** Make text italic or remove italic formatting in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `italic` (boolean, required): Set `true` to make italic, `false` to remove italic.

</Accordion>

<Accordion title="google_docs/set_text_underline">
**Description:** Add or remove underline formatting from text in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `underline` (boolean, required): Set `true` to underline, `false` to remove underline.

</Accordion>

<Accordion title="google_docs/set_text_strikethrough">
**Description:** Add or remove strikethrough formatting from text in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `strikethrough` (boolean, required): Set `true` to add strikethrough, `false` to remove.

</Accordion>

<Accordion title="google_docs/set_font_size">
**Description:** Change the font size of text in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `fontSize` (number, required): Font size in points. Common sizes: 10, 11, 12, 14, 16, 18, 24, 36.

</Accordion>

<Accordion title="google_docs/set_text_color">
**Description:** Change the color of text using RGB values (0-1 scale) in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to format.
- `endIndex` (integer, required): End position of text to format (exclusive).
- `red` (number, required): Red component (0-1). Example: `1` for full red.
- `green` (number, required): Green component (0-1). Example: `0.5` for half green.
- `blue` (number, required): Blue component (0-1). Example: `0` for no blue.

</Accordion>
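Since `set_text_color` expects the 0-1 scale, a small conversion helper (illustrative, not part of the API) makes it easy to start from a familiar hex color:

```python
def hex_to_rgb01(hex_color: str) -> dict:
    """Convert a hex color like '#1A73E8' to the 0-1 RGB scale
    expected by set_text_color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return {"red": r / 255, "green": g / 255, "blue": b / 255}

# hex_to_rgb01("#FF8000") -> {"red": 1.0, "green": 0.50196..., "blue": 0.0}
```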

<Accordion title="google_docs/create_hyperlink">
**Description:** Turn existing text into a clickable hyperlink in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of text to make into a link.
- `endIndex` (integer, required): End position of text to make into a link (exclusive).
- `url` (string, required): The URL the link should point to. Example: `"https://example.com"`.

</Accordion>

<Accordion title="google_docs/apply_heading_style">
**Description:** Apply a heading or paragraph style to a text range in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s) to style.
- `endIndex` (integer, required): End position of paragraph(s) to style.
- `style` (string, required): The style to apply. Enum: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.

</Accordion>

<Accordion title="google_docs/set_paragraph_alignment">
**Description:** Set text alignment for paragraphs in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s) to align.
- `endIndex` (integer, required): End position of paragraph(s) to align.
- `alignment` (string, required): Text alignment. Enum: `START` (left), `CENTER`, `END` (right), `JUSTIFIED`.

</Accordion>

<Accordion title="google_docs/set_line_spacing">
**Description:** Set line spacing for paragraphs in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraph(s).
- `endIndex` (integer, required): End position of paragraph(s).
- `lineSpacing` (number, required): Line spacing as percentage. `100` = single, `115` = 1.15x, `150` = 1.5x, `200` = double.

</Accordion>

<Accordion title="google_docs/create_paragraph_bullets">
**Description:** Convert paragraphs to a bulleted or numbered list in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of paragraphs to convert to list.
- `endIndex` (integer, required): End position of paragraphs to convert to list.
- `bulletPreset` (string, required): Bullet/numbering style. Enum: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.

</Accordion>

<Accordion title="google_docs/delete_paragraph_bullets">
**Description:** Remove bullets or numbering from paragraphs in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of list paragraphs.
- `endIndex` (integer, required): End position of list paragraphs.

</Accordion>

<Accordion title="google_docs/insert_table_with_content">
**Description:** Insert a table with content into a Google Document in one action. Provide content as a 2D array.

**Parameters:**
- `documentId` (string, required): The document ID.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `index` (integer, optional): Position to insert the table. If not provided, the table is inserted at the end of the document.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. Example: `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.

</Accordion>

<Accordion title="google_docs/insert_table_row">
**Description:** Insert a new row above or below a reference cell in an existing table.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table. Get from get_document.
- `rowIndex` (integer, required): Row index (0-based) of reference cell.
- `columnIndex` (integer, optional): Column index (0-based) of reference cell. Default is `0`.
- `insertBelow` (boolean, optional): If `true`, insert below the reference row. If `false`, insert above. Default is `true`.

</Accordion>

<Accordion title="google_docs/insert_table_column">
**Description:** Insert a new column left or right of a reference cell in an existing table.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of reference cell. Default is `0`.
- `columnIndex` (integer, required): Column index (0-based) of reference cell.
- `insertRight` (boolean, optional): If `true`, insert to the right. If `false`, insert to the left. Default is `true`.

</Accordion>

<Accordion title="google_docs/delete_table_row">
**Description:** Delete a row from an existing table in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) to delete.
- `columnIndex` (integer, optional): Column index (0-based) of any cell in the row. Default is `0`.

</Accordion>

<Accordion title="google_docs/delete_table_column">
**Description:** Delete a column from an existing table in a Google Document.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of any cell in the column. Default is `0`.
- `columnIndex` (integer, required): Column index (0-based) to delete.

</Accordion>

<Accordion title="google_docs/merge_table_cells">
**Description:** Merge a range of table cells into a single cell. Content from all cells is preserved.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Starting row index (0-based) for the merge.
- `columnIndex` (integer, required): Starting column index (0-based) for the merge.
- `rowSpan` (integer, required): Number of rows to merge.
- `columnSpan` (integer, required): Number of columns to merge.

</Accordion>

<Accordion title="google_docs/unmerge_table_cells">
**Description:** Unmerge previously merged table cells back into individual cells.

**Parameters:**
- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) of the merged cell.
- `columnIndex` (integer, required): Column index (0-based) of the merged cell.
- `rowSpan` (integer, required): Number of rows the merged cell spans.
- `columnSpan` (integer, required): Number of columns the merged cell spans.

</Accordion>

<Accordion title="google_docs/insert_inline_image">
**Description:** Insert an image from a public URL into a Google Document. The image must be publicly accessible, under 50MB, and in PNG/JPEG/GIF format.

**Parameters:**
- `documentId` (string, required): The document ID.
- `uri` (string, required): Public URL of the image. Must be accessible without authentication.
- `index` (integer, optional): Position to insert the image. If not provided, the image is inserted at the end of the document. Default is `1`.

</Accordion>

<Accordion title="google_docs/insert_section_break">
**Description:** Insert a section break to create document sections with different formatting.

**Parameters:**
- `documentId` (string, required): The document ID.
- `index` (integer, required): Position to insert the section break.
- `sectionType` (string, required): The type of section break. Enum: `CONTINUOUS` (stays on same page), `NEXT_PAGE` (starts a new page).

</Accordion>

<Accordion title="google_docs/create_header">
**Description:** Create a header for the document. Returns a headerId which can be used with insert_text to add header content.

**Parameters:**
- `documentId` (string, required): The document ID.
- `type` (string, optional): Header type. Enum: `DEFAULT`. Default is `DEFAULT`.

</Accordion>

<Accordion title="google_docs/create_footer">
**Description:** Create a footer for the document. Returns a footerId which can be used with insert_text to add footer content.

**Parameters:**
- `documentId` (string, required): The document ID.
- `type` (string, optional): Footer type. Enum: `DEFAULT`. Default is `DEFAULT`.

</Accordion>

<Accordion title="google_docs/delete_header">
**Description:** Delete a header from the document. Use get_document to find the headerId.

**Parameters:**
- `documentId` (string, required): The document ID.
- `headerId` (string, required): The header ID to delete. Get from get_document response.

</Accordion>

<Accordion title="google_docs/delete_footer">
**Description:** Delete a footer from the document. Use get_document to find the footerId.

**Parameters:**
- `documentId` (string, required): The document ID.
- `footerId` (string, required): The footer ID to delete. Get from get_document response.

</Accordion>
</AccordionGroup>

## Usage Examples
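As a sketch of chaining the actions above into a document-building flow, with `client` as a hypothetical stand-in for whatever invokes these actions and an assumed `documentId` response field:

```python
def build_formatted_report(client):
    # Create the document with initial body text in one action.
    doc = client.create_document_with_content(
        title="Quarterly Report",
        content="Summary\nRevenue grew in Q3.",
    )
    doc_id = doc["documentId"]  # assumed response field

    # Style the first line ("Summary", body indices 1-8) as a heading,
    # then append a closing paragraph at the document end.
    client.apply_heading_style(
        documentId=doc_id, startIndex=1, endIndex=8, style="HEADING_1"
    )
    client.append_text(documentId=doc_id, text="\nPrepared by the data team.")
```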
@@ -62,6 +62,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/get_presentation_metadata">
**Description:** Get lightweight metadata about a presentation (title, slide count, slide IDs). Use this first before fetching full content.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation to retrieve.

</Accordion>

<Accordion title="google_slides/get_presentation_text">
**Description:** Extract all text content from a presentation. Returns slide IDs and text from shapes and tables only (no formatting).

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.

</Accordion>

<Accordion title="google_slides/get_presentation">
**Description:** Retrieves a presentation by ID.

@@ -96,6 +112,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/get_slide_text">
**Description:** Extract text content from a single slide. Returns only text from shapes and tables (no formatting or styling).

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.

</Accordion>

<Accordion title="google_slides/get_page">
**Description:** Retrieves a specific page by its ID.

@@ -114,6 +139,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/create_slide">
**Description:** Add an additional blank slide to a presentation. New presentations already have one blank slide - check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout instead.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): Where to insert the slide (0-based). If omitted, adds at the end.

</Accordion>

<Accordion title="google_slides/create_slide_with_layout">
**Description:** Create a slide with a predefined layout containing placeholder areas for title, body, etc. This is better than create_slide for structured content. After creating, use get_page to find placeholder IDs, then insert text into them.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): Layout type. One of: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY is best for title+description. TITLE for title-only slides. SECTION_HEADER for section dividers.
- `insertionIndex` (integer, optional): Where to insert (0-based). Omit to add at end.

</Accordion>

<Accordion title="google_slides/create_text_box">
**Description:** Create a text box on a slide with content. Use this for titles, descriptions, paragraphs - not tables. Optionally specify position (x, y) and size (width, height) in EMU units (914400 EMU = 1 inch).

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the text box to.
- `text` (string, required): The text content for the text box.
- `x` (integer, optional): X position in EMU (914400 = 1 inch). Default: 914400 (1 inch from left).
- `y` (integer, optional): Y position in EMU (914400 = 1 inch). Default: 914400 (1 inch from top).
- `width` (integer, optional): Width in EMU. Default: 7315200 (~8 inches).
- `height` (integer, optional): Height in EMU. Default: 914400 (~1 inch).

</Accordion>
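Because `create_text_box` takes positions and sizes in EMU (914400 EMU = 1 inch, per the parameter docs), a tiny illustrative converter keeps call sites readable:

```python
EMU_PER_INCH = 914_400  # from the parameter docs above

def inches_to_emu(inches: float) -> int:
    """Convert inches to the EMU units expected by create_text_box."""
    return round(inches * EMU_PER_INCH)

# An 8 x 1 inch text box, 1 inch from the left and 2 inches from the top:
text_box_geometry = {
    "x": inches_to_emu(1),       # 914400
    "y": inches_to_emu(2),       # 1828800
    "width": inches_to_emu(8),   # 7315200 (matches the documented default)
    "height": inches_to_emu(1),  # 914400
}
```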

<Accordion title="google_slides/delete_slide">
**Description:** Remove a slide from the presentation. Use get_presentation first to find the slide ID.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to delete. Get from get_presentation.

</Accordion>

<Accordion title="google_slides/duplicate_slide">
**Description:** Create a copy of an existing slide. The duplicate is inserted immediately after the original.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to duplicate. Get from get_presentation.

</Accordion>

<Accordion title="google_slides/move_slides">
**Description:** Reorder slides by moving them to a new position. Slide IDs must be in their current presentation order (no duplicates).

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideIds` (array of strings, required): Array of slide IDs to move. Must be in current presentation order.
- `insertionIndex` (integer, required): Target position (0-based). 0 = beginning, slide count = end.

</Accordion>

<Accordion title="google_slides/insert_youtube_video">
**Description:** Embed a YouTube video on a slide. The video ID is the value after "v=" in YouTube URLs (e.g., for youtube.com/watch?v=abc123, use "abc123").

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Get from get_presentation.
- `videoId` (string, required): The YouTube video ID (the value after v= in the URL).

</Accordion>
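To derive the `videoId` parameter from a full URL, a small helper such as this sketch works (handling the short youtu.be form is an extra assumption beyond the docs):

```python
from urllib.parse import parse_qs, urlparse

def youtube_video_id(url: str):
    """Extract the video ID (the value after v=) from a YouTube URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host.endswith("youtube.com"):
        return parse_qs(parsed.query).get("v", [None])[0]
    if host == "youtu.be":  # short-link form; an assumption beyond the docs
        return parsed.path.lstrip("/") or None
    return None

# youtube_video_id("https://www.youtube.com/watch?v=abc123") -> "abc123"
```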

<Accordion title="google_slides/insert_drive_video">
**Description:** Embed a video from Google Drive on a slide. The file ID can be found in the Drive file URL.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Get from get_presentation.
- `fileId` (string, required): The Google Drive file ID of the video.

</Accordion>

<Accordion title="google_slides/set_slide_background_image">
**Description:** Set a background image for a slide. The image URL must be publicly accessible.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to set the background for. Get from get_presentation.
- `imageUrl` (string, required): Publicly accessible URL of the image to use as background.

</Accordion>

<Accordion title="google_slides/create_table">
**Description:** Create an empty table on a slide. To create a table with content, use create_table_with_content instead.

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Get from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.

</Accordion>

<Accordion title="google_slides/create_table_with_content">
**Description:** Create a table with content in one action. Provide content as a 2D array where each inner array is a row. Example: [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]].

**Parameters:**
- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Get from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `content` (array, required): Table content as 2D array. Each inner array is a row. Example: [["Year", "Revenue"], ["2023", "$10M"]].

</Accordion>

<Accordion title="google_slides/import_data_from_sheet">
**Description:** Imports data from a Google Sheet into a presentation.
@@ -169,6 +169,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_excel/get_table_data">
**Description:** Get data from a specific table in an Excel worksheet.

**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet
- `table_name` (string, required): Name of the table

</Accordion>

<Accordion title="microsoft_excel/create_chart">
**Description:** Create a chart in an Excel worksheet.

@@ -201,6 +211,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_excel/get_used_range_metadata">
**Description:** Get the used range metadata (dimensions only, no data) of an Excel worksheet.

**Parameters:**
- `file_id` (string, required): The ID of the Excel file
- `worksheet_name` (string, required): Name of the worksheet

</Accordion>

<Accordion title="microsoft_excel/list_charts">
**Description:** Get all charts in an Excel worksheet.
@@ -151,6 +151,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `item_id` (string, required): The ID of the file.

</Accordion>

<Accordion title="microsoft_onedrive/list_files_by_path">
**Description:** List files and folders in a specific OneDrive path.

**Parameters:**
- `folder_path` (string, required): The folder path (e.g., 'Documents/Reports').
- `top` (integer, optional): Number of items to retrieve (max 1000). Default is `50`.
- `orderby` (string, optional): Order by field (e.g., "name asc", "lastModifiedDateTime desc"). Default is "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_recent_files">
**Description:** Get recently accessed files from OneDrive.

**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default is `25`.

</Accordion>

<Accordion title="microsoft_onedrive/get_shared_with_me">
**Description:** Get files and folders shared with the user.

**Parameters:**
- `top` (integer, optional): Number of items to retrieve (max 200). Default is `50`.
- `orderby` (string, optional): Order by field. Default is "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_file_by_path">
**Description:** Get information about a specific file or folder by path.

**Parameters:**
- `file_path` (string, required): The file or folder path (e.g., 'Documents/report.docx').

</Accordion>

<Accordion title="microsoft_onedrive/download_file_by_path">
**Description:** Download a file from OneDrive by its path.

**Parameters:**
- `file_path` (string, required): The file path (e.g., 'Documents/report.docx').

</Accordion>
</AccordionGroup>

## Usage Examples
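A sketch of combining these listing actions, with `client` as a hypothetical stand-in for whatever invokes them:

```python
def survey_drive(client):
    """List a known folder newest-first, then check recent and shared items."""
    reports = client.list_files_by_path(
        folder_path="Documents/Reports",          # illustrative path
        top=50,
        orderby="lastModifiedDateTime desc",
    )
    recent = client.get_recent_files(top=25)
    shared = client.get_shared_with_me(top=50, orderby="name asc")
    return reports, recent, shared
```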
@@ -133,6 +133,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `companyName` (string, optional): Contact's company name.

</Accordion>

<Accordion title="microsoft_outlook/get_message">
**Description:** Get a specific email message by ID.

**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from get_messages action.
- `select` (string, optional): Comma-separated list of properties to return. Example: "id,subject,body,from,receivedDateTime". Default is "id,subject,body,from,toRecipients,receivedDateTime".

</Accordion>

<Accordion title="microsoft_outlook/reply_to_email">
**Description:** Reply to an email message.

**Parameters:**
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from get_messages action.
- `comment` (string, required): The reply message content. Can be plain text or HTML. The original message will be quoted below this content.

</Accordion>

<Accordion title="microsoft_outlook/forward_email">
**Description:** Forward an email message.

**Parameters:**
- `message_id` (string, required): The unique identifier of the message to forward. Obtain from get_messages action.
- `to_recipients` (array, required): Array of recipient email addresses to forward to. Example: ["john@example.com", "jane@example.com"].
- `comment` (string, optional): Optional message to include above the forwarded content. Can be plain text or HTML.

</Accordion>

<Accordion title="microsoft_outlook/mark_message_read">
**Description:** Mark a message as read or unread.

**Parameters:**
- `message_id` (string, required): The unique identifier of the message. Obtain from get_messages action.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.

</Accordion>

<Accordion title="microsoft_outlook/delete_message">
**Description:** Delete an email message.

**Parameters:**
- `message_id` (string, required): The unique identifier of the message to delete. Obtain from get_messages action.

</Accordion>

<Accordion title="microsoft_outlook/update_event">
**Description:** Update an existing calendar event.

**Parameters:**
- `event_id` (string, required): The unique identifier of the event. Obtain from get_calendar_events action.
- `subject` (string, optional): New subject/title for the event.
- `start_time` (string, optional): New start time in ISO 8601 format (e.g., "2024-01-20T10:00:00"). REQUIRED: Must also provide start_timezone when using this field.
- `start_timezone` (string, optional): Timezone for start time. REQUIRED when updating start_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): New end time in ISO 8601 format. REQUIRED: Must also provide end_timezone when using this field.
- `end_timezone` (string, optional): Timezone for end time. REQUIRED when updating end_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): New location for the event.
- `body` (string, optional): New body/description for the event. Supports HTML formatting.

</Accordion>
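Because `start_time`/`end_time` must travel with their timezone fields, a well-formed update payload pairs them explicitly; the values below are hypothetical:

```python
# Hypothetical arguments for update_event: start_time and start_timezone
# are provided together, as are end_time and end_timezone.
update_event_args = {
    "event_id": "<event id from get_calendar_events>",
    "subject": "Sprint review (moved)",
    "start_time": "2024-01-20T10:00:00",
    "start_timezone": "Pacific Standard Time",
    "end_time": "2024-01-20T11:00:00",
    "end_timezone": "Pacific Standard Time",
    "location": "Conference Room B",
}
```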

<Accordion title="microsoft_outlook/delete_event">
**Description:** Delete a calendar event.

**Parameters:**
- `event_id` (string, required): The unique identifier of the event to delete. Obtain from get_calendar_events action.

</Accordion>
</AccordionGroup>

## Usage Examples
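A sketch of a simple triage flow over the message actions above, again with a hypothetical `client` standing in for the action calls:

```python
def triage_message(client, message_id: str):
    """Read a message, reply, forward it to on-call, then mark it read."""
    msg = client.get_message(
        message_id=message_id,
        select="id,subject,body,from,receivedDateTime",
    )
    client.reply_to_email(
        message_id=message_id,
        comment="Thanks, we are looking into this.",
    )
    client.forward_email(
        message_id=message_id,
        to_recipients=["oncall@example.com"],  # illustrative recipient
        comment="FYI: customer report below.",
    )
    client.mark_message_read(message_id=message_id, is_read=True)
    return msg
```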
@@ -78,6 +78,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_sharepoint/get_drives">
**Description:** List all document libraries (drives) in a SharePoint site. Use this to discover available libraries before using file operations.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `top` (integer, optional): Maximum number of drives to return per page (1-999). Default is 100
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,webUrl,driveType')

</Accordion>
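The parameter notes above imply a fixed discovery order: resolve a site, then a drive, then items. As a sketch, with `client` as a hypothetical stand-in and the call forms illustrative (`get_sites` is taken from the parameter descriptions):

```python
def list_root_of_first_drive(client, site_query: str):
    """Resolve site_id -> drive_id -> root listing, in the documented order."""
    site = client.get_sites(query=site_query)[0]        # illustrative call form
    drive = client.get_drives(site_id=site["id"], top=100)[0]
    return client.list_files(
        site_id=site["id"],
        drive_id=drive["id"],
        folder_id="root",
    )
```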

<Accordion title="microsoft_sharepoint/get_site_lists">
**Description:** Get all lists in a SharePoint site.

@@ -159,20 +170,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

-<Accordion title="microsoft_sharepoint/get_drive_items">
-**Description:** Get files and folders from a SharePoint document library.
+<Accordion title="microsoft_sharepoint/list_files">
+**Description:** Retrieve files and folders from a SharePoint document library. By default lists the root folder, but you can navigate into subfolders by providing a folder_id.

**Parameters:**
-- `site_id` (string, required): The ID of the SharePoint site
+- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_id` (string, optional): The ID of the folder to list contents from. Use 'root' for the root folder, or provide a folder ID from a previous list_files call. Default is 'root'
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `orderby` (string, optional): Sort order for results (e.g., 'name asc', 'size desc', 'lastModifiedDateTime desc'). Default is 'name asc'
- `filter` (string, optional): OData filter to narrow results (e.g., 'file ne null' for files only, 'folder ne null' for folders only)
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')

</Accordion>

-<Accordion title="microsoft_sharepoint/delete_drive_item">
-**Description:** Delete a file or folder from SharePoint document library.
+<Accordion title="microsoft_sharepoint/delete_file">
+**Description:** Delete a file or folder from a SharePoint document library. For folders, all contents are deleted recursively. Items are moved to the site recycle bin.

**Parameters:**
-- `site_id` (string, required): The ID of the SharePoint site
-- `item_id` (string, required): The ID of the file or folder to delete
+- `site_id` (string, required): The full SharePoint site identifier from get_sites
+- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
+- `item_id` (string, required): The unique identifier of the file or folder to delete. Obtain from list_files

</Accordion>

<Accordion title="microsoft_sharepoint/list_files_by_path">
**Description:** List files and folders in a SharePoint document library folder by its path. More efficient than multiple list_files calls for deep navigation.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_path` (string, required): The full path to the folder without leading/trailing slashes (e.g., 'Documents', 'Reports/2024/Q1')
- `top` (integer, optional): Maximum number of items to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `orderby` (string, optional): Sort order for results (e.g., 'name asc', 'size desc'). Default is 'name asc'
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')

</Accordion>

<Accordion title="microsoft_sharepoint/download_file">
**Description:** Download raw file content from a SharePoint document library. Use only for plain text files (.txt, .csv, .json). For Excel files, use the Excel-specific actions. For Word files, use get_word_document_content.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file to download. Obtain from list_files or list_files_by_path

</Accordion>

<Accordion title="microsoft_sharepoint/get_file_info">
**Description:** Retrieve detailed metadata for a specific file or folder in a SharePoint document library, including name, size, created/modified dates, and author information.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder. Obtain from list_files or list_files_by_path
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy')

</Accordion>

<Accordion title="microsoft_sharepoint/create_folder">
**Description:** Create a new folder in a SharePoint document library. By default creates the folder in the root; use parent_id to create subfolders.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `folder_name` (string, required): Name for the new folder. Cannot contain: \ / : * ? " < > |
- `parent_id` (string, optional): The ID of the parent folder. Use 'root' for the document library root, or provide a folder ID from list_files. Default is 'root'

</Accordion>

<Accordion title="microsoft_sharepoint/search_files">
**Description:** Search for files and folders in a SharePoint document library by keywords. Searches file names, folder names, and file contents for Office documents. Do not use wildcards or special characters.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `query` (string, required): Search keywords (e.g., 'report', 'budget 2024'). Wildcards like *.txt are not supported
- `top` (integer, optional): Maximum number of results to return per page (1-1000). Default is 50
- `skip_token` (string, optional): Pagination token from a previous response to fetch the next page of results
- `select` (string, optional): Comma-separated list of fields to return (e.g., 'id,name,size,folder,file,webUrl,lastModifiedDateTime')

</Accordion>

<Accordion title="microsoft_sharepoint/copy_file">
**Description:** Copy a file or folder to a new location within SharePoint. The original item remains unchanged. The copy operation is asynchronous for large files.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder to copy. Obtain from list_files or search_files
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files
- `new_name` (string, optional): New name for the copy. If not provided, the original name is used

</Accordion>

<Accordion title="microsoft_sharepoint/move_file">
**Description:** Move a file or folder to a new location within SharePoint. The item is removed from its original location. For folders, all contents are moved as well.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the file or folder to move. Obtain from list_files or search_files
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID from list_files
- `new_name` (string, optional): New name for the moved item. If not provided, the original name is kept

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_worksheets">
**Description:** List all worksheets (tabs) in an Excel workbook stored in a SharePoint document library. Use the returned worksheet name with other Excel actions.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,position,visibility')
- `filter` (string, optional): OData filter expression (e.g., "visibility eq 'Visible'" to exclude hidden sheets)
- `top` (integer, optional): Maximum number of worksheets to return. Minimum: 1, Maximum: 999
- `orderby` (string, optional): Sort order (e.g., 'position asc' to return sheets in tab order)

</Accordion>

<Accordion title="microsoft_sharepoint/create_excel_worksheet">
**Description:** Create a new worksheet (tab) in an Excel workbook stored in a SharePoint document library. The new sheet is added at the end of the tab list.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `name` (string, required): Name for the new worksheet. Maximum 31 characters. Cannot contain: \ / * ? : [ ]. Must be unique within the workbook

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_range_data">
**Description:** Retrieve cell values from a specific range in an Excel worksheet stored in SharePoint. For reading all data without knowing dimensions, use get_excel_used_range instead.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read from. Obtain from get_excel_worksheets. Case-sensitive
- `range` (string, required): Cell range in A1 notation (e.g., 'A1:C10', 'A:C', '1:5', 'A1')
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')

</Accordion>

<Accordion title="microsoft_sharepoint/update_excel_range_data">
**Description:** Write values to a specific range in an Excel worksheet stored in SharePoint. Overwrites existing cell contents. The values array dimensions must match the range dimensions exactly.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to update. Obtain from get_excel_worksheets. Case-sensitive
- `range` (string, required): Cell range in A1 notation where values will be written (e.g., 'A1:C3' for a 3x3 block)
- `values` (array, required): 2D array of values (rows containing cells). Example for A1:B2: [["Header1", "Header2"], ["Value1", "Value2"]]. Use null to clear a cell

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">
**Description:** Return only the metadata (address and dimensions) of the used range in a worksheet, without the actual cell values. Ideal for large files to understand spreadsheet size before reading data in chunks.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_used_range">
**Description:** Retrieve all cells containing data in a worksheet stored in SharePoint. Do not use for files larger than 2MB. For large files, use get_excel_used_range_metadata first, then get_excel_range_data to read in smaller chunks.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text,rowCount,columnCount')

</Accordion>
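For workbooks beyond the 2MB guidance above, the metadata-then-chunks pattern looks like this sketch (hypothetical `client`; the `rowCount` response field is an assumption):

```python
def read_sheet_in_chunks(client, site_id, drive_id, item_id,
                         worksheet_name, last_col="F", chunk_rows=500):
    """Size the used range first, then page through it with bounded reads."""
    meta = client.get_excel_used_range_metadata(
        site_id=site_id, drive_id=drive_id, item_id=item_id,
        worksheet_name=worksheet_name,
    )
    total_rows = meta["rowCount"]  # assumed response field
    for start in range(1, total_rows + 1, chunk_rows):
        end = min(start + chunk_rows - 1, total_rows)
        yield client.get_excel_range_data(
            site_id=site_id, drive_id=drive_id, item_id=item_id,
            worksheet_name=worksheet_name,
            range=f"A{start}:{last_col}{end}",
        )
```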

<Accordion title="microsoft_sharepoint/get_excel_cell">
**Description:** Retrieve the value of a single cell by row and column index from an Excel file in SharePoint. Indices are 0-based (row 0 = Excel row 1, column 0 = column A).

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet (tab). Obtain from get_excel_worksheets. Case-sensitive
- `row` (integer, required): 0-based row index (row 0 = Excel row 1). Valid range: 0-1048575
- `column` (integer, required): 0-based column index (column 0 = A, column 1 = B). Valid range: 0-16383
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')

</Accordion>
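Translating between `get_excel_cell`'s 0-based indices and the A1 notation used by the range actions comes up often; an illustrative converter:

```python
def a1_from_indices(row: int, column: int) -> str:
    """Convert get_excel_cell's 0-based row/column to A1 notation
    (row 0, column 0 -> 'A1'; handles multi-letter columns)."""
    letters = ""
    c = column
    while True:
        letters = chr(ord("A") + c % 26) + letters
        c = c // 26 - 1
        if c < 0:
            break
    return f"{letters}{row + 1}"

# a1_from_indices(0, 0) -> 'A1'; a1_from_indices(9, 27) -> 'AB10'
# a1_from_indices(0, 16383) -> 'XFD1' (the documented column maximum)
```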

<Accordion title="microsoft_sharepoint/add_excel_table">
**Description:** Convert a cell range into a formatted Excel table with filtering, sorting, and structured data capabilities. Tables enable add_excel_table_row for appending data.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the data range. Obtain from get_excel_worksheets
- `range` (string, required): Cell range to convert into a table, including headers and data (e.g., 'A1:D10' where A1:D1 contains column headers)
- `has_headers` (boolean, optional): Set to true if the first row contains column headers. Default is true

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_tables">
**Description:** List all tables in a specific Excel worksheet stored in SharePoint. Returns table properties including id, name, showHeaders, and showTotals.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet to get tables from. Obtain from get_excel_worksheets

</Accordion>

<Accordion title="microsoft_sharepoint/add_excel_table_row">
**Description:** Append a new row to the end of an Excel table in a SharePoint file. The values array must have the same number of elements as the table has columns.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
- `table_name` (string, required): Name of the table to add the row to (e.g., 'Table1'). Obtain from get_excel_tables. Case-sensitive
- `values` (array, required): Array of cell values for the new row, one per column in table order (e.g., ["John Doe", "john@example.com", 25])

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_table_data">
**Description:** Get all rows from an Excel table in a SharePoint file as a data range. Easier than get_excel_range_data when working with structured tables since you don't need to know the exact range.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
- `table_name` (string, required): Name of the table to get data from (e.g., 'Table1'). Obtain from get_excel_tables. Case-sensitive
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'address,values,formulas,numberFormat,text')

</Accordion>

<Accordion title="microsoft_sharepoint/create_excel_chart">
**Description:** Create a chart visualization in an Excel worksheet stored in SharePoint from a data range. The chart is embedded in the worksheet.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet where the chart will be created. Obtain from get_excel_worksheets
- `chart_type` (string, required): Chart type (e.g., 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut')
- `source_data` (string, required): Data range for the chart in A1 notation, including headers (e.g., 'A1:B10')
- `series_by` (string, optional): How data series are organized: 'Auto', 'Columns', or 'Rows'. Default is 'Auto'

</Accordion>

<Accordion title="microsoft_sharepoint/list_excel_charts">
**Description:** List all charts embedded in an Excel worksheet stored in SharePoint. Returns chart properties including id, name, chartType, height, width, and position.

**Parameters:**
- `site_id` (string, required): The full SharePoint site identifier from get_sites
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
- `worksheet_name` (string, required): Name of the worksheet to list charts from. Obtain from get_excel_worksheets

</Accordion>

<Accordion title="microsoft_sharepoint/delete_excel_worksheet">
**Description:** Permanently remove a worksheet (tab) and all its contents from an Excel workbook stored in SharePoint. Cannot be undone. A workbook must have at least one worksheet.
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The full SharePoint site identifier from get_sites
|
||||
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
|
||||
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
|
||||
- `worksheet_name` (string, required): Name of the worksheet to delete. Case-sensitive. All data, tables, and charts on this sheet will be permanently removed
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_excel_table">
|
||||
**Description:** Remove a table from an Excel worksheet in SharePoint. This deletes the table structure (filtering, formatting, table features) but preserves the underlying cell data.
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The full SharePoint site identifier from get_sites
|
||||
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
|
||||
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
|
||||
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets
|
||||
- `table_name` (string, required): Name of the table to delete (e.g., 'Table1'). Obtain from get_excel_tables. The data in the cells will remain after table deletion
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/list_excel_names">
|
||||
**Description:** Retrieve all named ranges defined in an Excel workbook stored in SharePoint. Named ranges are user-defined labels for cell ranges (e.g., 'SalesData' for A1:D100).
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The full SharePoint site identifier from get_sites
|
||||
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
|
||||
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_word_document_content">
|
||||
**Description:** Download and extract text content from a Word document (.docx) stored in a SharePoint document library. This is the recommended way to read Word documents from SharePoint.
|
||||
|
||||
**Parameters:**
|
||||
- `site_id` (string, required): The full SharePoint site identifier from get_sites
|
||||
- `drive_id` (string, required): The ID of the document library. Call get_drives first to get valid drive IDs
|
||||
- `item_id` (string, required): The unique identifier of the Word document (.docx) in SharePoint. Obtain from list_files or search_files
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
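
For example, an agent can chain these actions to size up a workbook before reading it. The sketch below follows the `apps=[...]` pattern used in this page's usage examples; the library, file, and sheet names in the task are hypothetical placeholders.

```python
from crewai import Agent, Task, Crew

# Agent specialized in reading spreadsheet data from SharePoint
sheet_reader = Agent(
    role="SharePoint Spreadsheet Analyst",
    goal="Locate Excel files in SharePoint and summarize their contents",
    backstory="An AI assistant skilled at navigating document libraries and tabular data.",
    apps=[
        'microsoft_sharepoint/get_excel_worksheets',
        'microsoft_sharepoint/get_excel_used_range_metadata',
        'microsoft_sharepoint/get_excel_range_data',
    ]
)

# Task: discover the worksheet layout first, then read a bounded range
read_task = Task(
    description=(
        "In the 'Quarterly Reports' document library, open 'budget.xlsx', "
        "list its worksheets, check the used range size, and read the first "
        "20 rows of the 'Summary' sheet."
    ),
    agent=sheet_reader,
    expected_output="A short summary of the budget spreadsheet's contents."
)

crew = Crew(agents=[sheet_reader], tasks=[read_task])
crew.kickoff()
```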
@@ -108,6 +108,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `join_web_url` (string, required): The join web URL of the meeting to search for.

</Accordion>

<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">
**Description:** Search online meetings by external Meeting ID.

**Parameters:**

- `join_meeting_id` (string, required): The meeting ID (numeric code) that attendees use to join. This is the joinMeetingId shown in meeting invitations, not the Graph API meeting id.

</Accordion>

<Accordion title="microsoft_teams/get_meeting">
**Description:** Get details of a specific online meeting.

**Parameters:**

- `meeting_id` (string, required): The Graph API meeting ID (a long alphanumeric string). Obtain from create_meeting or search_online_meetings actions. Different from the numeric joinMeetingId.

</Accordion>

<Accordion title="microsoft_teams/get_team_members">
**Description:** Get members of a specific team.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `top` (integer, optional): Maximum number of members to retrieve per page (1-999). Default is `100`.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.

</Accordion>

<Accordion title="microsoft_teams/create_channel">
**Description:** Create a new channel in a team.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `display_name` (string, required): Name of the channel as displayed in Teams. Must be unique within the team. Max 50 characters.
- `description` (string, optional): Optional description explaining the channel's purpose. Visible in channel details. Max 1024 characters.
- `membership_type` (string, optional): Channel visibility. Enum: `standard`, `private`. "standard" = visible to all team members, "private" = visible only to specifically added members. Default is `standard`.

</Accordion>

<Accordion title="microsoft_teams/get_message_replies">
**Description:** Get replies to a specific message in a channel.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain from get_channels action.
- `message_id` (string, required): The unique identifier of the parent message. Obtain from get_messages action.
- `top` (integer, optional): Maximum number of replies to retrieve per page (1-50). Default is `50`.
- `skip_token` (string, optional): Pagination token from a previous response. When the response includes @odata.nextLink, extract the $skiptoken parameter value and pass it here to get the next page of results.

</Accordion>

<Accordion title="microsoft_teams/reply_to_message">
**Description:** Reply to a message in a Teams channel.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain from get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain from get_channels action.
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from get_messages action.
- `message` (string, required): The reply content. For HTML, include formatting tags. For text, plain text only.
- `content_type` (string, optional): Content format. Enum: `html`, `text`. "text" for plain text, "html" for rich text with formatting. Default is `text`.

</Accordion>

<Accordion title="microsoft_teams/update_meeting">
**Description:** Update an existing online meeting.

**Parameters:**

- `meeting_id` (string, required): The unique identifier of the meeting. Obtain from create_meeting or search_online_meetings actions.
- `subject` (string, optional): New meeting title.
- `startDateTime` (string, optional): New start time in ISO 8601 format with timezone. Example: "2024-01-20T10:00:00-08:00".
- `endDateTime` (string, optional): New end time in ISO 8601 format with timezone.

</Accordion>

<Accordion title="microsoft_teams/delete_meeting">
**Description:** Delete an online meeting.

**Parameters:**

- `meeting_id` (string, required): The unique identifier of the meeting to delete. Obtain from create_meeting or search_online_meetings actions.

</Accordion>
</AccordionGroup>
## Usage Examples
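
A minimal sketch of a channel-messaging crew, following the `apps=[...]` pattern used in the other integration docs on this site; the team and channel names in the task are hypothetical.

```python
from crewai import Agent, Task, Crew

# Agent that monitors a Teams channel and replies to open questions
channel_assistant = Agent(
    role="Teams Channel Assistant",
    goal="Track channel discussions and reply to unanswered messages",
    backstory="An AI assistant that keeps team conversations moving.",
    apps=[
        'microsoft_teams/get_team_members',
        'microsoft_teams/get_message_replies',
        'microsoft_teams/reply_to_message',
    ]
)

triage_task = Task(
    description=(
        "In the 'Engineering' team's 'support' channel, find messages with "
        "no replies and post a brief acknowledgement reply to each."
    ),
    agent=channel_assistant,
    expected_output="A list of messages that received an acknowledgement reply."
)

crew = Crew(agents=[channel_assistant], tasks=[triage_task])
crew.kickoff()
```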
@@ -98,6 +98,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `file_id` (string, required): The ID of the document to delete.

</Accordion>

<Accordion title="microsoft_word/copy_document">
**Description:** Copy a document to a new location in OneDrive.

**Parameters:**

- `file_id` (string, required): The ID of the document to copy
- `name` (string, optional): New name for the copied document
- `parent_id` (string, optional): The ID of the destination folder (defaults to root)

</Accordion>

<Accordion title="microsoft_word/move_document">
**Description:** Move a document to a new location in OneDrive.

**Parameters:**

- `file_id` (string, required): The ID of the document to move
- `parent_id` (string, required): The ID of the destination folder
- `name` (string, optional): New name for the moved document

</Accordion>
</AccordionGroup>
## Usage Examples
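
A minimal sketch of combining the copy and move actions above; the file and folder names are hypothetical placeholders.

```python
from crewai import Agent, Task, Crew

# Agent that organizes Word documents in OneDrive
doc_librarian = Agent(
    role="Document Librarian",
    goal="Keep Word documents organized and safely duplicated",
    backstory="An AI assistant that manages document lifecycles.",
    apps=['microsoft_word/copy_document', 'microsoft_word/move_document']
)

archive_task = Task(
    description=(
        "Copy the document 'proposal.docx' as 'proposal-backup.docx', then "
        "move the original into the 'Archive' folder."
    ),
    agent=doc_librarian,
    expected_output="Confirmation that the backup was created and the original archived."
)

crew = Crew(agents=[doc_librarian], tasks=[archive_task])
crew.kickoff()
```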
61
docs/en/guides/coding-tools/agents-md.mdx
Normal file
@@ -0,0 +1,61 @@
---
title: Coding Tools
description: Use AGENTS.md to guide coding agents and IDEs across your CrewAI projects.
icon: terminal
mode: "wide"
---

## Why AGENTS.md

`AGENTS.md` is a lightweight, repo-local instruction file that gives coding agents consistent, project-specific guidance. Keep it in the project root and treat it as the source of truth for how you want assistants to work: conventions, commands, architecture notes, and guardrails.

## Create a Project with the CLI

Use the CrewAI CLI to scaffold a project; `AGENTS.md` is added at the project root automatically.

```bash
# Crew
crewai create crew my_crew

# Flow
crewai create flow my_flow

# Tool repository
crewai tool create my_tool
```

## Tool Setup: Point Assistants to AGENTS.md

### Codex

Codex can be guided by `AGENTS.md` files placed in your repository. Use them to supply persistent project context such as conventions, commands, and workflow expectations.

### Claude Code

Claude Code stores project memory in `CLAUDE.md`. You can bootstrap it with `/init` and edit it using `/memory`. Claude Code also supports imports inside `CLAUDE.md`, so you can add a single line like `@AGENTS.md` to pull in the shared instructions without duplicating them.

Alternatively, simply rename the file:

```bash
mv AGENTS.md CLAUDE.md
```

### Gemini CLI and Google Antigravity

Gemini CLI and Antigravity load a project context file (default: `GEMINI.md`) from the repo root and parent directories. You can configure them to read `AGENTS.md` instead (or in addition) by setting `context.fileName` in your Gemini CLI settings. For example, set it to `AGENTS.md` only, or include both `AGENTS.md` and `GEMINI.md` if you want to keep each tool's format.

Alternatively, simply rename the file:

```bash
mv AGENTS.md GEMINI.md
```

### Cursor

Cursor supports `AGENTS.md` as a project instruction file. Place it at the project root to provide guidance for Cursor's coding assistant.

### Windsurf

Claude Code provides an official integration with Windsurf. If you use Claude Code inside Windsurf, follow the Claude Code guidance above and import `AGENTS.md` from `CLAUDE.md`.

If you are using Windsurf's native assistant, configure its project rules or instructions feature (if available) to read from `AGENTS.md`, or paste the contents directly.
@@ -15,6 +15,29 @@ Along with that provides the ability for the Agent to update the database based
**Attention**: Make sure the Agent has access to a read replica, or that it is acceptable for the Agent to run insert/update queries against the database.

## Security Model

`NL2SQLTool` is an execution-capable tool. It runs model-generated SQL directly against the configured database connection.

This means risk depends on your deployment choices:

- Which credentials you provide in `db_uri`
- Whether untrusted input can influence prompts
- Whether you add tool-call guardrails before execution

If you route untrusted input to agents using this tool, treat it as a high-risk integration.

## Hardening Recommendations

Use all of the following in production:

- Use a read-only database user whenever possible
- Prefer a read replica for analytics/retrieval workloads
- Grant least privilege (no superuser/admin roles, no file/system-level capabilities)
- Apply database-side resource limits (statement timeout, lock timeout, cost/row limits)
- Add `before_tool_call` hooks to enforce allowed query patterns (see the sketch below)
- Enable query logging and alerting for destructive statements
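
A minimal sketch of such a guard, assuming a hook that can inspect a tool call's arguments before execution; the hook registration API varies across crewAI versions, so treat the decorator name and context shape in the comments as illustrative.

```python
import re

# Accept only a single read-only SELECT statement; this is deliberately
# conservative and will also reject SELECTs that merely mention a keyword.
READ_ONLY_SQL = re.compile(r"^\s*select\b", re.IGNORECASE)
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant)\b", re.IGNORECASE
)

def validate_sql(sql_query: str) -> None:
    """Raise before the NL2SQLTool ever touches the database."""
    if not READ_ONLY_SQL.match(sql_query) or FORBIDDEN.search(sql_query):
        raise ValueError(f"Blocked non-read-only SQL: {sql_query!r}")
    # Reject multi-statement payloads (anything with an interior semicolon).
    if ";" in sql_query.rstrip().rstrip(";"):
        raise ValueError("Blocked multi-statement SQL")

# Wire this into your tool-call hook; the exact decorator is version-dependent:
# @before_tool_call
# def guard(context):
#     if context.tool_name == "NL2SQLTool":
#         validate_sql(context.tool_input.get("sql_query", ""))
```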
## Requirements

- SQLAlchemy
@@ -200,6 +200,25 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `clientData` (array, optional): Client-specific data. Each item is an object with a `key` (string) and a `value` (string).

</Accordion>

<Accordion title="google_contacts/update_contact_group">
**Description:** Update a contact group's information.

**Parameters:**

- `resourceName` (string, required): The resource name of the contact group (e.g., 'contactGroups/myContactGroup').
- `name` (string, required): The name of the contact group.
- `clientData` (array, optional): Client-specific data. Each item is an object with a `key` (string) and a `value` (string).

</Accordion>

<Accordion title="google_contacts/delete_contact_group">
**Description:** Delete a contact group.

**Parameters:**

- `resourceName` (string, required): The resource name of the contact group to delete (e.g., 'contactGroups/myContactGroup').
- `deleteContacts` (boolean, optional): Whether to also delete the contacts in the group. Default: false

</Accordion>
</AccordionGroup>
## Usage Examples
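
A minimal sketch of maintaining contact groups with the actions above; the second resource name ('contactGroups/oldGroup') is a hypothetical placeholder, while the first comes from the parameter examples.

```python
from crewai import Agent, Task, Crew

# Agent that maintains Google Contacts groups
contacts_manager = Agent(
    role="Contacts Manager",
    goal="Keep contact groups accurate and up to date",
    backstory="An AI assistant that curates address-book structure.",
    apps=['google_contacts/update_contact_group', 'google_contacts/delete_contact_group']
)

cleanup_task = Task(
    description=(
        "Rename the contact group 'contactGroups/myContactGroup' to 'Vendors 2024', "
        "then delete the obsolete group 'contactGroups/oldGroup' without deleting its contacts."
    ),
    agent=contacts_manager,
    expected_output="Confirmation of the rename and the deletion."
)

crew = Crew(agents=[contacts_manager], tasks=[cleanup_task])
crew.kickoff()
```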
@@ -131,6 +131,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `endIndex` (integer, required): The end index of the range.

</Accordion>

<Accordion title="google_docs/create_document_with_content">
**Description:** Create a new Google Doc with content in a single operation.

**Parameters:**

- `title` (string, required): The title of the new document. Shown at the top of the document and in Google Drive.
- `content` (string, optional): Text content to insert into the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/append_text">
**Description:** Append text to the end of a Google Doc. No index needed; the text is automatically inserted at the end of the document.

**Parameters:**

- `documentId` (string, required): The document ID from the create_document response or the URL.
- `text` (string, required): The text to append at the end of the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/set_text_bold">
**Description:** Make text bold or remove bold formatting in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `bold` (boolean, required): Set to `true` to make the text bold, `false` to remove bold.

</Accordion>

<Accordion title="google_docs/set_text_italic">
**Description:** Make text italic or remove italic formatting in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `italic` (boolean, required): Set to `true` to make the text italic, `false` to remove italics.

</Accordion>

<Accordion title="google_docs/set_text_underline">
**Description:** Add or remove underline formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `underline` (boolean, required): Set to `true` to add an underline, `false` to remove it.

</Accordion>

<Accordion title="google_docs/set_text_strikethrough">
**Description:** Add or remove strikethrough formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `strikethrough` (boolean, required): Set to `true` to add strikethrough, `false` to remove it.

</Accordion>

<Accordion title="google_docs/set_font_size">
**Description:** Change the font size of text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `fontSize` (number, required): Font size in points. Common sizes: 10, 11, 12, 14, 16, 18, 24, 36.

</Accordion>

<Accordion title="google_docs/set_text_color">
**Description:** Change the text color in a Google Doc using RGB values (0-1 scale).

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `red` (number, required): Red component (0-1). E.g., `1` is full red.
- `green` (number, required): Green component (0-1). E.g., `0.5` is half green.
- `blue` (number, required): Blue component (0-1). E.g., `0` is no blue.

</Accordion>

<Accordion title="google_docs/create_hyperlink">
**Description:** Convert existing text in a Google Doc into a clickable hyperlink.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the text to link.
- `endIndex` (integer, required): End position of the text to link (exclusive).
- `url` (string, required): The URL the link points to. E.g., `"https://example.com"`.

</Accordion>

<Accordion title="google_docs/apply_heading_style">
**Description:** Apply a heading or paragraph style to a text range in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the paragraph to style.
- `endIndex` (integer, required): End position of the paragraph to style.
- `style` (string, required): The style to apply. Options: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.

</Accordion>

<Accordion title="google_docs/set_paragraph_alignment">
**Description:** Set the text alignment of paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the paragraphs to align.
- `endIndex` (integer, required): End position of the paragraphs to align.
- `alignment` (string, required): The text alignment. Options: `START` (left), `CENTER`, `END` (right), `JUSTIFIED`.

</Accordion>

<Accordion title="google_docs/set_line_spacing">
**Description:** Set the line spacing of paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the paragraphs.
- `endIndex` (integer, required): End position of the paragraphs.
- `lineSpacing` (number, required): Line spacing as a percentage. `100` = single, `115` = 1.15x, `150` = 1.5x, `200` = double.

</Accordion>

<Accordion title="google_docs/create_paragraph_bullets">
**Description:** Convert paragraphs into a bulleted or numbered list in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the paragraphs to convert to a list.
- `endIndex` (integer, required): End position of the paragraphs to convert to a list.
- `bulletPreset` (string, required): The bullet/numbering style. Options: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.

</Accordion>

<Accordion title="google_docs/delete_paragraph_bullets">
**Description:** Remove bullets or numbering from paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `startIndex` (integer, required): Start position of the list paragraphs.
- `endIndex` (integer, required): End position of the list paragraphs.

</Accordion>

<Accordion title="google_docs/insert_table_with_content">
**Description:** Insert a table with content into a Google Doc in a single operation. Provide the content as a 2D array.

**Parameters:**

- `documentId` (string, required): The document ID.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `index` (integer, optional): Position at which to insert the table. If omitted, it is inserted at the end of the document.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. E.g., `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.

</Accordion>

<Accordion title="google_docs/insert_table_row">
**Description:** Insert a new row above or below a reference cell in an existing table.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table. Obtain from get_document.
- `rowIndex` (integer, required): Row index of the reference cell (0-based).
- `columnIndex` (integer, optional): Column index of the reference cell (0-based). Default: `0`.
- `insertBelow` (boolean, optional): `true` to insert below the reference row, `false` to insert above. Default: `true`.

</Accordion>

<Accordion title="google_docs/insert_table_column">
**Description:** Insert a new column to the left or right of a reference cell in an existing table.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index of the reference cell (0-based). Default: `0`.
- `columnIndex` (integer, required): Column index of the reference cell (0-based).
- `insertRight` (boolean, optional): `true` to insert to the right, `false` to insert to the left. Default: `true`.

</Accordion>

<Accordion title="google_docs/delete_table_row">
**Description:** Delete a row from an existing table in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Index of the row to delete (0-based).
- `columnIndex` (integer, optional): Column index of any cell in the row (0-based). Default: `0`.

</Accordion>

<Accordion title="google_docs/delete_table_column">
**Description:** Delete a column from an existing table in a Google Doc.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index of any cell in the column (0-based). Default: `0`.
- `columnIndex` (integer, required): Index of the column to delete (0-based).

</Accordion>

<Accordion title="google_docs/merge_table_cells">
**Description:** Merge a range of table cells into a single cell. Content from all cells is preserved.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Starting row index of the merge (0-based).
- `columnIndex` (integer, required): Starting column index of the merge (0-based).
- `rowSpan` (integer, required): Number of rows to merge.
- `columnSpan` (integer, required): Number of columns to merge.

</Accordion>

<Accordion title="google_docs/unmerge_table_cells">
**Description:** Split previously merged table cells back into individual cells.

**Parameters:**

- `documentId` (string, required): The document ID.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index of the merged cell (0-based).
- `columnIndex` (integer, required): Column index of the merged cell (0-based).
- `rowSpan` (integer, required): Number of rows the merged cell spans.
- `columnSpan` (integer, required): Number of columns the merged cell spans.

</Accordion>

<Accordion title="google_docs/insert_inline_image">
**Description:** Insert an image into a Google Doc from a public URL. The image must be publicly accessible, under 50MB, and in PNG/JPEG/GIF format.

**Parameters:**

- `documentId` (string, required): The document ID.
- `uri` (string, required): Public URL of the image. Must be accessible without authentication.
- `index` (integer, optional): Position at which to insert the image. If omitted, it is inserted at the end of the document. Default: `1`.

</Accordion>

<Accordion title="google_docs/insert_section_break">
**Description:** Insert a section break to create document sections with different formatting.

**Parameters:**

- `documentId` (string, required): The document ID.
- `index` (integer, required): Position at which to insert the section break.
- `sectionType` (string, required): The type of section break. Options: `CONTINUOUS` (stays on the same page), `NEXT_PAGE` (starts a new page).

</Accordion>

<Accordion title="google_docs/create_header">
**Description:** Create a header for the document. Returns a headerId that you can use with insert_text to add header content.

**Parameters:**

- `documentId` (string, required): The document ID.
- `type` (string, optional): Header type. Options: `DEFAULT`. Default: `DEFAULT`.

</Accordion>

<Accordion title="google_docs/create_footer">
**Description:** Create a footer for the document. Returns a footerId that you can use with insert_text to add footer content.

**Parameters:**

- `documentId` (string, required): The document ID.
- `type` (string, optional): Footer type. Options: `DEFAULT`. Default: `DEFAULT`.

</Accordion>

<Accordion title="google_docs/delete_header">
**Description:** Delete a header from the document. Use get_document to find the headerId.

**Parameters:**

- `documentId` (string, required): The document ID.
- `headerId` (string, required): The ID of the header to delete. Obtain from the get_document response.

</Accordion>

<Accordion title="google_docs/delete_footer">
**Description:** Delete a footer from the document. Use get_document to find the footerId.

**Parameters:**

- `documentId` (string, required): The document ID.
- `footerId` (string, required): The ID of the footer to delete. Obtain from the get_document response.

</Accordion>
</AccordionGroup>
## Usage Examples
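
A minimal sketch that chains several of the document actions above, following the `apps=[...]` pattern from this site's other examples; the document title and text are hypothetical, and the table data reuses the sample from the insert_table_with_content parameters.

```python
from crewai import Agent, Task, Crew

# Agent that drafts and formats Google Docs
doc_writer = Agent(
    role="Report Writer",
    goal="Produce well-formatted Google Docs reports",
    backstory="An AI assistant that writes and styles documents.",
    apps=[
        'google_docs/create_document_with_content',
        'google_docs/append_text',
        'google_docs/apply_heading_style',
        'google_docs/insert_table_with_content',
    ]
)

report_task = Task(
    description=(
        "Create a document titled 'Q1 Revenue Report' with an intro paragraph, "
        "apply HEADING_1 to the title line, and append a table with the content "
        "[['Year', 'Revenue'], ['2023', '$43B'], ['2024', '$45B']]."
    ),
    agent=doc_writer,
    expected_output="A formatted document with a heading and a revenue table."
)

crew = Crew(agents=[doc_writer], tasks=[report_task])
crew.kickoff()
```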
@@ -61,6 +61,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="google_slides/get_presentation_metadata">
**Description:** Get lightweight metadata about a presentation (title, slide count, slide IDs). Use this first before fetching the full content.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation to retrieve.

</Accordion>

<Accordion title="google_slides/get_presentation_text">
**Description:** Extract all text content from a presentation. Returns only slide IDs and the text from shapes and tables (no formatting).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.

</Accordion>

<Accordion title="google_slides/get_presentation">
**Description:** Retrieve a presentation by ID.
@@ -80,6 +96,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="google_slides/get_slide_text">
**Description:** Extract text content from a single slide. Returns only the text from shapes and tables (no formatting or styles).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.

</Accordion>

<Accordion title="google_slides/get_page">
**Description:** Retrieve a specific page by ID.
@@ -98,6 +123,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="google_slides/create_slide">
**Description:** Add an additional blank slide to a presentation. A new presentation already contains one blank slide; check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): Position at which to insert the slide (0-based). If omitted, it is appended at the end.

</Accordion>

<Accordion title="google_slides/create_slide_with_layout">
**Description:** Create a slide with a predefined layout that includes placeholder areas such as a title and body. Better suited than create_slide for structured content. After creating, use get_page to find the placeholder IDs, then insert text into them.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): The layout type. Options: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY suits a title plus description, TITLE suits title-only slides, and SECTION_HEADER suits section dividers.
- `insertionIndex` (integer, optional): Position at which to insert (0-based). If omitted, it is appended at the end.

</Accordion>

<Accordion title="google_slides/create_text_box">
**Description:** Create a text box with content on a slide. Use for titles, descriptions, and paragraphs; do not use for tables. Optionally specify position (x, y) and size (width, height) in EMU (914400 EMU = 1 inch).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the text box to.
- `text` (string, required): The text content of the text box.
- `x` (integer, optional): X position in EMU (914400 = 1 inch). Default: 914400 (1 inch from the left).
- `y` (integer, optional): Y position in EMU (914400 = 1 inch). Default: 914400 (1 inch from the top).
- `width` (integer, optional): Width in EMU. Default: 7315200 (about 8 inches).
- `height` (integer, optional): Height in EMU. Default: 914400 (about 1 inch).

</Accordion>

<Accordion title="google_slides/delete_slide">
**Description:** Remove a slide from a presentation. Use get_presentation first to find slide IDs.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to delete. Obtain from get_presentation.

</Accordion>

<Accordion title="google_slides/duplicate_slide">
**Description:** Create a copy of an existing slide. The copy is inserted immediately after the original.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to duplicate. Obtain from get_presentation.

</Accordion>

<Accordion title="google_slides/move_slides">
**Description:** Reorder slides by moving them to a new position. Slide IDs must be in current presentation order (no duplicates).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideIds` (array of strings, required): Array of slide IDs to move. Must be in current presentation order.
- `insertionIndex` (integer, required): The target position (0-based). 0 = front, slide count = end.

</Accordion>

<Accordion title="google_slides/insert_youtube_video">
**Description:** Insert a YouTube video into a slide. The video ID is the value after "v=" in the YouTube URL (e.g., for youtube.com/watch?v=abc123, use "abc123").

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Obtain from get_presentation.
- `videoId` (string, required): The YouTube video ID (the value after v= in the URL).

</Accordion>

<Accordion title="google_slides/insert_drive_video">
**Description:** Insert a video from Google Drive into a slide. The file ID can be found in the Drive file URL.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Obtain from get_presentation.
- `fileId` (string, required): The Google Drive file ID of the video.

</Accordion>

<Accordion title="google_slides/set_slide_background_image">
**Description:** Set a background image for a slide. The image URL must be publicly accessible.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to set the background for. Obtain from get_presentation.
- `imageUrl` (string, required): A publicly accessible URL of the image to use as the background.

</Accordion>

<Accordion title="google_slides/create_table">
**Description:** Create an empty table on a slide. To create a table with content, use create_table_with_content.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Obtain from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.

</Accordion>

<Accordion title="google_slides/create_table_with_content">
**Description:** Create a table with content in a single operation. Provide the content as a 2D array where each inner array represents a row. E.g., [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]].

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Obtain from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. E.g., [["Year", "Revenue"], ["2023", "$10M"]].

</Accordion>

<Accordion title="google_slides/import_data_from_sheet">
**Description:** Import data from a Google Sheet into a presentation.
@@ -148,6 +148,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="microsoft_excel/get_table_data">
**Description:** Get data from a specific table in an Excel worksheet.

**Parameters:**

- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): The name of the worksheet.
- `table_name` (string, required): The name of the table.

</Accordion>

<Accordion title="microsoft_excel/create_chart">
**Description:** Create a chart in an Excel worksheet.
@@ -180,6 +190,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="microsoft_excel/get_used_range_metadata">
**Description:** Get the used range metadata (dimensions only, no data) of an Excel worksheet.

**Parameters:**

- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): The name of the worksheet.

</Accordion>

<Accordion title="microsoft_excel/list_charts">
**Description:** Get all charts in an Excel worksheet.
@@ -150,6 +150,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `item_id` (string, required): The ID of the file.

</Accordion>

<Accordion title="microsoft_onedrive/list_files_by_path">
**Description:** List files and folders at a specific OneDrive path.

**Parameters:**

- `folder_path` (string, required): The folder path (e.g., 'Documents/Reports').
- `top` (integer, optional): Number of items to retrieve (max 1000). Default: 50.
- `orderby` (string, optional): Sort by field (e.g., "name asc", "lastModifiedDateTime desc"). Default: "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_recent_files">
**Description:** Get recently accessed files from OneDrive.

**Parameters:**

- `top` (integer, optional): Number of items to retrieve (max 200). Default: 25.

</Accordion>

<Accordion title="microsoft_onedrive/get_shared_with_me">
**Description:** Get files and folders shared with the user.

**Parameters:**

- `top` (integer, optional): Number of items to retrieve (max 200). Default: 50.
- `orderby` (string, optional): Sort by field. Default: "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_file_by_path">
**Description:** Get information about a specific file or folder by path.

**Parameters:**

- `file_path` (string, required): The file or folder path (e.g., 'Documents/report.docx').

</Accordion>

<Accordion title="microsoft_onedrive/download_file_by_path">
**Description:** Download a file from OneDrive by path.

**Parameters:**

- `file_path` (string, required): The file path (e.g., 'Documents/report.docx').

</Accordion>
</AccordionGroup>
## Usage Examples
@@ -183,6 +226,62 @@ crew = Crew(
crew.kickoff()
```

### File Upload and Management

```python
from crewai import Agent, Task, Crew

# Create an agent specialized in file operations
file_operator = Agent(
    role="File Operator",
    goal="Upload, download, and manage files accurately",
    backstory="An AI assistant skilled in file handling and content management.",
    apps=['microsoft_onedrive/upload_file', 'microsoft_onedrive/download_file', 'microsoft_onedrive/get_file_info']
)

# Task to upload and manage a file
file_management_task = Task(
    description="Upload a text file named 'report.txt' with the content 'This is a sample report for the project.', then get information about the uploaded file.",
    agent=file_operator,
    expected_output="File uploaded successfully and file information retrieved."
)

crew = Crew(
    agents=[file_operator],
    tasks=[file_management_task]
)

crew.kickoff()
```

### File Organization and Sharing

```python
from crewai import Agent, Task, Crew

# Create an agent for file organization and sharing
file_organizer = Agent(
    role="File Organizer",
    goal="Organize files and create sharing links for collaboration",
    backstory="An AI assistant that excels at file organization and managing sharing permissions.",
    apps=['microsoft_onedrive/search_files', 'microsoft_onedrive/move_item', 'microsoft_onedrive/share_item', 'microsoft_onedrive/create_folder']
)

# Task to organize and share files
organize_share_task = Task(
    description="Search for files containing 'presentation' in the name, create a folder called 'Presentations', move the found files into it, and create a read-only sharing link for the folder.",
    agent=file_organizer,
    expected_output="Files organized into the 'Presentations' folder and a sharing link created."
)

crew = Crew(
    agents=[file_organizer],
    tasks=[organize_share_task]
)

crew.kickoff()
```
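
### Path-Based Navigation

The path-based actions above avoid repeated ID lookups. A minimal sketch, with a hypothetical folder path:

```python
from crewai import Agent, Task, Crew

# Agent that navigates OneDrive by path instead of item IDs
path_navigator = Agent(
    role="Drive Navigator",
    goal="Locate and fetch files by their OneDrive path",
    backstory="An AI assistant that works with familiar folder paths.",
    apps=['microsoft_onedrive/list_files_by_path', 'microsoft_onedrive/download_file_by_path']
)

fetch_task = Task(
    description=(
        "List the contents of 'Documents/Reports' sorted by last modified date, "
        "then download 'Documents/Reports/latest.docx'."
    ),
    agent=path_navigator,
    expected_output="A folder listing and the downloaded file's contents."
)

crew = Crew(agents=[path_navigator], tasks=[fetch_task])
crew.kickoff()
```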
## Troubleshooting

### Common Issues
@@ -196,6 +295,30 @@ crew.kickoff()
- Ensure `file_name` and `content` are provided when uploading files.
- For binary files, the content must be Base64-encoded.
- Verify that you have write permissions to OneDrive.

**File/Folder ID Issues**

- Double-check that item IDs are correct when accessing specific files or folders.
- Item IDs are returned by other operations such as `list_files` or `search_files`.
- Verify that the item you are referencing exists and is accessible.

**Search and Filter Operations**

- Use appropriate search terms for the `search_files` operation.
- Use correct OData syntax for the `filter` parameter.

**File Operations (Copy/Move)**

- For `move_item`, ensure both `item_id` and `parent_id` are provided.
- For `copy_item`, only `item_id` is required; `parent_id` defaults to the root if not specified.
- Verify that the destination folder exists and is accessible.

**Sharing Link Creation**

- Verify that the item exists before creating a sharing link.
- Choose the appropriate `type` and `scope` for your sharing requirements.
- The `anonymous` scope allows access without sign-in; `organization` requires an organizational account.

### Getting Help
@@ -132,6 +132,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `companyName` (string, optional): The contact's company name.

</Accordion>

<Accordion title="microsoft_outlook/get_message">
**Description:** Get a specific email message by ID.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message. Obtain from the get_messages operation.
- `select` (string, optional): Comma-separated list of properties to return. E.g., "id,subject,body,from,receivedDateTime". Default: "id,subject,body,from,toRecipients,receivedDateTime".

</Accordion>

<Accordion title="microsoft_outlook/reply_to_email">
**Description:** Reply to an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to reply to. Obtain from the get_messages operation.
- `comment` (string, required): The reply message content. Can be plain text or HTML. The original message is quoted below this content.

</Accordion>

<Accordion title="microsoft_outlook/forward_email">
**Description:** Forward an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to forward. Obtain from the get_messages operation.
- `to_recipients` (array, required): Array of recipient email addresses to forward to. E.g., ["john@example.com", "jane@example.com"].
- `comment` (string, optional): Optional message to include above the forwarded content. Can be plain text or HTML.

</Accordion>

<Accordion title="microsoft_outlook/mark_message_read">
**Description:** Mark a message as read or unread.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message. Obtain from the get_messages operation.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.

</Accordion>

<Accordion title="microsoft_outlook/delete_message">
**Description:** Delete an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to delete. Obtain from the get_messages operation.

</Accordion>

<Accordion title="microsoft_outlook/update_event">
**Description:** Update an existing calendar event.

**Parameters:**

- `event_id` (string, required): The unique identifier of the event. Obtain from the get_calendar_events operation.
- `subject` (string, optional): New subject/title for the event.
- `start_time` (string, optional): New start time in ISO 8601 format (e.g., "2024-01-20T10:00:00"). Required: when using this field, start_timezone must also be provided.
- `start_timezone` (string, optional): Time zone of the start time. Required when updating start_time. E.g., "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): New end time in ISO 8601 format. Required: when using this field, end_timezone must also be provided.
- `end_timezone` (string, optional): Time zone of the end time. Required when updating end_time. E.g., "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): New location for the event.
- `body` (string, optional): New body/description for the event. Supports HTML formatting.

</Accordion>

<Accordion title="microsoft_outlook/delete_event">
**Description:** Delete a calendar event.

**Parameters:**

- `event_id` (string, required): The unique identifier of the event to delete. Obtain from the get_calendar_events operation.

</Accordion>
</AccordionGroup>

## Usage Examples
@@ -165,6 +233,62 @@ crew = Crew(
crew.kickoff()
```

### Email Management and Search

```python
from crewai import Agent, Task, Crew

# Create an agent specialized in email management
email_manager = Agent(
    role="Email Manager",
    goal="Search, retrieve, and organize email messages",
    backstory="An AI assistant skilled in email organization and management.",
    apps=['microsoft_outlook/get_messages']
)

# Task to search and retrieve emails
search_emails_task = Task(
    description="Retrieve the latest 20 unread emails and provide a summary of the most important ones.",
    agent=email_manager,
    expected_output="A summary of the key unread emails with their essential details."
)

crew = Crew(
    agents=[email_manager],
    tasks=[search_emails_task]
)

crew.kickoff()
```

### Calendar and Contact Management

```python
from crewai import Agent, Task, Crew

# Create an agent for calendar and contact management
scheduler = Agent(
    role="Calendar and Contact Manager",
    goal="Manage calendar events and maintain contact information",
    backstory="An AI assistant responsible for scheduling and contact organization.",
    apps=['microsoft_outlook/create_calendar_event', 'microsoft_outlook/get_calendar_events', 'microsoft_outlook/create_contact']
)

# Task to create a meeting and add a contact
schedule_task = Task(
    description="Create a calendar event titled 'Team Meeting' for tomorrow at 2 PM in 'Conference Room A', and add a new contact for 'John Smith' with the email 'john.smith@example.com' and the job title 'Project Manager'.",
    agent=scheduler,
    expected_output="Calendar event created and new contact added."
)

crew = Crew(
    agents=[scheduler],
    tasks=[schedule_task]
)

crew.kickoff()
```
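
### Inbox Triage

A minimal sketch of triaging a mailbox with the message actions documented above; the triage criteria and the 'finance@example.com' address are hypothetical.

```python
from crewai import Agent, Task, Crew

# Agent that triages incoming mail using the reply/forward actions
mail_triager = Agent(
    role="Mailbox Triager",
    goal="Answer routine emails and escalate the rest",
    backstory="An AI assistant that keeps an inbox at zero.",
    apps=[
        'microsoft_outlook/get_messages',
        'microsoft_outlook/reply_to_email',
        'microsoft_outlook/forward_email',
        'microsoft_outlook/mark_message_read',
    ]
)

triage_task = Task(
    description=(
        "Fetch unread messages, reply to any meeting-scheduling requests with a "
        "short acknowledgement, forward billing questions to 'finance@example.com', "
        "and mark all handled messages as read."
    ),
    agent=mail_triager,
    expected_output="A report of replies sent, messages forwarded, and messages marked read."
)

crew = Crew(agents=[mail_triager], tasks=[triage_task])
crew.kickoff()
```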
## Troubleshooting

### Common Issues
@@ -173,11 +297,29 @@ crew.kickoff()
- Ensure your Microsoft account has the permissions needed for email, calendar, and contacts access.
- Required scopes: `Mail.Read`, `Mail.Send`, `Calendars.Read`, `Calendars.ReadWrite`, `Contacts.Read`, `Contacts.ReadWrite`.
- Verify that the OAuth connection includes all required scopes.

**Email Sending Issues**

- Ensure `to_recipients`, `subject`, and `body` are provided for `send_email`.
- Verify that email addresses are correctly formatted.
- Confirm the account has the `Mail.Send` permission.

**Calendar Event Creation**

- Ensure `subject`, `start_datetime`, and `end_datetime` are provided.
- Use proper ISO 8601 format for date/time fields (e.g., '2024-01-20T10:00:00').
- Check time zone settings if events appear at the wrong time.

**Contact Management**

- Ensure the required `displayName` is provided for `create_contact`.
- When providing `emailAddresses`, use the correct object format with `address` and `name` properties.

**Search and Filter Issues**

- Use correct OData syntax for the `filter` parameter.
- For date filters, use ISO 8601 format (e.g., "receivedDateTime ge '2024-01-01T00:00:00Z'").

### Getting Help
@@ -77,6 +77,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
</Accordion>

<Accordion title="microsoft_sharepoint/get_drives">
**Description:** List all document libraries (drives) in a SharePoint site. Use this to discover available libraries before using file operations.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier from get_sites.
- `top` (integer, optional): Maximum number of drives to return per page (1-999). Default: 100
- `skip_token` (string, optional): Pagination token from a previous response to get the next page of results.
- `select` (string, optional): Comma-separated list of properties to return (e.g., 'id,name,webUrl,driveType').

</Accordion>

<Accordion title="microsoft_sharepoint/get_site_lists">
**Description:** Get all lists in a SharePoint site.
@@ -145,20 +156,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_drive_items">
|
||||
**설명:** SharePoint 문서 라이브러리에서 파일과 폴더를 가져옵니다.
|
||||
<Accordion title="microsoft_sharepoint/list_files">
|
||||
**설명:** SharePoint 문서 라이브러리에서 파일과 폴더를 가져옵니다. 기본적으로 루트 폴더를 나열하지만 folder_id를 제공하여 하위 폴더로 이동할 수 있습니다.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): SharePoint 사이트의 ID.
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `folder_id` (string, 선택사항): 내용을 나열할 폴더의 ID. 루트 폴더의 경우 'root'를 사용하거나 이전 list_files 호출에서 가져온 폴더 ID를 제공하세요. 기본값: 'root'
|
||||
- `top` (integer, 선택사항): 페이지당 반환할 최대 항목 수 (1-1000). 기본값: 50
|
||||
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
|
||||
- `orderby` (string, 선택사항): 결과 정렬 순서 (예: 'name asc', 'size desc', 'lastModifiedDateTime desc'). 기본값: 'name asc'
|
||||
- `filter` (string, 선택사항): 결과를 좁히기 위한 OData 필터 (예: 'file ne null'은 파일만, 'folder ne null'은 폴더만).
|
||||
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_drive_item">
|
||||
**설명:** SharePoint 문서 라이브러리에서 파일 또는 폴더를 삭제합니다.
|
||||
<Accordion title="microsoft_sharepoint/delete_file">
|
||||
**설명:** SharePoint 문서 라이브러리에서 파일 또는 폴더를 삭제합니다. 폴더의 경우 모든 내용이 재귀적으로 삭제됩니다. 항목은 사이트 휴지통으로 이동됩니다.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): SharePoint 사이트의 ID.
|
||||
- `item_id` (string, 필수): 삭제할 파일 또는 폴더의 ID.
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `item_id` (string, 필수): 삭제할 파일 또는 폴더의 고유 식별자. list_files에서 가져오세요.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/list_files_by_path">
|
||||
**설명:** 경로로 SharePoint 문서 라이브러리 폴더의 파일과 폴더를 나열합니다. 깊은 탐색을 위해 여러 list_files 호출보다 더 효율적입니다.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `folder_path` (string, 필수): 앞뒤 슬래시 없이 폴더의 전체 경로 (예: 'Documents', 'Reports/2024/Q1').
|
||||
- `top` (integer, 선택사항): 페이지당 반환할 최대 항목 수 (1-1000). 기본값: 50
|
||||
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
|
||||
- `orderby` (string, 선택사항): 결과 정렬 순서 (예: 'name asc', 'size desc'). 기본값: 'name asc'
|
||||
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/download_file">
|
||||
**설명:** SharePoint 문서 라이브러리에서 원시 파일 내용을 다운로드합니다. 일반 텍스트 파일(.txt, .csv, .json)에만 사용하세요. Excel 파일의 경우 Excel 전용 작업을 사용하세요. Word 파일의 경우 get_word_document_content를 사용하세요.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `item_id` (string, 필수): 다운로드할 파일의 고유 식별자. list_files 또는 list_files_by_path에서 가져오세요.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_file_info">
|
||||
**설명:** SharePoint 문서 라이브러리의 특정 파일 또는 폴더에 대한 자세한 메타데이터를 가져옵니다. 이름, 크기, 생성/수정 날짜 및 작성자 정보가 포함됩니다.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `item_id` (string, 필수): 파일 또는 폴더의 고유 식별자. list_files 또는 list_files_by_path에서 가져오세요.
|
||||
- `select` (string, 선택사항): 반환할 속성의 쉼표로 구분된 목록 (예: 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/create_folder">
|
||||
**설명:** SharePoint 문서 라이브러리에 새 폴더를 만듭니다. 기본적으로 루트에 폴더를 만들며 하위 폴더를 만들려면 parent_id를 사용하세요.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `folder_name` (string, 필수): 새 폴더의 이름. 사용할 수 없는 문자: \ / : * ? " < > |
|
||||
- `parent_id` (string, 선택사항): 상위 폴더의 ID. 문서 라이브러리 루트의 경우 'root'를 사용하거나 list_files에서 가져온 폴더 ID를 제공하세요. 기본값: 'root'
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/search_files">
|
||||
**설명:** 키워드로 SharePoint 문서 라이브러리에서 파일과 폴더를 검색합니다. 파일 이름, 폴더 이름 및 Office 문서의 파일 내용을 검색합니다. 와일드카드나 특수 문자를 사용하지 마세요.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `query` (string, 필수): 검색 키워드 (예: 'report', 'budget 2024'). *.txt와 같은 와일드카드는 지원되지 않습니다.
|
||||
- `top` (integer, 선택사항): 페이지당 반환할 최대 결과 수 (1-1000). 기본값: 50
|
||||
- `skip_token` (string, 선택사항): 다음 결과 페이지를 가져오기 위한 이전 응답의 페이지네이션 토큰.
|
||||
- `select` (string, 선택사항): 반환할 필드의 쉼표로 구분된 목록 (예: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/copy_file">
|
||||
**설명:** SharePoint 내에서 파일 또는 폴더를 새 위치로 복사합니다. 원본 항목은 변경되지 않습니다. 대용량 파일의 경우 복사 작업은 비동기적입니다.
|
||||
|
||||
**매개변수:**
|
||||
- `site_id` (string, 필수): get_sites에서 가져온 전체 SharePoint 사이트 식별자.
|
||||
- `drive_id` (string, 필수): 문서 라이브러리의 ID. 먼저 get_drives를 호출하여 유효한 드라이브 ID를 가져오세요.
|
||||
- `item_id` (string, 필수): 복사할 파일 또는 폴더의 고유 식별자. list_files 또는 search_files에서 가져오세요.
|
||||
- `destination_folder_id` (string, 필수): 대상 폴더의 ID. 루트 폴더의 경우 'root'를 사용하거나 list_files에서 가져온 폴더 ID를 사용하세요.
|
||||
- `new_name` (string, optional): New name for the copy. If not provided, the original name is used.

</Accordion>

<Accordion title="microsoft_sharepoint/move_file">

**Description:** Moves a file or folder to a new location within SharePoint. The item is removed from its original location. For folders, all contents are moved as well.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the file or folder to move. Obtain from list_files or search_files.
- `destination_folder_id` (string, required): The ID of the destination folder. Use 'root' for the root folder, or a folder ID obtained from list_files.
- `new_name` (string, optional): New name for the moved item. If not provided, the original name is kept.

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_worksheets">

**Description:** Lists all worksheets (tabs) in an Excel workbook stored in a SharePoint document library. Use the returned worksheet names with the other Excel operations.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'id,name,position,visibility').
- `filter` (string, optional): OData filter expression (e.g. "visibility eq 'Visible'" to exclude hidden sheets).
- `top` (integer, optional): Maximum number of worksheets to return. Minimum: 1, maximum: 999
- `orderby` (string, optional): Sort order (e.g. 'position asc' to return tabs in their display order).

</Accordion>

<Accordion title="microsoft_sharepoint/create_excel_worksheet">

**Description:** Creates a new worksheet (tab) in an Excel workbook stored in a SharePoint document library. The new sheet is appended to the end of the tab list.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `name` (string, required): Name for the new worksheet. Maximum 31 characters. Cannot contain: \ / * ? : [ ]. Must be unique within the workbook.

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_range_data">

**Description:** Gets cell values from a specific range of an Excel worksheet stored in SharePoint. To read all data when you don't know its size, use get_excel_used_range instead.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive.
- `range` (string, required): Cell range in A1 notation (e.g. 'A1:C10', 'A:C', '1:5', 'A1').
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'address,values,formulas,numberFormat,text').

</Accordion>

<Accordion title="microsoft_sharepoint/update_excel_range_data">

**Description:** Writes values to a specific range of an Excel worksheet stored in SharePoint. Overwrites existing cell contents. The dimensions of the values array must exactly match the dimensions of the range.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to update. Obtain from get_excel_worksheets. Case-sensitive.
- `range` (string, required): Cell range in A1 notation to write values into (e.g. 'A1:C3' for a 3x3 block).
- `values` (array, required): 2D array of values (rows containing cells). Example for A1:B2: [["Header1", "Header2"], ["Value1", "Value2"]]. Use null to clear a cell.
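
Because the values grid must match the range dimensions exactly, it can help to derive the range string from the data itself. An illustrative helper (not part of the tool set), kept to single-letter columns for brevity:

```python
def range_for_values(values: list[list], start_cell: str = "A1") -> str:
    """Build an A1 range like 'A1:B2' sized to a 2D values grid."""
    col_letter = start_cell[0].upper()
    start_row = int(start_cell[1:])
    n_rows, n_cols = len(values), len(values[0])
    end_col = chr(ord(col_letter) + n_cols - 1)  # assumes the grid stays within columns A-Z
    return f"{start_cell.upper()}:{end_col}{start_row + n_rows - 1}"

# A 2x2 grid starting at A1 -> 'A1:B2'
print(range_for_values([["Header1", "Header2"], ["Value1", "Value2"]]))
```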

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">

**Description:** Returns only the metadata (address and dimensions) of a worksheet's used range, without the actual cell values. Ideal for determining a spreadsheet's size before reading its data in chunks from large files.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive.

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_used_range">

**Description:** Gets all cells containing data from a worksheet stored in SharePoint. Do not use this on files larger than 2MB. For large files, call get_excel_used_range_metadata first, then read the data in small chunks with get_excel_range_data.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab) to read. Obtain from get_excel_worksheets. Case-sensitive.
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'address,values,formulas,numberFormat,text,rowCount,columnCount').

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_cell">

**Description:** Gets the value of a single cell by row and column index from an Excel file in SharePoint. Indices are 0-based (row 0 = Excel row 1, column 0 = column A).

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet (tab). Obtain from get_excel_worksheets. Case-sensitive.
- `row` (integer, required): 0-based row index (row 0 = Excel row 1). Valid range: 0-1048575
- `column` (integer, required): 0-based column index (column 0 = A, column 1 = B). Valid range: 0-16383
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'address,values,formulas,numberFormat,text').
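
The 0-based indexing can trip up agents that think in A1 notation. An illustrative conversion helper (not part of the tool set):

```python
import re

def a1_to_indices(cell: str) -> tuple[int, int]:
    """Convert a single-cell A1 reference (e.g. 'B3') to the 0-based (row, column) this action expects."""
    match = re.fullmatch(r"([A-Z]+)([0-9]+)", cell.upper())
    if not match:
        raise ValueError(f"Not a single-cell A1 reference: {cell!r}")
    letters, digits = match.groups()
    column = 0
    for ch in letters:  # column letters are base-26 digits: A=1 ... Z=26
        column = column * 26 + (ord(ch) - ord("A") + 1)
    return int(digits) - 1, column - 1

# 'B3' -> (2, 1): Excel row 3 is index 2, column B is index 1
print(a1_to_indices("B3"))
```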

</Accordion>

<Accordion title="microsoft_sharepoint/add_excel_table">

**Description:** Converts a cell range into a formatted Excel table with filtering, sorting, and structured-data features. Once the table is created, you can add data to it with add_excel_table_row.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet containing the data range. Obtain from get_excel_worksheets.
- `range` (string, required): Cell range to convert into a table, including headers and data (e.g. in 'A1:D10', A1:D1 are the column headers).
- `has_headers` (boolean, optional): Set to true if the first row contains column headers. Default: true

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_tables">

**Description:** Lists all tables in a specific Excel worksheet stored in SharePoint. Returns table properties including id, name, showHeaders, and showTotals.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet to get tables from. Obtain from get_excel_worksheets.

</Accordion>

<Accordion title="microsoft_sharepoint/add_excel_table_row">

**Description:** Appends a new row to the end of an Excel table in a SharePoint file. The values array must contain as many elements as the table has columns.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets.
- `table_name` (string, required): Name of the table to add the row to (e.g. 'Table1'). Obtain from get_excel_tables. Case-sensitive.
- `values` (array, required): Array of cell values for the new row, one per column in table order (e.g. ["John Doe", "john@example.com", 25]).

</Accordion>

<Accordion title="microsoft_sharepoint/get_excel_table_data">

**Description:** Gets all rows of an Excel table in a SharePoint file as a data range. Easier than get_excel_range_data when working with structured tables, since you don't need to know the exact range.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets.
- `table_name` (string, required): Name of the table to get data from (e.g. 'Table1'). Obtain from get_excel_tables. Case-sensitive.
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'address,values,formulas,numberFormat,text').

</Accordion>

<Accordion title="microsoft_sharepoint/create_excel_chart">

**Description:** Creates a chart visualization from a data range in an Excel worksheet stored in SharePoint. The chart is embedded in the worksheet.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet to create the chart in. Obtain from get_excel_worksheets.
- `chart_type` (string, required): Chart type (e.g. 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut').
- `source_data` (string, required): Data range for the chart in A1 notation, including headers (e.g. 'A1:B10').
- `series_by` (string, optional): How the data series are organized: 'Auto', 'Columns', or 'Rows'. Default: 'Auto'

</Accordion>

<Accordion title="microsoft_sharepoint/list_excel_charts">

**Description:** Lists all charts embedded in an Excel worksheet stored in SharePoint. Returns chart properties including id, name, chartType, height, width, and position.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet to list charts from. Obtain from get_excel_worksheets.

</Accordion>

<Accordion title="microsoft_sharepoint/delete_excel_worksheet">

**Description:** Permanently removes a worksheet (tab) and all of its contents from an Excel workbook stored in SharePoint. This cannot be undone. The workbook must keep at least one worksheet.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet to delete. Case-sensitive. All data, tables, and charts on this sheet are permanently removed.

</Accordion>

<Accordion title="microsoft_sharepoint/delete_excel_table">

**Description:** Removes a table from an Excel worksheet in SharePoint. The table structure (filtering, formatting, table features) is deleted, but the underlying cell data is preserved.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.
- `worksheet_name` (string, required): Name of the worksheet containing the table. Obtain from get_excel_worksheets.
- `table_name` (string, required): Name of the table to delete (e.g. 'Table1'). Obtain from get_excel_tables. The cell data remains after the table is deleted.

</Accordion>

<Accordion title="microsoft_sharepoint/list_excel_names">

**Description:** Gets all named ranges defined in an Excel workbook stored in SharePoint. Named ranges are user-defined labels for cell ranges (e.g. 'SalesData' pointing to A1:D100).

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Excel file in SharePoint. Obtain from list_files or search_files.

</Accordion>

<Accordion title="microsoft_sharepoint/get_word_document_content">

**Description:** Downloads and extracts the text content of a Word document (.docx) stored in a SharePoint document library. This is the recommended way to read Word documents from SharePoint.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `drive_id` (string, required): The ID of the document library. Call get_drives first to obtain valid drive IDs.
- `item_id` (string, required): The unique identifier of the Word document (.docx) in SharePoint. Obtain from list_files or search_files.

</Accordion>

</AccordionGroup>
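
The metadata-then-chunks pattern recommended above for large workbooks can be handed straight to an agent. A minimal sketch in this page's usual Agent/Task/Crew style; the site, drive, file, and worksheet identifiers are placeholders:

```python
from crewai import Agent, Task, Crew

# Agent that reads large SharePoint spreadsheets safely
excel_reader = Agent(
    role="SharePoint Excel Reader",
    goal="Read large Excel files from SharePoint in small, bounded chunks",
    backstory="An AI assistant skilled at working with spreadsheet data.",
    apps=[
        'microsoft_sharepoint/get_excel_used_range_metadata',
        'microsoft_sharepoint/get_excel_range_data',
    ]
)

# Check the sheet's size first, then read only a bounded range
read_task = Task(
    description=(
        "For site 'your_site_id', drive 'your_drive_id', and file 'your_item_id', "
        "get the used-range metadata for worksheet 'Sheet1', then read at most the "
        "first 100 rows with get_excel_range_data and summarize the columns."
    ),
    agent=excel_reader,
    expected_output="A summary of the worksheet's columns with sample rows."
)

crew = Crew(agents=[excel_reader], tasks=[read_task])
crew.kickoff()
```
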
@@ -107,6 +107,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

- `join_web_url` (string, required): The web join URL of the meeting to search for.

</Accordion>

<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">

**Description:** Searches for an online meeting by its external meeting ID.

**Parameters:**

- `join_meeting_id` (string, required): The meeting ID (numeric code) attendees use to join. This is the joinMeetingId shown in meeting invitations, not the Graph API meeting id.

</Accordion>

<Accordion title="microsoft_teams/get_meeting">

**Description:** Gets the details of a specific online meeting.

**Parameters:**

- `meeting_id` (string, required): The Graph API meeting ID (a long alphanumeric string). Obtain it from the create_meeting or search_online_meetings actions. This differs from the numeric joinMeetingId.

</Accordion>

<Accordion title="microsoft_teams/get_team_members">

**Description:** Gets the members of a specific team.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `top` (integer, optional): Number of members to retrieve per page (1-999). Default: 100.
- `skip_token` (string, optional): Pagination token from a previous response. If the response contains @odata.nextLink, extract the $skiptoken parameter value and pass it here to fetch the next page of results.

</Accordion>

<Accordion title="microsoft_teams/create_channel">

**Description:** Creates a new channel in a team.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `display_name` (string, required): The channel name displayed in Teams. Must be unique within the team. Maximum 50 characters.
- `description` (string, optional): Optional description of the channel's purpose, shown in the channel details. Maximum 1024 characters.
- `membership_type` (string, optional): Channel visibility. Options: standard, private. "standard" = visible to all team members, "private" = visible only to explicitly added members. Default: standard.

</Accordion>

<Accordion title="microsoft_teams/get_message_replies">

**Description:** Gets the replies to a specific message in a channel.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain it from the get_channels action.
- `message_id` (string, required): The unique identifier of the parent message. Obtain it from the get_messages action.
- `top` (integer, optional): Number of replies to retrieve per page (1-50). Default: 50.
- `skip_token` (string, optional): Pagination token from a previous response. If the response contains @odata.nextLink, extract the $skiptoken parameter value and pass it here to fetch the next page of results.

</Accordion>

<Accordion title="microsoft_teams/reply_to_message">

**Description:** Replies to a message in a Teams channel.

**Parameters:**

- `team_id` (string, required): The unique identifier of the team. Obtain it from the get_teams action.
- `channel_id` (string, required): The unique identifier of the channel. Obtain it from the get_channels action.
- `message_id` (string, required): The unique identifier of the message to reply to. Obtain it from the get_messages action.
- `message` (string, required): The reply content. For HTML, include formatting tags. For text, plain text only.
- `content_type` (string, optional): Content format. Options: html, text. "text" for plain text, "html" for rich formatted text. Default: text.

</Accordion>

<Accordion title="microsoft_teams/update_meeting">

**Description:** Updates an existing online meeting.

**Parameters:**

- `meeting_id` (string, required): The unique identifier of the meeting. Obtain it from the create_meeting or search_online_meetings actions.
- `subject` (string, optional): New meeting title.
- `startDateTime` (string, optional): New start time in ISO 8601 format with a timezone. Example: "2024-01-20T10:00:00-08:00".
- `endDateTime` (string, optional): New end time in ISO 8601 format with a timezone.

</Accordion>

<Accordion title="microsoft_teams/delete_meeting">

**Description:** Deletes an online meeting.

**Parameters:**

- `meeting_id` (string, required): The unique identifier of the meeting to delete. Obtain it from the create_meeting or search_online_meetings actions.

</Accordion>

</AccordionGroup>

## Usage Examples
@@ -140,6 +220,62 @@ crew = Crew(
crew.kickoff()
```

### Messaging and Communication

```python
from crewai import Agent, Task, Crew

# Create an agent specialized in messaging
messenger = Agent(
    role="Teams Messenger",
    goal="Send and retrieve messages in Teams channels",
    backstory="An AI assistant skilled at team communication and message management.",
    apps=['microsoft_teams/send_message', 'microsoft_teams/get_messages']
)

# Task to send a message and retrieve recent messages
messaging_task = Task(
    description="Send the message 'Hello team! This is an automated update from our AI assistant.' to the General channel of team 'your_team_id', then retrieve the 10 most recent messages from that channel.",
    agent=messenger,
    expected_output="Message sent successfully and recent messages retrieved."
)

crew = Crew(
    agents=[messenger],
    tasks=[messaging_task]
)

crew.kickoff()
```

### Meeting Management

```python
from crewai import Agent, Task, Crew

# Create an agent for meeting management
meeting_scheduler = Agent(
    role="Meeting Scheduler",
    goal="Create and manage Teams meetings",
    backstory="An AI assistant responsible for scheduling and organizing meetings.",
    apps=['microsoft_teams/create_meeting', 'microsoft_teams/search_online_meetings_by_join_url']
)

# Task to create a meeting
schedule_meeting_task = Task(
    description="Create a Teams meeting titled 'Weekly Team Sync' for tomorrow at 10 AM, lasting one hour (use proper ISO 8601 format with a timezone).",
    agent=meeting_scheduler,
    expected_output="Teams meeting created successfully with meeting details."
)

crew = Crew(
    agents=[meeting_scheduler],
    tasks=[schedule_meeting_task]
)

crew.kickoff()
```

## Troubleshooting

### Common Issues
@@ -148,6 +284,35 @@ crew.kickoff()

- Ensure your Microsoft account has the required permissions for Teams access.
- Required scopes: `Team.ReadBasic.All`, `Channel.ReadBasic.All`, `ChannelMessage.Send`, `ChannelMessage.Read.All`, `OnlineMeetings.ReadWrite`, `OnlineMeetings.Read`.
- Verify that your OAuth connection includes all required scopes.

**Team and Channel Access**

- Verify that you are a member of the team you are trying to access.
- Double-check that the team and channel IDs are correct.
- Team and channel IDs can be obtained with the `get_teams` and `get_channels` actions.

**Message Sending Issues**

- Ensure `team_id`, `channel_id`, and `message` are provided to `send_message`.
- Verify that you have permission to send messages in the specified channel.
- Choose the `content_type` (text or html) that matches your message format.

**Meeting Creation**

- Ensure `subject`, `startDateTime`, and `endDateTime` are provided.
- Use proper ISO 8601 format with a timezone for date/time fields (e.g. '2024-01-20T10:00:00-08:00'); see the sketch below.
- Verify that the meeting time is in the future.
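
A quick way to produce such timestamps is the standard library; a minimal sketch (the timezone is an example):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Timezone-aware start/end times one hour apart, tomorrow
start = datetime.now(ZoneInfo("America/Los_Angeles")).replace(microsecond=0) + timedelta(days=1)
end = start + timedelta(hours=1)

# isoformat() includes the UTC offset, e.g. '2024-01-20T10:00:00-08:00'
print(start.isoformat(), end.isoformat())
```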

**Message Retrieval Limits**

- The `get_messages` action can retrieve at most 50 messages per request.
- Messages are returned in reverse chronological order (newest first).

**Meeting Search**

- For `search_online_meetings_by_join_url`, make sure the join URL is exact and correctly formatted.
- The URL must be the complete Teams meeting join URL.

### Getting Help

@@ -97,6 +97,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

- `file_id` (string, required): The ID of the document to delete.

</Accordion>

<Accordion title="microsoft_word/copy_document">

**Description:** Copies a document to a new location in OneDrive.

**Parameters:**

- `file_id` (string, required): The ID of the document to copy.
- `name` (string, optional): New name for the copied document.
- `parent_id` (string, optional): The ID of the destination folder (default: root).

</Accordion>

<Accordion title="microsoft_word/move_document">

**Description:** Moves a document to a new location in OneDrive.

**Parameters:**

- `file_id` (string, required): The ID of the document to move.
- `parent_id` (string, required): The ID of the destination folder.
- `name` (string, optional): New name for the moved document.

</Accordion>

</AccordionGroup>

## Usage Examples
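
A minimal sketch of the copy and move actions above, in this page's usual Agent/Task/Crew style (all file and folder IDs are placeholders):

```python
from crewai import Agent, Task, Crew

# Agent that keeps OneDrive Word documents organized
doc_manager = Agent(
    role="Document Manager",
    goal="Copy and move Word documents between OneDrive folders",
    backstory="An AI assistant skilled at document organization.",
    apps=['microsoft_word/copy_document', 'microsoft_word/move_document']
)

organize_task = Task(
    description=(
        "Copy the document 'your_file_id' into folder 'your_backup_folder_id' as "
        "'Report - Backup', then move the original into 'your_archive_folder_id'."
    ),
    agent=doc_manager,
    expected_output="Confirmation that the copy was created and the original archived."
)

crew = Crew(agents=[doc_manager], tasks=[organize_task])
crew.kickoff()
```
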
@@ -200,6 +200,25 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `clientData` (array, optional): Client-specific data. Each item is an object with a `key` (string) and a `value` (string).

</Accordion>

<Accordion title="google_contacts/update_contact_group">

**Description:** Updates a contact group's information.

**Parameters:**

- `resourceName` (string, required): The resource name of the contact group (e.g. 'contactGroups/myContactGroup').
- `name` (string, required): The name of the contact group.
- `clientData` (array, optional): Client-specific data. Each item is an object with a `key` (string) and a `value` (string).

</Accordion>

<Accordion title="google_contacts/delete_contact_group">

**Description:** Deletes a contact group.

**Parameters:**

- `resourceName` (string, required): The resource name of the contact group to delete (e.g. 'contactGroups/myContactGroup').
- `deleteContacts` (boolean, optional): Whether the group's contacts should also be deleted. Default: false

</Accordion>

</AccordionGroup>

## Usage Examples
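
A minimal sketch of the contact-group actions above (the resource names are placeholders):

```python
from crewai import Agent, Task, Crew

# Agent that maintains Google Contacts groups
contacts_curator = Agent(
    role="Contacts Curator",
    goal="Keep contact groups accurate and up to date",
    backstory="An AI assistant skilled at contact management.",
    apps=['google_contacts/update_contact_group', 'google_contacts/delete_contact_group']
)

cleanup_task = Task(
    description=(
        "Rename the contact group 'contactGroups/myContactGroup' to 'Vendors 2024', "
        "then delete the group 'contactGroups/obsoleteGroup' without deleting its contacts."
    ),
    agent=contacts_curator,
    expected_output="Confirmation of the renamed group and the deleted group."
)

crew = Crew(agents=[contacts_curator], tasks=[cleanup_task])
crew.kickoff()
```
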
@@ -131,6 +131,297 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `endIndex` (integer, required): The end index of the range.

</Accordion>

<Accordion title="google_docs/create_document_with_content">

**Description:** Creates a new Google Doc with content in a single action.

**Parameters:**

- `title` (string, required): The title for the new document. Appears at the top of the document and in Google Drive.
- `content` (string, optional): The text content to insert into the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/append_text">

**Description:** Appends text to the end of a Google Doc. Inserts at the end of the document automatically, with no need to specify an index.

**Parameters:**

- `documentId` (string, required): The document ID obtained from the create_document response or the URL.
- `text` (string, required): Text to append to the end of the document. Use `\n` for new paragraphs.

</Accordion>

<Accordion title="google_docs/set_text_bold">

**Description:** Applies or removes bold formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `bold` (boolean, required): Set `true` to apply bold, `false` to remove it.

</Accordion>

<Accordion title="google_docs/set_text_italic">

**Description:** Applies or removes italic formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `italic` (boolean, required): Set `true` to apply italics, `false` to remove them.

</Accordion>

<Accordion title="google_docs/set_text_underline">

**Description:** Adds or removes underline formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `underline` (boolean, required): Set `true` to underline, `false` to remove the underline.

</Accordion>

<Accordion title="google_docs/set_text_strikethrough">

**Description:** Adds or removes strikethrough formatting on text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `strikethrough` (boolean, required): Set `true` to add strikethrough, `false` to remove it.

</Accordion>

<Accordion title="google_docs/set_font_size">

**Description:** Changes the font size of text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `fontSize` (number, required): Font size in points. Common sizes: 10, 11, 12, 14, 16, 18, 24, 36.

</Accordion>

<Accordion title="google_docs/set_text_color">

**Description:** Changes the text color using RGB values (0-1 scale) in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to format.
- `endIndex` (integer, required): End position of the text to format (exclusive).
- `red` (number, required): Red component (0-1). Example: `1` for full red.
- `green` (number, required): Green component (0-1). Example: `0.5` for half green.
- `blue` (number, required): Blue component (0-1). Example: `0` for no blue.
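
Most color pickers report hex codes rather than the 0-1 scale this action expects. An illustrative conversion helper (not part of the tool set):

```python
def hex_to_rgb01(hex_color: str) -> dict:
    """Convert '#RRGGBB' to the 0-1 scale red/green/blue values."""
    h = hex_color.lstrip("#")
    return {c: int(h[i:i + 2], 16) / 255 for c, i in (("red", 0), ("green", 2), ("blue", 4))}

# '#1A73E8' -> {'red': 0.102..., 'green': 0.451..., 'blue': 0.910...}
print(hex_to_rgb01("#1A73E8"))
```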

</Accordion>

<Accordion title="google_docs/create_hyperlink">

**Description:** Turns existing text into a clickable hyperlink in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the text to turn into a link.
- `endIndex` (integer, required): End position of the text to turn into a link (exclusive).
- `url` (string, required): The URL the link should point to. Example: `"https://example.com"`.

</Accordion>

<Accordion title="google_docs/apply_heading_style">

**Description:** Applies a heading or paragraph style to a range of text in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s) to style.
- `endIndex` (integer, required): End position of the paragraph(s) to style.
- `style` (string, required): The style to apply. Options: `NORMAL_TEXT`, `TITLE`, `SUBTITLE`, `HEADING_1`, `HEADING_2`, `HEADING_3`, `HEADING_4`, `HEADING_5`, `HEADING_6`.

</Accordion>

<Accordion title="google_docs/set_paragraph_alignment">

**Description:** Sets the text alignment for paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s) to align.
- `endIndex` (integer, required): End position of the paragraph(s) to align.
- `alignment` (string, required): Text alignment. Options: `START` (left), `CENTER`, `END` (right), `JUSTIFIED`.

</Accordion>

<Accordion title="google_docs/set_line_spacing">

**Description:** Sets the line spacing for paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraph(s).
- `endIndex` (integer, required): End position of the paragraph(s).
- `lineSpacing` (number, required): Line spacing as a percentage. `100` = single, `115` = 1.15x, `150` = 1.5x, `200` = double.

</Accordion>

<Accordion title="google_docs/create_paragraph_bullets">

**Description:** Converts paragraphs into a bulleted or numbered list in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the paragraphs to convert into a list.
- `endIndex` (integer, required): End position of the paragraphs to convert into a list.
- `bulletPreset` (string, required): Bullet/numbering style. Options: `BULLET_DISC_CIRCLE_SQUARE`, `BULLET_DIAMONDX_ARROW3D_SQUARE`, `BULLET_CHECKBOX`, `BULLET_ARROW_DIAMOND_DISC`, `BULLET_STAR_CIRCLE_SQUARE`, `NUMBERED_DECIMAL_ALPHA_ROMAN`, `NUMBERED_DECIMAL_ALPHA_ROMAN_PARENS`, `NUMBERED_DECIMAL_NESTED`, `NUMBERED_UPPERALPHA_ALPHA_ROMAN`, `NUMBERED_UPPERROMAN_UPPERALPHA_DECIMAL`.

</Accordion>

<Accordion title="google_docs/delete_paragraph_bullets">

**Description:** Removes bullets or numbering from paragraphs in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `startIndex` (integer, required): Start position of the list paragraphs.
- `endIndex` (integer, required): End position of the list paragraphs.

</Accordion>

<Accordion title="google_docs/insert_table_with_content">

**Description:** Inserts a table with content into a Google Doc in a single action. Provide the content as a 2D array.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `index` (integer, optional): Position to insert the table. If not provided, the table is inserted at the end of the document.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. Example: `[["Year", "Revenue"], ["2023", "$43B"], ["2024", "$45B"]]`.

</Accordion>

<Accordion title="google_docs/insert_table_row">

**Description:** Inserts a new row above or below a reference cell in an existing table.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table. Obtain it from get_document.
- `rowIndex` (integer, required): Row index (0-based) of the reference cell.
- `columnIndex` (integer, optional): Column index (0-based) of the reference cell. Default: `0`.
- `insertBelow` (boolean, optional): If `true`, inserts below the reference row. If `false`, inserts above. Default: `true`.

</Accordion>

<Accordion title="google_docs/insert_table_column">

**Description:** Inserts a new column to the left or right of a reference cell in an existing table.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of the reference cell. Default: `0`.
- `columnIndex` (integer, required): Column index (0-based) of the reference cell.
- `insertRight` (boolean, optional): If `true`, inserts to the right. If `false`, inserts to the left. Default: `true`.

</Accordion>

<Accordion title="google_docs/delete_table_row">

**Description:** Deletes a row from an existing table in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) of the row to delete.
- `columnIndex` (integer, optional): Column index (0-based) of any cell in the row. Default: `0`.

</Accordion>

<Accordion title="google_docs/delete_table_column">

**Description:** Deletes a column from an existing table in a Google Doc.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, optional): Row index (0-based) of any cell in the column. Default: `0`.
- `columnIndex` (integer, required): Column index (0-based) of the column to delete.

</Accordion>

<Accordion title="google_docs/merge_table_cells">

**Description:** Merges a range of table cells into a single cell. The content of all cells is preserved.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Starting row index (0-based) for the merge.
- `columnIndex` (integer, required): Starting column index (0-based) for the merge.
- `rowSpan` (integer, required): Number of rows to merge.
- `columnSpan` (integer, required): Number of columns to merge.

</Accordion>

<Accordion title="google_docs/unmerge_table_cells">

**Description:** Unmerges previously merged table cells, returning them to individual cells.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `tableStartIndex` (integer, required): The start index of the table.
- `rowIndex` (integer, required): Row index (0-based) of the merged cell.
- `columnIndex` (integer, required): Column index (0-based) of the merged cell.
- `rowSpan` (integer, required): Number of rows the merged cell spans.
- `columnSpan` (integer, required): Number of columns the merged cell spans.

</Accordion>

<Accordion title="google_docs/insert_inline_image">

**Description:** Inserts an image from a public URL into a Google Doc. The image must be publicly accessible, under 50MB, and in PNG/JPEG/GIF format.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `uri` (string, required): Public URL of the image. Must be accessible without authentication.
- `index` (integer, optional): Position to insert the image. If not provided, the image is inserted at the end of the document. Default: `1`.

</Accordion>

<Accordion title="google_docs/insert_section_break">

**Description:** Inserts a section break to create document sections with different formatting.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `index` (integer, required): Position to insert the section break.
- `sectionType` (string, required): The type of section break. Options: `CONTINUOUS` (stays on the same page), `NEXT_PAGE` (starts a new page).

</Accordion>

<Accordion title="google_docs/create_header">

**Description:** Creates a header for the document. Returns a headerId that can be used with insert_text to add content to the header.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `type` (string, optional): Header type. Options: `DEFAULT`. Default: `DEFAULT`.

</Accordion>

<Accordion title="google_docs/create_footer">

**Description:** Creates a footer for the document. Returns a footerId that can be used with insert_text to add content to the footer.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `type` (string, optional): Footer type. Options: `DEFAULT`. Default: `DEFAULT`.

</Accordion>

<Accordion title="google_docs/delete_header">

**Description:** Deletes a header from the document. Use get_document to find the headerId.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `headerId` (string, required): The ID of the header to delete. Obtain it from the get_document response.

</Accordion>

<Accordion title="google_docs/delete_footer">

**Description:** Deletes a footer from the document. Use get_document to find the footerId.

**Parameters:**

- `documentId` (string, required): The ID of the document.
- `footerId` (string, required): The ID of the footer to delete. Obtain it from the get_document response.

</Accordion>

</AccordionGroup>

## Usage Examples
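
Since every formatting action on this page addresses text by `startIndex`/`endIndex`, a common flow is to create the document first and then style known ranges. A minimal illustrative sketch (title and ranges are placeholders):

```python
from crewai import Agent, Task, Crew

# Agent that drafts and formats a Google Doc
doc_formatter = Agent(
    role="Docs Formatter",
    goal="Create Google Docs and apply index-based formatting",
    backstory="An AI assistant skilled at structured document editing.",
    apps=[
        'google_docs/create_document_with_content',
        'google_docs/apply_heading_style',
        'google_docs/set_text_bold',
    ]
)

format_task = Task(
    description=(
        "Create a document titled 'Quarterly Report' with a short intro paragraph, "
        "apply HEADING_1 to the first line, and bold the first sentence using the "
        "startIndex/endIndex positions of the inserted text."
    ),
    agent=doc_formatter,
    expected_output="A formatted document with a heading and a bolded opening sentence."
)

crew = Crew(agents=[doc_formatter], tasks=[format_task])
crew.kickoff()
```
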
@@ -61,6 +61,22 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/get_presentation_metadata">

**Description:** Gets lightweight metadata for a presentation (title, slide count, slide IDs). Use this first, before retrieving the full content.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation to retrieve.

</Accordion>

<Accordion title="google_slides/get_presentation_text">

**Description:** Extracts all text content from a presentation. Returns slide IDs and text from shapes and tables only (no formatting).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.

</Accordion>

<Accordion title="google_slides/get_presentation">

**Description:** Retrieves a presentation by ID.
@@ -80,6 +96,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/get_slide_text">

**Description:** Extracts the text content of a single slide. Returns only text from shapes and tables (no formatting or styling).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `pageObjectId` (string, required): The ID of the slide/page to get text from.

</Accordion>

<Accordion title="google_slides/get_page">

**Description:** Retrieves a specific page by its ID.
@@ -98,6 +123,120 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="google_slides/create_slide">

**Description:** Adds an additional blank slide to a presentation. New presentations already include one blank slide, so check get_presentation_metadata first. For slides with title/body areas, use create_slide_with_layout.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `insertionIndex` (integer, optional): Where to insert the slide (0-based). If omitted, the slide is appended at the end.

</Accordion>

<Accordion title="google_slides/create_slide_with_layout">

**Description:** Creates a slide with a predefined layout containing placeholder areas for a title, body, etc. Better than create_slide for structured content. After creating it, use get_page to find the placeholder IDs, then insert text into them.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `layout` (string, required): Layout type. One of: `BLANK`, `TITLE`, `TITLE_AND_BODY`, `TITLE_AND_TWO_COLUMNS`, `TITLE_ONLY`, `SECTION_HEADER`, `ONE_COLUMN_TEXT`, `MAIN_POINT`, `BIG_NUMBER`. TITLE_AND_BODY is best for a title plus description. TITLE for title-only slides. SECTION_HEADER for section dividers.
- `insertionIndex` (integer, optional): Where to insert (0-based). If omitted, the slide is appended at the end.

</Accordion>

<Accordion title="google_slides/create_text_box">

**Description:** Creates a text box with content on a slide. Use it for titles, descriptions, and paragraphs, not for tables. Optionally specify the position (x, y) and size (width, height) in EMU units (914400 EMU = 1 inch).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the text box to.
- `text` (string, required): The text content of the text box.
- `x` (integer, optional): X position in EMU (914400 = 1 inch). Default: 914400 (1 inch from the left).
- `y` (integer, optional): Y position in EMU (914400 = 1 inch). Default: 914400 (1 inch from the top).
- `width` (integer, optional): Width in EMU. Default: 7315200 (~8 inches).
- `height` (integer, optional): Height in EMU. Default: 914400 (~1 inch).
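
Working in EMU directly is error-prone; an illustrative converter (not part of the tool set):

```python
EMU_PER_INCH = 914400

def inches_to_emu(inches: float) -> int:
    """Convert inches to the EMU units the position/size parameters expect."""
    return round(inches * EMU_PER_INCH)

# The defaults above: 1 inch offsets and an ~8 inch wide, ~1 inch tall box
print(inches_to_emu(1), inches_to_emu(8), inches_to_emu(1))
```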

</Accordion>

<Accordion title="google_slides/delete_slide">

**Description:** Removes a slide from a presentation. Use get_presentation first to find the slide ID.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to delete. Obtain it from get_presentation.

</Accordion>

<Accordion title="google_slides/duplicate_slide">

**Description:** Creates a copy of an existing slide. The duplicate is inserted immediately after the original.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The object ID of the slide to duplicate. Obtain it from get_presentation.

</Accordion>

<Accordion title="google_slides/move_slides">

**Description:** Reorders slides by moving them to a new position. The slide IDs must be given in the presentation's current order (no duplicates).

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideIds` (array of strings, required): Array of the slide IDs to move, strictly in the presentation's current order.
- `insertionIndex` (integer, required): Target position (0-based). 0 = beginning, number of slides = end.

</Accordion>

<Accordion title="google_slides/insert_youtube_video">

**Description:** Embeds a YouTube video on a slide. The video ID is the value after "v=" in YouTube URLs (e.g. for youtube.com/watch?v=abc123, use "abc123").

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Obtain it from get_presentation.
- `videoId` (string, required): The YouTube video ID (the value after v= in the URL).

</Accordion>

<Accordion title="google_slides/insert_drive_video">

**Description:** Embeds a Google Drive video on a slide. The file ID can be found in the file's Drive URL.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the video to. Obtain it from get_presentation.
- `fileId` (string, required): The Google Drive file ID of the video.

</Accordion>

<Accordion title="google_slides/set_slide_background_image">

**Description:** Sets a background image for a slide. The image URL must be publicly accessible.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to set the background on. Obtain it from get_presentation.
- `imageUrl` (string, required): Publicly accessible URL of the image to use as the background.

</Accordion>

<Accordion title="google_slides/create_table">

**Description:** Creates an empty table on a slide. To create a table with content, use create_table_with_content.

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Obtain it from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.

</Accordion>

<Accordion title="google_slides/create_table_with_content">

**Description:** Creates a table with content in a single action. Provide the content as a 2D array where each inner array is a row. Example: [["Header1", "Header2"], ["Row1Col1", "Row1Col2"]].

**Parameters:**

- `presentationId` (string, required): The ID of the presentation.
- `slideId` (string, required): The ID of the slide to add the table to. Obtain it from get_presentation.
- `rows` (integer, required): Number of rows in the table.
- `columns` (integer, required): Number of columns in the table.
- `content` (array, required): Table content as a 2D array. Each inner array is a row. Example: [["Year", "Revenue"], ["2023", "$10M"]].

</Accordion>

<Accordion title="google_slides/import_data_from_sheet">

**Description:** Imports data from a Google Sheet into a presentation.

@@ -148,6 +148,16 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_excel/get_table_data">

**Description:** Gets the data of a specific table in an Excel worksheet.

**Parameters:**

- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): Name of the worksheet.
- `table_name` (string, required): Name of the table.

</Accordion>

<Accordion title="microsoft_excel/create_chart">

**Description:** Creates a chart in an Excel worksheet.

@@ -180,6 +190,15 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_excel/get_used_range_metadata">

**Description:** Gets the used-range metadata (dimensions only, no data) of an Excel worksheet.

**Parameters:**

- `file_id` (string, required): The ID of the Excel file.
- `worksheet_name` (string, required): Name of the worksheet.

</Accordion>

<Accordion title="microsoft_excel/list_charts">

**Description:** Gets all charts in an Excel worksheet.

@@ -150,6 +150,49 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `item_id` (string, required): The ID of the file.

</Accordion>

<Accordion title="microsoft_onedrive/list_files_by_path">

**Description:** Lists the files and folders at a specific OneDrive path.

**Parameters:**

- `folder_path` (string, required): The folder path (e.g. 'Documents/Reports').
- `top` (integer, optional): Number of items to retrieve (max 1000). Default: 50.
- `orderby` (string, optional): Field to sort by (e.g. "name asc", "lastModifiedDateTime desc"). Default: "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_recent_files">

**Description:** Gets recently accessed files in OneDrive.

**Parameters:**

- `top` (integer, optional): Number of items to retrieve (max 200). Default: 25.

</Accordion>

<Accordion title="microsoft_onedrive/get_shared_with_me">

**Description:** Gets the files and folders shared with the user.

**Parameters:**

- `top` (integer, optional): Number of items to retrieve (max 200). Default: 50.
- `orderby` (string, optional): Field to sort by. Default: "name asc".

</Accordion>

<Accordion title="microsoft_onedrive/get_file_by_path">

**Description:** Gets information about a specific file or folder by its path.

**Parameters:**

- `file_path` (string, required): The path of the file or folder (e.g. 'Documents/report.docx').

</Accordion>

<Accordion title="microsoft_onedrive/download_file_by_path">

**Description:** Downloads a file from OneDrive by its path.

**Parameters:**

- `file_path` (string, required): The path of the file (e.g. 'Documents/report.docx').

</Accordion>

</AccordionGroup>

## Usage Examples
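
A minimal sketch of the path-based actions above (the folder path is a placeholder):

```python
from crewai import Agent, Task, Crew

# Agent that audits OneDrive folders by path
drive_auditor = Agent(
    role="OneDrive Auditor",
    goal="Inspect folders and recent activity in OneDrive",
    backstory="An AI assistant skilled at file-system housekeeping.",
    apps=['microsoft_onedrive/list_files_by_path', 'microsoft_onedrive/get_recent_files']
)

audit_task = Task(
    description=(
        "List the contents of 'Documents/Reports' sorted by 'lastModifiedDateTime desc', "
        "then fetch the 10 most recently accessed files and flag any overlap between the two lists."
    ),
    agent=drive_auditor,
    expected_output="A short report of folder contents, recent files, and overlaps."
)

crew = Crew(agents=[drive_auditor], tasks=[audit_task])
crew.kickoff()
```
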
@@ -132,6 +132,74 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token
- `companyName` (string, optional): The contact's company name.

</Accordion>

<Accordion title="microsoft_outlook/get_message">

**Description:** Gets a specific email message by ID.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message. Obtain it from the get_messages action.
- `select` (string, optional): Comma-separated list of properties to return. Example: "id,subject,body,from,receivedDateTime". Default: "id,subject,body,from,toRecipients,receivedDateTime".

</Accordion>

<Accordion title="microsoft_outlook/reply_to_email">

**Description:** Replies to an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to reply to. Obtain it from the get_messages action.
- `comment` (string, required): The content of the reply message. Can be plain text or HTML. The original message is quoted below this content.

</Accordion>

<Accordion title="microsoft_outlook/forward_email">

**Description:** Forwards an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to forward. Obtain it from the get_messages action.
- `to_recipients` (array, required): Array of recipient email addresses. Example: ["john@example.com", "jane@example.com"].
- `comment` (string, optional): Optional message to include above the forwarded content. Can be plain text or HTML.

</Accordion>

<Accordion title="microsoft_outlook/mark_message_read">

**Description:** Marks a message as read or unread.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message. Obtain it from the get_messages action.
- `is_read` (boolean, required): Set to true to mark as read, false to mark as unread.

</Accordion>

<Accordion title="microsoft_outlook/delete_message">

**Description:** Deletes an email message.

**Parameters:**

- `message_id` (string, required): The unique identifier of the message to delete. Obtain it from the get_messages action.

</Accordion>

<Accordion title="microsoft_outlook/update_event">

**Description:** Updates an existing calendar event.

**Parameters:**

- `event_id` (string, required): The unique identifier of the event. Obtain it from the get_calendar_events action.
- `subject` (string, optional): New subject/title for the event.
- `start_time` (string, optional): New start time in ISO 8601 format (e.g. "2024-01-20T10:00:00"). REQUIRED: you must also provide start_timezone when using this field.
- `start_timezone` (string, optional): Timezone of the start time. REQUIRED when updating start_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `end_time` (string, optional): New end time in ISO 8601 format. REQUIRED: you must also provide end_timezone when using this field.
- `end_timezone` (string, optional): Timezone of the end time. REQUIRED when updating end_time. Examples: "Pacific Standard Time", "Eastern Standard Time", "UTC".
- `location` (string, optional): New location for the event.
- `body` (string, optional): New body/description for the event. Supports HTML formatting.

</Accordion>

<Accordion title="microsoft_outlook/delete_event">

**Description:** Deletes a calendar event.

**Parameters:**

- `event_id` (string, required): The unique identifier of the event to delete. Obtain it from the get_calendar_events action.

</Accordion>

</AccordionGroup>

## Usage Examples
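
A minimal sketch of the message actions above (the message ID is a placeholder):

```python
from crewai import Agent, Task, Crew

# Agent that triages an Outlook inbox
inbox_triager = Agent(
    role="Inbox Triager",
    goal="Reply to, forward, and tidy up email messages",
    backstory="An AI assistant skilled at email triage.",
    apps=[
        'microsoft_outlook/get_message',
        'microsoft_outlook/reply_to_email',
        'microsoft_outlook/mark_message_read',
    ]
)

triage_task = Task(
    description=(
        "Fetch the message with ID 'your_message_id', reply with a brief acknowledgement, "
        "and mark the original message as read."
    ),
    agent=inbox_triager,
    expected_output="Confirmation that the reply was sent and the message marked read."
)

crew = Crew(agents=[inbox_triager], tasks=[triage_task])
crew.kickoff()
```
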
@@ -77,6 +77,17 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=your_enterprise_token

</Accordion>

<Accordion title="microsoft_sharepoint/get_drives">

**Description:** Lists all document libraries (drives) in a SharePoint site. Use this to discover the available libraries before using the file operations.

**Parameters:**

- `site_id` (string, required): The full SharePoint site identifier obtained from get_sites.
- `top` (integer, optional): Maximum number of drives to return per page (1-999). Default: 100
- `skip_token` (string, optional): Pagination token from a previous response, used to fetch the next page of results.
- `select` (string, optional): Comma-separated list of properties to return (e.g. 'id,name,webUrl,driveType').

</Accordion>

<Accordion title="microsoft_sharepoint/get_site_lists">

**Description:** Gets all lists in a SharePoint site.

@@ -145,20 +156,317 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_drive_items">
|
||||
**Descrição:** Obter arquivos e pastas de uma biblioteca de documentos do SharePoint.
|
||||
<Accordion title="microsoft_sharepoint/list_files">
|
||||
**Descrição:** Recuperar arquivos e pastas de uma biblioteca de documentos do SharePoint. Por padrão, lista a pasta raiz, mas você pode navegar em subpastas fornecendo um folder_id.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O ID do site do SharePoint.
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `folder_id` (string, opcional): O ID da pasta para listar o conteúdo. Use 'root' para a pasta raiz, ou forneça um ID de pasta de uma chamada anterior de list_files. Padrão: 'root'
|
||||
- `top` (integer, opcional): Número máximo de itens a retornar por página (1-1000). Padrão: 50
|
||||
- `skip_token` (string, opcional): Token de paginação de uma resposta anterior para buscar a próxima página de resultados.
|
||||
- `orderby` (string, opcional): Ordem de classificação dos resultados (ex: 'name asc', 'size desc', 'lastModifiedDateTime desc'). Padrão: 'name asc'
|
||||
- `filter` (string, opcional): Filtro OData para restringir resultados (ex: 'file ne null' apenas para arquivos, 'folder ne null' apenas para pastas).
|
||||
- `select` (string, opcional): Lista de campos separados por vírgula para retornar (ex: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_drive_item">
|
||||
**Descrição:** Excluir um arquivo ou pasta da biblioteca de documentos do SharePoint.
|
||||
<Accordion title="microsoft_sharepoint/delete_file">
|
||||
**Descrição:** Excluir um arquivo ou pasta de uma biblioteca de documentos do SharePoint. Para pastas, todo o conteúdo é excluído recursivamente. Os itens são movidos para a lixeira do site.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O ID do site do SharePoint.
|
||||
- `item_id` (string, obrigatório): O ID do arquivo ou pasta a excluir.
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo ou pasta a excluir. Obtenha de list_files.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/list_files_by_path">
|
||||
**Descrição:** Listar arquivos e pastas em uma pasta de biblioteca de documentos do SharePoint pelo caminho. Mais eficiente do que múltiplas chamadas list_files para navegação profunda.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `folder_path` (string, obrigatório): O caminho completo para a pasta sem barras iniciais/finais (ex: 'Documents', 'Reports/2024/Q1').
|
||||
- `top` (integer, opcional): Número máximo de itens a retornar por página (1-1000). Padrão: 50
|
||||
- `skip_token` (string, opcional): Token de paginação de uma resposta anterior para buscar a próxima página de resultados.
|
||||
- `orderby` (string, opcional): Ordem de classificação dos resultados (ex: 'name asc', 'size desc'). Padrão: 'name asc'
|
||||
- `select` (string, opcional): Lista de campos separados por vírgula para retornar (ex: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/download_file">
|
||||
**Descrição:** Baixar conteúdo bruto de um arquivo de uma biblioteca de documentos do SharePoint. Use apenas para arquivos de texto simples (.txt, .csv, .json). Para arquivos Excel, use as ações específicas de Excel. Para arquivos Word, use get_word_document_content.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo a baixar. Obtenha de list_files ou list_files_by_path.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_file_info">
|
||||
**Descrição:** Recuperar metadados detalhados de um arquivo ou pasta específico em uma biblioteca de documentos do SharePoint, incluindo nome, tamanho, datas de criação/modificação e informações do autor.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo ou pasta. Obtenha de list_files ou list_files_by_path.
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'id,name,size,createdDateTime,lastModifiedDateTime,webUrl,createdBy,lastModifiedBy').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/create_folder">
|
||||
**Descrição:** Criar uma nova pasta em uma biblioteca de documentos do SharePoint. Por padrão, cria a pasta na raiz; use parent_id para criar subpastas.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `folder_name` (string, obrigatório): Nome para a nova pasta. Não pode conter: \ / : * ? " < > |
|
||||
- `parent_id` (string, opcional): O ID da pasta pai. Use 'root' para a raiz da biblioteca de documentos, ou forneça um ID de pasta de list_files. Padrão: 'root'
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/search_files">
|
||||
**Descrição:** Pesquisar arquivos e pastas em uma biblioteca de documentos do SharePoint por palavras-chave. Pesquisa nomes de arquivos, nomes de pastas e conteúdo de arquivos para documentos Office. Não use curingas ou caracteres especiais.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `query` (string, obrigatório): Palavras-chave de pesquisa (ex: 'relatório', 'orçamento 2024'). Curingas como *.txt não são suportados.
|
||||
- `top` (integer, opcional): Número máximo de resultados a retornar por página (1-1000). Padrão: 50
|
||||
- `skip_token` (string, opcional): Token de paginação de uma resposta anterior para buscar a próxima página de resultados.
|
||||
- `select` (string, opcional): Lista de campos separados por vírgula para retornar (ex: 'id,name,size,folder,file,webUrl,lastModifiedDateTime').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/copy_file">
|
||||
**Descrição:** Copiar um arquivo ou pasta para um novo local dentro do SharePoint. O item original permanece inalterado. A operação de cópia é assíncrona para arquivos grandes.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo ou pasta a copiar. Obtenha de list_files ou search_files.
|
||||
- `destination_folder_id` (string, obrigatório): O ID da pasta de destino. Use 'root' para a pasta raiz, ou um ID de pasta de list_files.
|
||||
- `new_name` (string, opcional): Novo nome para a cópia. Se não fornecido, o nome original é usado.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/move_file">
|
||||
**Descrição:** Mover um arquivo ou pasta para um novo local dentro do SharePoint. O item é removido de sua localização original. Para pastas, todo o conteúdo é movido também.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo ou pasta a mover. Obtenha de list_files ou search_files.
|
||||
- `destination_folder_id` (string, obrigatório): O ID da pasta de destino. Use 'root' para a pasta raiz, ou um ID de pasta de list_files.
|
||||
- `new_name` (string, opcional): Novo nome para o item movido. Se não fornecido, o nome original é mantido.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_excel_worksheets">
|
||||
**Descrição:** Listar todas as planilhas (abas) em uma pasta de trabalho Excel armazenada em uma biblioteca de documentos do SharePoint. Use o nome da planilha retornado com outras ações de Excel.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'id,name,position,visibility').
|
||||
- `filter` (string, opcional): Expressão de filtro OData (ex: "visibility eq 'Visible'" para excluir planilhas ocultas).
|
||||
- `top` (integer, opcional): Número máximo de planilhas a retornar. Mínimo: 1, Máximo: 999
|
||||
- `orderby` (string, opcional): Ordem de classificação (ex: 'position asc' para retornar planilhas na ordem das abas).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/create_excel_worksheet">
|
||||
**Descrição:** Criar uma nova planilha (aba) em uma pasta de trabalho Excel armazenada em uma biblioteca de documentos do SharePoint. A nova planilha é adicionada no final da lista de abas.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `name` (string, obrigatório): Nome para a nova planilha. Máximo de 31 caracteres. Não pode conter: \ / * ? : [ ]. Deve ser único na pasta de trabalho.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_excel_range_data">
|
||||
**Descrição:** Recuperar valores de células de um intervalo específico em uma planilha Excel armazenada no SharePoint. Para ler todos os dados sem saber as dimensões, use get_excel_used_range em vez disso.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) para leitura. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
|
||||
- `range` (string, obrigatório): Intervalo de células em notação A1 (ex: 'A1:C10', 'A:C', '1:5', 'A1').
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/update_excel_range_data">
|
||||
**Descrição:** Escrever valores em um intervalo específico em uma planilha Excel armazenada no SharePoint. Sobrescreve o conteúdo existente das células. As dimensões do array de valores devem corresponder exatamente às dimensões do intervalo.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) a atualizar. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
|
||||
- `range` (string, obrigatório): Intervalo de células em notação A1 onde os valores serão escritos (ex: 'A1:C3' para um bloco 3x3).
|
||||
- `values` (array, obrigatório): Array 2D de valores (linhas contendo células). Exemplo para A1:B2: [["Cabeçalho1", "Cabeçalho2"], ["Valor1", "Valor2"]]. Use null para limpar uma célula.
|
||||
|
||||
</Accordion>
|
||||
|
||||
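To make the range/values dimension rule concrete, here is a minimal sketch of building the payload for this action in Python. The IDs are placeholders and the surrounding tool-invocation code is omitted; only the shape of `values` is the point:

```python
# Writing a 2x3 block into A1:C2 - the outer list holds rows,
# each inner list holds one cell value per column.
payload = {
    "site_id": "contoso.sharepoint.com,<site-guid>,<web-guid>",  # placeholder
    "drive_id": "b!...",     # placeholder, from get_drives
    "item_id": "01ABC...",   # placeholder, from list_files
    "worksheet_name": "Sheet1",
    "range": "A1:C2",        # 2 rows x 3 columns
    "values": [
        ["Header1", "Header2", "Header3"],
        [1, 2, None],        # None serializes to null and clears the cell
    ],
}
```
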
<Accordion title="microsoft_sharepoint/get_excel_used_range_metadata">
|
||||
**Descrição:** Retornar apenas os metadados (endereço e dimensões) do intervalo utilizado em uma planilha, sem os valores reais das células. Ideal para arquivos grandes para entender o tamanho da planilha antes de ler dados em blocos.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) para leitura. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_excel_used_range">
|
||||
**Descrição:** Recuperar todas as células contendo dados em uma planilha armazenada no SharePoint. Não use para arquivos maiores que 2MB. Para arquivos grandes, use get_excel_used_range_metadata primeiro, depois get_excel_range_data para ler em blocos menores.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha (aba) para leitura. Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text,rowCount,columnCount').
|
||||
|
||||
</Accordion>
|
||||
|
||||
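The metadata-then-chunks workflow for large files can be sketched as follows. This is illustrative Python: the `call_action` helper and the exact metadata field names (`rowCount`, `columnCount`) are assumptions, not part of the documented API:

```python
def read_sheet_in_chunks(call_action, ids, worksheet, rows_per_chunk=500):
    """Read a large worksheet without hitting the 2MB single-call limit."""
    # 1. Fetch only the used-range dimensions (cheap, no cell values).
    meta = call_action("microsoft_sharepoint/get_excel_used_range_metadata",
                       worksheet_name=worksheet, **ids)
    row_count = meta["rowCount"]      # assumed field name
    col_count = meta["columnCount"]   # assumed field name

    # 2. Read the data in fixed-height blocks spanning columns A..<last>.
    last_col = _col_letter(col_count)
    for start in range(1, row_count + 1, rows_per_chunk):
        end = min(start + rows_per_chunk - 1, row_count)
        yield call_action("microsoft_sharepoint/get_excel_range_data",
                          worksheet_name=worksheet,
                          range=f"A{start}:{last_col}{end}", **ids)


def _col_letter(n):
    """1 -> 'A', 27 -> 'AA' (Excel column numbering)."""
    letters = ""
    while n:
        n, rem = divmod(n - 1, 26)
        letters = chr(65 + rem) + letters
    return letters
```
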
<Accordion title="microsoft_sharepoint/get_excel_cell">
|
||||
**Descrição:** Recuperar o valor de uma única célula por índice de linha e coluna de um arquivo Excel no SharePoint. Os índices são baseados em 0 (linha 0 = linha 1 do Excel, coluna 0 = coluna A).
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha (aba). Obtenha de get_excel_worksheets. Sensível a maiúsculas e minúsculas.
|
||||
- `row` (integer, obrigatório): Índice de linha baseado em 0 (linha 0 = linha 1 do Excel). Intervalo válido: 0-1048575
|
||||
- `column` (integer, obrigatório): Índice de coluna baseado em 0 (coluna 0 = A, coluna 1 = B). Intervalo válido: 0-16383
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/add_excel_table">
|
||||
**Descrição:** Converter um intervalo de células em uma tabela Excel formatada com recursos de filtragem, classificação e dados estruturados. Tabelas habilitam add_excel_table_row para adicionar dados.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha contendo o intervalo de dados. Obtenha de get_excel_worksheets.
|
||||
- `range` (string, obrigatório): Intervalo de células para converter em tabela, incluindo cabeçalhos e dados (ex: 'A1:D10' onde A1:D1 contém cabeçalhos de coluna).
|
||||
- `has_headers` (boolean, opcional): Defina como true se a primeira linha contém cabeçalhos de coluna. Padrão: true
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_excel_tables">
|
||||
**Descrição:** Listar todas as tabelas em uma planilha Excel específica armazenada no SharePoint. Retorna propriedades da tabela incluindo id, name, showHeaders e showTotals.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha para obter tabelas. Obtenha de get_excel_worksheets.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/add_excel_table_row">
|
||||
**Descrição:** Adicionar uma nova linha ao final de uma tabela Excel em um arquivo do SharePoint. O array de valores deve ter o mesmo número de elementos que o número de colunas da tabela.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
|
||||
- `table_name` (string, obrigatório): Nome da tabela para adicionar a linha (ex: 'Table1'). Obtenha de get_excel_tables. Sensível a maiúsculas e minúsculas.
|
||||
- `values` (array, obrigatório): Array de valores de células para a nova linha, um por coluna na ordem da tabela (ex: ["João Silva", "joao@exemplo.com", 25]).
|
||||
|
||||
</Accordion>
|
||||
|
||||
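As a concrete illustration of the column-count rule, a hedged sketch of creating a table and then appending one row. The `call_action` helper, the placeholder `ids`, and the default table name 'Table1' are all assumptions for the example:

```python
# Turn the A1:C1 headers plus three data rows into a table.
call_action("microsoft_sharepoint/add_excel_table",
            worksheet_name="Sheet1", range="A1:C4", has_headers=True, **ids)

# The row array must have exactly as many entries as the table has
# columns - here three, matching columns A:C.
call_action("microsoft_sharepoint/add_excel_table_row",
            worksheet_name="Sheet1", table_name="Table1",
            values=["João Silva", "joao@example.com", 25], **ids)
```
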
<Accordion title="microsoft_sharepoint/get_excel_table_data">
|
||||
**Descrição:** Obter todas as linhas de uma tabela Excel em um arquivo do SharePoint como um intervalo de dados. Mais fácil do que get_excel_range_data ao trabalhar com tabelas estruturadas, pois não é necessário saber o intervalo exato.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
|
||||
- `table_name` (string, obrigatório): Nome da tabela para obter dados (ex: 'Table1'). Obtenha de get_excel_tables. Sensível a maiúsculas e minúsculas.
|
||||
- `select` (string, opcional): Lista de propriedades separadas por vírgula para retornar (ex: 'address,values,formulas,numberFormat,text').
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/create_excel_chart">
|
||||
**Descrição:** Criar uma visualização de gráfico em uma planilha Excel armazenada no SharePoint a partir de um intervalo de dados. O gráfico é incorporado na planilha.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha onde o gráfico será criado. Obtenha de get_excel_worksheets.
|
||||
- `chart_type` (string, obrigatório): Tipo de gráfico (ex: 'ColumnClustered', 'ColumnStacked', 'Line', 'LineMarkers', 'Pie', 'Bar', 'BarClustered', 'Area', 'Scatter', 'Doughnut').
|
||||
- `source_data` (string, obrigatório): Intervalo de dados para o gráfico em notação A1, incluindo cabeçalhos (ex: 'A1:B10').
|
||||
- `series_by` (string, opcional): Como as séries de dados são organizadas: 'Auto', 'Columns' ou 'Rows'. Padrão: 'Auto'
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/list_excel_charts">
|
||||
**Descrição:** Listar todos os gráficos incorporados em uma planilha Excel armazenada no SharePoint. Retorna propriedades do gráfico incluindo id, name, chartType, height, width e position.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha para listar gráficos. Obtenha de get_excel_worksheets.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_excel_worksheet">
|
||||
**Descrição:** Remover permanentemente uma planilha (aba) e todo seu conteúdo de uma pasta de trabalho Excel armazenada no SharePoint. Não pode ser desfeito. Uma pasta de trabalho deve ter pelo menos uma planilha.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha a excluir. Sensível a maiúsculas e minúsculas. Todos os dados, tabelas e gráficos nesta planilha serão permanentemente removidos.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/delete_excel_table">
|
||||
**Descrição:** Remover uma tabela de uma planilha Excel no SharePoint. Isto exclui a estrutura da tabela (filtragem, formatação, recursos de tabela) mas preserva os dados subjacentes das células.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
- `worksheet_name` (string, obrigatório): Nome da planilha contendo a tabela. Obtenha de get_excel_worksheets.
|
||||
- `table_name` (string, obrigatório): Nome da tabela a excluir (ex: 'Table1'). Obtenha de get_excel_tables. Os dados nas células permanecerão após a exclusão da tabela.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/list_excel_names">
|
||||
**Descrição:** Recuperar todos os intervalos nomeados definidos em uma pasta de trabalho Excel armazenada no SharePoint. Intervalos nomeados são rótulos definidos pelo usuário para intervalos de células (ex: 'DadosVendas' para A1:D100).
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do arquivo Excel no SharePoint. Obtenha de list_files ou search_files.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_sharepoint/get_word_document_content">
|
||||
**Descrição:** Baixar e extrair conteúdo de texto de um documento Word (.docx) armazenado em uma biblioteca de documentos do SharePoint. Esta é a maneira recomendada de ler documentos Word do SharePoint.
|
||||
|
||||
**Parâmetros:**
|
||||
- `site_id` (string, obrigatório): O identificador completo do site SharePoint obtido de get_sites.
|
||||
- `drive_id` (string, obrigatório): O ID da biblioteca de documentos. Chame get_drives primeiro para obter IDs de drive válidos.
|
||||
- `item_id` (string, obrigatório): O identificador único do documento Word (.docx) no SharePoint. Obtenha de list_files ou search_files.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
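A minimal sketch of exposing a subset of these actions to an agent. It assumes the `CrewaiEnterpriseTools` helper from `crewai_tools` and its `actions_list` filter, with the token read from the `CREWAI_PLATFORM_INTEGRATION_TOKEN` environment variable; adapt to your setup:

```python
from crewai import Agent
from crewai_tools import CrewaiEnterpriseTools

# Scope the toolset to read-only SharePoint actions.
enterprise_tools = CrewaiEnterpriseTools(
    actions_list=[
        "microsoft_sharepoint/list_files_by_path",
        "microsoft_sharepoint/search_files",
        "microsoft_sharepoint/get_excel_used_range_metadata",
        "microsoft_sharepoint/get_excel_range_data",
    ]
)

sharepoint_analyst = Agent(
    role="SharePoint Analyst",
    goal="Locate spreadsheets and summarize their contents",
    backstory="Knows the team's document libraries inside out.",
    tools=enterprise_tools,
)
```
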
@@ -107,6 +107,86 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `join_web_url` (string, required): The meeting's web join URL to search for.

</Accordion>

<Accordion title="microsoft_teams/search_online_meetings_by_meeting_id">
|
||||
**Descrição:** Pesquisar reuniões online por ID externo da reunião.
|
||||
|
||||
**Parâmetros:**
|
||||
- `join_meeting_id` (string, obrigatório): O ID da reunião (código numérico) que os participantes usam para entrar. É o joinMeetingId exibido nos convites da reunião, não o meeting id da API Graph.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/get_meeting">
|
||||
**Descrição:** Obter detalhes de uma reunião online específica.
|
||||
|
||||
**Parâmetros:**
|
||||
- `meeting_id` (string, obrigatório): O ID da reunião na API Graph (string alfanumérica longa). Obter pelas ações create_meeting ou search_online_meetings. Diferente do joinMeetingId numérico.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/get_team_members">
|
||||
**Descrição:** Obter membros de uma equipe específica.
|
||||
|
||||
**Parâmetros:**
|
||||
- `team_id` (string, obrigatório): O identificador único da equipe. Obter pela ação get_teams.
|
||||
- `top` (integer, opcional): Número máximo de membros a recuperar por página (1-999). Padrão: 100.
|
||||
- `skip_token` (string, opcional): Token de paginação de uma resposta anterior. Quando a resposta incluir @odata.nextLink, extraia o valor do parâmetro $skiptoken e passe aqui para obter a próxima página de resultados.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/create_channel">
|
||||
**Descrição:** Criar um novo canal em uma equipe.
|
||||
|
||||
**Parâmetros:**
|
||||
- `team_id` (string, obrigatório): O identificador único da equipe. Obter pela ação get_teams.
|
||||
- `display_name` (string, obrigatório): Nome do canal exibido no Teams. Deve ser único na equipe. Máx 50 caracteres.
|
||||
- `description` (string, opcional): Descrição opcional explicando o propósito do canal. Visível nos detalhes do canal. Máx 1024 caracteres.
|
||||
- `membership_type` (string, opcional): Visibilidade do canal. Opções: standard, private. "standard" = visível para todos os membros da equipe, "private" = visível apenas para membros adicionados especificamente. Padrão: standard.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/get_message_replies">
|
||||
**Descrição:** Obter respostas a uma mensagem específica em um canal.
|
||||
|
||||
**Parâmetros:**
|
||||
- `team_id` (string, obrigatório): O identificador único da equipe. Obter pela ação get_teams.
|
||||
- `channel_id` (string, obrigatório): O identificador único do canal. Obter pela ação get_channels.
|
||||
- `message_id` (string, obrigatório): O identificador único da mensagem pai. Obter pela ação get_messages.
|
||||
- `top` (integer, opcional): Número máximo de respostas a recuperar por página (1-50). Padrão: 50.
|
||||
- `skip_token` (string, opcional): Token de paginação de uma resposta anterior. Quando a resposta incluir @odata.nextLink, extraia o valor do parâmetro $skiptoken e passe aqui para obter a próxima página de resultados.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/reply_to_message">
|
||||
**Descrição:** Responder a uma mensagem em um canal do Teams.
|
||||
|
||||
**Parâmetros:**
|
||||
- `team_id` (string, obrigatório): O identificador único da equipe. Obter pela ação get_teams.
|
||||
- `channel_id` (string, obrigatório): O identificador único do canal. Obter pela ação get_channels.
|
||||
- `message_id` (string, obrigatório): O identificador único da mensagem a responder. Obter pela ação get_messages.
|
||||
- `message` (string, obrigatório): O conteúdo da resposta. Para HTML, inclua tags de formatação. Para texto, use apenas texto simples.
|
||||
- `content_type` (string, opcional): Formato do conteúdo. Opções: html, text. "text" para texto simples, "html" para texto rico com formatação. Padrão: text.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/update_meeting">
|
||||
**Descrição:** Atualizar uma reunião online existente.
|
||||
|
||||
**Parâmetros:**
|
||||
- `meeting_id` (string, obrigatório): O identificador único da reunião. Obter pelas ações create_meeting ou search_online_meetings.
|
||||
- `subject` (string, opcional): Novo título da reunião.
|
||||
- `startDateTime` (string, opcional): Nova hora de início no formato ISO 8601 com fuso horário. Exemplo: "2024-01-20T10:00:00-08:00".
|
||||
- `endDateTime` (string, opcional): Nova hora de término no formato ISO 8601 com fuso horário.
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_teams/delete_meeting">
|
||||
**Descrição:** Excluir uma reunião online.
|
||||
|
||||
**Parâmetros:**
|
||||
- `meeting_id` (string, obrigatório): O identificador único da reunião a excluir. Obter pelas ações create_meeting ou search_online_meetings.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Exemplos de Uso
|
||||
|
||||
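The concrete snippet from the original page falls outside this hunk; as an illustrative stand-in, a sketch that scopes the toolset to channel messaging (same assumptions about `CrewaiEnterpriseTools` as in the SharePoint example above):

```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

teams_tools = CrewaiEnterpriseTools(
    actions_list=[
        "microsoft_teams/get_message_replies",
        "microsoft_teams/reply_to_message",
    ]
)

moderator = Agent(
    role="Channel Moderator",
    goal="Answer open questions in the support channel",
    backstory="Monitors Teams threads and replies with helpful answers.",
    tools=teams_tools,
)

task = Task(
    description="Find unanswered questions in the support channel and reply.",
    expected_output="A summary of every reply that was posted.",
    agent=moderator,
)

crew = Crew(agents=[moderator], tasks=[task])
```
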
@@ -97,6 +97,26 @@ CREWAI_PLATFORM_INTEGRATION_TOKEN=seu_enterprise_token
- `file_id` (string, required): The ID of the document to delete.

</Accordion>

<Accordion title="microsoft_word/copy_document">
|
||||
**Descrição:** Copiar um documento para um novo local no OneDrive.
|
||||
|
||||
**Parâmetros:**
|
||||
- `file_id` (string, obrigatório): O ID do documento a copiar.
|
||||
- `name` (string, opcional): Novo nome para o documento copiado.
|
||||
- `parent_id` (string, opcional): O ID da pasta de destino (padrão: raiz).
|
||||
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="microsoft_word/move_document">
|
||||
**Descrição:** Mover um documento para um novo local no OneDrive.
|
||||
|
||||
**Parâmetros:**
|
||||
- `file_id` (string, obrigatório): O ID do documento a mover.
|
||||
- `parent_id` (string, obrigatório): O ID da pasta de destino.
|
||||
- `name` (string, opcional): Novo nome para o documento movido.
|
||||
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Exemplos de Uso
|
||||
|
||||
@@ -8,6 +8,29 @@ This enables multiple workflows like having an Agent to access the database fetc
**Attention**: Make sure that the Agent has access to a read replica, or that it is acceptable for the Agent to run insert/update queries on the database.

## Security Model

`NL2SQLTool` is an execution-capable tool. It runs model-generated SQL directly against the configured database connection.

Risk depends on deployment choices:

- Which credentials are used in `db_uri`
- Whether untrusted input can influence prompts
- Whether tool-call guardrails are enforced before execution

If untrusted input can reach this tool, treat the integration as high risk.

## Hardening Recommendations

Use all of the following in production:

- Use a read-only database user whenever possible
- Prefer a read replica for analytics/retrieval workloads
- Grant least privilege (no superuser/admin roles, no file/system-level capabilities)
- Apply database-side resource limits (statement timeout, lock timeout, cost/row limits)
- Add `before_tool_call` hooks to enforce allowed query patterns (see the sketch below)
- Enable query logging and alerting for destructive statements

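A minimal sketch of such a guardrail, assuming a `before_tool_call` hook decorator importable from `crewai.hooks` and a hook context exposing the tool name and input (both attribute names are assumptions; adapt to the actual hook contract):

```python
import re

from crewai.hooks import before_tool_call  # assumed import path

# Allow a single read-only statement; reject writes, DDL, and stacked queries.
_ALLOWED_PREFIX = re.compile(r"^\s*(SELECT|WITH)\b", re.IGNORECASE)
_FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|GRANT|REVOKE|CREATE)\b|;\s*\S",
    re.IGNORECASE,
)


@before_tool_call
def enforce_read_only_sql(context):
    # `tool_name` / `tool_input` are assumed attribute names on the hook context.
    if context.tool_name != "NL2SQLTool":
        return None  # leave other tools untouched

    sql = str(context.tool_input.get("sql_query", ""))
    if not _ALLOWED_PREFIX.match(sql) or _FORBIDDEN.search(sql):
        return False  # assumed convention: a falsy return cancels the call

    return None
```

This complements, but does not replace, database-side controls such as a read-only user and `statement_timeout`.
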
## Requirements

- SQLAlchemy

@@ -33,8 +33,11 @@ def test_brave_tool_search(mock_get, brave_tool):
    mock_get.return_value.json.return_value = mock_response

    result = brave_tool.run(query="test")
    assert "Test Title" in result
    assert "http://test.com" in result
    data = json.loads(result)
    assert isinstance(data, list)
    assert len(data) >= 1
    assert data[0]["title"] == "Test Title"
    assert data[0]["url"] == "http://test.com"


@patch("requests.get")
@@ -14,7 +14,7 @@ dependencies = [
    "instructor>=1.3.3",
    # Text Processing
    "pdfplumber~=0.11.4",
    "regex~=2024.9.11",
    "regex~=2026.1.15",
    # Telemetry and Monitoring
    "opentelemetry-api~=1.34.0",
    "opentelemetry-sdk~=1.34.0",
@@ -36,7 +36,7 @@ dependencies = [
    "json5~=0.10.0",
    "portalocker~=2.7.0",
    "pydantic-settings~=2.10.1",
    "mcp~=1.23.1",
    "mcp~=1.26.0",
    "uv~=0.9.13",
    "aiosqlite~=0.21.0",
]
@@ -1009,7 +1009,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
            raise

        if self.ask_for_human_input:
            formatted_answer = self._handle_human_feedback(formatted_answer)
            formatted_answer = await self._ahandle_human_feedback(formatted_answer)

        self._create_short_term_memory(formatted_answer)
        self._create_long_term_memory(formatted_answer)
@@ -1508,6 +1508,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
        provider = get_provider()
        return provider.handle_feedback(formatted_answer, self)

    async def _ahandle_human_feedback(
        self, formatted_answer: AgentFinish
    ) -> AgentFinish:
        """Process human feedback asynchronously via the configured provider.

        Args:
            formatted_answer: Initial agent result.

        Returns:
            Final answer after feedback.
        """
        provider = get_provider()
        return await provider.handle_feedback_async(formatted_answer, self)

    def _is_training_mode(self) -> bool:
        """Check if training mode is active.
@@ -143,6 +143,12 @@ def create_folder_structure(
    (folder_path / "src" / folder_name).mkdir(parents=True)
    (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
    (folder_path / "src" / folder_name / "config").mkdir(parents=True)

    # Copy AGENTS.md to project root (top-level projects only)
    package_dir = Path(__file__).parent
    agents_md_src = package_dir / "templates" / "AGENTS.md"
    if agents_md_src.exists():
        shutil.copy2(agents_md_src, folder_path / "AGENTS.md")

    return folder_path, folder_name, class_name
@@ -1,3 +1,4 @@
import shutil
from pathlib import Path

import click
@@ -34,6 +35,11 @@ def create_flow(name):
    package_dir = Path(__file__).parent
    templates_dir = package_dir / "templates" / "flow"

    # Copy AGENTS.md to project root
    agents_md_src = package_dir / "templates" / "AGENTS.md"
    if agents_md_src.exists():
        shutil.copy2(agents_md_src, project_root / "AGENTS.md")

    # List of template files to copy
    root_template_files = [".gitignore", "pyproject.toml", "README.md"]
    src_template_files = ["__init__.py", "main.py"]
@@ -1,6 +1,8 @@
import os
from typing import Any
from urllib.parse import urljoin
import os

import httpx
import requests

from crewai.cli.config import Settings
@@ -33,7 +35,11 @@ class PlusAPI:
        if settings.org_uuid:
            self.headers["X-Crewai-Organization-Id"] = settings.org_uuid

        self.base_url = os.getenv("CREWAI_PLUS_URL") or str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
        self.base_url = (
            os.getenv("CREWAI_PLUS_URL")
            or str(settings.enterprise_base_url)
            or DEFAULT_CREWAI_ENTERPRISE_URL
        )

    def _make_request(
        self, method: str, endpoint: str, **kwargs: Any
@@ -49,8 +55,10 @@ class PlusAPI:
    def get_tool(self, handle: str) -> requests.Response:
        return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")

    def get_agent(self, handle: str) -> requests.Response:
        return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
    async def get_agent(self, handle: str) -> httpx.Response:
        url = urljoin(self.base_url, f"{self.AGENTS_RESOURCE}/{handle}")
        async with httpx.AsyncClient() as client:
            return await client.get(url, headers=self.headers)

    def publish_tool(
        self,
1017
lib/crewai/src/crewai/cli/templates/AGENTS.md
Normal file
1017
lib/crewai/src/crewai/cli/templates/AGENTS.md
Normal file
File diff suppressed because it is too large
Load Diff
@@ -2,6 +2,7 @@ import base64
from json import JSONDecodeError
import os
from pathlib import Path
import shutil
import subprocess
import tempfile
from typing import Any
@@ -55,6 +56,11 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        tree_find_and_replace(project_root, "{{folder_name}}", folder_name)
        tree_find_and_replace(project_root, "{{class_name}}", class_name)

        # Copy AGENTS.md to project root
        agents_md_src = Path(__file__).parent.parent / "templates" / "AGENTS.md"
        if agents_md_src.exists():
            shutil.copy2(agents_md_src, project_root / "AGENTS.md")

        old_directory = os.getcwd()
        os.chdir(project_root)
        try:
@@ -6,12 +6,12 @@ from functools import lru_cache
import importlib.metadata
import json
from pathlib import Path
from typing import Any, cast
from typing import Any
from urllib import request
from urllib.error import URLError

import appdirs
from packaging.version import InvalidVersion, parse
from packaging.version import InvalidVersion, Version, parse


@lru_cache(maxsize=1)
@@ -42,21 +42,88 @@ def _is_cache_valid(cache_data: Mapping[str, Any]) -> bool:
    return False


def _find_latest_non_yanked_version(
    releases: Mapping[str, list[dict[str, Any]]],
) -> str | None:
    """Find the latest non-yanked version from PyPI releases data.

    Args:
        releases: PyPI releases dict mapping version strings to file info lists.

    Returns:
        The latest non-yanked version string, or None if all versions are yanked.
    """
    best_version: Version | None = None
    best_version_str: str | None = None

    for version_str, files in releases.items():
        try:
            v = parse(version_str)
        except InvalidVersion:
            continue

        if v.is_prerelease or v.is_devrelease:
            continue

        if not files:
            continue

        all_yanked = all(f.get("yanked", False) for f in files)
        if all_yanked:
            continue

        if best_version is None or v > best_version:
            best_version = v
            best_version_str = version_str

    return best_version_str


def _is_version_yanked(
    version_str: str,
    releases: Mapping[str, list[dict[str, Any]]],
) -> tuple[bool, str]:
    """Check if a specific version is yanked.

    Args:
        version_str: The version string to check.
        releases: PyPI releases dict mapping version strings to file info lists.

    Returns:
        Tuple of (is_yanked, yanked_reason).
    """
    files = releases.get(version_str, [])
    if not files:
        return False, ""

    all_yanked = all(f.get("yanked", False) for f in files)
    if not all_yanked:
        return False, ""

    for f in files:
        reason = f.get("yanked_reason", "")
        if reason:
            return True, str(reason)

    return True, ""


def get_latest_version_from_pypi(timeout: int = 2) -> str | None:
    """Get the latest version of CrewAI from PyPI.
    """Get the latest non-yanked version of CrewAI from PyPI.

    Args:
        timeout: Request timeout in seconds.

    Returns:
        Latest version string or None if unable to fetch.
        Latest non-yanked version string or None if unable to fetch.
    """
    cache_file = _get_cache_file()
    if cache_file.exists():
        try:
            cache_data = json.loads(cache_file.read_text())
            if _is_cache_valid(cache_data):
                return cast(str | None, cache_data.get("version"))
            if _is_cache_valid(cache_data) and "current_version" in cache_data:
                version: str | None = cache_data.get("version")
                return version
        except (json.JSONDecodeError, OSError):
            pass
@@ -65,11 +132,18 @@ def get_latest_version_from_pypi(timeout: int = 2) -> str | None:
            "https://pypi.org/pypi/crewai/json", timeout=timeout
        ) as response:
            data = json.loads(response.read())
            latest_version = cast(str, data["info"]["version"])
            releases: dict[str, list[dict[str, Any]]] = data["releases"]
            latest_version = _find_latest_non_yanked_version(releases)

            current_version = get_crewai_version()
            is_yanked, yanked_reason = _is_version_yanked(current_version, releases)

            cache_data = {
                "version": latest_version,
                "timestamp": datetime.now().isoformat(),
                "current_version": current_version,
                "current_version_yanked": is_yanked,
                "current_version_yanked_reason": yanked_reason,
            }
            cache_file.write_text(json.dumps(cache_data))
@@ -78,6 +152,40 @@ def get_latest_version_from_pypi(timeout: int = 2) -> str | None:
    return None


def is_current_version_yanked() -> tuple[bool, str]:
    """Check if the currently installed version has been yanked on PyPI.

    Reads from cache if available, otherwise triggers a fetch.

    Returns:
        Tuple of (is_yanked, yanked_reason).
    """
    cache_file = _get_cache_file()
    if cache_file.exists():
        try:
            cache_data = json.loads(cache_file.read_text())
            if _is_cache_valid(cache_data) and "current_version" in cache_data:
                current = get_crewai_version()
                if cache_data.get("current_version") == current:
                    return (
                        bool(cache_data.get("current_version_yanked", False)),
                        str(cache_data.get("current_version_yanked_reason", "")),
                    )
        except (json.JSONDecodeError, OSError):
            pass

    get_latest_version_from_pypi()

    try:
        cache_data = json.loads(cache_file.read_text())
        return (
            bool(cache_data.get("current_version_yanked", False)),
            str(cache_data.get("current_version_yanked_reason", "")),
        )
    except (json.JSONDecodeError, OSError):
        return False, ""


def check_version() -> tuple[str, str | None]:
    """Check current and latest versions.
@@ -43,3 +43,23 @@ def platform_context(integration_token: str) -> Generator[None, Any, None]:
        yield
    finally:
        _platform_integration_token.reset(token)


_current_task_id: contextvars.ContextVar[str | None] = contextvars.ContextVar(
    "current_task_id", default=None
)


def set_current_task_id(task_id: str | None) -> contextvars.Token[str | None]:
    """Set the current task ID in the context. Returns a token for reset."""
    return _current_task_id.set(task_id)


def reset_current_task_id(token: contextvars.Token[str | None]) -> None:
    """Reset the current task ID to its previous value."""
    _current_task_id.reset(token)


def get_current_task_id() -> str | None:
    """Get the current task ID from the context."""
    return _current_task_id.get()
@@ -2,7 +2,9 @@
from __future__ import annotations

import asyncio
from contextvars import ContextVar, Token
import sys
from typing import TYPE_CHECKING, Protocol, runtime_checkable
@@ -46,13 +48,21 @@ class ExecutorContext(Protocol):
        ...


class AsyncExecutorContext(ExecutorContext, Protocol):
    """Extended context for executors that support async invocation."""

    async def _ainvoke_loop(self) -> AgentFinish:
        """Invoke the agent loop asynchronously and return the result."""
        ...


@runtime_checkable
class HumanInputProvider(Protocol):
    """Protocol for human input handling.

    Implementations handle the full feedback flow:
    - Sync: prompt user, loop until satisfied
    - Async: raise exception for external handling
    - Async: use non-blocking I/O and async invoke loop
    """

    def setup_messages(self, context: ExecutorContext) -> bool:
@@ -86,7 +96,7 @@ class HumanInputProvider(Protocol):
        formatted_answer: AgentFinish,
        context: ExecutorContext,
    ) -> AgentFinish:
        """Handle the full human feedback flow.
        """Handle the full human feedback flow synchronously.

        Args:
            formatted_answer: The agent's current answer.
@@ -100,6 +110,25 @@ class HumanInputProvider(Protocol):
        """
        ...

    async def handle_feedback_async(
        self,
        formatted_answer: AgentFinish,
        context: AsyncExecutorContext,
    ) -> AgentFinish:
        """Handle the full human feedback flow asynchronously.

        Uses non-blocking I/O for user prompts and async invoke loop
        for agent re-execution.

        Args:
            formatted_answer: The agent's current answer.
            context: Async executor context for callbacks.

        Returns:
            The final answer after feedback processing.
        """
        ...

    @staticmethod
    def _get_output_string(answer: AgentFinish) -> str:
        """Extract output string from answer.
@@ -116,7 +145,7 @@ class HumanInputProvider(Protocol):
class SyncHumanInputProvider(HumanInputProvider):
    """Default synchronous human input via terminal."""
    """Default human input provider with sync and async support."""

    def setup_messages(self, context: ExecutorContext) -> bool:
        """Use standard message setup.
@@ -157,6 +186,33 @@ class SyncHumanInputProvider(HumanInputProvider):
        return self._handle_regular_feedback(formatted_answer, feedback, context)

    async def handle_feedback_async(
        self,
        formatted_answer: AgentFinish,
        context: AsyncExecutorContext,
    ) -> AgentFinish:
        """Handle feedback asynchronously without blocking the event loop.

        Args:
            formatted_answer: The agent's current answer.
            context: Async executor context for callbacks.

        Returns:
            The final answer after feedback processing.
        """
        feedback = await self._prompt_input_async(context.crew)

        if context._is_training_mode():
            return await self._handle_training_feedback_async(
                formatted_answer, feedback, context
            )

        return await self._handle_regular_feedback_async(
            formatted_answer, feedback, context
        )

    # ── Sync helpers ──────────────────────────────────────────────────

    @staticmethod
    def _handle_training_feedback(
        initial_answer: AgentFinish,
@@ -209,6 +265,62 @@ class SyncHumanInputProvider(HumanInputProvider):
        return answer

    # ── Async helpers ─────────────────────────────────────────────────

    @staticmethod
    async def _handle_training_feedback_async(
        initial_answer: AgentFinish,
        feedback: str,
        context: AsyncExecutorContext,
    ) -> AgentFinish:
        """Process training feedback asynchronously (single iteration).

        Args:
            initial_answer: The agent's initial answer.
            feedback: Human feedback string.
            context: Async executor context for callbacks.

        Returns:
            Improved answer after processing feedback.
        """
        context._handle_crew_training_output(initial_answer, feedback)
        context.messages.append(context._format_feedback_message(feedback))
        improved_answer = await context._ainvoke_loop()
        context._handle_crew_training_output(improved_answer)
        context.ask_for_human_input = False
        return improved_answer

    async def _handle_regular_feedback_async(
        self,
        current_answer: AgentFinish,
        initial_feedback: str,
        context: AsyncExecutorContext,
    ) -> AgentFinish:
        """Process regular feedback with async iteration loop.

        Args:
            current_answer: The agent's current answer.
            initial_feedback: Initial human feedback string.
            context: Async executor context for callbacks.

        Returns:
            Final answer after all feedback iterations.
        """
        feedback = initial_feedback
        answer = current_answer

        while context.ask_for_human_input:
            if feedback.strip() == "":
                context.ask_for_human_input = False
            else:
                context.messages.append(context._format_feedback_message(feedback))
                answer = await context._ainvoke_loop()
                feedback = await self._prompt_input_async(context.crew)

        return answer

    # ── I/O ───────────────────────────────────────────────────────────

    @staticmethod
    def _prompt_input(crew: Crew | None) -> str:
        """Show rich panel and prompt for input.
@@ -262,6 +374,79 @@ class SyncHumanInputProvider(HumanInputProvider):
        finally:
            formatter.resume_live_updates()

    @staticmethod
    async def _prompt_input_async(crew: Crew | None) -> str:
        """Show rich panel and prompt for input without blocking the event loop.

        Args:
            crew: The crew instance for context.

        Returns:
            User input string from terminal.
        """
        from rich.panel import Panel
        from rich.text import Text

        from crewai.events.event_listener import event_listener

        formatter = event_listener.formatter
        formatter.pause_live_updates()

        try:
            if crew and getattr(crew, "_train", False):
                prompt_text = (
                    "TRAINING MODE: Provide feedback to improve the agent's performance.\n\n"
                    "This will be used to train better versions of the agent.\n"
                    "Please provide detailed feedback about the result quality and reasoning process."
                )
                title = "🎓 Training Feedback Required"
            else:
                prompt_text = (
                    "Provide feedback on the Final Result above.\n\n"
                    "• If you are happy with the result, simply hit Enter without typing anything.\n"
                    "• Otherwise, provide specific improvement requests.\n"
                    "• You can provide multiple rounds of feedback until satisfied."
                )
                title = "💬 Human Feedback Required"

            content = Text()
            content.append(prompt_text, style="yellow")

            prompt_panel = Panel(
                content,
                title=title,
                border_style="yellow",
                padding=(1, 2),
            )
            formatter.console.print(prompt_panel)

            response = await _async_readline()
            if response.strip() != "":
                formatter.console.print("\n[cyan]Processing your feedback...[/cyan]")
            return response
        finally:
            formatter.resume_live_updates()


async def _async_readline() -> str:
    """Read a line from stdin using the event loop's native I/O.

    Falls back to asyncio.to_thread on platforms where piping stdin
    is unsupported.

    Returns:
        The line read from stdin, with trailing newline stripped.
    """
    loop = asyncio.get_running_loop()
    try:
        reader = asyncio.StreamReader()
        protocol = asyncio.StreamReaderProtocol(reader)
        await loop.connect_read_pipe(lambda: protocol, sys.stdin)
        raw = await reader.readline()
        return raw.decode().rstrip("\n")
    except (OSError, NotImplementedError, ValueError):
        return await asyncio.to_thread(input)


_provider: ContextVar[HumanInputProvider | None] = ContextVar(
    "human_input_provider",
@@ -187,6 +187,7 @@ class Crew(FlowTrackable, BaseModel):
    _task_output_handler: TaskOutputStorageHandler = PrivateAttr(
        default_factory=TaskOutputStorageHandler
    )
    _kickoff_event_id: str | None = PrivateAttr(default=None)

    name: str | None = Field(default="crew")
    cache: bool = Field(default=True)
@@ -759,7 +760,11 @@ class Crew(FlowTrackable, BaseModel):
        except Exception as e:
            crewai_event_bus.emit(
                self,
                CrewKickoffFailedEvent(error=str(e), crew_name=self.name),
                CrewKickoffFailedEvent(
                    error=str(e),
                    crew_name=self.name,
                    started_event_id=self._kickoff_event_id,
                ),
            )
            raise
        finally:
@@ -949,7 +954,11 @@ class Crew(FlowTrackable, BaseModel):
        except Exception as e:
            crewai_event_bus.emit(
                self,
                CrewKickoffFailedEvent(error=str(e), crew_name=self.name),
                CrewKickoffFailedEvent(
                    error=str(e),
                    crew_name=self.name,
                    started_event_id=self._kickoff_event_id,
                ),
            )
            raise
        finally:
@@ -1517,12 +1526,14 @@ class Crew(FlowTrackable, BaseModel):
        final_string_output = final_task_output.raw
        self._finish_execution(final_string_output)
        self.token_usage = self.calculate_usage_metrics()
        crewai_event_bus.flush()
        crewai_event_bus.emit(
            self,
            CrewKickoffCompletedEvent(
                crew_name=self.name,
                output=final_task_output,
                total_tokens=self.token_usage.total_tokens,
                started_event_id=self._kickoff_event_id,
            ),
        )
@@ -265,10 +265,9 @@ def prepare_kickoff(
        normalized = {}
    normalized = before_callback(normalized)

    future = crewai_event_bus.emit(
        crew,
        CrewKickoffStartedEvent(crew_name=crew.name, inputs=normalized),
    )
    started_event = CrewKickoffStartedEvent(crew_name=crew.name, inputs=normalized)
    crew._kickoff_event_id = started_event.event_id
    future = crewai_event_bus.emit(crew, started_event)
    if future is not None:
        try:
            future.result()
@@ -8,7 +8,7 @@ from rich.live import Live
from rich.panel import Panel
from rich.text import Text

from crewai.cli.version import is_newer_version_available
from crewai.cli.version import is_current_version_yanked, is_newer_version_available


_disable_version_check: ContextVar[bool] = ContextVar(
@@ -104,6 +104,22 @@ To update, run: uv sync --upgrade-package crewai"""
            )
            self.console.print(panel)
            self.console.print()

            is_yanked, yanked_reason = is_current_version_yanked()
            if is_yanked:
                yanked_message = f"Version {current} has been yanked from PyPI."
                if yanked_reason:
                    yanked_message += f"\nReason: {yanked_reason}"
                yanked_message += "\n\nTo update, run: uv sync --upgrade-package crewai"

                yanked_panel = Panel(
                    yanked_message,
                    title="Yanked Version",
                    border_style="red",
                    padding=(1, 2),
                )
                self.console.print(yanked_panel)
                self.console.print()
        except Exception:  # noqa: S110
            # Silently ignore errors in version check - it's non-critical
            pass
@@ -32,7 +32,8 @@ from crewai.events.types.tool_usage_events import (
    ToolUsageFinishedEvent,
    ToolUsageStartedEvent,
)
from crewai.flow.flow import Flow, listen, or_, router, start
from crewai.flow.flow import Flow, StateProxy, listen, or_, router, start
from crewai.flow.types import FlowMethodName
from crewai.hooks.llm_hooks import (
    get_after_llm_call_hooks,
    get_before_llm_call_hooks,
@@ -225,7 +226,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
    @messages.setter
    def messages(self, value: list[LLMMessage]) -> None:
        """Delegate to state for ExecutorContext conformance."""
        self._state.messages = value
        if self._flow_initialized and hasattr(self, "_state_lock"):
            with self._state_lock:
                self._state.messages = value
        else:
            self._state.messages = value

    @property
    def ask_for_human_input(self) -> bool:
@@ -253,6 +258,22 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
            raise RuntimeError("Agent loop did not produce a final answer")
        return answer

    async def _ainvoke_loop(self) -> AgentFinish:
        """Invoke the agent loop asynchronously and return the result.

        Required by AsyncExecutorContext protocol.
        """
        self._state.iterations = 0
        self._state.is_finished = False
        self._state.current_answer = None

        await self.akickoff()

        answer = self._state.current_answer
        if not isinstance(answer, AgentFinish):
            raise RuntimeError("Agent loop did not produce a final answer")
        return answer

    def _format_feedback_message(self, feedback: str) -> LLMMessage:
        """Format feedback as a message for the LLM.
@@ -353,6 +374,8 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
        Flow initialization is deferred to prevent event emission during agent setup.
        Returns the temporary state until invoke() is called.
        """
        if self._flow_initialized and hasattr(self, "_state_lock"):
            return StateProxy(self._state, self._state_lock)  # type: ignore[return-value]
        return self._state

    @property
@@ -461,15 +484,14 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
            raise

    @listen("continue_reasoning_native")
    def call_llm_native_tools(
        self,
    ) -> Literal["native_tool_calls", "native_finished", "context_error"]:
    def call_llm_native_tools(self) -> None:
        """Execute LLM call with native function calling.

        Always calls the LLM so it can read reflection prompts and decide
        whether to provide a final answer or request more tools.

        Returns routing decision based on whether tool calls or final answer.
        Note: This is a listener, not a router. The route_native_tool_result
        router fires after this to determine the next step based on state.
        """
        try:
            # Clear pending tools - LLM will decide what to do next after reading
@@ -499,8 +521,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
            if isinstance(answer, list) and answer and self._is_tool_call_list(answer):
                # Store tool calls for sequential processing
                self.state.pending_tool_calls = list(answer)

                return "native_tool_calls"
                return  # Router will check pending_tool_calls

            if isinstance(answer, BaseModel):
                self.state.current_answer = AgentFinish(
@@ -510,7 +531,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
                )
                self._invoke_step_callback(self.state.current_answer)
                self._append_message_to_state(answer.model_dump_json())
                return "native_finished"
                return  # Router will check current_answer

            # Text response - this is the final answer
            if isinstance(answer, str):
@@ -521,8 +542,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
)
|
||||
self._invoke_step_callback(self.state.current_answer)
|
||||
self._append_message_to_state(answer)
|
||||
|
||||
return "native_finished"
|
||||
return # Router will check current_answer
|
||||
|
||||
# Unexpected response type, treat as final answer
|
||||
self.state.current_answer = AgentFinish(
|
||||
@@ -532,13 +552,12 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
)
|
||||
self._invoke_step_callback(self.state.current_answer)
|
||||
self._append_message_to_state(str(answer))
|
||||
|
||||
return "native_finished"
|
||||
# Router will check current_answer
|
||||
|
||||
except Exception as e:
|
||||
if is_context_length_exceeded(e):
|
||||
self._last_context_error = e
|
||||
return "context_error"
|
||||
return # Router will check _last_context_error
|
||||
if e.__class__.__module__.startswith("litellm"):
|
||||
raise e
|
||||
handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
|
||||
@@ -551,6 +570,22 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
|
||||
return "execute_tool"
|
||||
return "agent_finished"
|
||||
|
||||
@router(call_llm_native_tools)
|
||||
def route_native_tool_result(
|
||||
self,
|
||||
) -> Literal["native_tool_calls", "native_finished", "context_error"]:
|
||||
"""Route based on LLM response for native tool calling.
|
||||
|
||||
Checks state set by call_llm_native_tools to determine next step.
|
||||
This router is needed because only router return values trigger
|
||||
downstream listeners.
|
||||
"""
|
||||
if self._last_context_error is not None:
|
||||
return "context_error"
|
||||
if self.state.pending_tool_calls:
|
||||
return "native_tool_calls"
|
||||
return "native_finished"
|
||||
|
||||
@listen("execute_tool")
|
||||
def execute_tool_action(self) -> Literal["tool_completed", "tool_result_is_final"]:
|
||||
"""Execute the tool action and handle the result."""
|
||||
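The split above is easy to get wrong, since only a @router's string return value fires downstream string-named listeners. A minimal hedged sketch of that pattern, with illustrative names (LoopState, do_work, route_result are not part of this diff):

from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, router, start


class LoopState(BaseModel):
    pending: bool = False


class SketchFlow(Flow[LoopState]):
    @start()
    def do_work(self) -> None:
        # Listener-style method: mutate state only; a plain method's
        # return value does not trigger string-named listeners.
        self.state.pending = True

    @router(do_work)
    def route_result(self) -> str:
        # Router: this return value is what fires @listen("has_pending").
        return "has_pending" if self.state.pending else "finished"

    @listen("has_pending")
    def handle_pending(self) -> None:
        print("routed via the router's return value")


SketchFlow().kickoff()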
@@ -908,9 +943,11 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
         self.state.iterations += 1
         return "initialized"
 
-    @listen("initialized")
+    @listen(or_("initialized", "tool_completed", "native_tool_completed"))
     def continue_iteration(self) -> Literal["check_iteration"]:
         """Bridge listener that connects iteration loop back to iteration check."""
+        if self._flow_initialized:
+            self._discard_or_listener(FlowMethodName("continue_iteration"))
         return "check_iteration"
 
     @router(or_(initialize_reasoning, continue_iteration))
@@ -1152,7 +1189,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
             )
 
         if self.state.ask_for_human_input:
-            formatted_answer = self._handle_human_feedback(formatted_answer)
+            formatted_answer = await self._ahandle_human_feedback(formatted_answer)
 
         self._create_short_term_memory(formatted_answer)
         self._create_long_term_memory(formatted_answer)
@@ -1369,6 +1406,20 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
         provider = get_provider()
         return provider.handle_feedback(formatted_answer, self)
 
+    async def _ahandle_human_feedback(
+        self, formatted_answer: AgentFinish
+    ) -> AgentFinish:
+        """Process human feedback asynchronously and refine answer.
+
+        Args:
+            formatted_answer: Initial agent result.
+
+        Returns:
+            Final answer after feedback.
+        """
+        provider = get_provider()
+        return await provider.handle_feedback_async(formatted_answer, self)
+
     def _is_training_mode(self) -> bool:
         """Check if training mode is active.
 
@@ -7,7 +7,14 @@ for building event-driven workflows with conditional execution and routing.
 from __future__ import annotations
 
 import asyncio
-from collections.abc import Callable, Sequence
+from collections.abc import (
+    Callable,
+    ItemsView,
+    Iterator,
+    KeysView,
+    Sequence,
+    ValuesView,
+)
 from concurrent.futures import Future
 import copy
 import inspect
@@ -409,6 +416,132 @@ def and_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition
     return {"type": AND_CONDITION, "conditions": processed_conditions}
 
 
+class LockedListProxy(Generic[T]):
+    """Thread-safe proxy for list operations.
+
+    Wraps a list and uses a lock for all mutating operations.
+    """
+
+    def __init__(self, lst: list[T], lock: threading.Lock) -> None:
+        self._list = lst
+        self._lock = lock
+
+    def append(self, item: T) -> None:
+        with self._lock:
+            self._list.append(item)
+
+    def extend(self, items: list[T]) -> None:
+        with self._lock:
+            self._list.extend(items)
+
+    def insert(self, index: int, item: T) -> None:
+        with self._lock:
+            self._list.insert(index, item)
+
+    def remove(self, item: T) -> None:
+        with self._lock:
+            self._list.remove(item)
+
+    def pop(self, index: int = -1) -> T:
+        with self._lock:
+            return self._list.pop(index)
+
+    def clear(self) -> None:
+        with self._lock:
+            self._list.clear()
+
+    def __setitem__(self, index: int, value: T) -> None:
+        with self._lock:
+            self._list[index] = value
+
+    def __delitem__(self, index: int) -> None:
+        with self._lock:
+            del self._list[index]
+
+    def __getitem__(self, index: int) -> T:
+        return self._list[index]
+
+    def __len__(self) -> int:
+        return len(self._list)
+
+    def __iter__(self) -> Iterator[T]:
+        return iter(self._list)
+
+    def __contains__(self, item: object) -> bool:
+        return item in self._list
+
+    def __repr__(self) -> str:
+        return repr(self._list)
+
+    def __bool__(self) -> bool:
+        return bool(self._list)
+
+
+class LockedDictProxy(Generic[T]):
+    """Thread-safe proxy for dict operations.
+
+    Wraps a dict and uses a lock for all mutating operations.
+    """
+
+    def __init__(self, d: dict[str, T], lock: threading.Lock) -> None:
+        self._dict = d
+        self._lock = lock
+
+    def __setitem__(self, key: str, value: T) -> None:
+        with self._lock:
+            self._dict[key] = value
+
+    def __delitem__(self, key: str) -> None:
+        with self._lock:
+            del self._dict[key]
+
+    def pop(self, key: str, *default: T) -> T:
+        with self._lock:
+            return self._dict.pop(key, *default)
+
+    def update(self, other: dict[str, T]) -> None:
+        with self._lock:
+            self._dict.update(other)
+
+    def clear(self) -> None:
+        with self._lock:
+            self._dict.clear()
+
+    def setdefault(self, key: str, default: T) -> T:
+        with self._lock:
+            return self._dict.setdefault(key, default)
+
+    def __getitem__(self, key: str) -> T:
+        return self._dict[key]
+
+    def __len__(self) -> int:
+        return len(self._dict)
+
+    def __iter__(self) -> Iterator[str]:
+        return iter(self._dict)
+
+    def __contains__(self, key: object) -> bool:
+        return key in self._dict
+
+    def keys(self) -> KeysView[str]:
+        return self._dict.keys()
+
+    def values(self) -> ValuesView[T]:
+        return self._dict.values()
+
+    def items(self) -> ItemsView[str, T]:
+        return self._dict.items()
+
+    def get(self, key: str, default: T | None = None) -> T | None:
+        return self._dict.get(key, default)
+
+    def __repr__(self) -> str:
+        return repr(self._dict)
+
+    def __bool__(self) -> bool:
+        return bool(self._dict)
+
+
 class StateProxy(Generic[T]):
     """Proxy that provides thread-safe access to flow state.
 
@@ -423,7 +556,13 @@ class StateProxy(Generic[T]):
         object.__setattr__(self, "_proxy_lock", lock)
 
     def __getattr__(self, name: str) -> Any:
-        return getattr(object.__getattribute__(self, "_proxy_state"), name)
+        value = getattr(object.__getattribute__(self, "_proxy_state"), name)
+        lock = object.__getattribute__(self, "_proxy_lock")
+        if isinstance(value, list):
+            return LockedListProxy(value, lock)
+        if isinstance(value, dict):
+            return LockedDictProxy(value, lock)
+        return value
 
     def __setattr__(self, name: str, value: Any) -> None:
         if name in ("_proxy_state", "_proxy_lock"):
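Taken together, the change means mutable containers read off the proxied state come back lock-guarded. A hedged sketch of the intended usage (the bare _State class here is illustrative, not part of the diff):

import threading

from crewai.flow.flow import StateProxy


class _State:
    def __init__(self) -> None:
        self.messages: list[str] = []


lock = threading.Lock()
proxy = StateProxy(_State(), lock)

# Attribute access returns a LockedListProxy, so append() takes the lock.
proxy.messages.append("hello")
assert len(proxy.messages) == 1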
@@ -1593,7 +1732,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
         reset_emission_counter()
         reset_last_event_id()
 
-        # Emit FlowStartedEvent and log the start of the flow.
         if not self.suppress_flow_events:
             future = crewai_event_bus.emit(
                 self,
@@ -1604,7 +1742,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
                 ),
             )
             if future:
-                self._event_futures.append(future)
+                try:
+                    await asyncio.wrap_future(future)
+                except Exception:
+                    logger.warning("FlowStartedEvent handler failed", exc_info=True)
         self._log_flow_event(
             f"Flow started with ID: {self.flow_id}", color="bold magenta"
         )
@@ -1696,6 +1837,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
 
         final_output = self._method_outputs[-1] if self._method_outputs else None
 
+        if self._event_futures:
+            await asyncio.gather(
+                *[asyncio.wrap_future(f) for f in self._event_futures]
+            )
+            self._event_futures.clear()
+
         if not self.suppress_flow_events:
             future = crewai_event_bus.emit(
                 self,
@@ -1707,13 +1854,12 @@ class Flow(Generic[T], metaclass=FlowMeta):
                 ),
             )
             if future:
-                self._event_futures.append(future)
-
-        if self._event_futures:
-            await asyncio.gather(
-                *[asyncio.wrap_future(f) for f in self._event_futures]
-            )
-            self._event_futures.clear()
+                try:
+                    await asyncio.wrap_future(future)
+                except Exception:
+                    logger.warning(
+                        "FlowFinishedEvent handler failed", exc_info=True
+                    )
 
         if not self.suppress_flow_events:
             trace_listener = TraceCollectionListener()
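Both hunks lean on the same bridge: the event bus hands back a concurrent.futures.Future for handler work, and asyncio.wrap_future turns it into an awaitable so the flow can wait for (and log failures from) handlers inline instead of collecting futures for later. A standalone sketch of that bridge, with a thread-pool future standing in for crewai_event_bus.emit:

import asyncio
import concurrent.futures


def _handler() -> str:
    # Stand-in for an event-bus handler running on another thread.
    return "handled"


async def main() -> None:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        future = pool.submit(_handler)  # emit() returns a Future like this
        try:
            await asyncio.wrap_future(future)  # await thread-side completion
        except Exception:
            print("handler failed")  # the flow logs a warning here instead


asyncio.run(main())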
@@ -1788,40 +1934,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
             await self._execute_listeners(start_method_name, result, finished_event_id)
             # Then execute listeners for the router result (e.g., "approved")
             router_result_trigger = FlowMethodName(str(result))
-            listeners_for_result = self._find_triggered_methods(
-                router_result_trigger, router_only=False
-            )
+            listener_result = (
+                self.last_human_feedback
+                if self.last_human_feedback is not None
+                else result
+            )
+            await self._execute_listeners(
+                router_result_trigger, listener_result, finished_event_id
+            )
-            if listeners_for_result:
-                # Pass the HumanFeedbackResult if available
-                listener_result = (
-                    self.last_human_feedback
-                    if self.last_human_feedback is not None
-                    else result
-                )
-                racing_group = self._get_racing_group_for_listeners(
-                    listeners_for_result
-                )
-                if racing_group:
-                    racing_members, _ = racing_group
-                    other_listeners = [
-                        name
-                        for name in listeners_for_result
-                        if name not in racing_members
-                    ]
-                    await self._execute_racing_listeners(
-                        racing_members,
-                        other_listeners,
-                        listener_result,
-                        finished_event_id,
-                    )
-                else:
-                    tasks = [
-                        self._execute_single_listener(
-                            listener_name, listener_result, finished_event_id
-                        )
-                        for listener_name in listeners_for_result
-                    ]
-                    await asyncio.gather(*tasks)
         else:
             await self._execute_listeners(start_method_name, result, finished_event_id)
@@ -2027,15 +2147,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
             router_input = router_result_to_feedback.get(
                 str(current_trigger), current_result
             )
-            current_triggering_event_id = await self._execute_single_listener(
+            (
+                router_result,
+                current_triggering_event_id,
+            ) = await self._execute_single_listener(
                 router_name, router_input, current_triggering_event_id
             )
-            # After executing router, the router's result is the path
-            router_result = (
-                self._method_outputs[-1] if self._method_outputs else None
-            )
             if router_result:  # Only add non-None results
-                router_results.append(router_result)
+                router_results.append(FlowMethodName(str(router_result)))
                 # If this was a human_feedback router, map the outcome to the feedback
                 if self.last_human_feedback is not None:
                     router_result_to_feedback[str(router_result)] = (
@@ -2265,7 +2384,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
         listener_name: FlowMethodName,
         result: Any,
         triggering_event_id: str | None = None,
-    ) -> str | None:
+    ) -> tuple[Any, str | None]:
         """Executes a single listener method with proper event handling.
 
         This internal method manages the execution of an individual listener,
@@ -2278,8 +2397,9 @@ class Flow(Generic[T], metaclass=FlowMeta):
             used for causal chain tracking.
 
         Returns:
-            The event_id of the MethodExecutionFinishedEvent emitted by this listener,
-            or None if events are suppressed.
+            A tuple of (listener_result, event_id) where listener_result is the return
+            value of the listener method and event_id is the MethodExecutionFinishedEvent
+            id, or (None, None) if skipped during resumption.
 
         Note:
             - Inspects method signature to determine if it accepts the trigger result
@@ -2305,7 +2425,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
             ):
                 # This conditional start was executed, continue its chain
                 await self._execute_start_method(start_method_name)
-                return None
+                return (None, None)
             # For cyclic flows, clear from completed to allow re-execution
             self._completed_methods.discard(listener_name)
             # Also clear from fired OR listeners for cyclic flows
@@ -2343,46 +2463,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
                     listener_name, listener_result, finished_event_id
                 )
 
-            # If this listener is also a router (e.g., has @human_feedback with emit),
-            # we need to trigger listeners for the router result as well
-            if listener_name in self._routers and listener_result is not None:
-                router_result_trigger = FlowMethodName(str(listener_result))
-                listeners_for_result = self._find_triggered_methods(
-                    router_result_trigger, router_only=False
-                )
-                if listeners_for_result:
-                    # Pass the HumanFeedbackResult if available
-                    feedback_result = (
-                        self.last_human_feedback
-                        if self.last_human_feedback is not None
-                        else listener_result
-                    )
-                    racing_group = self._get_racing_group_for_listeners(
-                        listeners_for_result
-                    )
-                    if racing_group:
-                        racing_members, _ = racing_group
-                        other_listeners = [
-                            name
-                            for name in listeners_for_result
-                            if name not in racing_members
-                        ]
-                        await self._execute_racing_listeners(
-                            racing_members,
-                            other_listeners,
-                            feedback_result,
-                            finished_event_id,
-                        )
-                    else:
-                        tasks = [
-                            self._execute_single_listener(
-                                name, feedback_result, finished_event_id
-                            )
-                            for name in listeners_for_result
-                        ]
-                        await asyncio.gather(*tasks)
-
-            return finished_event_id
+            return (listener_result, finished_event_id)
 
         except Exception as e:
             # Don't log HumanFeedbackPending as an error - it's expected control flow
@@ -1580,10 +1580,12 @@ class AnthropicCompletion(BaseLLM):
             usage = response.usage
             input_tokens = getattr(usage, "input_tokens", 0)
             output_tokens = getattr(usage, "output_tokens", 0)
+            cache_read_tokens = getattr(usage, "cache_read_input_tokens", 0) or 0
             return {
                 "input_tokens": input_tokens,
                 "output_tokens": output_tokens,
                 "total_tokens": input_tokens + output_tokens,
+                "cached_prompt_tokens": cache_read_tokens,
             }
         return {"total_tokens": 0}
@@ -425,8 +425,9 @@ class AzureCompletion(BaseLLM):
             "stream": self.stream,
         }
 
+        model_extras: dict[str, Any] = {}
         if self.stream:
-            params["model_extras"] = {"stream_options": {"include_usage": True}}
+            model_extras["stream_options"] = {"include_usage": True}
 
         if response_model and self.is_openai_model:
             model_description = generate_model_description(response_model)
@@ -464,6 +465,13 @@ class AzureCompletion(BaseLLM):
             params["tools"] = self._convert_tools_for_interference(tools)
             params["tool_choice"] = "auto"
 
+        prompt_cache_key = self.additional_params.get("prompt_cache_key")
+        if prompt_cache_key:
+            model_extras["prompt_cache_key"] = prompt_cache_key
+
+        if model_extras:
+            params["model_extras"] = model_extras
+
         additional_params = self.additional_params
         additional_drop_params = additional_params.get("additional_drop_params")
         drop_params = additional_params.get("drop_params")
@@ -1063,10 +1071,15 @@ class AzureCompletion(BaseLLM):
         """Extract token usage from Azure response."""
         if hasattr(response, "usage") and response.usage:
             usage = response.usage
+            cached_tokens = 0
+            prompt_details = getattr(usage, "prompt_tokens_details", None)
+            if prompt_details:
+                cached_tokens = getattr(prompt_details, "cached_tokens", 0) or 0
             return {
                 "prompt_tokens": getattr(usage, "prompt_tokens", 0),
                 "completion_tokens": getattr(usage, "completion_tokens", 0),
                 "total_tokens": getattr(usage, "total_tokens", 0),
                 "cached_prompt_tokens": cached_tokens,
             }
         return {"total_tokens": 0}
@@ -1295,11 +1295,13 @@ class GeminiCompletion(BaseLLM):
         """Extract token usage from Gemini response."""
         if response.usage_metadata:
             usage = response.usage_metadata
+            cached_tokens = getattr(usage, "cached_content_token_count", 0) or 0
             return {
                 "prompt_token_count": getattr(usage, "prompt_token_count", 0),
                 "candidates_token_count": getattr(usage, "candidates_token_count", 0),
                 "total_token_count": getattr(usage, "total_token_count", 0),
                 "total_tokens": getattr(usage, "total_token_count", 0),
+                "cached_prompt_tokens": cached_tokens,
             }
         return {"total_tokens": 0}
@@ -1094,11 +1094,7 @@ class OpenAICompletion(BaseLLM):
             if reasoning_items:
                 self._last_reasoning_items = reasoning_items
             if event.response and event.response.usage:
-                usage = {
-                    "prompt_tokens": event.response.usage.input_tokens,
-                    "completion_tokens": event.response.usage.output_tokens,
-                    "total_tokens": event.response.usage.total_tokens,
-                }
+                usage = self._extract_responses_token_usage(event.response)
                 self._track_token_usage_internal(usage)
 
             # If parse_tool_outputs is enabled, return structured result
@@ -1222,11 +1218,7 @@ class OpenAICompletion(BaseLLM):
             if reasoning_items:
                 self._last_reasoning_items = reasoning_items
             if event.response and event.response.usage:
-                usage = {
-                    "prompt_tokens": event.response.usage.input_tokens,
-                    "completion_tokens": event.response.usage.output_tokens,
-                    "total_tokens": event.response.usage.total_tokens,
-                }
+                usage = self._extract_responses_token_usage(event.response)
                 self._track_token_usage_internal(usage)
 
             # If parse_tool_outputs is enabled, return structured result
@@ -1310,11 +1302,18 @@ class OpenAICompletion(BaseLLM):
     def _extract_responses_token_usage(self, response: Response) -> dict[str, Any]:
         """Extract token usage from Responses API response."""
         if response.usage:
-            return {
+            result = {
                 "prompt_tokens": response.usage.input_tokens,
                 "completion_tokens": response.usage.output_tokens,
                 "total_tokens": response.usage.total_tokens,
             }
+            # Extract cached prompt tokens from input_tokens_details
+            input_details = getattr(response.usage, "input_tokens_details", None)
+            if input_details:
+                result["cached_prompt_tokens"] = (
+                    getattr(input_details, "cached_tokens", 0) or 0
+                )
+            return result
         return {"total_tokens": 0}
 
     def _extract_builtin_tool_outputs(self, response: Response) -> ResponsesAPIResult:
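The Anthropic, Azure, Gemini, and OpenAI changes all share one defensive pattern: read optional usage fields with getattr(..., default) so SDK responses that omit cache details still yield a well-formed dict. A minimal standalone version of the pattern (the names here are illustrative, not a provider SDK):

from types import SimpleNamespace
from typing import Any


def extract_usage(usage: Any) -> dict[str, int]:
    result = {
        "prompt_tokens": getattr(usage, "prompt_tokens", 0),
        "completion_tokens": getattr(usage, "completion_tokens", 0),
        "total_tokens": getattr(usage, "total_tokens", 0),
    }
    details = getattr(usage, "prompt_tokens_details", None)
    if details:
        # `or 0` guards against an explicit None in the payload
        result["cached_prompt_tokens"] = getattr(details, "cached_tokens", 0) or 0
    return result


usage = SimpleNamespace(
    prompt_tokens=255,
    completion_tokens=33,
    total_tokens=288,
    prompt_tokens_details=SimpleNamespace(cached_tokens=0),
)
assert extract_usage(usage)["cached_prompt_tokens"] == 0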
@@ -1696,6 +1695,99 @@ class OpenAICompletion(BaseLLM):
 
         return content
 
+    def _finalize_streaming_response(
+        self,
+        full_response: str,
+        tool_calls: dict[int, dict[str, Any]],
+        usage_data: dict[str, int],
+        params: dict[str, Any],
+        available_functions: dict[str, Any] | None = None,
+        from_task: Any | None = None,
+        from_agent: Any | None = None,
+    ) -> str | list[dict[str, Any]]:
+        """Finalize a streaming response with usage tracking, tool call handling, and events.
+
+        Args:
+            full_response: The accumulated text response from the stream.
+            tool_calls: Accumulated tool calls from the stream, keyed by index.
+            usage_data: Token usage data from the stream.
+            params: The completion parameters containing messages.
+            available_functions: Available functions for tool calling.
+            from_task: Task that initiated the call.
+            from_agent: Agent that initiated the call.
+
+        Returns:
+            Tool calls list when tools were invoked without available_functions,
+            tool execution result when available_functions is provided,
+            or the text response string.
+        """
+        self._track_token_usage_internal(usage_data)
+
+        if tool_calls and not available_functions:
+            tool_calls_list = [
+                {
+                    "id": call_data["id"],
+                    "type": "function",
+                    "function": {
+                        "name": call_data["name"],
+                        "arguments": call_data["arguments"],
+                    },
+                    "index": call_data["index"],
+                }
+                for call_data in tool_calls.values()
+            ]
+            self._emit_call_completed_event(
+                response=tool_calls_list,
+                call_type=LLMCallType.TOOL_CALL,
+                from_task=from_task,
+                from_agent=from_agent,
+                messages=params["messages"],
+            )
+            return tool_calls_list
+
+        if tool_calls and available_functions:
+            for call_data in tool_calls.values():
+                function_name = call_data["name"]
+                arguments = call_data["arguments"]
+
+                if not function_name or not arguments:
+                    continue
+
+                if function_name not in available_functions:
+                    logging.warning(
+                        f"Function '{function_name}' not found in available functions"
+                    )
+                    continue
+
+                try:
+                    function_args = json.loads(arguments)
+                except json.JSONDecodeError as e:
+                    logging.error(f"Failed to parse streamed tool arguments: {e}")
+                    continue
+
+                result = self._handle_tool_execution(
+                    function_name=function_name,
+                    function_args=function_args,
+                    available_functions=available_functions,
+                    from_task=from_task,
+                    from_agent=from_agent,
+                )
+
+                if result is not None:
+                    return result
+
+        full_response = self._apply_stop_words(full_response)
+
+        self._emit_call_completed_event(
+            response=full_response,
+            call_type=LLMCallType.LLM_CALL,
+            from_task=from_task,
+            from_agent=from_agent,
+            messages=params["messages"],
+        )
+
+        return full_response
+
     def _handle_streaming_completion(
         self,
         params: dict[str, Any],
@@ -1703,7 +1795,7 @@ class OpenAICompletion(BaseLLM):
         from_task: Any | None = None,
         from_agent: Any | None = None,
         response_model: type[BaseModel] | None = None,
-    ) -> str | BaseModel:
+    ) -> str | list[dict[str, Any]] | BaseModel:
         """Handle streaming chat completion."""
         full_response = ""
         tool_calls: dict[int, dict[str, Any]] = {}
@@ -1820,54 +1912,20 @@ class OpenAICompletion(BaseLLM):
                 response_id=response_id_stream,
             )
 
-        self._track_token_usage_internal(usage_data)
-
-        if tool_calls and available_functions:
-            for call_data in tool_calls.values():
-                function_name = call_data["name"]
-                arguments = call_data["arguments"]
-
-                # Skip if function name is empty or arguments are empty
-                if not function_name or not arguments:
-                    continue
-
-                # Check if function exists in available functions
-                if function_name not in available_functions:
-                    logging.warning(
-                        f"Function '{function_name}' not found in available functions"
-                    )
-                    continue
-
-                try:
-                    function_args = json.loads(arguments)
-                except json.JSONDecodeError as e:
-                    logging.error(f"Failed to parse streamed tool arguments: {e}")
-                    continue
-
-                result = self._handle_tool_execution(
-                    function_name=function_name,
-                    function_args=function_args,
-                    available_functions=available_functions,
-                    from_task=from_task,
-                    from_agent=from_agent,
-                )
-
-                if result is not None:
-                    return result
-
-        full_response = self._apply_stop_words(full_response)
-
-        self._emit_call_completed_event(
-            response=full_response,
-            call_type=LLMCallType.LLM_CALL,
+        result = self._finalize_streaming_response(
+            full_response=full_response,
+            tool_calls=tool_calls,
+            usage_data=usage_data,
+            params=params,
+            available_functions=available_functions,
             from_task=from_task,
             from_agent=from_agent,
-            messages=params["messages"],
         )
-
-        return self._invoke_after_llm_call_hooks(
-            params["messages"], full_response, from_agent
-        )
+        if isinstance(result, str):
+            return self._invoke_after_llm_call_hooks(
+                params["messages"], result, from_agent
+            )
+        return result
 
     async def _ahandle_completion(
         self,
@@ -2016,7 +2074,7 @@ class OpenAICompletion(BaseLLM):
         from_task: Any | None = None,
         from_agent: Any | None = None,
         response_model: type[BaseModel] | None = None,
-    ) -> str | BaseModel:
+    ) -> str | list[dict[str, Any]] | BaseModel:
         """Handle async streaming chat completion."""
         full_response = ""
         tool_calls: dict[int, dict[str, Any]] = {}
@@ -2142,51 +2200,16 @@ class OpenAICompletion(BaseLLM):
                 response_id=response_id_stream,
             )
 
-        self._track_token_usage_internal(usage_data)
-
-        if tool_calls and available_functions:
-            for call_data in tool_calls.values():
-                function_name = call_data["name"]
-                arguments = call_data["arguments"]
-
-                if not function_name or not arguments:
-                    continue
-
-                if function_name not in available_functions:
-                    logging.warning(
-                        f"Function '{function_name}' not found in available functions"
-                    )
-                    continue
-
-                try:
-                    function_args = json.loads(arguments)
-                except json.JSONDecodeError as e:
-                    logging.error(f"Failed to parse streamed tool arguments: {e}")
-                    continue
-
-                result = self._handle_tool_execution(
-                    function_name=function_name,
-                    function_args=function_args,
-                    available_functions=available_functions,
-                    from_task=from_task,
-                    from_agent=from_agent,
-                )
-
-                if result is not None:
-                    return result
-
-        full_response = self._apply_stop_words(full_response)
-
-        self._emit_call_completed_event(
-            response=full_response,
-            call_type=LLMCallType.LLM_CALL,
+        return self._finalize_streaming_response(
+            full_response=full_response,
+            tool_calls=tool_calls,
+            usage_data=usage_data,
+            params=params,
+            available_functions=available_functions,
             from_task=from_task,
             from_agent=from_agent,
-            messages=params["messages"],
         )
-
-        return full_response
 
     def supports_function_calling(self) -> bool:
         """Check if the model supports function calling."""
         return not self.is_o1_model
@@ -2240,11 +2263,18 @@ class OpenAICompletion(BaseLLM):
         """Extract token usage from OpenAI ChatCompletion or ChatCompletionChunk response."""
         if hasattr(response, "usage") and response.usage:
            usage = response.usage
-            return {
+            result = {
                 "prompt_tokens": getattr(usage, "prompt_tokens", 0),
                 "completion_tokens": getattr(usage, "completion_tokens", 0),
                 "total_tokens": getattr(usage, "total_tokens", 0),
             }
+            # Extract cached prompt tokens from prompt_tokens_details
+            prompt_details = getattr(usage, "prompt_tokens_details", None)
+            if prompt_details:
+                result["cached_prompt_tokens"] = (
+                    getattr(prompt_details, "cached_tokens", 0) or 0
+                )
+            return result
         return {"total_tokens": 0}
 
     def _format_messages(self, messages: str | list[LLMMessage]) -> list[LLMMessage]:
@@ -31,6 +31,7 @@ from pydantic_core import PydanticCustomError
 from typing_extensions import Self
 
 from crewai.agents.agent_builder.base_agent import BaseAgent
+from crewai.context import reset_current_task_id, set_current_task_id
 from crewai.core.providers.content_processor import process_content
 from crewai.events.event_bus import crewai_event_bus
 from crewai.events.types.task_events import (
@@ -561,6 +562,7 @@ class Task(BaseModel):
         tools: list[Any] | None,
     ) -> TaskOutput:
         """Run the core execution logic of the task asynchronously."""
+        task_id_token = set_current_task_id(str(self.id))
         self._store_input_files()
         try:
             agent = agent or self.agent
@@ -648,6 +650,7 @@ class Task(BaseModel):
             raise e  # Re-raise the exception after emitting the event
         finally:
             clear_task_files(self.id)
+            reset_current_task_id(task_id_token)
 
     def _execute_core(
         self,
@@ -656,6 +659,7 @@ class Task(BaseModel):
         tools: list[Any] | None,
     ) -> TaskOutput:
         """Run the core execution logic of the task."""
+        task_id_token = set_current_task_id(str(self.id))
         self._store_input_files()
         try:
             agent = agent or self.agent
@@ -744,6 +748,7 @@ class Task(BaseModel):
             raise e  # Re-raise the exception after emitting the event
         finally:
             clear_task_files(self.id)
+            reset_current_task_id(task_id_token)
 
     def _post_agent_execution(self, agent: BaseAgent) -> None:
         pass
@@ -22,9 +22,9 @@
     "expected_output": "\nThis is the expected criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
     "human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
     "getting_input": "This is the agent's final answer: {final_answer}\n\n",
-    "summarizer_system_message": "You are a helpful assistant that summarizes text.",
-    "summarize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
-    "summary": "This is a summary of our conversation so far:\n{merged_summary}",
+    "summarizer_system_message": "You are a precise assistant that creates structured summaries of agent conversations. You preserve critical context needed for seamless task continuation.",
+    "summarize_instruction": "Analyze the following conversation and create a structured summary that preserves all information needed to continue the task seamlessly.\n\n<conversation>\n{conversation}\n</conversation>\n\nCreate a summary with these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2. **Current State**: What has been completed so far? What step is the agent on?\n3. **Important Discoveries**: Key facts, data, tool results, or findings that must not be lost.\n4. **Next Steps**: What should the agent do next based on the conversation?\n5. **Context to Preserve**: Any specific values, names, URLs, code snippets, or details referenced in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>",
+    "summary": "<summary>\n{merged_summary}\n</summary>\n\nContinue the task from where the conversation left off. The above is a structured summary of prior context.",
     "manager_request": "Your best answer to your coworker asking you this, accounting for the context shared.",
     "formatted_task_instructions": "Format your final answer according to the following OpenAPI schema: {output_format}\n\nIMPORTANT: Preserve the original content exactly as-is. Do NOT rewrite, paraphrase, or modify the meaning of the content. Only structure it to match the schema format.\n\nDo not include the OpenAPI schema in the final output. Ensure the final output does not include any code block markers like ```json or ```python.",
     "conversation_history_instruction": "You are a member of a crew collaborating to achieve a common goal. Your task is a specific action that contributes to this larger objective. For additional context, please review the conversation history between you and the user that led to the initiation of this crew. Use any relevant information or feedback from the conversation to inform your task execution and ensure your response aligns with both the immediate task and the crew's overall goals.",
@@ -1,37 +0,0 @@
-"""Human-in-the-loop (HITL) type definitions.
-
-This module provides type definitions for human-in-the-loop interactions
-in crew executions.
-"""
-
-from typing import TypedDict
-
-
-class HITLResumeInfo(TypedDict, total=False):
-    """HITL resume information passed from flow to crew.
-
-    Attributes:
-        task_id: Unique identifier for the task.
-        crew_execution_id: Unique identifier for the crew execution.
-        task_key: Key identifying the specific task.
-        task_output: Output from the task before human intervention.
-        human_feedback: Feedback provided by the human.
-        previous_messages: History of messages in the conversation.
-    """
-
-    task_id: str
-    crew_execution_id: str
-    task_key: str
-    task_output: str
-    human_feedback: str
-    previous_messages: list[dict[str, str]]
-
-
-class CrewInputsWithHITL(TypedDict, total=False):
-    """Crew inputs that may contain HITL resume information.
-
-    Attributes:
-        _hitl_resume: Optional HITL resume information for continuing execution.
-    """
-
-    _hitl_resume: HITLResumeInfo
@@ -2,6 +2,7 @@ from __future__ import annotations
 
 import asyncio
 from collections.abc import Callable, Sequence
+import concurrent.futures
 import json
 import re
 from typing import TYPE_CHECKING, Any, Final, Literal, TypedDict
@@ -640,6 +641,180 @@ def handle_context_length(
     )
 
 
+def _estimate_token_count(text: str) -> int:
+    """Estimate token count using a conservative cross-provider heuristic.
+
+    Args:
+        text: The text to estimate tokens for.
+
+    Returns:
+        Estimated token count (roughly 1 token per 4 characters).
+    """
+    return len(text) // 4
+
+
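A quick sanity check of the heuristic above (four characters per token, floor division):

assert _estimate_token_count("a" * 100) == 25
assert _estimate_token_count("abc") == 0  # short strings round down to zero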
+def _format_messages_for_summary(messages: list[LLMMessage]) -> str:
+    """Format messages with role labels for summarization.
+
+    Skips system messages. Handles None content, tool_calls, and
+    multimodal content blocks.
+
+    Args:
+        messages: List of messages to format.
+
+    Returns:
+        Role-labeled conversation text.
+    """
+    lines: list[str] = []
+    for msg in messages:
+        role = msg.get("role", "user")
+        if role == "system":
+            continue
+
+        content = msg.get("content")
+        if content is None:
+            # Check for tool_calls on assistant messages with no content
+            tool_calls = msg.get("tool_calls")
+            if tool_calls:
+                tool_names = []
+                for tc in tool_calls:
+                    func = tc.get("function", {})
+                    name = (
+                        func.get("name", "unknown")
+                        if isinstance(func, dict)
+                        else "unknown"
+                    )
+                    tool_names.append(name)
+                content = f"[Called tools: {', '.join(tool_names)}]"
+            else:
+                content = ""
+        elif isinstance(content, list):
+            # Multimodal content blocks — extract text parts
+            text_parts = [
+                block.get("text", "")
+                for block in content
+                if isinstance(block, dict) and block.get("type") == "text"
+            ]
+            content = " ".join(text_parts) if text_parts else "[multimodal content]"
+
+        if role == "assistant":
+            label = "[ASSISTANT]:"
+        elif role == "tool":
+            tool_name = msg.get("name", "unknown")
+            label = f"[TOOL_RESULT ({tool_name})]:"
+        else:
+            label = "[USER]:"
+
+        lines.append(f"{label} {content}")
+
+    return "\n\n".join(lines)
+
+
+def _split_messages_into_chunks(
+    messages: list[LLMMessage], max_tokens: int
+) -> list[list[LLMMessage]]:
+    """Split messages into chunks at message boundaries.
+
+    Excludes system messages from chunks. Each chunk stays under
+    max_tokens based on estimated token count.
+
+    Args:
+        messages: List of messages to split.
+        max_tokens: Maximum estimated tokens per chunk.
+
+    Returns:
+        List of message chunks.
+    """
+    non_system = [m for m in messages if m.get("role") != "system"]
+    if not non_system:
+        return []
+
+    chunks: list[list[LLMMessage]] = []
+    current_chunk: list[LLMMessage] = []
+    current_tokens = 0
+
+    for msg in non_system:
+        content = msg.get("content")
+        if content is None:
+            msg_text = ""
+        elif isinstance(content, list):
+            msg_text = str(content)
+        else:
+            msg_text = str(content)
+
+        msg_tokens = _estimate_token_count(msg_text)
+
+        # If adding this message would exceed the limit and we already have
+        # messages in the current chunk, start a new chunk
+        if current_chunk and (current_tokens + msg_tokens) > max_tokens:
+            chunks.append(current_chunk)
+            current_chunk = []
+            current_tokens = 0
+
+        current_chunk.append(msg)
+        current_tokens += msg_tokens
+
+    if current_chunk:
+        chunks.append(current_chunk)
+
+    return chunks
+
+
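An illustrative call (the message dicts follow the LLMMessage shape used above; the numbers assume the 4-characters-per-token estimate):

messages = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "x" * 4000},       # ~1000 estimated tokens
    {"role": "assistant", "content": "y" * 4000},  # ~1000 estimated tokens
]
chunks = _split_messages_into_chunks(messages, max_tokens=1000)
# The system message is excluded; each remaining message lands in its own
# chunk because adding the second would exceed max_tokens.
assert len(chunks) == 2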
+def _extract_summary_tags(text: str) -> str:
+    """Extract content between <summary></summary> tags.
+
+    Falls back to the full text if no tags are found.
+
+    Args:
+        text: Text potentially containing summary tags.
+
+    Returns:
+        Extracted summary content, or full text if no tags found.
+    """
+    match = re.search(r"<summary>(.*?)</summary>", text, re.DOTALL)
+    if match:
+        return match.group(1).strip()
+    return text.strip()
+
+
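The fallback matters when a model ignores the tag instructions; both paths in one check:

assert _extract_summary_tags("<summary>\nKey facts.\n</summary>") == "Key facts."
assert _extract_summary_tags("no tags here") == "no tags here"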
+async def _asummarize_chunks(
+    chunks: list[list[LLMMessage]],
+    llm: LLM | BaseLLM,
+    callbacks: list[TokenCalcHandler],
+    i18n: I18N,
+) -> list[SummaryContent]:
+    """Summarize multiple message chunks concurrently using asyncio.
+
+    Args:
+        chunks: List of message chunks to summarize.
+        llm: LLM instance (must support ``acall``).
+        callbacks: List of callbacks for the LLM.
+        i18n: I18N instance for prompt templates.
+
+    Returns:
+        Ordered list of summary contents, one per chunk.
+    """
+
+    async def _summarize_one(chunk: list[LLMMessage]) -> SummaryContent:
+        conversation_text = _format_messages_for_summary(chunk)
+        summarization_messages = [
+            format_message_for_llm(
+                i18n.slice("summarizer_system_message"), role="system"
+            ),
+            format_message_for_llm(
+                i18n.slice("summarize_instruction").format(
+                    conversation=conversation_text
+                ),
+            ),
+        ]
+        summary = await llm.acall(summarization_messages, callbacks=callbacks)
+        extracted = _extract_summary_tags(str(summary))
+        return {"content": extracted}
+
+    results = await asyncio.gather(*[_summarize_one(chunk) for chunk in chunks])
+    return list(results)
+
+
 def summarize_messages(
     messages: list[LLMMessage],
     llm: LLM | BaseLLM,
@@ -649,6 +824,10 @@ def summarize_messages(
 ) -> None:
     """Summarize messages to fit within context window.
 
+    Uses structured context compaction: preserves system messages,
+    splits at message boundaries, formats with role labels, and
+    produces structured summaries for seamless task continuation.
+
     Preserves any files attached to user messages and re-attaches them to
     the summarized message. Files from all user messages are merged.
 
@@ -657,49 +836,74 @@ def summarize_messages(
         llm: LLM instance for summarization
         callbacks: List of callbacks for LLM
         i18n: I18N instance for messages
         verbose: Whether to print progress.
     """
     # 1. Extract & preserve file attachments from user messages
     preserved_files: dict[str, Any] = {}
     for msg in messages:
         if msg.get("role") == "user" and msg.get("files"):
             preserved_files.update(msg["files"])
 
-    messages_string = " ".join(
-        [str(message.get("content", "")) for message in messages]
-    )
-    cut_size = llm.get_context_window_size()
+    # 2. Extract system messages — never summarize them
+    system_messages = [m for m in messages if m.get("role") == "system"]
+    non_system_messages = [m for m in messages if m.get("role") != "system"]
 
-    messages_groups = [
-        {"content": messages_string[i : i + cut_size]}
-        for i in range(0, len(messages_string), cut_size)
-    ]
+    # If there are only system messages (or no non-system messages), nothing to summarize
+    if not non_system_messages:
+        return
 
-    summarized_contents: list[SummaryContent] = []
+    # 3. Split non-system messages into chunks at message boundaries
+    max_tokens = llm.get_context_window_size()
+    chunks = _split_messages_into_chunks(non_system_messages, max_tokens)
 
-    total_groups = len(messages_groups)
-    for idx, group in enumerate(messages_groups, 1):
+    # 4. Summarize each chunk with role-labeled formatting
+    total_chunks = len(chunks)
+
+    if total_chunks <= 1:
+        # Single chunk — no benefit from async overhead
+        summarized_contents: list[SummaryContent] = []
+        for idx, chunk in enumerate(chunks, 1):
+            if verbose:
+                Printer().print(
+                    content=f"Summarizing {idx}/{total_chunks}...",
+                    color="yellow",
+                )
+            conversation_text = _format_messages_for_summary(chunk)
+            summarization_messages = [
+                format_message_for_llm(
+                    i18n.slice("summarizer_system_message"), role="system"
+                ),
+                format_message_for_llm(
+                    i18n.slice("summarize_instruction").format(
+                        conversation=conversation_text
+                    ),
+                ),
+            ]
+            summary = llm.call(summarization_messages, callbacks=callbacks)
+            extracted = _extract_summary_tags(str(summary))
+            summarized_contents.append({"content": extracted})
+    else:
+        # Multiple chunks — summarize in parallel via asyncio
         if verbose:
             Printer().print(
-                content=f"Summarizing {idx}/{total_groups}...",
+                content=f"Summarizing {total_chunks} chunks in parallel...",
                 color="yellow",
             )
 
-        summarization_messages = [
-            format_message_for_llm(
-                i18n.slice("summarizer_system_message"), role="system"
-            ),
-            format_message_for_llm(
-                i18n.slice("summarize_instruction").format(group=group["content"]),
-            ),
-        ]
-        summary = llm.call(
-            summarization_messages,
-            callbacks=callbacks,
-        )
-        summarized_contents.append({"content": str(summary)})
+        coro = _asummarize_chunks(
+            chunks=chunks, llm=llm, callbacks=callbacks, i18n=i18n
+        )
+        if is_inside_event_loop():
+            with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
+                summarized_contents = pool.submit(asyncio.run, coro).result()
+        else:
+            summarized_contents = asyncio.run(coro)
 
-    merged_summary = " ".join(content["content"] for content in summarized_contents)
+    merged_summary = "\n\n".join(content["content"] for content in summarized_contents)
 
+    # 6. Reconstruct messages: [system messages...] + [summary user message]
     messages.clear()
+    messages.extend(system_messages)
 
     summary_message = format_message_for_llm(
         i18n.slice("summary").format(merged_summary=merged_summary)
     )
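The sync/async bridge in the parallel branch is worth seeing in isolation: asyncio.run fails inside a running event loop, so when called from async code the coroutine is shipped to a one-worker thread pool that owns a fresh loop. A hedged sketch with a stand-in coroutine and a plausible is_inside_event_loop helper (the real helper's implementation is not shown in this diff):

import asyncio
import concurrent.futures


def is_inside_event_loop() -> bool:
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False


async def summarize_all() -> list[str]:
    return ["chunk summary"]  # stand-in for _asummarize_chunks(...)


def run_summaries() -> list[str]:
    coro = summarize_all()
    if is_inside_event_loop():
        # A dedicated worker thread gets its own event loop via asyncio.run.
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(asyncio.run, coro).result()
    return asyncio.run(coro)


print(run_summaries())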
@@ -832,7 +1036,7 @@ def load_agent_from_repository(from_repository: str) -> dict[str, Any]:
 
     client = PlusAPI(api_key=get_auth_token())
     _print_current_organization()
-    response = client.get_agent(from_repository)
+    response = asyncio.run(client.get_agent(from_repository))
     if response.status_code == 404:
         raise AgentRepositoryError(
             f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization."
@@ -606,9 +606,10 @@ def test_lite_agent_with_invalid_llm():
 
 
 @patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
+@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool.requests.post")
 @patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get")
 @pytest.mark.vcr()
-def test_agent_kickoff_with_platform_tools(mock_get):
+def test_agent_kickoff_with_platform_tools(mock_get, mock_post):
     """Test that Agent.kickoff() properly integrates platform tools with LiteAgent"""
     mock_response = Mock()
     mock_response.raise_for_status.return_value = None
@@ -632,6 +633,15 @@ def test_agent_kickoff_with_platform_tools(mock_get, mock_post):
     }
     mock_get.return_value = mock_response
 
+    # Mock the platform tool execution
+    mock_post_response = Mock()
+    mock_post_response.ok = True
+    mock_post_response.json.return_value = {
+        "success": True,
+        "issue_url": "https://github.com/test/repo/issues/1"
+    }
+    mock_post.return_value = mock_post_response
+
     agent = Agent(
         role="Test Agent",
         goal="Test goal",
@@ -1,98 +1,227 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour personal goal is: Test goal\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: create_issue\nTool Arguments: {''title'': {''description'': ''Issue title'', ''type'': ''str''}, ''body'': {''description'': ''Issue body'', ''type'': ''Union[str, NoneType]''}}\nTool Description: Create a GitHub issue\nDetailed Parameter Structure:\nObject with properties:\n - title: Issue title (required)\n - body: Issue body (optional)\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [create_issue], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information
|
||||
is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```"}, {"role": "user", "content": "Create a GitHub issue"}], "model": "gpt-3.5-turbo", "stream": false}'
|
||||
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
|
||||
personal goal is: Test goal"},{"role":"user","content":"\nCurrent Task: Create
|
||||
a GitHub issue"}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_issue","description":"Create
|
||||
a GitHub issue","strict":true,"parameters":{"additionalProperties":false,"properties":{"title":{"description":"Issue
|
||||
title","title":"Title","type":"string"},"body":{"default":null,"description":"Issue
|
||||
body","title":"Body","type":"string"}},"required":["title","body"],"type":"object"}}}]}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- gzip, deflate
|
||||
- ACCEPT-ENCODING-XXX
|
||||
authorization:
|
||||
- AUTHORIZATION-XXX
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '1233'
|
||||
- '596'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.openai.com
|
||||
user-agent:
|
||||
- OpenAI/Python 1.109.1
|
||||
x-stainless-arch:
|
||||
- arm64
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- MacOS
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 1.109.1
|
||||
- 1.83.0
|
||||
x-stainless-read-timeout:
|
||||
- '600'
|
||||
- X-STAINLESS-READ-TIMEOUT-XXX
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
- 3.13.5
|
||||
method: POST
|
||||
uri: https://api.openai.com/v1/chat/completions
|
||||
response:
|
||||
body:
|
||||
string: "{\n \"id\": \"chatcmpl-CULxKTEIB85AVItcEQ09z4Xi0JCID\",\n \"object\": \"chat.completion\",\n \"created\": 1761350274,\n \"model\": \"gpt-3.5-turbo-0125\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"I will need more specific information to create a GitHub issue. Could you please provide more details such as the title and body of the issue you would like to create?\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 255,\n \"completion_tokens\": 33,\n \"total_tokens\": 288,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n \
|
||||
\ }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": null\n}\n"
|
||||
string: "{\n \"id\": \"chatcmpl-D6L3fqygkUIZ3bN4wvSpAhdaSk7MF\",\n \"object\":
|
||||
\"chat.completion\",\n \"created\": 1770403287,\n \"model\": \"gpt-3.5-turbo-0125\",\n
|
||||
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
|
||||
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
|
||||
\ \"id\": \"call_RuWuYzjzgRL3byVGhLlPi0rq\",\n \"type\":
|
||||
\"function\",\n \"function\": {\n \"name\": \"create_issue\",\n
|
||||
\ \"arguments\": \"{\\\"title\\\":\\\"Test issue\\\",\\\"body\\\":\\\"This
|
||||
is a test issue created for testing purposes.\\\"}\"\n }\n }\n
|
||||
\ ],\n \"refusal\": null,\n \"annotations\": []\n },\n
|
||||
\ \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n }\n
|
||||
\ ],\n \"usage\": {\n \"prompt_tokens\": 93,\n \"completion_tokens\":
|
||||
28,\n \"total_tokens\": 121,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
|
||||
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": null\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- 993d6b4be9862379-SJC
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Fri, 24 Oct 2025 23:57:54 GMT
|
||||
- Fri, 06 Feb 2026 18:41:28 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Set-Cookie:
|
||||
- __cf_bm=WY9bgemMDI_hUYISAPlQ2a.DBGeZfM6AjVEa3SKNg1c-1761350274-1.0.1.1-K3Qm2cl6IlDAgmocoKZ8IMUTmue6Q81hH9stECprUq_SM8LF8rR9d1sHktvRCN3.jEM.twEuFFYDNpBnN8NBRJFZcea1yvpm8Uo0G_UhyDs; path=/; expires=Sat, 25-Oct-25 00:27:54 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
- _cfuvid=JklLS4i3hBGELpS9cz1KMpTbj72hCwP41LyXDSxWIv8-1761350274521-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
|
||||
- SET-COOKIE-XXX
|
||||
Strict-Transport-Security:
|
||||
- max-age=31536000; includeSubDomains; preload
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- nosniff
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- X-Request-ID
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '487'
|
||||
- '1406'
|
||||
openai-project:
|
||||
- proj_xitITlrFeen7zjNSzML82h9x
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
x-envoy-upstream-service-time:
|
||||
- '526'
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- '50000000'
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9999'
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '49999727'
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- 6ms
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- req_1708dc0928c64882aaa5bc2c168c140f
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. Test backstory\nYour
personal goal is: Test goal"},{"role":"user","content":"\nCurrent Task: Create
a GitHub issue"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_RuWuYzjzgRL3byVGhLlPi0rq","type":"function","function":{"name":"create_issue","arguments":"{\"title\":\"Test
issue\",\"body\":\"This is a test issue created for testing purposes.\"}"}}]},{"role":"tool","tool_call_id":"call_RuWuYzjzgRL3byVGhLlPi0rq","name":"create_issue","content":"{\n \"success\":
true,\n \"issue_url\": \"https://github.com/test/repo/issues/1\"\n}"}],"model":"gpt-3.5-turbo","tool_choice":"auto","tools":[{"type":"function","function":{"name":"create_issue","description":"Create
a GitHub issue","strict":true,"parameters":{"additionalProperties":false,"properties":{"title":{"description":"Issue
title","title":"Title","type":"string"},"body":{"default":null,"description":"Issue
body","title":"Body","type":"string"}},"required":["title","body"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1028'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n  \"id\": \"chatcmpl-D6L3hfuBxk36LIb3ekD1IVwFD5VVL\",\n  \"object\":
\"chat.completion\",\n  \"created\": 1770403289,\n  \"model\": \"gpt-3.5-turbo-0125\",\n
\ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
\"assistant\",\n        \"content\": \"I have successfully created a GitHub
issue for testing purposes. You can view the issue at this URL: [Test issue](https://github.com/test/repo/issues/1)\",\n
\ \"refusal\": null,\n        \"annotations\": []\n      },\n      \"logprobs\":
null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\":
154,\n    \"completion_tokens\": 36,\n    \"total_tokens\": 190,\n    \"prompt_tokens_details\":
{\n      \"cached_tokens\": 0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\":
{\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\":
0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\":
\"default\",\n  \"system_fingerprint\": null\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Fri, 06 Feb 2026 18:41:29 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '888'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK

@@ -1,400 +1,428 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '694'
- '404'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- '600.0'
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n  \"id\": \"chatcmpl-BgufUtDqGzvqPZx2NmkqqxdW4G8rQ\",\n  \"object\": \"chat.completion\",\n  \"created\": 1749567308,\n  \"model\": \"gpt-4o-mini-2024-07-18\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"Thought: I now can give a great answer \\nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n   - Position: Forward\\n   - Key Attributes: Dribbling, vision, goal-scoring ability.\\n   - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n   - Position: Forward\\n   - Key Attributes: Speed, technique, finishing.\\n   - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple\
\ domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\n   - Position: Forward\\n   - Key Attributes: Power, speed, goal-scoring instinct.\\n   - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, creativity.\\n   - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n   - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Neymar Jr. (Al Hilal)**\\n   - Position: Forward\\n   - Key Attributes: Flair, dribbling, creativity.\\n   - Achievements: Multiple domestic league titles, Champions League runner-up.\\n\\n7. **Robert Lewandowski (FC Barcelona)**\\n   - Position: Forward\\n   - Key Attributes: Finishing,\
\ positioning, aerial ability.\\n   - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n8. **Mohamed Salah (Liverpool)**\\n   - Position: Forward\\n   - Key Attributes: Speed, finishing, dribbling.\\n   - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n9. **Vinícius Júnior (Real Madrid)**\\n   - Position: Forward\\n   - Key Attributes: Speed, dribbling, creativity.\\n   - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\\n\\n10. **Luka Modrić (Real Madrid)**\\n    - Position: Midfielder\\n    - Key Attributes: Passing, vision, tactical intelligence.\\n    - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for\
\ evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023.\",\n        \"refusal\": null,\n        \"annotations\": []\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 122,\n    \"completion_tokens\": 643,\n    \"total_tokens\": 765,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\": 0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\": 0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\": \"default\",\n  \"system_fingerprint\": \"fp_34a54ae93c\"\n}\n"
string: "{\n  \"id\": \"chatcmpl-D6L3hzoRVVEa07HZsM9wpi2RVRKQp\",\n  \"object\":
\"chat.completion\",\n  \"created\": 1770403289,\n  \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
\"assistant\",\n        \"content\": \"Here is a structured list of the top
10 best soccer players in the world as of 2024, based on recent performances,
awards, and overall impact on the game:\\n\\n1. **Kylian Mbapp\xE9**  \\n
\  - Nationality: French  \\n   - Club: Paris Saint-Germain (PSG)  \\n   -
Position: Forward  \\n   - Key Highlights: Multiple Ligue 1 titles, World
Cup winner (2018), known for speed, dribbling, and scoring prowess.\\n\\n2.
**Erling Haaland**  \\n   - Nationality: Norwegian  \\n   - Club: Manchester
City  \\n   - Position: Striker  \\n   - Key Highlights: Premier League Golden
Boot winner, incredible goal-scoring record, physical presence, and finishing
skills.\\n\\n3. **Lionel Messi**  \\n   - Nationality: Argentine  \\n   -
Club: Inter Miami  \\n   - Position: Forward/Attacking Midfielder  \\n   -
Key Highlights: Seven Ballon d\u2019Or awards, World Cup winner (2022), exceptional
playmaking and dribbling ability.\\n\\n4. **Kevin De Bruyne**  \\n   - Nationality:
Belgian  \\n   - Club: Manchester City  \\n   - Position: Midfielder  \\n
\  - Key Highlights: One of the best playmakers globally, assists leader,
consistent high-level performance in the Premier League.\\n\\n5. **Robert
Lewandowski**  \\n   - Nationality: Polish  \\n   - Club: FC Barcelona  \\n
\  - Position: Striker  \\n   - Key Highlights: Exceptional goal-scoring record,
multiple Bundesliga top scorer awards, key figure in Bayern Munich\u2019s
dominance before transferring.\\n\\n6. **Karim Benzema**  \\n   - Nationality:
French  \\n   - Club: Al-Ittihad  \\n   - Position: Striker  \\n   - Key Highlights:
Ballon d\u2019Or winner (2022), excellent technical skills, leadership at
Real Madrid before recent transfer.\\n\\n7. **Mohamed Salah**  \\n   - Nationality:
Egyptian  \\n   - Club: Liverpool  \\n   - Position: Forward  \\n   - Key
Highlights: Premier League Golden Boot winner, known for speed, dribbling,
and goal-scoring consistency.\\n\\n8. **Vin\xEDcius J\xFAnior**  \\n   - Nationality:
Brazilian  \\n   - Club: Real Madrid  \\n   - Position: Winger  \\n   - Key
Highlights: Key player for Real Madrid, exceptional dribbling and pace, rising
star in world football.\\n\\n9. **Jude Bellingham**  \\n   - Nationality:
English  \\n   - Club: Real Madrid  \\n   - Position: Midfielder  \\n   -
Key Highlights: Young talent with maturity beyond years, influential midfielder
with great vision and work rate.\\n\\n10. **Thibaut Courtois**  \\n   - Nationality:
Belgian  \\n   - Club: Real Madrid  \\n   - Position: Goalkeeper  \\n   -
Key Highlights: One of the best goalkeepers globally, crucial performances
in La Liga and Champions League.\\n\\nThese rankings consider individual talent,
recent achievements, influence on matches, and overall contribution to club
and country.\",\n        \"refusal\": null,\n        \"annotations\": []\n
\ },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n
\ ],\n  \"usage\": {\n    \"prompt_tokens\": 68,\n    \"completion_tokens\":
621,\n    \"total_tokens\": 689,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\":
0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\":
{\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\":
0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\":
\"default\",\n  \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- 94d9b5400dcd624b-GRU
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:55:42 GMT
- Fri, 06 Feb 2026 18:41:40 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; path=/; expires=Tue, 10-Jun-25 15:25:42 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '33288'
- '10634'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '33292'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '30000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '150000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '29999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '149999859'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 2ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_6a587ea22edef774ecdada790a320cab
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\n\n1. **Lionel Messi (Inter Miami CF)**\n -
Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair, dribbling, creativity.\n - Achievements: Multiple domestic league titles, Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '3594'
- '404'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
- COOKIE-XXX
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- '600.0'
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n  \"id\": \"chatcmpl-BgugJkCDtB2EfvAMiIFK0reeLKFBl\",\n  \"object\": \"chat.completion\",\n  \"created\": 1749567359,\n  \"model\": \"gpt-4o-mini-2024-07-18\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"Thought: I now can give a great answer \\nFinal Answer: Here is an updated list of the top 10 best soccer players in the world as of October 2023, excluding Brazilian players:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n   - Position: Forward\\n   - Key Attributes: Dribbling, vision, goal-scoring ability.\\n   - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n   - Position: Forward\\n   - Key Attributes: Speed, technique, finishing.\\n   - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\n   - Position: Forward\\\
n   - Key Attributes: Power, speed, goal-scoring instinct.\\n   - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, creativity.\\n   - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n   - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Robert Lewandowski (FC Barcelona)**\\n   - Position: Forward\\n   - Key Attributes: Finishing, positioning, aerial ability.\\n   - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n7. **Mohamed Salah (Liverpool)**\\n   - Position: Forward\\n   - Key Attributes: Speed, finishing, dribbling.\\n   -\
\ Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n8. **Vinícius Júnior (Real Madrid)**\\n   - Position: Forward\\n   - Key Attributes: Speed, dribbling, creativity.\\n   - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\\n\\n9. **Luka Modrić (Real Madrid)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, tactical intelligence.\\n   - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\n10. **Harry Kane (Bayern Munich)**\\n    - Position: Forward\\n    - Key Attributes: Goal-scoring, technique, playmaking.\\n    - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\\n\\nThis list has been adjusted to exclude Brazilian players and focuses on those who have made significant impacts in their clubs and on the international stage as of October 2023. Each player is recognized for their exceptional skills, performances, and achievements.\",\n        \"refusal\": null,\n\
\ \"annotations\": []\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 781,\n    \"completion_tokens\": 610,\n    \"total_tokens\": 1391,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\": 0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\": 0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\": \"default\",\n  \"system_fingerprint\": \"fp_34a54ae93c\"\n}\n"
string: "{\n  \"id\": \"chatcmpl-D6L3sn9nSnGGOMKrS88avliVF7XTv\",\n  \"object\":
\"chat.completion\",\n  \"created\": 1770403300,\n  \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
\"assistant\",\n        \"content\": \"Certainly! Here's a structured list
of the top 10 best soccer players in the world as of 2024, considering their
performance, skills, achievements, and impact in recent seasons:\\n\\n###
Top 10 Best Soccer Players in the World (2024)\\n\\n| Rank | Player Name |
Nationality | Club (2023/24 Season) | Position | Key Attributes
\ | Recent Achievements |\\n|-------|---------------------|-------------|----------------------------|------------------|---------------------------------|------------------------------------|\\n|
1 | Lionel Messi | Argentina | Paris Saint-Germain (PSG) |
Forward/Playmaker| Dribbling, Vision, Free kicks | 2023 World Cup Golden
Ball, Club Successes |\\n| 2 | Kylian Mbapp\xE9 | France |
Paris Saint-Germain (PSG) | Forward | Speed, Finishing, Dribbling
\ | Ligue 1 Top Scorer, World Cup Winner 2018|\\n| 3 | Erling Haaland
\ | Norway | Manchester City | Striker | Strength,
Finishing, Positioning| Premier League Golden Boot, Champions League Impact|\\n|
4 | Kevin De Bruyne | Belgium | Manchester City |
Midfielder | Passing, Vision, Creativity | Premier League Titles,
Key Playmaker|\\n| 5 | Robert Lewandowski | Poland | FC Barcelona
\ | Striker | Finishing, Positioning, Composure| La
Liga Top Scorer, Consistent Scorer|\\n| 6 | Neymar Jr. | Brazil
\ | Al-Hilal | Forward/Winger | Dribbling, Creativity,
Flair | Copa America Titles, Club Success |\\n| 7 | Mohamed Salah |
Egypt | Liverpool | Forward/Winger | Pace, Finishing,
Work Rate | Premier League Golden Boot, Champions League Winner|\\n|
8 | Vin\xEDcius Jr. | Brazil | Real Madrid |
Winger | Speed, Dribbling, Crossing | La Liga Titles, UEFA Champions
League Winner|\\n| 9 | Luka Modri\u0107 | Croatia | Real Madrid
\ | Midfielder | Passing, Control, Experience | Ballon
d\u2019Or 2018, Multiple Champions League Titles|\\n| 10 | Karim Benzema
\ | France | Al-Ittihad | Striker | Finishing,
Link-up Play, Movements| Ballon d\u2019Or 2022, UEFA Champions League Top
Scorer |\\n\\n### Notes:\\n- The rankings reflect a combination of individual
skill, recent performance, consistency, and influence on the game.\\n- Players\u2019
clubs are based on the 2023/24 season affiliations.\\n- Achievements highlight
recent titles, awards, or standout contributions.\\n\\nIf you would like me
to focus on specific leagues, historical players, or emerging talents, just
let me know!\",\n        \"refusal\": null,\n        \"annotations\": []\n
\ },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n
\ ],\n  \"usage\": {\n    \"prompt_tokens\": 68,\n    \"completion_tokens\":
605,\n    \"total_tokens\": 673,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\":
0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\":
{\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\":
0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\":
\"default\",\n  \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- 94d9b6782db84d3b-GRU
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:56:30 GMT
- Fri, 06 Feb 2026 18:41:49 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '31484'
- '9044'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '31490'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '30000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '150000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '29999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '149999166'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 2ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_aa737cf40bb76af9f458bfd35f7a77a1
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\n\n1. **Lionel Messi (Inter Miami CF)**\n -
Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair, dribbling, creativity.\n - Achievements: Multiple domestic league titles, Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: Here is an updated list of the top 10 best soccer players in the world as of October 2023, excluding Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning,
aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n8. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n9. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n10. **Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\nThis list has been adjusted to exclude Brazilian
players and focuses on those who have made significant impacts in their clubs and on the international stage as of October 2023. Each player is recognized for their exceptional skills, performances, and achievements."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
body: '{"messages":[{"role":"system","content":"You are Sports Analyst. You are
an expert at gathering and organizing information. You carefully collect details
and present them in a structured way.\nYour personal goal is: Gather information
about the best soccer players"},{"role":"user","content":"\nCurrent Task: Top
10 best players in the world?\n\nProvide your complete response:"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '6337'
- '404'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
- COOKIE-XXX
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
- 1.83.0
x-stainless-read-timeout:
- '600.0'
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n  \"id\": \"chatcmpl-BgugsAmyI50uQ6SpCp89ZZY4eD1Pz\",\n  \"object\": \"chat.completion\",\n  \"created\": 1749567394,\n  \"model\": \"gpt-4o-mini-2024-07-18\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"Thought: I now can give a great answer \\nFinal Answer: Here is the revised list of the top 10 best soccer players in the world as of October 2023, explicitly excluding Brazilian players:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n   - Position: Forward\\n   - Key Attributes: Dribbling, vision, goal-scoring ability.\\n   - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n   - Position: Forward\\n   - Key Attributes: Speed, technique, finishing.\\n   - Achievements: FIFA World Cup champion (2018), multiple Ligue 1 titles, and various domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\n\
\ - Position: Forward\\n   - Key Attributes: Power, speed, goal-scoring instinct.\\n   - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, creativity.\\n   - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n   - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Robert Lewandowski (FC Barcelona)**\\n   - Position: Forward\\n   - Key Attributes: Finishing, positioning, aerial ability.\\n   - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n7. **Mohamed Salah (Liverpool)**\\n   - Position: Forward\\n   - Key Attributes: Speed, finishing, dribbling.\\n   -\
\ Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n8. **Luka Modrić (Real Madrid)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, tactical intelligence.\\n   - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\n9. **Harry Kane (Bayern Munich)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, technique, playmaking.\\n   - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\\n\\n10. **Rodri (Manchester City)**\\n    - Position: Midfielder\\n    - Key Attributes: Defensive skills, passing, positional awareness.\\n    - Achievements: Premier League titles, UEFA Champions League winner (2023).\\n\\nThis list is curated while adhering to the restriction of excluding Brazilian players. Each player included has demonstrated exceptional skills and remarkable performances, solidifying their status as some of the best in the world as of October 2023.\"\
,\n        \"refusal\": null,\n        \"annotations\": []\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 1407,\n    \"completion_tokens\": 605,\n    \"total_tokens\": 2012,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\": 0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\": 0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\": \"default\",\n  \"system_fingerprint\": \"fp_62a23a81ef\"\n}\n"
string: "{\n  \"id\": \"chatcmpl-D6L4102eMwTEPeHxfyN9Kh7rjBoX6\",\n  \"object\":
\"chat.completion\",\n  \"created\": 1770403309,\n  \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\":
\"assistant\",\n        \"content\": \"Certainly! Here is a list of the top
10 best soccer players in the world as of 2024, considering their recent performances,
skills, impact, and accolades:\\n\\n1. **Lionel Messi**  \\n   - Nationality:
Argentine  \\n   - Position: Forward  \\n   - Key Achievements: 7 Ballon d'Or
awards, led Argentina to 2021 Copa Am\xE9rica victory and 2022 FIFA World
Cup triumph, exceptional dribbling and playmaking skills.\\n\\n2. **Kylian
Mbapp\xE9**  \\n   - Nationality: French  \\n   - Position: Forward  \\n   -
Key Achievements: FIFA World Cup winner (2018), multiple Ligue 1 titles, known
for incredible speed, finishing, and consistency.\\n\\n3. **Erling Haaland**
\ \\n   - Nationality: Norwegian  \\n   - Position: Striker  \\n   - Key Achievements:
Premier League Golden Boot winner (2022-23), prolific goal scorer, physical
presence, and finishing ability.\\n\\n4. **Karim Benzema**  \\n   - Nationality:
French  \\n   - Position: Forward  \\n   - Key Achievements: 2022 Ballon d'Or
winner, key player for Real Madrid\u2019s recent Champions League victories,
excellent technical skills and leadership.\\n\\n5. **Kevin De Bruyne**  \\n
\  - Nationality: Belgian  \\n   - Position: Midfielder  \\n   - Key Achievements:
Premier League playmaker, known for vision, passing accuracy, and creativity.\\n\\n6.
**Robert Lewandowski**  \\n   - Nationality: Polish  \\n   - Position: Striker
\ \\n   - Key Achievements: Multiple Bundesliga top scorer titles, consistent
goal scorer, known for positioning and finishing.\\n\\n7. **Neymar Jr.**  \\n
\  - Nationality: Brazilian  \\n   - Position: Forward  \\n   - Key Achievements:
Exceptional dribbling, creativity, and flair; multiple domestic titles and
Copa Libertadores winner.\\n\\n8. **Mohamed Salah**  \\n   - Nationality:
Egyptian  \\n   - Position: Forward  \\n   - Key Achievements: Premier League
Golden Boot, consistent goal scoring with Liverpool, known for speed and finishing.\\n\\n9.
**Luka Modri\u0107**  \\n   - Nationality: Croatian  \\n   - Position: Midfielder
\ \\n   - Key Achievements: 2018 Ballon d\u2019Or winner, pivotal midfield
maestro, excellent passing and control.\\n\\n10. **Thibaut Courtois**  \\n
\  - Nationality: Belgian  \\n   - Position: Goalkeeper  \\n   - Key Achievements:
Exceptional shot-stopper, key player in Real Madrid's recent successes.\\n\\nThis
list includes a blend of forwards, midfielders, and a goalkeeper, showcasing
the best talents in various positions worldwide. The rankings may vary slightly
depending on current form and opinions, but these players consistently rank
among the best globally.\",\n        \"refusal\": null,\n        \"annotations\":
[]\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n
\ }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 68,\n    \"completion_tokens\":
575,\n    \"total_tokens\": 643,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\":
0,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\":
{\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\":
0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\":
\"default\",\n  \"system_fingerprint\": \"fp_75546bd1a7\"\n}\n"
headers:
CF-RAY:
- 94d9b7561f204d3b-GRU
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:56:46 GMT
- Fri, 06 Feb 2026 18:41:57 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- X-Request-ID
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
- OPENAI-ORG-XXX
openai-processing-ms:
- '12189'
- '7948'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '12193'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '30000'
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- '150000000'
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- '29999'
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- '149998513'
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- 2ms
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- 0s
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- req_1098f5a5384f4a26aecf0c9e4e4d1fc0
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Sports Analyst. You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.\nYour personal goal is: Gather information about the best soccer players\n\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user", "content": "Top 10 best players in the world?"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: The top 10 best soccer players in the world, as of October 2023, can be identified based on their recent performances, skills, impact on games, and overall contributions to their teams. Here is the structured list:\n\n1. **Lionel Messi (Inter Miami CF)**\n -
Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements:
2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Neymar Jr. (Al Hilal)**\n - Position: Forward\n - Key Attributes: Flair, dribbling, creativity.\n - Achievements: Multiple domestic league titles, Champions League runner-up.\n\n7. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n8. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n9. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n10. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision,
tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\nThis list is compiled based on their current form, past performances, and contributions to their respective teams in both domestic and international competitions. Player rankings can vary based on personal opinion and specific criteria used for evaluation, but these players have consistently been regarded as some of the best in the world as of October 2023."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: Here is an updated list of the top 10 best soccer players in the world as of October 2023, excluding Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2.
**Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), Ligue 1 titles, multiple domestic cups.\n\n3. **Erling Haaland (Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning,
aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\n\n8. **Vin\u00edcius J\u00fanior (Real Madrid)**\n - Position: Forward\n - Key Attributes: Speed, dribbling, creativity.\n - Achievements: UEFA Champions League winner (2022), La Liga champion (2023).\n\n9. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n10. **Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\nThis list has been adjusted to exclude Brazilian
players and focuses on those who have made significant impacts in their clubs and on the international stage as of October 2023. Each player is recognized for their exceptional skills, performances, and achievements."}, {"role": "user", "content": "You are not allowed to include Brazilian players"}, {"role": "assistant", "content": "Thought: I now can give a great answer \nFinal Answer: Here is the revised list of the top 10 best soccer players in the world as of October 2023, explicitly excluding Brazilian players:\n\n1. **Lionel Messi (Inter Miami CF)**\n - Position: Forward\n - Key Attributes: Dribbling, vision, goal-scoring ability.\n - Achievements: Multiple Ballon d''Or winner, Copa America champion, World Cup champion (2022).\n\n2. **Kylian Mbapp\u00e9 (Paris Saint-Germain)**\n - Position: Forward\n - Key Attributes: Speed, technique, finishing.\n - Achievements: FIFA World Cup champion (2018), multiple Ligue 1 titles, and various domestic cups.\n\n3. **Erling Haaland
(Manchester City)**\n - Position: Forward\n - Key Attributes: Power, speed, goal-scoring instinct.\n - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\n\n4. **Kevin De Bruyne (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, creativity.\n - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\n\n5. **Karim Benzema (Al-Ittihad)**\n - Position: Forward\n - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\n - Achievements: 2022 Ballon d''Or winner, multiple Champions Leagues with Real Madrid.\n\n6. **Robert Lewandowski (FC Barcelona)**\n - Position: Forward\n - Key Attributes: Finishing, positioning, aerial ability.\n - Achievements: FIFA Best Men''s Player, multiple Bundesliga titles, La Liga champion (2023).\n\n7. **Mohamed Salah (Liverpool)**\n - Position: Forward\n - Key Attributes: Speed, finishing, dribbling.\n - Achievements:
Premier League champion, FA Cup, UEFA Champions League winner.\n\n8. **Luka Modri\u0107 (Real Madrid)**\n - Position: Midfielder\n - Key Attributes: Passing, vision, tactical intelligence.\n - Achievements: Multiple Champions League titles, Ballon d''Or winner (2018).\n\n9. **Harry Kane (Bayern Munich)**\n - Position: Forward\n - Key Attributes: Goal-scoring, technique, playmaking.\n - Achievements: Golden Boot winner, Premier League titles, UEFA European Championship runner-up.\n\n10. **Rodri (Manchester City)**\n - Position: Midfielder\n - Key Attributes: Defensive skills, passing, positional awareness.\n - Achievements: Premier League titles, UEFA Champions League winner (2023).\n\nThis list is curated while adhering to the restriction of excluding Brazilian players. Each player included has demonstrated exceptional skills and remarkable performances, solidifying their status as some of the best in the world as of October 2023."}, {"role": "user", "content":
"You are not allowed to include Brazilian players"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '9093'
content-type:
- application/json
cookie:
- __cf_bm=8Yv8F0ZCFAo2lf.qoqxao70yxyjVvIV90zQqVF6bVzQ-1749567342-1.0.1.1-fZgnv3RDfunvCO1koxwwFJrHnxSx_rwS_FHvQ6xxDPpKHwYr7dTqIQLZrNgSX5twGyK4F22rUmkuiS6KMVogcinChk8lmHtJBTUVTFjr2KU; _cfuvid=wzh8YnmXvLq1G0RcIVijtzboQtCZyIe2uZiochkBLqE-1749567342267-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.78.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.78.0
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n  \"id\": \"chatcmpl-BguhCefN1bN2OeYRo5ChhUqNBLUda\",\n  \"object\": \"chat.completion\",\n  \"created\": 1749567414,\n  \"model\": \"gpt-4o-mini-2024-07-18\",\n  \"choices\": [\n    {\n      \"index\": 0,\n      \"message\": {\n        \"role\": \"assistant\",\n        \"content\": \"Thought: I now can give a great answer \\nFinal Answer: Here is a refined list of the top 10 best soccer players in the world as of October 2023, ensuring that no Brazilian players are included:\\n\\n1. **Lionel Messi (Inter Miami CF)**\\n   - Position: Forward\\n   - Key Attributes: Dribbling, vision, goal-scoring ability.\\n   - Achievements: Multiple Ballon d'Or winner, Copa America champion, World Cup champion (2022).\\n\\n2. **Kylian Mbappé (Paris Saint-Germain)**\\n   - Position: Forward\\n   - Key Attributes: Speed, technique, finishing.\\n   - Achievements: FIFA World Cup champion (2018), multiple Ligue 1 titles, various domestic cups.\\n\\n3. **Erling Haaland (Manchester City)**\\\
n   - Position: Forward\\n   - Key Attributes: Power, speed, goal-scoring instinct.\\n   - Achievements: Bundesliga top scorer, UEFA Champions League winner (2023), Premier League titles.\\n\\n4. **Kevin De Bruyne (Manchester City)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, creativity.\\n   - Achievements: Multiple Premier League titles, FA Cups, UEFA Champions League winner (2023).\\n\\n5. **Karim Benzema (Al-Ittihad)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, playmaking, tactical intelligence.\\n   - Achievements: 2022 Ballon d'Or winner, multiple Champions Leagues with Real Madrid.\\n\\n6. **Robert Lewandowski (FC Barcelona)**\\n   - Position: Forward\\n   - Key Attributes: Finishing, positioning, aerial ability.\\n   - Achievements: FIFA Best Men's Player, multiple Bundesliga titles, La Liga champion (2023).\\n\\n7. **Mohamed Salah (Liverpool)**\\n   - Position: Forward\\n   - Key Attributes: Speed, finishing, dribbling.\\n   -\
\ Achievements: Premier League champion, FA Cup, UEFA Champions League winner.\\n\\n8. **Luka Modrić (Real Madrid)**\\n   - Position: Midfielder\\n   - Key Attributes: Passing, vision, tactical intelligence.\\n   - Achievements: Multiple Champions League titles, Ballon d'Or winner (2018).\\n\\n9. **Harry Kane (Bayern Munich)**\\n   - Position: Forward\\n   - Key Attributes: Goal-scoring, technique, playmaking.\\n   - Achievements: Golden Boot winner, multiple Premier League titles, UEFA European Championship runner-up.\\n\\n10. **Son Heung-min (Tottenham Hotspur)**\\n    - Position: Forward\\n    - Key Attributes: Speed, finishing, playmaking.\\n    - Achievements: Premier League Golden Boot winner, multiple domestic cup titles.\\n\\nThis list has been carefully revised to exclude all Brazilian players while highlighting some of the most talented individuals in soccer as of October 2023. Each player has showcased remarkable effectiveness and skill, contributing significantly to their\
\ teams on both domestic and international stages.\",\n        \"refusal\": null,\n        \"annotations\": []\n      },\n      \"logprobs\": null,\n      \"finish_reason\": \"stop\"\n    }\n  ],\n  \"usage\": {\n    \"prompt_tokens\": 2028,\n    \"completion_tokens\": 614,\n    \"total_tokens\": 2642,\n    \"prompt_tokens_details\": {\n      \"cached_tokens\": 1280,\n      \"audio_tokens\": 0\n    },\n    \"completion_tokens_details\": {\n      \"reasoning_tokens\": 0,\n      \"audio_tokens\": 0,\n      \"accepted_prediction_tokens\": 0,\n      \"rejected_prediction_tokens\": 0\n    }\n  },\n  \"service_tier\": \"default\",\n  \"system_fingerprint\": \"fp_34a54ae93c\"\n}\n"
headers:
CF-RAY:
- 94d9b7d24d991d2c-GRU
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Jun 2025 14:57:29 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '35291'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-envoy-upstream-service-time:
- '35294'
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149997855'
x-ratelimit-reset-requests:
|
||||
- 2ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_4676152d4227ac1825d1240ddef231d6
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
|
||||
@@ -1,14 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. A helpful
test assistant\nYour personal goal is: Answer questions\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 2+2? Reply with just the number.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
test assistant\nYour personal goal is: Answer questions"},{"role":"user","content":"\nCurrent
Task: What is 2+2? Reply with just the number.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -21,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '673'
- '272'
content-type:
- application/json
host:
@@ -43,23 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7b0HjL79y39EkUcMLrRhPFe3XGj\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444914,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4AzMHXLXDfyclWS6fJSwS0cvOl\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403318,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: 4\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"4\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 50,\n \"completion_tokens\":
1,\n \"total_tokens\": 51,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8bbc38b4db\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:55 GMT
- Fri, 06 Feb 2026 18:41:58 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '857'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '341'
- '264'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '358'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -1,14 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Standalone Agent. A helpful
assistant\nYour personal goal is: Answer questions\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
What is 5+5? Reply with just the number.\n\nBegin! This is VERY important to
you, use the tools available and give your best Final Answer, your job depends
on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
assistant\nYour personal goal is: Answer questions"},{"role":"user","content":"\nCurrent
Task: What is 5+5? Reply with just the number.\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -21,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '674'
- '273'
content-type:
- application/json
host:
@@ -43,23 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-Cy7azhPwUHQ0p5tdhxSAmLPoE8UgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1768444913,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3cLs2ndBaXV2wnqYCdi6X1ykvv\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403284,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: 10\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 136,\n \"completion_tokens\": 13,\n
\ \"total_tokens\": 149,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\"assistant\",\n \"content\": \"10\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 50,\n \"completion_tokens\":
1,\n \"total_tokens\": 51,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 02:41:54 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '858'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '455'
- '270'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '583'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -1,13 +1,8 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are First Agent. A friendly
greeter\nYour personal goal is: Greet users\nTo give my best complete final
answer to the task respond using the exact following format:\n\nThought: I now
can give a great answer\nFinal Answer: Your final answer must be the great and
the most complete as possible, it must be outcome described.\n\nI MUST use these
formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task: Say
hello\n\nBegin! This is VERY important to you, use the tools available and give
your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
greeter\nYour personal goal is: Greet users"},{"role":"user","content":"\nCurrent
Task: Say hello\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -20,7 +15,7 @@ interactions:
connection:
- keep-alive
content-length:
- '632'
- '231'
content-type:
- application/json
host:
@@ -42,24 +37,22 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CyRKzgODZ9yn3F9OkaXsscLk2Ln3N\",\n \"object\":
\"chat.completion\",\n \"created\": 1768520801,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4A8Aad6P1YUxWjQpvyltn8GaKT\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403318,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hello! Welcome! I'm so glad to see you here. If you need any assistance
or have any questions, feel free to ask. Have a wonderful day!\",\n \"refusal\":
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
127,\n \"completion_tokens\": 43,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
\"assistant\",\n \"content\": \"Hello! \U0001F60A How are you today?\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
41,\n \"completion_tokens\": 8,\n \"total_tokens\": 49,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -68,7 +61,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 23:46:42 GMT
- Fri, 06 Feb 2026 18:41:58 GMT
Server:
- cloudflare
Set-Cookie:
@@ -85,18 +78,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '990'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '880'
- '325'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1160'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -118,13 +107,8 @@ interactions:
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Second Agent. A polite
farewell agent\nYour personal goal is: Say goodbye\nTo give my best complete
final answer to the task respond using the exact following format:\n\nThought:
I now can give a great answer\nFinal Answer: Your final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!"},{"role":"user","content":"\nCurrent Task:
Say goodbye\n\nBegin! This is VERY important to you, use the tools available
and give your best Final Answer, your job depends on it!\n\nThought:"}],"model":"gpt-4o-mini"}'
farewell agent\nYour personal goal is: Say goodbye"},{"role":"user","content":"\nCurrent
Task: Say goodbye\n\nProvide your complete response:"}],"model":"gpt-4o-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -137,7 +121,7 @@ interactions:
connection:
- keep-alive
content-length:
- '640'
- '239'
content-type:
- application/json
host:
@@ -159,27 +143,24 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CyRL1Ua2PkK5xXPp3KeF0AnGAk3JP\",\n \"object\":
\"chat.completion\",\n \"created\": 1768520803,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L4BLMYC3ODccwbKfBIdtrEyd3no\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403319,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: As we reach the end of our conversation, I want to express my gratitude
for the time we've shared. It's been a pleasure assisting you, and I hope
you found our interaction helpful and enjoyable. Remember, whenever you need
assistance, I'm just a message away. Wishing you all the best in your future
endeavors. Goodbye and take care!\",\n \"refusal\": null,\n \"annotations\":
\"assistant\",\n \"content\": \"Thank you for the time we've spent
together! I wish you all the best in your future endeavors. Take care, and
until we meet again, goodbye!\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 126,\n \"completion_tokens\":
79,\n \"total_tokens\": 205,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 40,\n \"completion_tokens\":
31,\n \"total_tokens\": 71,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_29330a9688\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -188,7 +169,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 15 Jan 2026 23:46:44 GMT
- Fri, 06 Feb 2026 18:41:59 GMT
Server:
- cloudflare
Set-Cookie:
@@ -205,18 +186,14 @@ interactions:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
content-length:
- '1189'
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1363'
- '726'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1605'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -2,9 +2,8 @@ interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You calculate
things.\nYour personal goal is: Perform calculations efficiently"},{"role":"user","content":"\nCurrent
Task: Use the failing_tool to do something.\n\nThis is VERY important to you,
your job depends on it!"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","parameters":{"properties":{},"type":"object"}}}]}'
Task: Use the failing_tool to do something."}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","strict":true,"parameters":{"properties":{},"type":"object","additionalProperties":false,"required":[]}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -17,7 +16,7 @@ interactions:
connection:
- keep-alive
content-length:
- '477'
- '476'
content-type:
- application/json
host:
@@ -39,26 +38,26 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D0vm2JDsOmy0czXPAr4vnw3wvuqYZ\",\n \"object\":
\"chat.completion\",\n \"created\": 1769114454,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3dV6acwapgRyxmnzGfuOXemtjJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403285,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_8xr8rPUDWzLfQ3LOWPHtBUjK\",\n \"type\":
\ \"id\": \"call_GCdaOdo32pr1sSk4RzO0tiB9\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"failing_tool\",\n
\ \"arguments\": \"{}\"\n }\n }\n ],\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"tool_calls\"\n }\n ],\n \"usage\":
{\n \"prompt_tokens\": 78,\n \"completion_tokens\": 11,\n \"total_tokens\":
89,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\":
{\n \"prompt_tokens\": 65,\n \"completion_tokens\": 11,\n \"total_tokens\":
76,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\":
0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n
\ \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_6c0d1490cb\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -67,7 +66,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 22 Jan 2026 20:40:54 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Set-Cookie:
@@ -87,13 +86,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '593'
- '436'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '621'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -116,12 +113,9 @@ interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You calculate
things.\nYour personal goal is: Perform calculations efficiently"},{"role":"user","content":"\nCurrent
Task: Use the failing_tool to do something.\n\nThis is VERY important to you,
your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_8xr8rPUDWzLfQ3LOWPHtBUjK","type":"function","function":{"name":"failing_tool","arguments":"{}"}}]},{"role":"tool","tool_call_id":"call_8xr8rPUDWzLfQ3LOWPHtBUjK","content":"Error
executing tool: This tool always fails"},{"role":"user","content":"Analyze the
tool result. If requirements are met, provide the Final Answer. Otherwise, call
the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","parameters":{"properties":{},"type":"object"}}}]}'
Task: Use the failing_tool to do something."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_GCdaOdo32pr1sSk4RzO0tiB9","type":"function","function":{"name":"failing_tool","arguments":"{}"}}]},{"role":"tool","tool_call_id":"call_GCdaOdo32pr1sSk4RzO0tiB9","name":"failing_tool","content":"Error
executing tool: This tool always fails"}],"model":"gpt-4o-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"failing_tool","description":"This
tool always fails","strict":true,"parameters":{"properties":{},"type":"object","additionalProperties":false,"required":[]}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -134,7 +128,7 @@ interactions:
connection:
- keep-alive
content-length:
- '941'
- '778'
content-type:
- application/json
cookie:
@@ -158,22 +152,25 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D0vm3xcywoKBW75bhBXfkGJNim6Th\",\n \"object\":
\"chat.completion\",\n \"created\": 1769114455,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
string: "{\n \"id\": \"chatcmpl-D6L3dhjDZOoihHvXvRpbJD3ReGu0z\",\n \"object\":
\"chat.completion\",\n \"created\": 1770403285,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Error: This tool always fails.\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
141,\n \"completion_tokens\": 8,\n \"total_tokens\": 149,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
\"assistant\",\n \"content\": \"The attempt to use the failing tool
resulted in an error, as expected since it is designed to always fail. If
there's anything else you would like to calculate or explore, please let me
know!\",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 93,\n \"completion_tokens\": 40,\n
\ \"total_tokens\": 133,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_c4585b5b9c\"\n}\n"
\"default\",\n \"system_fingerprint\": \"fp_6c0d1490cb\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -182,7 +179,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 22 Jan 2026 20:40:55 GMT
- Fri, 06 Feb 2026 18:41:26 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -200,13 +197,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '420'
- '776'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '436'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -43,15 +43,15 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_0149zKBgM47utdBdrfJjM6YZ","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_011jnBYLgtzXqdmSi7JDyQHj","name":"structured_output","input":{"operation":"Addition","result":42,"explanation":"Adding
15 and 27 together results in 42"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":573,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":79,"service_tier":"standard"}}'
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_01A41GpDoJbZLUhR8dQzUcUX","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01UNPdzpayoWyqDYVE7fR5oA","name":"structured_output","input":{"operation":"Addition","result":42,"explanation":"Added
15 and 27 together"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":573,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":75,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
@@ -62,7 +62,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 30 Jan 2026 18:56:15 GMT
- Fri, 06 Feb 2026 18:41:25 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -88,7 +88,7 @@ interactions:
anthropic-ratelimit-requests-remaining:
- '3999'
anthropic-ratelimit-requests-reset:
- '2026-01-30T18:56:14Z'
- '2026-02-06T18:41:24Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
@@ -102,7 +102,7 @@ interactions:
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '1473'
- '1247'
status:
code: 200
message: OK
@@ -44,21 +44,20 @@ interactions:
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
- 3.13.5
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_013iHkpmto99iyH5kDvn8uER","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01Kpda2DzHBqWq9a2FS2Bdw6","name":"structured_output","input":{"topic":"Benefits
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_016wrV83wm3FLYD4JoTy2Piw","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01V6Pzr7eGfuG4Q3mc25ZXwN","name":"structured_output","input":{"topic":"Benefits
of Remote Work","summary":"Remote work offers significant advantages for both
employees and employers, transforming traditional work paradigms by providing
flexibility, increased productivity, and cost savings.","key_points":["Increased
employee flexibility and work-life balance","Reduced commuting time and associated
stress","Cost savings for companies on office infrastructure","Access to a
global talent pool","Higher employee productivity and job satisfaction","Lower
carbon footprint due to reduced travel"]}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":589,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":153,"service_tier":"standard"}}'
employees and employers, transforming traditional workplace dynamics.","key_points":["Increased
flexibility in work schedule","Reduced commute time and transportation costs","Improved
work-life balance","Higher productivity for many employees","Cost savings
for companies on office infrastructure","Expanded talent pool for hiring","Enhanced
employee job satisfaction"]}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":589,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":142,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
@@ -69,7 +68,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 30 Jan 2026 18:56:19 GMT
- Fri, 06 Feb 2026 18:41:28 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -95,7 +94,7 @@ interactions:
anthropic-ratelimit-requests-remaining:
- '3999'
anthropic-ratelimit-requests-reset:
- '2026-01-30T18:56:16Z'
- '2026-02-06T18:41:26Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
@@ -109,7 +108,7 @@ interactions:
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '3107'
- '2650'
status:
code: 200
message: OK
@@ -0,0 +1,332 @@
interactions:
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"Say
hello in one word.","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","stream":false,"system":"You
are a helpful assistant. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. "}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-length:
- '5918'
content-type:
- application/json
host:
- api.anthropic.com
x-api-key:
- X-API-KEY-XXX
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 0.73.0
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_013xTaKq41TFn6drdxt1mFdx","type":"message","role":"assistant","content":[{"type":"text","text":"Hello!"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":5,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Security-Policy:
- CSP-FILTERED
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:27:40 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Robots-Tag:
- none
anthropic-organization-id:
- ANTHROPIC-ORGANIZATION-ID-XXX
anthropic-ratelimit-input-tokens-limit:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-input-tokens-remaining:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-input-tokens-reset:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
anthropic-ratelimit-output-tokens-limit:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-output-tokens-remaining:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-output-tokens-reset:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
anthropic-ratelimit-tokens-reset:
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
cf-cache-status:
- DYNAMIC
request-id:
- REQUEST-ID-XXX
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '726'
status:
code: 200
message: OK
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"Say
goodbye in one word.","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","stream":false,"system":"You
are a helpful assistant. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. "}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-length:
- '5920'
content-type:
- application/json
host:
- api.anthropic.com
x-api-key:
- X-API-KEY-XXX
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 0.73.0
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01LdueHX7nvf19wD8Uxn4EZD","type":"message","role":"assistant","content":[{"type":"text","text":"Goodbye"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":5,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Security-Policy:
- CSP-FILTERED
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:27:41 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Robots-Tag:
- none
anthropic-organization-id:
- ANTHROPIC-ORGANIZATION-ID-XXX
anthropic-ratelimit-input-tokens-limit:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-input-tokens-remaining:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-input-tokens-reset:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
anthropic-ratelimit-output-tokens-limit:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-output-tokens-remaining:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-output-tokens-reset:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
anthropic-ratelimit-tokens-reset:
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
cf-cache-status:
- DYNAMIC
request-id:
- REQUEST-ID-XXX
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '759'
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,336 @@
interactions:
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"What
is the weather in Tokyo?","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","stream":false,"system":"You
are a helpful assistant that uses tools. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. ","tool_choice":{"type":"tool","name":"get_weather"},"tools":[{"name":"get_weather","description":"Get
the current weather for a location","input_schema":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"]}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-length:
- '6211'
content-type:
- application/json
host:
- api.anthropic.com
x-api-key:
- X-API-KEY-XXX
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 0.73.0
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01WhFk2ppoz43nbh4uNhXBfL","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01CX1yZuJ5MQaJbXNSrnCiqf","name":"get_weather","input":{"location":"Tokyo"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":24,"cache_creation_input_tokens":0,"cache_read_input_tokens":1857,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":33,"service_tier":"standard","inference_geo":"not_available"}}'
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Security-Policy:
|
||||
- CSP-FILTERED
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 10 Feb 2026 18:27:38 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Robots-Tag:
|
||||
- none
|
||||
anthropic-organization-id:
|
||||
- ANTHROPIC-ORGANIZATION-ID-XXX
|
||||
anthropic-ratelimit-input-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-input-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-input-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
|
||||
anthropic-ratelimit-output-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-output-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-output-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
|
||||
anthropic-ratelimit-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
request-id:
|
||||
- REQUEST-ID-XXX
|
||||
strict-transport-security:
|
||||
- STS-XXX
|
||||
x-envoy-upstream-service-time:
|
||||
- '1390'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
|
||||
body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"What
|
||||
is the weather in Paris?","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","stream":false,"system":"You
|
||||
are a helpful assistant that uses tools. This is padding text to ensure the
|
||||
prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
||||
enough for caching. This is padding text to ensure the prompt is large enough
|
||||
for caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. ","tool_choice":{"type":"tool","name":"get_weather"},"tools":[{"name":"get_weather","description":"Get
|
||||
the current weather for a location","input_schema":{"type":"object","properties":{"location":{"type":"string","description":"The
|
||||
city name"}},"required":["location"]}}]}'
|
||||
headers:
|
||||
User-Agent:
|
||||
- X-USER-AGENT-XXX
|
||||
accept:
|
||||
- application/json
|
||||
accept-encoding:
|
||||
- ACCEPT-ENCODING-XXX
|
||||
anthropic-version:
|
||||
- '2023-06-01'
|
||||
connection:
|
||||
- keep-alive
|
||||
content-length:
|
||||
- '6211'
|
||||
content-type:
|
||||
- application/json
|
||||
host:
|
||||
- api.anthropic.com
|
||||
x-api-key:
|
||||
- X-API-KEY-XXX
|
||||
x-stainless-arch:
|
||||
- X-STAINLESS-ARCH-XXX
|
||||
x-stainless-async:
|
||||
- 'false'
|
||||
x-stainless-lang:
|
||||
- python
|
||||
x-stainless-os:
|
||||
- X-STAINLESS-OS-XXX
|
||||
x-stainless-package-version:
|
||||
- 0.73.0
|
||||
x-stainless-retry-count:
|
||||
- '0'
|
||||
x-stainless-runtime:
|
||||
- CPython
|
||||
x-stainless-runtime-version:
|
||||
- 3.13.3
|
||||
x-stainless-timeout:
|
||||
- NOT_GIVEN
|
||||
method: POST
|
||||
uri: https://api.anthropic.com/v1/messages
|
||||
response:
|
||||
body:
|
||||
string: '{"model":"claude-sonnet-4-5-20250929","id":"msg_01Nmw5NyAEwCLGjpVnf15rh4","type":"message","role":"assistant","content":[{"type":"tool_use","id":"toolu_01DEe9K7N4EfhPFqxHhqEHCE","name":"get_weather","input":{"location":"Paris"}}],"stop_reason":"tool_use","stop_sequence":null,"usage":{"input_tokens":24,"cache_creation_input_tokens":0,"cache_read_input_tokens":1857,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":33,"service_tier":"standard","inference_geo":"not_available"}}'
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Security-Policy:
|
||||
- CSP-FILTERED
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Tue, 10 Feb 2026 18:27:40 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Robots-Tag:
|
||||
- none
|
||||
anthropic-organization-id:
|
||||
- ANTHROPIC-ORGANIZATION-ID-XXX
|
||||
anthropic-ratelimit-input-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-input-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-input-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
|
||||
anthropic-ratelimit-output-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-output-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-output-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
|
||||
anthropic-ratelimit-tokens-limit:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
|
||||
anthropic-ratelimit-tokens-remaining:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
|
||||
anthropic-ratelimit-tokens-reset:
|
||||
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
request-id:
|
||||
- REQUEST-ID-XXX
|
||||
strict-transport-security:
|
||||
- STS-XXX
|
||||
x-envoy-upstream-service-time:
|
||||
- '1259'
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
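The cassette above records forced `get_weather` tool calls against the Messages API with an ephemeral `cache_control` marker on the user turn. A minimal sketch of the call it captures, assuming the `anthropic` Python SDK (the fixture was recorded with 0.73.0) and an `ANTHROPIC_API_KEY` in the environment; the 75x repeat count is an illustrative guess chosen to clear the model's roughly 1,024-token cache minimum:

```python
import anthropic

client = anthropic.Anthropic()

padding = "This is padding text to ensure the prompt is large enough for caching. " * 75

response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    system="You are a helpful assistant that uses tools. " + padding,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is the weather in Tokyo?",
                    # Everything up to this marker becomes a cacheable prefix
                    # (ephemeral = 5-minute TTL).
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        }
    ],
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city name"}
                },
                "required": ["location"],
            },
        }
    ],
    # Forces a get_weather call instead of a free-text answer, matching the
    # tool_use block in the recorded response.
    tool_choice={"type": "tool", "name": "get_weather"},
)

# On a warm cache the recorded usage reports cache_read_input_tokens: 1857.
print(response.usage.cache_read_input_tokens)
```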
@@ -0,0 +1,411 @@
interactions:
- request:
    body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"Say
      hello in one word.","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","system":"You
      are a helpful assistant. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. ","stream":true}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      anthropic-version:
      - '2023-06-01'
      connection:
      - keep-alive
      content-length:
      - '5917'
      content-type:
      - application/json
      host:
      - api.anthropic.com
      x-api-key:
      - X-API-KEY-XXX
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 0.73.0
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
      x-stainless-stream-helper:
      - messages
      x-stainless-timeout:
      - NOT_GIVEN
    method: POST
    uri: https://api.anthropic.com/v1/messages
  response:
    body:
      string: 'event: message_start

        data: {"type":"message_start","message":{"model":"claude-sonnet-4-5-20250929","id":"msg_01LshZroyEGgd3HfDrKdQMLm","type":"message","role":"assistant","content":[],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":4,"service_tier":"standard","inference_geo":"not_available"}} }


        event: content_block_start

        data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""} }


        event: ping

        data: {"type": "ping"}


        event: content_block_delta

        data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello"} }


        event: content_block_stop

        data: {"type":"content_block_stop","index":0 }


        event: message_delta

        data: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"output_tokens":4}
        }


        event: message_stop

        data: {"type":"message_stop" }


        '
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Cache-Control:
      - no-cache
      Connection:
      - keep-alive
      Content-Security-Policy:
      - CSP-FILTERED
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 10 Feb 2026 18:27:43 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Robots-Tag:
      - none
      anthropic-organization-id:
      - ANTHROPIC-ORGANIZATION-ID-XXX
      anthropic-ratelimit-input-tokens-limit:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-input-tokens-remaining:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-input-tokens-reset:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
      anthropic-ratelimit-output-tokens-limit:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-output-tokens-remaining:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-output-tokens-reset:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
      anthropic-ratelimit-tokens-limit:
      - ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-tokens-remaining:
      - ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-tokens-reset:
      - ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
      cf-cache-status:
      - DYNAMIC
      request-id:
      - REQUEST-ID-XXX
      strict-transport-security:
      - STS-XXX
      x-envoy-upstream-service-time:
      - '837'
    status:
      code: 200
      message: OK
- request:
    body: '{"max_tokens":4096,"messages":[{"role":"user","content":[{"type":"text","text":"Say
      goodbye in one word.","cache_control":{"type":"ephemeral"}}]}],"model":"claude-sonnet-4-5-20250929","system":"You
      are a helpful assistant. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. ","stream":true}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      anthropic-version:
      - '2023-06-01'
      connection:
      - keep-alive
      content-length:
      - '5919'
      content-type:
      - application/json
      host:
      - api.anthropic.com
      x-api-key:
      - X-API-KEY-XXX
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 0.73.0
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
      x-stainless-stream-helper:
      - messages
      x-stainless-timeout:
      - NOT_GIVEN
    method: POST
    uri: https://api.anthropic.com/v1/messages
  response:
    body:
      string: 'event: message_start

        data: {"type":"message_start","message":{"model":"claude-sonnet-4-5-20250929","id":"msg_01MZSWarEUbFXmek8aEpwKDu","type":"message","role":"assistant","content":[],"stop_reason":null,"stop_sequence":null,"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":6,"service_tier":"standard","inference_geo":"not_available"}} }


        event: content_block_start

        data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}


        event: ping

        data: {"type": "ping"}


        event: content_block_delta

        data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Goodbye."} }


        event: content_block_stop

        data: {"type":"content_block_stop","index":0 }


        event: message_delta

        data: {"type":"message_delta","delta":{"stop_reason":"end_turn","stop_sequence":null},"usage":{"input_tokens":3,"cache_creation_input_tokens":0,"cache_read_input_tokens":1217,"output_tokens":6} }


        event: message_stop

        data: {"type":"message_stop" }


        '
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Cache-Control:
      - no-cache
      Connection:
      - keep-alive
      Content-Security-Policy:
      - CSP-FILTERED
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 10 Feb 2026 18:27:44 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      X-Robots-Tag:
      - none
      anthropic-organization-id:
      - ANTHROPIC-ORGANIZATION-ID-XXX
      anthropic-ratelimit-input-tokens-limit:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-input-tokens-remaining:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-input-tokens-reset:
      - ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
      anthropic-ratelimit-output-tokens-limit:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-output-tokens-remaining:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-output-tokens-reset:
      - ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
      anthropic-ratelimit-tokens-limit:
      - ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
      anthropic-ratelimit-tokens-remaining:
      - ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
      anthropic-ratelimit-tokens-reset:
      - ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
      cf-cache-status:
      - DYNAMIC
      request-id:
      - REQUEST-ID-XXX
      strict-transport-security:
      - STS-XXX
      x-envoy-upstream-service-time:
      - '870'
    status:
      code: 200
      message: OK
version: 1
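This second cassette exercises the same ephemeral-cache marker over a streamed response; the `x-stainless-stream-helper: messages` header indicates the SDK's `messages.stream()` helper produced the recorded SSE events. A sketch under the same assumptions as the non-streaming example:

```python
import anthropic

client = anthropic.Anthropic()

padding = "This is padding text to ensure the prompt is large enough for caching. " * 75

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    system="You are a helpful assistant. " + padding,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Say hello in one word.",
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        }
    ],
) as stream:
    # Yields the content_block_delta text fragments ("Hello").
    for text in stream.text_stream:
        print(text, end="")
    # The message_start event already carries the cache usage, which the
    # helper folds into the final accumulated message.
    final = stream.get_final_message()
    print(final.usage.cache_read_input_tokens)  # 1217 in the recording
```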
@@ -0,0 +1,266 @@
interactions:
- request:
    body: '{"contents": [{"parts": [{"text": "Say hello in one word."}], "role": "user"}],
      "systemInstruction": {"parts": [{"text": "You are a helpful assistant. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      "}], "role": "user"}, "generationConfig": {}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      content-length:
      - '5876'
      content-type:
      - application/json
      host:
      - generativelanguage.googleapis.com
      x-goog-api-client:
      - google-genai-sdk/1.49.0 gl-python/3.13.3
      x-goog-api-key:
      - X-GOOG-API-KEY-XXX
    method: POST
    uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
  response:
    body:
      string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
        [\n {\n \"text\": \"Hello\"\n }\n ],\n
        \ \"role\": \"model\"\n },\n \"finishReason\": \"STOP\",\n
        \ \"index\": 0\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
        1135,\n \"candidatesTokenCount\": 1,\n \"totalTokenCount\": 1158,\n
        \ \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
        \ \"tokenCount\": 1135\n }\n ],\n \"thoughtsTokenCount\":
        22\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"46GLaf60NYmY-8YP--PB6QE\"\n}\n"
    headers:
      Alt-Svc:
      - h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
      Content-Type:
      - application/json; charset=UTF-8
      Date:
      - Tue, 10 Feb 2026 21:23:47 GMT
      Server:
      - scaffolding on HTTPServer2
      Server-Timing:
      - gfet4t7; dur=773
      Transfer-Encoding:
      - chunked
      Vary:
      - Origin
      - X-Origin
      - Referer
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      X-Frame-Options:
      - X-FRAME-OPTIONS-XXX
      X-XSS-Protection:
      - '0'
    status:
      code: 200
      message: OK
- request:
    body: '{"contents": [{"parts": [{"text": "Say goodbye in one word."}], "role":
      "user"}], "systemInstruction": {"parts": [{"text": "You are a helpful assistant.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. "}], "role": "user"}, "generationConfig": {}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      content-length:
      - '5878'
      content-type:
      - application/json
      host:
      - generativelanguage.googleapis.com
      x-goog-api-client:
      - google-genai-sdk/1.49.0 gl-python/3.13.3
      x-goog-api-key:
      - X-GOOG-API-KEY-XXX
    method: POST
    uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
  response:
    body:
      string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
        [\n {\n \"text\": \"Farewell.\"\n }\n ],\n
        \ \"role\": \"model\"\n },\n \"finishReason\": \"STOP\",\n
        \ \"index\": 0\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
        1135,\n \"candidatesTokenCount\": 3,\n \"totalTokenCount\": 1164,\n
        \ \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
        \ \"tokenCount\": 1135\n }\n ],\n \"thoughtsTokenCount\":
        26\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"5KGLafeeIv-G-8YP_MfPgAI\"\n}\n"
    headers:
      Alt-Svc:
      - h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
      Content-Type:
      - application/json; charset=UTF-8
      Date:
      - Tue, 10 Feb 2026 21:23:48 GMT
      Server:
      - scaffolding on HTTPServer2
      Server-Timing:
      - gfet4t7; dur=662
      Transfer-Encoding:
      - chunked
      Vary:
      - Origin
      - X-Origin
      - Referer
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      X-Frame-Options:
      - X-FRAME-OPTIONS-XXX
      X-XSS-Protection:
      - '0'
    status:
      code: 200
      message: OK
version: 1
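The Gemini cassette above carries the padded system prompt as a `systemInstruction` block rather than a `cache_control` marker, since Gemini caches large prompts implicitly. A sketch of the recorded call, assuming the `google-genai` SDK (the fixture shows google-genai-sdk/1.49.0) and a `GEMINI_API_KEY` in the environment; the repeat count is again an illustrative guess:

```python
from google import genai
from google.genai import types

client = genai.Client()

padding = "This is padding text to ensure the prompt is large enough for caching. " * 75

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Say hello in one word.",
    # Serialized as the systemInstruction block in the request body above.
    config=types.GenerateContentConfig(
        system_instruction="You are a helpful assistant. " + padding,
    ),
)

print(response.text)  # "Hello"
# Implicit-cache hits surface as usage_metadata.cached_content_token_count;
# it is absent (None) here because the recorded response reports none.
print(response.usage_metadata.cached_content_token_count)
```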
@@ -0,0 +1,280 @@
|
||||
interactions:
|
||||
- request:
|
||||
body: '{"contents": [{"parts": [{"text": "What is the weather in Tokyo?"}], "role":
|
||||
"user"}], "systemInstruction": {"parts": [{"text": "You are a helpful assistant
|
||||
that uses tools. This is padding text to ensure the prompt is large enough for
|
||||
caching. This is padding text to ensure the prompt is large enough for caching.
|
||||
This is padding text to ensure the prompt is large enough for caching. This
|
||||
is padding text to ensure the prompt is large enough for caching. This is padding
|
||||
text to ensure the prompt is large enough for caching. This is padding text
|
||||
to ensure the prompt is large enough for caching. This is padding text to ensure
|
||||
the prompt is large enough for caching. This is padding text to ensure the prompt
|
||||
is large enough for caching. This is padding text to ensure the prompt is large
|
      enough for caching. "}], "role": "user"}, "tools": [{"functionDeclarations":
      [{"description": "Get the current weather for a location", "name": "get_weather",
      "parameters_json_schema": {"type": "object", "properties": {"location": {"type":
      "string", "description": "The city name"}}, "required": ["location"]}}]}], "generationConfig":
      {}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      content-length:
      - '6172'
      content-type:
      - application/json
      host:
      - generativelanguage.googleapis.com
      x-goog-api-client:
      - google-genai-sdk/1.49.0 gl-python/3.13.3
      x-goog-api-key:
      - X-GOOG-API-KEY-XXX
    method: POST
    uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
  response:
    body:
      string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
        [\n {\n \"functionCall\": {\n \"name\": \"get_weather\",\n
        \ \"args\": {\n \"location\": \"Tokyo\"\n }\n
        \ },\n \"thoughtSignature\": \"CpECAb4+9vvTFzaczX2PeZjKEs1f6+MRyTMz+xxqs37q0INQ6e0WLt1soet6CL/uzRML9LsycSeQTraXtXR8qcGj6dnrhKLpovpy8EkrtfK6P57PGpostE/UJ6TIKPlWi0pY1h2u9vyy5yGLzpp0PZM6d6f8rzV9uPFNM+onGvcFOdzghRZlHmYkQdbdpZaFQBAK6QFuh8oGbC0Ygrsk1guJo1YZaKtU5Rp/k2rJO61Obgq7aYEb7ACVx7DM9ZlVCun/PbXR4UolFeNPxNdwzC5AVvP7UKa2Cxi8dzQ8RNebtd39/gNO546XzADGZkpSqG6QF0S4IEsmB9FFCctN1evgKicgT2Qo+AR6BY8uzZyWkGQx\"\n
        \ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
        \"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
        function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
        1180,\n \"candidatesTokenCount\": 15,\n \"totalTokenCount\": 1253,\n
        \ \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
        \ \"tokenCount\": 1180\n }\n ],\n \"thoughtsTokenCount\":
        58\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"wHmLacb_GL-J-sAPn6azgAo\"\n}\n"
    headers:
      Alt-Svc:
      - h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
      Content-Type:
      - application/json; charset=UTF-8
      Date:
      - Tue, 10 Feb 2026 18:32:32 GMT
      Server:
      - scaffolding on HTTPServer2
      Server-Timing:
      - gfet4t7; dur=755
      Transfer-Encoding:
      - chunked
      Vary:
      - Origin
      - X-Origin
      - Referer
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      X-Frame-Options:
      - X-FRAME-OPTIONS-XXX
      X-XSS-Protection:
      - '0'
    status:
      code: 200
      message: OK
- request:
    body: '{"contents": [{"parts": [{"text": "What is the weather in Paris?"}], "role":
      "user"}], "systemInstruction": {"parts": [{"text": "You are a helpful assistant
      that uses tools. This is padding text to ensure the prompt is large
      enough for caching. "}], "role": "user"}, "tools": [{"functionDeclarations":
      [{"description": "Get the current weather for a location", "name": "get_weather",
      "parameters_json_schema": {"type": "object", "properties": {"location": {"type":
      "string", "description": "The city name"}}, "required": ["location"]}}]}], "generationConfig":
      {}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - '*/*'
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      connection:
      - keep-alive
      content-length:
      - '6172'
      content-type:
      - application/json
      host:
      - generativelanguage.googleapis.com
      x-goog-api-client:
      - google-genai-sdk/1.49.0 gl-python/3.13.3
      x-goog-api-key:
      - X-GOOG-API-KEY-XXX
    method: POST
    uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
  response:
    body:
      string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
        [\n {\n \"functionCall\": {\n \"name\": \"get_weather\",\n
        \ \"args\": {\n \"location\": \"Paris\"\n }\n
        \ },\n \"thoughtSignature\": \"CuMBAb4+9vurHOlMBPzqCtd/J0Q5jBhUq8dsk7xntqcTgwBcZ1KeX4F4UJ0rdfg1OLhDkOlOlELA/jBYxATT19QUvw0szvDBDml0PsTBXlt64o7oGVmOCjdiGPu71I9+sCYhlD3QXzwLdQdrvUIfVrB+kaGszmZi1KTIli+qD9ihueDYGY510ouKdfl31UipQEG990+qFJyXe3avVEh3Jo72iXr3Q4UczFdbKSTV4V4fjrokFaB7UqcYy1iuAB5vHRsxYFJeTCi+ddKzn700gbWbiJZUniKiE3QfdOK4A5S0woBDzV0=\"\n
        \ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
        \"STOP\",\n \"index\": 0,\n \"finishMessage\": \"Model generated
        function call(s).\"\n }\n ],\n \"usageMetadata\": {\n \"promptTokenCount\":
        1180,\n \"candidatesTokenCount\": 15,\n \"totalTokenCount\": 1242,\n
        \ \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
        \ \"tokenCount\": 1180\n }\n ],\n \"thoughtsTokenCount\":
        47\n },\n \"modelVersion\": \"gemini-2.5-flash\",\n \"responseId\": \"wXmLadTiEri5jMcPk_6ZgAc\"\n}\n"
    headers:
      Alt-Svc:
      - h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
      Content-Type:
      - application/json; charset=UTF-8
      Date:
      - Tue, 10 Feb 2026 18:32:33 GMT
      Server:
      - scaffolding on HTTPServer2
      Server-Timing:
      - gfet4t7; dur=881
      Transfer-Encoding:
      - chunked
      Vary:
      - Origin
      - X-Origin
      - Referer
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      X-Frame-Options:
      - X-FRAME-OPTIONS-XXX
      X-XSS-Protection:
      - '0'
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,356 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are a helpful assistant. This
      is padding text to ensure the prompt is large enough for caching. "},{"role":"user","content":"Say
      hello in one word."}],"model":"gpt-4.1"}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '5823'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7mVhCCkdWfellaSmcNLOuu87BsqI\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770747141,\n \"model\": \"gpt-4.1-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"Hello!\",\n \"refusal\": null,\n
        \ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
        \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 1144,\n \"completion_tokens\":
        2,\n \"total_tokens\": 1146,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        1024,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Tue, 10 Feb 2026 18:12:22 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '469'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a helpful assistant. This
      is padding text to ensure the prompt is large enough for caching. "},{"role":"user","content":"Say
      goodbye in one word."}],"model":"gpt-4.1"}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '5825'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7mViSYwB6eFFbBcp045uvPAO8m2e\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770747142,\n \"model\": \"gpt-4.1-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"Farewell.\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        1144,\n \"completion_tokens\": 3,\n \"total_tokens\": 1147,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Tue, 10 Feb 2026 18:12:22 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '468'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,368 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
      uses tools. This is padding text to ensure the prompt is large enough for caching.
      "},{"role":"user","content":"What is the weather in Tokyo?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
      the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
      city name"}},"required":["location"],"additionalProperties":false}}}]}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '6158'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7mVx3s1dI2SICWePwHVeWCDct2QG\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770747157,\n \"model\": \"gpt-4.1-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
        \ \"id\": \"call_x9KzZUT3UYazEUJiRmE0PvaU\",\n \"type\":
        \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
        \ \"arguments\": \"{\\\"location\\\":\\\"Tokyo\\\"}\"\n }\n
        \ }\n ],\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
        14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Tue, 10 Feb 2026 18:12:37 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '645'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
      uses tools. This is padding text to ensure the prompt is large enough for caching.
      "},{"role":"user","content":"What is the weather in Paris?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
      the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
      city name"}},"required":["location"],"additionalProperties":false}}}]}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '6158'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7mVynM0Soyt3osUFrlF7tEyrj7jP\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770747158,\n \"model\": \"gpt-4.1-2025-04-14\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
        \ \"id\": \"call_k8rYmsdMcCWSRKqVDFItmJ8v\",\n \"type\":
        \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
        \ \"arguments\": \"{\\\"location\\\":\\\"Paris\\\"}\"\n }\n
        \ }\n ],\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
        14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Tue, 10 Feb 2026 18:12:38 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '749'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,520 @@
interactions:
- request:
    body: '{"input":[{"role":"user","content":"Say hello in one word."}],"model":"gpt-4.1","instructions":"You
      are a helpful assistant. This is padding text to ensure the prompt is large
      enough for caching. "}'
||||
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '5807'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/responses
response:
body:
string: "{\n \"id\": \"resp_0b352452095088f800698b751350fc8196bd5d8b1a179d27e8\",\n
\ \"object\": \"response\",\n \"created_at\": 1770747155,\n \"status\":
\"completed\",\n \"background\": false,\n \"billing\": {\n \"payer\":
\"developer\"\n },\n \"completed_at\": 1770747155,\n \"error\": null,\n
\ \"frequency_penalty\": 0.0,\n \"incomplete_details\": null,\n \"instructions\":
\"You are a helpful assistant. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. \",\n \"max_output_tokens\":
null,\n \"max_tool_calls\": null,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"output\": [\n {\n \"id\": \"msg_0b352452095088f800698b7513b97c8196b35014840754d999\",\n
\ \"type\": \"message\",\n \"status\": \"completed\",\n \"content\":
[\n {\n \"type\": \"output_text\",\n \"annotations\":
[],\n \"logprobs\": [],\n \"text\": \"Hello!\"\n }\n
\ ],\n \"role\": \"assistant\"\n }\n ],\n \"parallel_tool_calls\":
true,\n \"presence_penalty\": 0.0,\n \"previous_response_id\": null,\n \"prompt_cache_key\":
null,\n \"prompt_cache_retention\": null,\n \"reasoning\": {\n \"effort\":
null,\n \"summary\": null\n },\n \"safety_identifier\": null,\n \"service_tier\":
\"default\",\n \"store\": true,\n \"temperature\": 1.0,\n \"text\": {\n
\ \"format\": {\n \"type\": \"text\"\n },\n \"verbosity\": \"medium\"\n
\ },\n \"tool_choice\": \"auto\",\n \"tools\": [],\n \"top_logprobs\":
0,\n \"top_p\": 1.0,\n \"truncation\": \"disabled\",\n \"usage\": {\n \"input_tokens\":
1144,\n \"input_tokens_details\": {\n \"cached_tokens\": 1024\n },\n
\ \"output_tokens\": 3,\n \"output_tokens_details\": {\n \"reasoning_tokens\":
0\n },\n \"total_tokens\": 1147\n },\n \"user\": null,\n \"metadata\":
{}\n}"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:12:35 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '637'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"input":[{"role":"user","content":"Say goodbye in one word."}],"model":"gpt-4.1","instructions":"You
are a helpful assistant. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. "}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '5809'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/responses
response:
body:
string: "{\n \"id\": \"resp_003a6f71f9ee620400698b75140a088196989e8d5641ffa74d\",\n
\ \"object\": \"response\",\n \"created_at\": 1770747156,\n \"status\":
\"completed\",\n \"background\": false,\n \"billing\": {\n \"payer\":
\"developer\"\n },\n \"completed_at\": 1770747156,\n \"error\": null,\n
\ \"frequency_penalty\": 0.0,\n \"incomplete_details\": null,\n \"instructions\":
\"You are a helpful assistant. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to
ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the
prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is
large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for
caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is
padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. \",\n \"max_output_tokens\":
null,\n \"max_tool_calls\": null,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"output\": [\n {\n \"id\": \"msg_003a6f71f9ee620400698b75146160819692f2cee879df2405\",\n
\ \"type\": \"message\",\n \"status\": \"completed\",\n \"content\":
[\n {\n \"type\": \"output_text\",\n \"annotations\":
[],\n \"logprobs\": [],\n \"text\": \"Farewell.\"\n }\n
\ ],\n \"role\": \"assistant\"\n }\n ],\n \"parallel_tool_calls\":
true,\n \"presence_penalty\": 0.0,\n \"previous_response_id\": null,\n \"prompt_cache_key\":
null,\n \"prompt_cache_retention\": null,\n \"reasoning\": {\n \"effort\":
null,\n \"summary\": null\n },\n \"safety_identifier\": null,\n \"service_tier\":
\"default\",\n \"store\": true,\n \"temperature\": 1.0,\n \"text\": {\n
\ \"format\": {\n \"type\": \"text\"\n },\n \"verbosity\": \"medium\"\n
\ },\n \"tool_choice\": \"auto\",\n \"tools\": [],\n \"top_logprobs\":
0,\n \"top_p\": 1.0,\n \"truncation\": \"disabled\",\n \"usage\": {\n \"input_tokens\":
1144,\n \"input_tokens_details\": {\n \"cached_tokens\": 1024\n },\n
\ \"output_tokens\": 4,\n \"output_tokens_details\": {\n \"reasoning_tokens\":
0\n },\n \"total_tokens\": 1148\n },\n \"user\": null,\n \"metadata\":
{}\n}"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:12:36 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '543'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,368 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
uses tools. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. "},{"role":"user","content":"What is the weather in Tokyo?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '6158'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D7mXQCgT3p3ViImkiqDiZGqLREQtp\",\n \"object\":
\"chat.completion\",\n \"created\": 1770747248,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_9ZqMavn3J1fBnQEaqpYol0Bd\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
\ \"arguments\": \"{\\\"location\\\":\\\"Tokyo\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:14:08 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '484'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are a helpful assistant that
uses tools. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. "},{"role":"user","content":"What is the weather in Paris?"}],"model":"gpt-4.1","tool_choice":"auto","tools":[{"type":"function","function":{"name":"get_weather","description":"Get
the current weather for a location","strict":true,"parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city name"}},"required":["location"],"additionalProperties":false}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '6158'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D7mXR8k9vk8TlGvGXlrQSI7iNeAN1\",\n \"object\":
\"chat.completion\",\n \"created\": 1770747249,\n \"model\": \"gpt-4.1-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_6PeUBlRPG8JcV2lspmLjJbnn\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"get_weather\",\n
\ \"arguments\": \"{\\\"location\\\":\\\"Paris\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 1187,\n \"completion_tokens\":
14,\n \"total_tokens\": 1201,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
1152,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_8b22347a3e\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 10 Feb 2026 18:14:09 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '528'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,375 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a helpful assistant. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
This is padding text to ensure the prompt is large enough for caching. This
is padding text to ensure the prompt is large enough for caching. This is padding
text to ensure the prompt is large enough for caching. This is padding text
to ensure the prompt is large enough for caching. This is padding text to ensure
the prompt is large enough for caching. This is padding text to ensure the prompt
is large enough for caching. This is padding text to ensure the prompt is large
enough for caching. This is padding text to ensure the prompt is large enough
for caching. This is padding text to ensure the prompt is large enough for caching.
"},{"role":"user","content":"Say hello in one word."}],"model":"gpt-4.1","stream":true,"stream_options":{"include_usage":true}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '5877'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
|
||||
body:
|
||||
string: 'data: {"id":"chatcmpl-D7mVuXauQqcmOCb3XP6IL6yHwJaAL","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"lFWRn007xqlce"}
|
||||
|
||||
|
||||
data: {"id":"chatcmpl-D7mVuXauQqcmOCb3XP6IL6yHwJaAL","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"OXJHANtgvy"}
|
||||
|
||||
|
||||
data: {"id":"chatcmpl-D7mVuXauQqcmOCb3XP6IL6yHwJaAL","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"AZtd6jtoChevtm"}
|
||||
|
||||
|
||||
data: {"id":"chatcmpl-D7mVuXauQqcmOCb3XP6IL6yHwJaAL","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null,"obfuscation":"irwn2mqyB"}
|
||||
|
||||
|
||||
data: {"id":"chatcmpl-D7mVuXauQqcmOCb3XP6IL6yHwJaAL","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[],"usage":{"prompt_tokens":1144,"completion_tokens":2,"total_tokens":1146,"prompt_tokens_details":{"cached_tokens":1024,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}},"obfuscation":"W0rkiiZe"}
|
||||
|
||||
|
||||
data: [DONE]
|
||||
|
||||
|
||||
'
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- text/event-stream; charset=utf-8
|
||||
Date:
|
||||
- Tue, 10 Feb 2026 18:12:34 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '236'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
set-cookie:
|
||||
- SET-COOKIE-XXX
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
    body: '{"messages":[{"role":"system","content":"You are a helpful assistant. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      This is padding text to ensure the prompt is large enough for caching. This
      is padding text to ensure the prompt is large enough for caching. This is padding
      text to ensure the prompt is large enough for caching. This is padding text
      to ensure the prompt is large enough for caching. This is padding text to ensure
      the prompt is large enough for caching. This is padding text to ensure the prompt
      is large enough for caching. This is padding text to ensure the prompt is large
      enough for caching. This is padding text to ensure the prompt is large enough
      for caching. This is padding text to ensure the prompt is large enough for caching.
      "},{"role":"user","content":"Say goodbye in one word."}],"model":"gpt-4.1","stream":true,"stream_options":{"include_usage":true}}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '5879'
      content-type:
      - application/json
      cookie:
      - COOKIE-XXX
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"role":"assistant","content":"","refusal":null},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"pCjdYd4kX4W2q"}


        data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"content":"Fare"},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"DJ94I8XQj86"}


        data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"content":"well"},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"qgSSFwDBmaW"}


        data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}],"usage":null,"obfuscation":"4xVBYer6Uy1atr"}


        data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}],"usage":null,"obfuscation":"XxMhsMje0"}


        data: {"id":"chatcmpl-D7mVuqaadwp22jFsp2qAKiE1utU3K","object":"chat.completion.chunk","created":1770747154,"model":"gpt-4.1-2025-04-14","service_tier":"default","system_fingerprint":"fp_8b22347a3e","choices":[],"usage":{"prompt_tokens":1144,"completion_tokens":3,"total_tokens":1147,"prompt_tokens_details":{"cached_tokens":1024,"audio_tokens":0},"completion_tokens_details":{"reasoning_tokens":0,"audio_tokens":0,"accepted_prediction_tokens":0,"rejected_prediction_tokens":0}},"obfuscation":"J3eKDOHW"}


        data: [DONE]


        '
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 10 Feb 2026 18:12:34 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '296'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,113 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Writer. You are a skilled
      writer.\nYour personal goal is: Write concise content"},{"role":"user","content":"\nCurrent
      Task: Write one sentence about the sun.\n\nThis is the expected criteria for
      your final answer: A single sentence about the sun.\nyou MUST return the actual
      complete content as the final answer, not a summary.\n\nProvide your complete
      response:"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '453'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7RxEngFVCbqdc7tNjV3VjeteqcwT\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668124,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"The sun is a massive ball of glowing
        gas at the center of our solar system, providing light and warmth essential
        for life on Earth.\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 78,\n \"completion_tokens\":
        27,\n \"total_tokens\": 105,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:15:25 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '664'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,120 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are Researcher. You are an
      expert researcher.\nYour personal goal is: Find information about Python programming"},{"role":"user","content":"\nCurrent
      Task: What is Python? Give a brief answer.\n\nThis is the expected criteria
      for your final answer: A short description of Python.\nyou MUST return the actual
      complete content as the final answer, not a summary.\n\nProvide your complete
      response:"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '482'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7RxRv3U0LCLf2iqf40wxOQsuiYFR\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668137,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"Python is a high-level, interpreted
        programming language known for its readability and simplicity. It was created
        by Guido van Rossum and first released in 1991. Python supports multiple programming
        paradigms, including procedural, object-oriented, and functional programming.
        It has a large standard library and is widely used for web development, data
        analysis, artificial intelligence, scientific computing, and automation, among
        other applications. Python's syntax emphasizes code readability, allowing
        developers to express concepts in fewer lines of code compared to other languages.
        Its active community and extensive ecosystem of libraries and frameworks make
        it a popular choice for both beginners and experienced programmers.\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        82,\n \"completion_tokens\": 123,\n \"total_tokens\": 205,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:15:39 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '2467'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,435 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Explain the
      Python package ecosystem. How does pip work, what is PyPI, and what are virtual
      environments? Compare pip with conda and uv.\n\n[ASSISTANT]: PyPI (Python Package
      Index) is the official repository hosting 400k+ packages. pip is the standard
      package installer that downloads from PyPI. Virtual environments (venv) create
      isolated Python installations to avoid dependency conflicts between projects.
      conda is a cross-language package manager popular in data science that can manage
      non-Python dependencies. uv is a new Rust-based tool that is 10-100x faster
      than pip and aims to replace pip, pip-tools, and virtualenv with a single unified
      tool.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task Overview**:
      What is the agent trying to accomplish?\n2. **Current State**: What has been
      completed so far? What step is the agent on?\n3. **Important Discoveries**:
      Key facts, data, tool results, or findings that must not be lost.\n4. **Next
      Steps**: What should the agent do next based on the conversation?\n5. **Context
      to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
      in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
      structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1687'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S93xpUu9d5twM82uJOZpurQTD5u\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668857,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking an explanation of the Python package ecosystem, specifically
        focusing on how pip works, the role of PyPI, the concept of virtual environments,
        and a comparison between pip, conda, and uv.\\n\\n2. **Current State**: The
        assistant has provided a comprehensive overview of the Python package ecosystem,
        including definitions and comparisons of pip, PyPI, virtual environments,
        conda, and uv.\\n\\n3. **Important Discoveries**:\\n - PyPI (Python Package
        Index) is the official repository with over 400,000 packages.\\n - pip is
        the standard package installer that downloads packages from PyPI.\\n - Virtual
        environments (venv) allow for isolated Python installations to prevent dependency
        conflicts.\\n - conda is a cross-language package manager, particularly
        popular in data science, that can manage non-Python dependencies.\\n - uv
        is a new Rust-based tool that is significantly faster than pip (10-100x) and
        aims to unify the functionalities of pip, pip-tools, and virtualenv.\\n\\n4.
        **Next Steps**: The agent should consider providing further details on how
        to use pip, conda, and uv, including installation commands, examples of creating
        virtual environments, and any specific use cases for each tool.\\n\\n5. **Context
        to Preserve**: \\n - PyPI: Python Package Index, hosting 400k+ packages.\\n
        \ - pip: Standard package installer for Python.\\n - Virtual environments
        (venv): Isolated Python installations.\\n - conda: Cross-language package
        manager for data science.\\n - uv: Rust-based tool, 10-100x faster than
        pip, aims to replace pip, pip-tools, and virtualenv.\\n</summary>\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        333,\n \"completion_tokens\": 354,\n \"total_tokens\": 687,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:27:42 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '4879'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Tell me about
      the history of the Python programming language. Who created it, when was it
      first released, and what were the main design goals? Please provide a detailed
      overview covering the major milestones from its inception through Python 3.\n\n[ASSISTANT]:
      Python was created by Guido van Rossum and first released in 1991. The main
      design goals were code readability and simplicity. Key milestones: Python 1.0
      (1994) introduced functional programming tools like lambda and map. Python 2.0
      (2000) added list comprehensions and garbage collection. Python 3.0 (2008) was
      a major backward-incompatible release that fixed fundamental design flaws. Python
      2 reached end-of-life in January 2020.\n</conversation>\n\nCreate a summary
      with these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2.
      **Current State**: What has been completed so far? What step is the agent on?\n3.
      **Important Discoveries**: Key facts, data, tool results, or findings that must
      not be lost.\n4. **Next Steps**: What should the agent do next based on the
      conversation?\n5. **Context to Preserve**: Any specific values, names, URLs,
      code snippets, or details referenced in the conversation.\n\nWrap your entire
      summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1726'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S93rBUMAtEdwdI6Y2ga0s50IFtv\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668857,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking a detailed overview of the history of the Python programming
        language, including its creator, initial release date, main design goals,
        and major milestones up to Python 3.\\n\\n2. **Current State**: The assistant
        has provided a comprehensive response detailing the history of Python, including
        its creator (Guido van Rossum), first release (1991), main design goals (code
        readability and simplicity), and key milestones (Python 1.0 in 1994, Python
        2.0 in 2000, and Python 3.0 in 2008).\\n\\n3. **Important Discoveries**: \\n
        \ - Python was created by Guido van Rossum.\\n - First released in 1991.\\n
        \ - Main design goals: code readability and simplicity.\\n - Key milestones:\\n
        \ - Python 1.0 (1994): Introduced functional programming tools like lambda
        and map.\\n - Python 2.0 (2000): Added list comprehensions and garbage
        collection.\\n - Python 3.0 (2008): Major backward-incompatible release
        that fixed fundamental design flaws.\\n - Python 2 reached end-of-life in
        January 2020.\\n\\n4. **Next Steps**: The agent should be prepared to provide
        additional details or answer follow-up questions regarding Python's features,
        community, or specific use cases if the user requests more information.\\n\\n5.
        **Context to Preserve**: \\n - Creator: Guido van Rossum\\n - Initial
        release: 1991\\n - Milestones: \\n - Python 1.0 (1994)\\n - Python
        2.0 (2000)\\n - Python 3.0 (2008)\\n - End-of-life for Python 2: January
        2020\\n</summary>\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 346,\n \"completion_tokens\":
        372,\n \"total_tokens\": 718,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_7e4bf6ad56\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:27:42 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '5097'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: What about
      the async/await features? When were they introduced and how do they compare
      to similar features in JavaScript and C#? Also explain the Global Interpreter
      Lock and its implications.\n\n[ASSISTANT]: Async/await was introduced in Python
      3.5 (PEP 492, 2015). Unlike JavaScript which is single-threaded by design, Python''s
      asyncio is an opt-in framework. C# introduced async/await in 2012 (C# 5.0) and
      was a major inspiration for Python''s implementation. The GIL (Global Interpreter
      Lock) is a mutex that protects access to Python objects, preventing multiple
      threads from executing Python bytecodes simultaneously. This means CPU-bound
      multithreaded programs don''t benefit from multiple cores. PEP 703 proposes
      making the GIL optional in CPython.\n</conversation>\n\nCreate a summary with
      these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2.
      **Current State**: What has been completed so far? What step is the agent on?\n3.
      **Important Discoveries**: Key facts, data, tool results, or findings that must
      not be lost.\n4. **Next Steps**: What should the agent do next based on the
      conversation?\n5. **Context to Preserve**: Any specific values, names, URLs,
      code snippets, or details referenced in the conversation.\n\nWrap your entire
      summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1786'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S94auQYOLDTKfRzdluGiWAomSqd\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668858,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking information about the async/await features in Python, their
        introduction timeline, comparisons with similar features in JavaScript and
        C#, and an explanation of the Global Interpreter Lock (GIL) and its implications.\\n\\n2.
        **Current State**: The assistant has provided information regarding the introduction
        of async/await in Python (version 3.5, PEP 492 in 2015), comparisons with
        JavaScript and C# (C# introduced async/await in 2012), and an explanation
        of the GIL.\\n\\n3. **Important Discoveries**: \\n - Async/await was introduced
        in Python 3.5 (PEP 492, 2015).\\n - JavaScript is single-threaded, while
        Python's asyncio is an opt-in framework.\\n - C# introduced async/await
        in 2012 (C# 5.0) and influenced Python's implementation.\\n - The GIL (Global
        Interpreter Lock) is a mutex that prevents multiple threads from executing
        Python bytecodes simultaneously, affecting CPU-bound multithreaded programs.\\n
        \ - PEP 703 proposes making the GIL optional in CPython.\\n\\n4. **Next Steps**:
        The agent should consider providing more detailed comparisons of async/await
        features between Python, JavaScript, and C#, as well as further implications
        of the GIL and PEP 703.\\n\\n5. **Context to Preserve**: \\n - Python async/await
        introduction: 3.5 (PEP 492, 2015)\\n - C# async/await introduction: 2012
        (C# 5.0)\\n - GIL (Global Interpreter Lock) explanation and implications.\\n
        \ - Reference to PEP 703 regarding the GIL.\\n</summary>\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        364,\n \"completion_tokens\": 368,\n \"total_tokens\": 732,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:27:44 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '6339'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
version: 1
@@ -0,0 +1,435 @@
interactions:
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Explain the
      Python package ecosystem. How does pip work, what is PyPI, and what are virtual
      environments? Compare pip with conda and uv.\n\n[ASSISTANT]: PyPI (Python Package
      Index) is the official repository hosting 400k+ packages. pip is the standard
      package installer that downloads from PyPI. Virtual environments (venv) create
      isolated Python installations to avoid dependency conflicts between projects.
      conda is a cross-language package manager popular in data science that can manage
      non-Python dependencies. uv is a new Rust-based tool that is 10-100x faster
      than pip and aims to replace pip, pip-tools, and virtualenv with a single unified
      tool.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task Overview**:
      What is the agent trying to accomplish?\n2. **Current State**: What has been
      completed so far? What step is the agent on?\n3. **Important Discoveries**:
      Key facts, data, tool results, or findings that must not be lost.\n4. **Next
      Steps**: What should the agent do next based on the conversation?\n5. **Context
      to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
      in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
      structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1687'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S9PnjkuCMHqU912kcH8G5zIIxQU\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668879,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking an explanation of the Python package ecosystem, specifically
        focusing on how pip works, the role of PyPI, the concept of virtual environments,
        and a comparison between pip, conda, and uv.\\n\\n2. **Current State**: The
        assistant has provided a comprehensive overview of the Python package ecosystem,
        including definitions and comparisons of pip, PyPI, virtual environments,
        conda, and uv.\\n\\n3. **Important Discoveries**:\\n - PyPI (Python Package
        Index) is the official repository with over 400,000 packages.\\n - pip is
        the standard package installer that downloads packages from PyPI.\\n - Virtual
        environments (venv) allow for isolated Python installations to prevent dependency
        conflicts.\\n - conda is a cross-language package manager, particularly
        popular in data science, that can manage non-Python dependencies.\\n - uv
        is a new Rust-based tool that is significantly faster than pip (10-100x) and
        aims to unify the functionalities of pip, pip-tools, and virtualenv.\\n\\n4.
        **Next Steps**: The agent should consider providing further details or examples
        on how to use pip, conda, and uv, as well as practical applications of virtual
        environments in Python projects.\\n\\n5. **Context to Preserve**: \\n -
        PyPI: Python Package Index, hosting 400k+ packages.\\n - pip: Standard package
        installer for Python.\\n - Virtual environments (venv): Isolated Python
        installations.\\n - conda: Cross-language package manager for data science.\\n
        \ - uv: Rust-based tool, 10-100x faster than pip, aims to replace pip, pip-tools,
        and virtualenv.\\n</summary>\",\n \"refusal\": null,\n \"annotations\":
        []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
        \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 333,\n \"completion_tokens\":
        349,\n \"total_tokens\": 682,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
        0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:28:04 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '4979'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Tell me about
      the history of the Python programming language. Who created it, when was it
      first released, and what were the main design goals? Please provide a detailed
      overview covering the major milestones from its inception through Python 3.\n\n[ASSISTANT]:
      Python was created by Guido van Rossum and first released in 1991. The main
      design goals were code readability and simplicity. Key milestones: Python 1.0
      (1994) introduced functional programming tools like lambda and map. Python 2.0
      (2000) added list comprehensions and garbage collection. Python 3.0 (2008) was
      a major backward-incompatible release that fixed fundamental design flaws. Python
      2 reached end-of-life in January 2020.\n</conversation>\n\nCreate a summary
      with these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2.
      **Current State**: What has been completed so far? What step is the agent on?\n3.
      **Important Discoveries**: Key facts, data, tool results, or findings that must
      not be lost.\n4. **Next Steps**: What should the agent do next based on the
      conversation?\n5. **Context to Preserve**: Any specific values, names, URLs,
      code snippets, or details referenced in the conversation.\n\nWrap your entire
      summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1726'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S9PqglWRu0PEoMRHyOiRnpn3yqU\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668879,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking a detailed overview of the history of the Python programming
        language, including its creator, initial release date, main design goals,
        and major milestones up to Python 3.\\n\\n2. **Current State**: The assistant
        has provided a comprehensive response detailing the history of Python, including
        its creator (Guido van Rossum), first release (1991), main design goals (code
        readability and simplicity), and key milestones (Python 1.0 in 1994, Python
        2.0 in 2000, and Python 3.0 in 2008).\\n\\n3. **Important Discoveries**: \\n
        \ - Python was created by Guido van Rossum.\\n - First released in 1991.\\n
        \ - Main design goals: code readability and simplicity.\\n - Key milestones:\\n
        \ - Python 1.0 (1994): Introduced functional programming tools like lambda
        and map.\\n - Python 2.0 (2000): Added list comprehensions and garbage
        collection.\\n - Python 3.0 (2008): Major backward-incompatible release
        that fixed fundamental design flaws.\\n - Python 2 reached end-of-life in
        January 2020.\\n\\n4. **Next Steps**: The agent should be prepared to provide
        further details or answer any follow-up questions the user may have regarding
        Python's history or its features.\\n\\n5. **Context to Preserve**: \\n -
        Creator: Guido van Rossum\\n - First release: 1991\\n - Milestones: \\n
        \ - Python 1.0 (1994)\\n - Python 2.0 (2000)\\n - Python 3.0 (2008)\\n
        \ - End-of-life for Python 2: January 2020\\n</summary>\",\n \"refusal\":
        null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
        \ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
        346,\n \"completion_tokens\": 367,\n \"total_tokens\": 713,\n \"prompt_tokens_details\":
        {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
        {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
        0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
        \"default\",\n \"system_fingerprint\": \"fp_7e4bf6ad56\"\n}\n"
    headers:
      CF-RAY:
      - CF-RAY-XXX
      Connection:
      - keep-alive
      Content-Type:
      - application/json
      Date:
      - Mon, 09 Feb 2026 20:28:04 GMT
      Server:
      - cloudflare
      Strict-Transport-Security:
      - STS-XXX
      Transfer-Encoding:
      - chunked
      X-Content-Type-Options:
      - X-CONTENT-TYPE-XXX
      access-control-expose-headers:
      - ACCESS-CONTROL-XXX
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - OPENAI-ORG-XXX
      openai-processing-ms:
      - '5368'
      openai-project:
      - OPENAI-PROJECT-XXX
      openai-version:
      - '2020-10-01'
      set-cookie:
      - SET-COOKIE-XXX
      x-openai-proxy-wasm:
      - v0.1
      x-ratelimit-limit-requests:
      - X-RATELIMIT-LIMIT-REQUESTS-XXX
      x-ratelimit-limit-tokens:
      - X-RATELIMIT-LIMIT-TOKENS-XXX
      x-ratelimit-remaining-requests:
      - X-RATELIMIT-REMAINING-REQUESTS-XXX
      x-ratelimit-remaining-tokens:
      - X-RATELIMIT-REMAINING-TOKENS-XXX
      x-ratelimit-reset-requests:
      - X-RATELIMIT-RESET-REQUESTS-XXX
      x-ratelimit-reset-tokens:
      - X-RATELIMIT-RESET-TOKENS-XXX
      x-request-id:
      - X-REQUEST-ID-XXX
    status:
      code: 200
      message: OK
- request:
    body: '{"messages":[{"role":"system","content":"You are a precise assistant that
      creates structured summaries of agent conversations. You preserve critical context
      needed for seamless task continuation."},{"role":"user","content":"Analyze the
      following conversation and create a structured summary that preserves all information
      needed to continue the task seamlessly.\n\n<conversation>\n[USER]: What about
      the async/await features? When were they introduced and how do they compare
      to similar features in JavaScript and C#? Also explain the Global Interpreter
      Lock and its implications.\n\n[ASSISTANT]: Async/await was introduced in Python
      3.5 (PEP 492, 2015). Unlike JavaScript which is single-threaded by design, Python''s
      asyncio is an opt-in framework. C# introduced async/await in 2012 (C# 5.0) and
      was a major inspiration for Python''s implementation. The GIL (Global Interpreter
      Lock) is a mutex that protects access to Python objects, preventing multiple
      threads from executing Python bytecodes simultaneously. This means CPU-bound
      multithreaded programs don''t benefit from multiple cores. PEP 703 proposes
      making the GIL optional in CPython.\n</conversation>\n\nCreate a summary with
      these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2.
      **Current State**: What has been completed so far? What step is the agent on?\n3.
      **Important Discoveries**: Key facts, data, tool results, or findings that must
      not be lost.\n4. **Next Steps**: What should the agent do next based on the
      conversation?\n5. **Context to Preserve**: Any specific values, names, URLs,
      code snippets, or details referenced in the conversation.\n\nWrap your entire
      summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
    headers:
      User-Agent:
      - X-USER-AGENT-XXX
      accept:
      - application/json
      accept-encoding:
      - ACCEPT-ENCODING-XXX
      authorization:
      - AUTHORIZATION-XXX
      connection:
      - keep-alive
      content-length:
      - '1786'
      content-type:
      - application/json
      host:
      - api.openai.com
      x-stainless-arch:
      - X-STAINLESS-ARCH-XXX
      x-stainless-async:
      - async:asyncio
      x-stainless-lang:
      - python
      x-stainless-os:
      - X-STAINLESS-OS-XXX
      x-stainless-package-version:
      - 1.83.0
      x-stainless-read-timeout:
      - X-STAINLESS-READ-TIMEOUT-XXX
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.13.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: "{\n \"id\": \"chatcmpl-D7S9Pcl5ybKLH8cSEZ6hgPuvj5iCv\",\n \"object\":
        \"chat.completion\",\n \"created\": 1770668879,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
        \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
        \"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
        user is seeking information about the async/await features in Python, their
|
||||
introduction timeline, comparisons with similar features in JavaScript and
|
||||
C#, and an explanation of the Global Interpreter Lock (GIL) and its implications.\\n\\n2.
|
||||
**Current State**: The assistant has provided information regarding the introduction
|
||||
of async/await in Python (version 3.5, PEP 492 in 2015), comparisons with
|
||||
JavaScript and C# (C# introduced async/await in 2012), and an explanation
|
||||
of the GIL.\\n\\n3. **Important Discoveries**: \\n - Async/await was introduced
|
||||
in Python 3.5 (PEP 492, 2015).\\n - JavaScript is single-threaded, while
|
||||
Python's asyncio is an opt-in framework.\\n - C# introduced async/await
|
||||
in 2012 (C# 5.0) and influenced Python's implementation.\\n - The GIL (Global
|
||||
Interpreter Lock) is a mutex that prevents multiple threads from executing
|
||||
Python bytecodes simultaneously, affecting CPU-bound multithreaded programs.\\n
|
||||
\ - PEP 703 proposes making the GIL optional in CPython.\\n\\n4. **Next Steps**:
|
||||
The agent should consider providing further details on how async/await is
|
||||
implemented in Python, JavaScript, and C#, and explore the implications of
|
||||
the GIL in more depth, including potential alternatives or workarounds.\\n\\n5.
|
||||
**Context to Preserve**: \\n - Python async/await introduction: version
|
||||
3.5, PEP 492, 2015.\\n - C# async/await introduction: 2012, C# 5.0.\\n -
|
||||
GIL (Global Interpreter Lock) and its implications on multithreading in Python.\\n
|
||||
\ - Reference to PEP 703 regarding the GIL.\\n</summary>\",\n \"refusal\":
|
||||
null,\n \"annotations\": []\n },\n \"logprobs\": null,\n
|
||||
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
|
||||
364,\n \"completion_tokens\": 381,\n \"total_tokens\": 745,\n \"prompt_tokens_details\":
|
||||
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
|
||||
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
|
||||
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
|
||||
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
|
||||
headers:
|
||||
CF-RAY:
|
||||
- CF-RAY-XXX
|
||||
Connection:
|
||||
- keep-alive
|
||||
Content-Type:
|
||||
- application/json
|
||||
Date:
|
||||
- Mon, 09 Feb 2026 20:28:04 GMT
|
||||
Server:
|
||||
- cloudflare
|
||||
Strict-Transport-Security:
|
||||
- STS-XXX
|
||||
Transfer-Encoding:
|
||||
- chunked
|
||||
X-Content-Type-Options:
|
||||
- X-CONTENT-TYPE-XXX
|
||||
access-control-expose-headers:
|
||||
- ACCESS-CONTROL-XXX
|
||||
alt-svc:
|
||||
- h3=":443"; ma=86400
|
||||
cf-cache-status:
|
||||
- DYNAMIC
|
||||
openai-organization:
|
||||
- OPENAI-ORG-XXX
|
||||
openai-processing-ms:
|
||||
- '5489'
|
||||
openai-project:
|
||||
- OPENAI-PROJECT-XXX
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
set-cookie:
|
||||
- SET-COOKIE-XXX
|
||||
x-openai-proxy-wasm:
|
||||
- v0.1
|
||||
x-ratelimit-limit-requests:
|
||||
- X-RATELIMIT-LIMIT-REQUESTS-XXX
|
||||
x-ratelimit-limit-tokens:
|
||||
- X-RATELIMIT-LIMIT-TOKENS-XXX
|
||||
x-ratelimit-remaining-requests:
|
||||
- X-RATELIMIT-REMAINING-REQUESTS-XXX
|
||||
x-ratelimit-remaining-tokens:
|
||||
- X-RATELIMIT-REMAINING-TOKENS-XXX
|
||||
x-ratelimit-reset-requests:
|
||||
- X-RATELIMIT-RESET-REQUESTS-XXX
|
||||
x-ratelimit-reset-tokens:
|
||||
- X-RATELIMIT-RESET-TOKENS-XXX
|
||||
x-request-id:
|
||||
- X-REQUEST-ID-XXX
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
version: 1
|
||||
@@ -0,0 +1,136 @@
interactions:
- request:
body: '{"max_tokens":4096,"messages":[{"role":"user","content":"Analyze the following
conversation and create a structured summary that preserves all information
needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Research
the latest developments in large language models. Focus on architecture improvements
and training techniques.\n\n[ASSISTANT]: I''ll research the latest developments
in large language models. Based on my knowledge, recent advances include:\n1.
Mixture of Experts (MoE) architectures\n2. Improved attention mechanisms like
Flash Attention\n3. Better training data curation techniques\n4. Constitutional
AI and RLHF improvements\n\n[USER]: Can you go deeper on the MoE architectures?
What are the key papers?\n\n[ASSISTANT]: Key papers on Mixture of Experts:\n-
Switch Transformers (Google, 2021) - simplified MoE routing\n- GShard - scaling
to 600B parameters\n- Mixtral (Mistral AI) - open-source MoE model\nThe main
advantage is computational efficiency: only a subset of experts is activated
per token.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task
Overview**: What is the agent trying to accomplish?\n2. **Current State**: What
has been completed so far? What step is the agent on?\n3. **Important Discoveries**:
Key facts, data, tool results, or findings that must not be lost.\n4. **Next
Steps**: What should the agent do next based on the conversation?\n5. **Context
to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
structured summary here]\n</summary>"}],"model":"claude-3-5-haiku-latest","stream":false,"system":"You
are a precise assistant that creates structured summaries of agent conversations.
You preserve critical context needed for seamless task continuation.","temperature":0}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
anthropic-version:
- '2023-06-01'
connection:
- keep-alive
content-length:
- '1870'
content-type:
- application/json
host:
- api.anthropic.com
x-api-key:
- X-API-KEY-XXX
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 0.73.0
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
x-stainless-timeout:
- NOT_GIVEN
method: POST
uri: https://api.anthropic.com/v1/messages
response:
body:
string: '{"model":"claude-3-5-haiku-20241022","id":"msg_01SK3LP6RedPBmpvD1HfKD23","type":"message","role":"assistant","content":[{"type":"text","text":"<summary>\n1.
**Task Overview**:\n- Research latest developments in large language models\n-
Focus on architecture improvements and training techniques\n\n2. **Current
State**:\n- Initial research completed on broad developments\n- Currently
exploring Mixture of Experts (MoE) architectures in depth\n- Detailed discussion
of key MoE research papers initiated\n\n3. **Important Discoveries**:\nMoE
Architecture Insights:\n- Computational efficiency through selective expert
activation\n- Key research papers:\n * Switch Transformers (Google, 2021)\n *
GShard\n * Mixtral (Mistral AI)\n- Main benefit: Only subset of experts activated
per token\n\n4. **Next Steps**:\n- Conduct deeper analysis of MoE architecture
mechanisms\n- Compare routing strategies across different MoE implementations\n-
Investigate performance metrics and scalability of MoE models\n\n5. **Context
to Preserve**:\n- Research Focus: Large Language Model Architectures\n- Specific
Interest: Mixture of Experts (MoE) Architectures\n- Key Researchers/Organizations:
Google, Mistral AI\n- Years of Significant Papers: 2021 onwards\n</summary>"}],"stop_reason":"end_turn","stop_sequence":null,"usage":{"input_tokens":400,"cache_creation_input_tokens":0,"cache_read_input_tokens":0,"cache_creation":{"ephemeral_5m_input_tokens":0,"ephemeral_1h_input_tokens":0},"output_tokens":270,"service_tier":"standard","inference_geo":"not_available"}}'
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Security-Policy:
- CSP-FILTERED
Content-Type:
- application/json
Date:
- Mon, 09 Feb 2026 20:18:41 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Robots-Tag:
- none
anthropic-organization-id:
- ANTHROPIC-ORGANIZATION-ID-XXX
anthropic-ratelimit-input-tokens-limit:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-input-tokens-remaining:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-input-tokens-reset:
- ANTHROPIC-RATELIMIT-INPUT-TOKENS-RESET-XXX
anthropic-ratelimit-output-tokens-limit:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-LIMIT-XXX
anthropic-ratelimit-output-tokens-remaining:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-REMAINING-XXX
anthropic-ratelimit-output-tokens-reset:
- ANTHROPIC-RATELIMIT-OUTPUT-TOKENS-RESET-XXX
anthropic-ratelimit-requests-limit:
- '4000'
anthropic-ratelimit-requests-remaining:
- '3999'
anthropic-ratelimit-requests-reset:
- '2026-02-09T20:18:35Z'
anthropic-ratelimit-tokens-limit:
- ANTHROPIC-RATELIMIT-TOKENS-LIMIT-XXX
anthropic-ratelimit-tokens-remaining:
- ANTHROPIC-RATELIMIT-TOKENS-REMAINING-XXX
anthropic-ratelimit-tokens-reset:
- ANTHROPIC-RATELIMIT-TOKENS-RESET-XXX
cf-cache-status:
- DYNAMIC
request-id:
- REQUEST-ID-XXX
strict-transport-security:
- STS-XXX
x-envoy-upstream-service-time:
- '5639'
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,110 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are a precise assistant
that creates structured summaries of agent conversations. You preserve critical
context needed for seamless task continuation."}, {"role": "user", "content":
"Analyze the following conversation and create a structured summary that preserves
all information needed to continue the task seamlessly.\n\n<conversation>\n[USER]:
Research the latest developments in large language models. Focus on architecture
improvements and training techniques.\n\n[ASSISTANT]: I''ll research the latest
developments in large language models. Based on my knowledge, recent advances
include:\n1. Mixture of Experts (MoE) architectures\n2. Improved attention mechanisms
like Flash Attention\n3. Better training data curation techniques\n4. Constitutional
AI and RLHF improvements\n\n[USER]: Can you go deeper on the MoE architectures?
What are the key papers?\n\n[ASSISTANT]: Key papers on Mixture of Experts:\n-
Switch Transformers (Google, 2021) - simplified MoE routing\n- GShard - scaling
to 600B parameters\n- Mixtral (Mistral AI) - open-source MoE model\nThe main
advantage is computational efficiency: only a subset of experts is activated
per token.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task
Overview**: What is the agent trying to accomplish?\n2. **Current State**: What
has been completed so far? What step is the agent on?\n3. **Important Discoveries**:
Key facts, data, tool results, or findings that must not be lost.\n4. **Next
Steps**: What should the agent do next based on the conversation?\n5. **Context
to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
structured summary here]\n</summary>"}], "stream": false, "temperature": 0}'
headers:
Accept:
- application/json
Connection:
- keep-alive
Content-Length:
- '1849'
Content-Type:
- application/json
User-Agent:
- X-USER-AGENT-XXX
accept-encoding:
- ACCEPT-ENCODING-XXX
api-key:
- X-API-KEY-XXX
authorization:
- AUTHORIZATION-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
method: POST
uri: https://fake-azure-endpoint.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-12-01-preview
response:
body:
string: '{"choices":[{"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"protected_material_code":{"filtered":false,"detected":false},"protected_material_text":{"filtered":false,"detected":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"finish_reason":"stop","index":0,"logprobs":null,"message":{"annotations":[],"content":"\u003csummary\u003e\n1.
**Task Overview**: The user has requested research on the latest developments
in large language models, specifically focusing on architecture improvements
and training techniques.\n\n2. **Current State**: The assistant has provided
an initial overview of recent advances in large language models, including
Mixture of Experts (MoE) architectures, improved attention mechanisms, better
training data curation techniques, and advancements in Constitutional AI and
Reinforcement Learning from Human Feedback (RLHF).\n\n3. **Important Discoveries**:
\n - Recent advances in large language models include:\n 1. Mixture
of Experts (MoE) architectures\n 2. Improved attention mechanisms like
Flash Attention\n 3. Better training data curation techniques\n 4.
Constitutional AI and RLHF improvements\n - Key papers on Mixture of Experts:\n -
Switch Transformers (Google, 2021) - simplified MoE routing\n - GShard
- scaling to 600B parameters\n - Mixtral (Mistral AI) - open-source MoE
model\n - The main advantage of MoE architectures is computational efficiency,
as only a subset of experts is activated per token.\n\n4. **Next Steps**:
The assistant should delve deeper into the Mixture of Experts architectures,
potentially summarizing the key findings and implications from the identified
papers.\n\n5. **Context to Preserve**: \n - Key papers: \n - Switch
Transformers (Google, 2021)\n - GShard\n - Mixtral (Mistral AI)\n -
Focus on computational efficiency of MoE architectures.\n\u003c/summary\u003e","refusal":null,"role":"assistant"}}],"created":1770849953,"id":"chatcmpl-D8DFx1H1zzEerW5H0BWfuwmio2sz1","model":"gpt-4o-mini-2024-07-18","object":"chat.completion","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"jailbreak":{"filtered":false,"detected":false},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"system_fingerprint":"fp_f97eff32c5","usage":{"completion_tokens":328,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens":368,"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0},"total_tokens":696}}

'
headers:
Content-Length:
- '2786'
Content-Type:
- application/json
Date:
- Wed, 11 Feb 2026 22:45:56 GMT
Strict-Transport-Security:
- STS-XXX
apim-request-id:
- APIM-REQUEST-ID-XXX
azureml-model-session:
- AZUREML-MODEL-SESSION-XXX
x-accel-buffering:
- 'no'
x-content-type-options:
- X-CONTENT-TYPE-XXX
x-ms-client-request-id:
- X-MS-CLIENT-REQUEST-ID-XXX
x-ms-deployment-name:
- gpt-4o-mini
x-ms-rai-invoked:
- 'true'
x-ms-region:
- X-MS-REGION-XXX
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,103 @@
interactions:
- request:
body: '{"contents": [{"parts": [{"text": "Analyze the following conversation and
create a structured summary that preserves all information needed to continue
the task seamlessly.\n\n<conversation>\n[USER]: Research the latest developments
in large language models. Focus on architecture improvements and training techniques.\n\n[ASSISTANT]:
I''ll research the latest developments in large language models. Based on my
knowledge, recent advances include:\n1. Mixture of Experts (MoE) architectures\n2.
Improved attention mechanisms like Flash Attention\n3. Better training data
curation techniques\n4. Constitutional AI and RLHF improvements\n\n[USER]: Can
you go deeper on the MoE architectures? What are the key papers?\n\n[ASSISTANT]:
Key papers on Mixture of Experts:\n- Switch Transformers (Google, 2021) - simplified
MoE routing\n- GShard - scaling to 600B parameters\n- Mixtral (Mistral AI) -
open-source MoE model\nThe main advantage is computational efficiency: only
a subset of experts is activated per token.\n</conversation>\n\nCreate a summary
with these sections:\n1. **Task Overview**: What is the agent trying to accomplish?\n2.
**Current State**: What has been completed so far? What step is the agent on?\n3.
**Important Discoveries**: Key facts, data, tool results, or findings that must
not be lost.\n4. **Next Steps**: What should the agent do next based on the
conversation?\n5. **Context to Preserve**: Any specific values, names, URLs,
code snippets, or details referenced in the conversation.\n\nWrap your entire
summary in <summary> tags.\n\n<summary>\n[Your structured summary here]\n</summary>"}],
"role": "user"}], "systemInstruction": {"parts": [{"text": "You are a precise
assistant that creates structured summaries of agent conversations. You preserve
critical context needed for seamless task continuation."}], "role": "user"},
"generationConfig": {"temperature": 0.0}}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- '*/*'
accept-encoding:
- ACCEPT-ENCODING-XXX
connection:
- keep-alive
content-length:
- '1895'
content-type:
- application/json
host:
- generativelanguage.googleapis.com
x-goog-api-client:
- google-genai-sdk/1.49.0 gl-python/3.13.3
x-goog-api-key:
- X-GOOG-API-KEY-XXX
method: POST
uri: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent
response:
body:
string: "{\n \"candidates\": [\n {\n \"content\": {\n \"parts\":
[\n {\n \"text\": \"```xml\\n\\u003csummary\\u003e\\n**Task
Overview**: Research the latest developments in large language models, focusing
on architecture improvements and training techniques.\\n\\n**Current State**:
The agent has identified several key areas of advancement in LLMs: Mixture
of Experts (MoE) architectures, improved attention mechanisms (Flash Attention),
better training data curation, and Constitutional AI/RLHF improvements. The
user has requested a deeper dive into MoE architectures. The agent has provided
an initial overview of MoE architectures and listed some key papers.\\n\\n**Important
Discoveries**:\\n* Key MoE papers: Switch Transformers (Google, 2021), GShard,
Mixtral (Mistral AI).\\n* MoE advantage: Computational efficiency through
selective activation of experts.\\n\\n**Next Steps**: Continue researching
MoE architectures based on the user's request for more detail. The agent should
elaborate further on the listed papers and potentially find more recent or
relevant publications.\\n\\n**Context to Preserve**:\\n* Focus areas: Architecture
improvements and training techniques for LLMs.\\n* Specific architectures:
Mixture of Experts (MoE), Flash Attention.\\n* Training techniques: Data
curation, Constitutional AI, RLHF.\\n* Specific papers: Switch Transformers
(Google, 2021), GShard, Mixtral (Mistral AI).\\n\\u003c/summary\\u003e\\n```\"\n
\ }\n ],\n \"role\": \"model\"\n },\n \"finishReason\":
\"STOP\",\n \"avgLogprobs\": -0.14186729703630721\n }\n ],\n \"usageMetadata\":
{\n \"promptTokenCount\": 373,\n \"candidatesTokenCount\": 280,\n \"totalTokenCount\":
653,\n \"promptTokensDetails\": [\n {\n \"modality\": \"TEXT\",\n
\ \"tokenCount\": 373\n }\n ],\n \"candidatesTokensDetails\":
[\n {\n \"modality\": \"TEXT\",\n \"tokenCount\": 280\n
\ }\n ]\n },\n \"modelVersion\": \"gemini-2.0-flash\",\n \"responseId\":
\"GEGKabP3OcGH-8YPzZCj2Ao\"\n}\n"
headers:
Alt-Svc:
- h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Content-Type:
- application/json; charset=UTF-8
Date:
- Mon, 09 Feb 2026 20:18:35 GMT
Server:
- scaffolding on HTTPServer2
Server-Timing:
- gfet4t7; dur=2310
Transfer-Encoding:
- chunked
Vary:
- Origin
- X-Origin
- Referer
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
X-Frame-Options:
- X-FRAME-OPTIONS-XXX
X-XSS-Protection:
- '0'
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,148 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a precise assistant that
creates structured summaries of agent conversations. You preserve critical context
needed for seamless task continuation."},{"role":"user","content":"Analyze the
following conversation and create a structured summary that preserves all information
needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Research
the latest developments in large language models. Focus on architecture improvements
and training techniques.\n\n[ASSISTANT]: I''ll research the latest developments
in large language models. Based on my knowledge, recent advances include:\n1.
Mixture of Experts (MoE) architectures\n2. Improved attention mechanisms like
Flash Attention\n3. Better training data curation techniques\n4. Constitutional
AI and RLHF improvements\n\n[USER]: Can you go deeper on the MoE architectures?
What are the key papers?\n\n[ASSISTANT]: Key papers on Mixture of Experts:\n-
Switch Transformers (Google, 2021) - simplified MoE routing\n- GShard - scaling
to 600B parameters\n- Mixtral (Mistral AI) - open-source MoE model\nThe main
advantage is computational efficiency: only a subset of experts is activated
per token.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task
Overview**: What is the agent trying to accomplish?\n2. **Current State**: What
has been completed so far? What step is the agent on?\n3. **Important Discoveries**:
Key facts, data, tool results, or findings that must not be lost.\n4. **Next
Steps**: What should the agent do next based on the conversation?\n5. **Context
to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1844'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D7RxGISdQet8JsWImiwzHQ2S9gSD4\",\n \"object\":
\"chat.completion\",\n \"created\": 1770668126,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
agent is tasked with researching the latest developments in large language
models, specifically focusing on architecture improvements and training techniques.\\n\\n2.
**Current State**: The agent has identified several recent advances in large
language models, including Mixture of Experts (MoE) architectures, improved
attention mechanisms, better training data curation techniques, and advancements
in Constitutional AI and Reinforcement Learning from Human Feedback (RLHF).\\n\\n3.
**Important Discoveries**: \\n - Recent advances in large language models
include:\\n 1. Mixture of Experts (MoE) architectures\\n 2. Improved
attention mechanisms like Flash Attention\\n 3. Better training data curation
techniques\\n 4. Constitutional AI and RLHF improvements\\n - Key papers
on Mixture of Experts:\\n - Switch Transformers (Google, 2021) - simplified
MoE routing\\n - GShard - scaling to 600B parameters\\n - Mixtral
(Mistral AI) - open-source MoE model\\n - The main advantage of MoE architectures
is computational efficiency, as only a subset of experts is activated per
token.\\n\\n4. **Next Steps**: The agent should delve deeper into the Mixture
of Experts architectures, reviewing the key papers mentioned and summarizing
their contributions and implications for large language models.\\n\\n5. **Context
to Preserve**: \\n - Key papers: \\n - Switch Transformers (Google,
2021)\\n - GShard\\n - Mixtral (Mistral AI)\\n - Focus on computational
efficiency of MoE architectures.\\n</summary>\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 368,\n \"completion_tokens\":
328,\n \"total_tokens\": 696,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Mon, 09 Feb 2026 20:15:32 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '5395'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -0,0 +1,145 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are a precise assistant that
creates structured summaries of agent conversations. You preserve critical context
needed for seamless task continuation."},{"role":"user","content":"Analyze the
following conversation and create a structured summary that preserves all information
needed to continue the task seamlessly.\n\n<conversation>\n[USER]: Research
the latest developments in large language models. Focus on architecture improvements
and training techniques.\n\n[ASSISTANT]: I''ll research the latest developments
in large language models. Based on my knowledge, recent advances include:\n1.
Mixture of Experts (MoE) architectures\n2. Improved attention mechanisms like
Flash Attention\n3. Better training data curation techniques\n4. Constitutional
AI and RLHF improvements\n\n[USER]: Can you go deeper on the MoE architectures?
What are the key papers?\n\n[ASSISTANT]: Key papers on Mixture of Experts:\n-
Switch Transformers (Google, 2021) - simplified MoE routing\n- GShard - scaling
to 600B parameters\n- Mixtral (Mistral AI) - open-source MoE model\nThe main
advantage is computational efficiency: only a subset of experts is activated
per token.\n</conversation>\n\nCreate a summary with these sections:\n1. **Task
Overview**: What is the agent trying to accomplish?\n2. **Current State**: What
has been completed so far? What step is the agent on?\n3. **Important Discoveries**:
Key facts, data, tool results, or findings that must not be lost.\n4. **Next
Steps**: What should the agent do next based on the conversation?\n5. **Context
to Preserve**: Any specific values, names, URLs, code snippets, or details referenced
in the conversation.\n\nWrap your entire summary in <summary> tags.\n\n<summary>\n[Your
structured summary here]\n</summary>"}],"model":"gpt-4o-mini","temperature":0}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1844'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D7RxM4n36QoACHrC0QocV1pXIwvtD\",\n \"object\":
\"chat.completion\",\n \"created\": 1770668132,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"<summary>\\n1. **Task Overview**: The
user has requested research on the latest developments in large language models,
specifically focusing on architecture improvements and training techniques.\\n\\n2.
**Current State**: The assistant has identified several recent advances in
large language models, including Mixture of Experts (MoE) architectures, improved
attention mechanisms, better training data curation techniques, and advancements
in Constitutional AI and Reinforcement Learning from Human Feedback (RLHF).\\n\\n3.
**Important Discoveries**: \\n - Key papers on Mixture of Experts (MoE)
architectures:\\n - \\\"Switch Transformers\\\" (Google, 2021) - simplified
MoE routing.\\n - \\\"GShard\\\" - scaling to 600B parameters.\\n -
\\\"Mixtral\\\" (Mistral AI) - open-source MoE model.\\n - The main advantage
of MoE architectures is computational efficiency, as only a subset of experts
is activated per token.\\n\\n4. **Next Steps**: The assistant should delve
deeper into the Mixture of Experts architectures, potentially summarizing
the findings from the key papers mentioned.\\n\\n5. **Context to Preserve**:
\\n - Key papers: \\\"Switch Transformers,\\\" \\\"GShard,\\\" \\\"Mixtral.\\\"\\n
\ - Notable organizations: Google, Mistral AI.\\n - Focus areas: MoE architectures,
computational efficiency.\\n</summary>\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 368,\n \"completion_tokens\":
275,\n \"total_tokens\": 643,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_f4ae844694\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Mon, 09 Feb 2026 20:15:36 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '4188'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
set-cookie:
- SET-COOKIE-XXX
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1
@@ -1,6 +1,8 @@
import os
import unittest
from unittest.mock import ANY, MagicMock, patch
from unittest.mock import ANY, AsyncMock, MagicMock, patch

import pytest

from crewai.cli.plus_api import PlusAPI

@@ -68,37 +70,6 @@ class TestPlusAPI(unittest.TestCase):
        )
        self.assertEqual(response, mock_response)

    @patch("crewai.cli.plus_api.PlusAPI._make_request")
    def test_get_agent(self, mock_make_request):
        mock_response = MagicMock()
        mock_make_request.return_value = mock_response

        response = self.api.get_agent("test_agent_handle")
        mock_make_request.assert_called_once_with(
            "GET", "/crewai_plus/api/v1/agents/test_agent_handle"
        )
        self.assertEqual(response, mock_response)

    @patch("crewai.cli.plus_api.Settings")
    @patch("requests.Session.request")
    def test_get_agent_with_org_uuid(self, mock_make_request, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.org_uuid = self.org_uuid
        mock_settings.enterprise_base_url = os.getenv('CREWAI_PLUS_URL')
        mock_settings_class.return_value = mock_settings
        # re-initialize Client
        self.api = PlusAPI(self.api_key)

        mock_response = MagicMock()
        mock_make_request.return_value = mock_response

        response = self.api.get_agent("test_agent_handle")

        self.assert_request_with_org_id(
            mock_make_request, "GET", "/crewai_plus/api/v1/agents/test_agent_handle"
        )
        self.assertEqual(response, mock_response)

    @patch("crewai.cli.plus_api.PlusAPI._make_request")
    def test_get_tool(self, mock_make_request):
        mock_response = MagicMock()
@@ -338,3 +309,49 @@ class TestPlusAPI(unittest.TestCase):
            custom_api.base_url,
            "https://custom-url-from-env.com",
        )


@pytest.mark.asyncio
@patch("httpx.AsyncClient")
async def test_get_agent(mock_async_client_class):
    api = PlusAPI("test_api_key")
    mock_response = MagicMock()
    mock_client_instance = AsyncMock()
    mock_client_instance.get.return_value = mock_response
    mock_async_client_class.return_value.__aenter__.return_value = mock_client_instance

    response = await api.get_agent("test_agent_handle")

    mock_client_instance.get.assert_called_once_with(
        f"{api.base_url}/crewai_plus/api/v1/agents/test_agent_handle",
        headers=api.headers,
    )
    assert response == mock_response


@pytest.mark.asyncio
@patch("httpx.AsyncClient")
@patch("crewai.cli.plus_api.Settings")
async def test_get_agent_with_org_uuid(mock_settings_class, mock_async_client_class):
    org_uuid = "test-org-uuid"
    mock_settings = MagicMock()
    mock_settings.org_uuid = org_uuid
    mock_settings.enterprise_base_url = os.getenv("CREWAI_PLUS_URL")
    mock_settings_class.return_value = mock_settings

    api = PlusAPI("test_api_key")

    mock_response = MagicMock()
    mock_client_instance = AsyncMock()
    mock_client_instance.get.return_value = mock_response
    mock_async_client_class.return_value.__aenter__.return_value = mock_client_instance

    response = await api.get_agent("test_agent_handle")

    mock_client_instance.get.assert_called_once_with(
        f"{api.base_url}/crewai_plus/api/v1/agents/test_agent_handle",
        headers=api.headers,
    )
    assert "X-Crewai-Organization-Id" in api.headers
    assert api.headers["X-Crewai-Organization-Id"] == org_uuid
    assert response == mock_response

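Note: the async tests above patch `httpx.AsyncClient` at the module level and assert against `client.get(url, headers=...)`, which implies `get_agent` opens a short-lived client per call. A minimal sketch of the implementation shape these mocks assume follows; the class name, constructor, and default URL here are illustrative assumptions, not the confirmed `PlusAPI` API.

```python
import httpx


class PlusAPISketch:
    """Hypothetical stand-in for crewai.cli.plus_api.PlusAPI (illustrative only)."""

    def __init__(self, api_key: str, base_url: str = "https://app.crewai.com") -> None:
        # Assumed: the real base_url comes from Settings.enterprise_base_url.
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    async def get_agent(self, agent_handle: str) -> httpx.Response:
        # Matches the mocked pattern: `async with httpx.AsyncClient() as client`
        # followed by a single GET with the instance's headers.
        async with httpx.AsyncClient() as client:
            return await client.get(
                f"{self.base_url}/crewai_plus/api/v1/agents/{agent_handle}",
                headers=self.headers,
            )
```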
@@ -1,15 +1,19 @@
|
||||
"""Test for version management."""
|
||||
|
||||
import json
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
from crewai import __version__
|
||||
from crewai.cli.version import (
|
||||
_find_latest_non_yanked_version,
|
||||
_get_cache_file,
|
||||
_is_cache_valid,
|
||||
_is_version_yanked,
|
||||
get_crewai_version,
|
||||
get_latest_version_from_pypi,
|
||||
is_current_version_yanked,
|
||||
is_newer_version_available,
|
||||
)
|
||||
|
||||
@@ -19,10 +23,8 @@ def test_dynamic_versioning_consistency() -> None:
|
||||
cli_version = get_crewai_version()
|
||||
package_version = __version__
|
||||
|
||||
# Both should return the same version string
|
||||
assert cli_version == package_version
|
||||
|
||||
# Version should not be empty
|
||||
assert package_version is not None
|
||||
assert len(package_version.strip()) > 0
|
||||
|
||||
@@ -63,12 +65,18 @@ class TestVersionChecking:
|
||||
def test_get_latest_version_from_pypi_success(
|
||||
self, mock_urlopen: MagicMock, mock_exists: MagicMock
|
||||
) -> None:
|
||||
"""Test successful PyPI version fetch."""
|
||||
# Mock cache not existing to force fetch from PyPI
|
||||
"""Test successful PyPI version fetch uses releases data."""
|
||||
mock_exists.return_value = False
|
||||
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"2.0.0": [{"yanked": False}],
|
||||
"2.1.0": [{"yanked": True, "yanked_reason": "bad release"}],
|
||||
}
|
||||
mock_response = MagicMock()
|
||||
mock_response.read.return_value = b'{"info": {"version": "2.0.0"}}'
|
||||
mock_response.read.return_value = json.dumps(
|
||||
{"info": {"version": "2.1.0"}, "releases": releases}
|
||||
).encode()
|
||||
mock_urlopen.return_value.__enter__.return_value = mock_response
|
||||
|
||||
version = get_latest_version_from_pypi()
|
||||
@@ -82,7 +90,6 @@ class TestVersionChecking:
|
||||
"""Test PyPI version fetch failure."""
|
||||
from urllib.error import URLError
|
||||
|
||||
# Mock cache not existing to force fetch from PyPI
|
||||
mock_exists.return_value = False
|
||||
|
||||
mock_urlopen.side_effect = URLError("Network error")
|
||||
@@ -133,18 +140,247 @@ class TestVersionChecking:
|
||||
assert latest is None
|
||||
|
||||
|
||||
class TestFindLatestNonYankedVersion:
|
||||
"""Test _find_latest_non_yanked_version helper."""
|
||||
|
||||
def test_skips_yanked_versions(self) -> None:
|
||||
"""Test that yanked versions are skipped."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"2.0.0": [{"yanked": True}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "1.0.0"
|
||||
|
||||
def test_returns_highest_non_yanked(self) -> None:
|
||||
"""Test that the highest non-yanked version is returned."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"1.5.0": [{"yanked": False}],
|
||||
"2.0.0": [{"yanked": True}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "1.5.0"
|
||||
|
||||
def test_returns_none_when_all_yanked(self) -> None:
|
||||
"""Test that None is returned when all versions are yanked."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": True}],
|
||||
"2.0.0": [{"yanked": True}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) is None
|
||||
|
||||
def test_skips_prerelease_versions(self) -> None:
|
||||
"""Test that pre-release versions are skipped."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"2.0.0a1": [{"yanked": False}],
|
||||
"2.0.0rc1": [{"yanked": False}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "1.0.0"
|
||||
|
||||
def test_skips_versions_with_empty_files(self) -> None:
|
||||
"""Test that versions with no files are skipped."""
|
||||
releases: dict[str, list[dict[str, bool]]] = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"2.0.0": [],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "1.0.0"
|
||||
|
||||
def test_handles_invalid_version_strings(self) -> None:
|
||||
"""Test that invalid version strings are skipped."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"not-a-version": [{"yanked": False}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "1.0.0"
|
||||
|
||||
def test_partially_yanked_files_not_considered_yanked(self) -> None:
|
||||
"""Test that a version with some non-yanked files is not yanked."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": False}],
|
||||
"2.0.0": [{"yanked": True}, {"yanked": False}],
|
||||
}
|
||||
assert _find_latest_non_yanked_version(releases) == "2.0.0"
|
||||
|
||||
|
||||
class TestIsVersionYanked:
|
||||
"""Test _is_version_yanked helper."""
|
||||
|
||||
def test_non_yanked_version(self) -> None:
|
||||
"""Test a non-yanked version returns False."""
|
||||
releases = {"1.0.0": [{"yanked": False}]}
|
||||
is_yanked, reason = _is_version_yanked("1.0.0", releases)
|
||||
assert is_yanked is False
|
||||
assert reason == ""
|
||||
|
||||
def test_yanked_version_with_reason(self) -> None:
|
||||
"""Test a yanked version returns True with reason."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": True, "yanked_reason": "critical bug"}],
|
||||
}
|
||||
is_yanked, reason = _is_version_yanked("1.0.0", releases)
|
||||
assert is_yanked is True
|
||||
assert reason == "critical bug"
|
||||
|
||||
def test_yanked_version_without_reason(self) -> None:
|
||||
"""Test a yanked version returns True with empty reason."""
|
||||
releases = {"1.0.0": [{"yanked": True}]}
|
||||
is_yanked, reason = _is_version_yanked("1.0.0", releases)
|
||||
assert is_yanked is True
|
||||
assert reason == ""
|
||||
|
||||
def test_unknown_version(self) -> None:
|
||||
"""Test an unknown version returns False."""
|
||||
releases = {"1.0.0": [{"yanked": False}]}
|
||||
is_yanked, reason = _is_version_yanked("9.9.9", releases)
|
||||
assert is_yanked is False
|
||||
assert reason == ""
|
||||
|
||||
def test_partially_yanked_files(self) -> None:
|
||||
"""Test a version with mixed yanked/non-yanked files is not yanked."""
|
||||
releases = {
|
||||
"1.0.0": [{"yanked": True}, {"yanked": False}],
|
||||
}
|
||||
is_yanked, reason = _is_version_yanked("1.0.0", releases)
|
||||
assert is_yanked is False
|
||||
assert reason == ""
|
||||
|
||||
def test_multiple_yanked_files_picks_first_reason(self) -> None:
|
||||
"""Test that the first available reason is returned."""
|
||||
releases = {
|
||||
"1.0.0": [
|
||||
{"yanked": True, "yanked_reason": ""},
|
||||
{"yanked": True, "yanked_reason": "second reason"},
|
||||
],
|
||||
}
|
||||
is_yanked, reason = _is_version_yanked("1.0.0", releases)
|
||||
assert is_yanked is True
|
||||
assert reason == "second reason"
|
||||
|
||||
|
||||
class TestIsCurrentVersionYanked:
|
||||
"""Test is_current_version_yanked public function."""
|
||||
|
||||
@patch("crewai.cli.version.get_crewai_version")
|
||||
@patch("crewai.cli.version._get_cache_file")
|
||||
def test_reads_from_valid_cache(
|
||||
self, mock_cache_file: MagicMock, mock_version: MagicMock, tmp_path: Path
|
||||
) -> None:
|
||||
"""Test reading yanked status from a valid cache."""
|
||||
mock_version.return_value = "1.0.0"
|
||||
cache_file = tmp_path / "version_cache.json"
|
||||
cache_data = {
|
||||
"version": "2.0.0",
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"current_version": "1.0.0",
|
||||
"current_version_yanked": True,
|
||||
"current_version_yanked_reason": "bad release",
|
||||
}
|
||||
cache_file.write_text(json.dumps(cache_data))
|
||||
mock_cache_file.return_value = cache_file
|
||||
|
||||
is_yanked, reason = is_current_version_yanked()
|
||||
assert is_yanked is True
|
||||
assert reason == "bad release"
|
||||
|
||||
@patch("crewai.cli.version.get_crewai_version")
|
||||
@patch("crewai.cli.version._get_cache_file")
|
||||
def test_not_yanked_from_cache(
|
||||
self, mock_cache_file: MagicMock, mock_version: MagicMock, tmp_path: Path
|
||||
) -> None:
|
||||
"""Test non-yanked status from a valid cache."""
|
||||
mock_version.return_value = "2.0.0"
|
||||
cache_file = tmp_path / "version_cache.json"
|
||||
cache_data = {
|
||||
"version": "2.0.0",
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"current_version": "2.0.0",
|
||||
"current_version_yanked": False,
|
||||
"current_version_yanked_reason": "",
|
||||
}
|
||||
cache_file.write_text(json.dumps(cache_data))
|
||||
mock_cache_file.return_value = cache_file
|
||||
|
||||
is_yanked, reason = is_current_version_yanked()
|
||||
assert is_yanked is False
|
||||
assert reason == ""
|
||||
|
||||
@patch("crewai.cli.version.get_latest_version_from_pypi")
|
||||
@patch("crewai.cli.version.get_crewai_version")
|
||||
@patch("crewai.cli.version._get_cache_file")
|
||||
def test_triggers_fetch_on_stale_cache(
|
||||
self,
|
||||
mock_cache_file: MagicMock,
|
||||
mock_version: MagicMock,
|
||||
mock_fetch: MagicMock,
|
||||
tmp_path: Path,
|
||||
) -> None:
|
||||
"""Test that a stale cache triggers a re-fetch."""
|
||||
mock_version.return_value = "1.0.0"
|
||||
cache_file = tmp_path / "version_cache.json"
|
||||
old_time = datetime.now() - timedelta(hours=25)
|
||||
cache_data = {
|
||||
"version": "2.0.0",
|
||||
"timestamp": old_time.isoformat(),
|
||||
"current_version": "1.0.0",
|
||||
"current_version_yanked": True,
|
||||
"current_version_yanked_reason": "old reason",
|
||||
}
|
||||
cache_file.write_text(json.dumps(cache_data))
|
||||
mock_cache_file.return_value = cache_file
|
||||
|
||||
fresh_cache = {
|
||||
"version": "2.0.0",
|
||||
"timestamp": datetime.now().isoformat(),
|
||||
"current_version": "1.0.0",
|
||||
"current_version_yanked": False,
|
||||
"current_version_yanked_reason": "",
|
||||
}
|
||||
|
||||
def write_fresh_cache() -> str:
|
||||
cache_file.write_text(json.dumps(fresh_cache))
|
||||
return "2.0.0"
|
||||
|
||||
mock_fetch.side_effect = lambda: write_fresh_cache()
|
||||
|
||||
is_yanked, reason = is_current_version_yanked()
|
||||
assert is_yanked is False
|
||||
mock_fetch.assert_called_once()
|
||||
|
||||
@patch("crewai.cli.version.get_latest_version_from_pypi")
|
||||
@patch("crewai.cli.version.get_crewai_version")
|
||||
@patch("crewai.cli.version._get_cache_file")
|
||||
def test_returns_false_on_fetch_failure(
|
||||
self,
|
||||
mock_cache_file: MagicMock,
|
||||
mock_version: MagicMock,
|
||||
mock_fetch: MagicMock,
|
||||
tmp_path: Path,
|
||||
) -> None:
|
||||
"""Test that fetch failure returns not yanked."""
|
||||
mock_version.return_value = "1.0.0"
|
||||
cache_file = tmp_path / "version_cache.json"
|
||||
mock_cache_file.return_value = cache_file
|
||||
mock_fetch.return_value = None
|
||||
|
||||
is_yanked, reason = is_current_version_yanked()
|
||||
assert is_yanked is False
|
||||
assert reason == ""
|
||||
|
||||
|
||||
class TestConsoleFormatterVersionCheck:
|
||||
"""Test version check display in ConsoleFormatter."""
|
||||
|
||||
@patch("crewai.events.utils.console_formatter.is_current_version_yanked")
|
||||
@patch("crewai.events.utils.console_formatter.is_newer_version_available")
|
||||
@patch.dict("os.environ", {"CI": ""})
|
||||
def test_version_message_shows_when_update_available_and_verbose(
|
||||
self, mock_check: MagicMock
|
||||
self, mock_check: MagicMock, mock_yanked: MagicMock
|
||||
) -> None:
|
||||
"""Test version message shows when update available and verbose enabled."""
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
|
||||
mock_check.return_value = (True, "1.0.0", "2.0.0")
|
||||
mock_yanked.return_value = (False, "")
|
||||
|
||||
formatter = ConsoleFormatter(verbose=True)
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
@@ -165,14 +401,16 @@ class TestConsoleFormatterVersionCheck:
|
||||
formatter._show_version_update_message_if_needed()
|
||||
mock_print.assert_not_called()
|
||||
|
||||
@patch("crewai.events.utils.console_formatter.is_current_version_yanked")
|
||||
@patch("crewai.events.utils.console_formatter.is_newer_version_available")
|
||||
def test_version_message_hides_when_no_update_available(
|
||||
self, mock_check: MagicMock
|
||||
self, mock_check: MagicMock, mock_yanked: MagicMock
|
||||
) -> None:
|
||||
"""Test version message hidden when no update available."""
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
|
||||
mock_check.return_value = (False, "2.0.0", "2.0.0")
|
||||
mock_yanked.return_value = (False, "")
|
||||
|
||||
formatter = ConsoleFormatter(verbose=True)
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
@@ -208,3 +446,60 @@ class TestConsoleFormatterVersionCheck:
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
formatter._show_version_update_message_if_needed()
|
||||
mock_print.assert_not_called()
|
||||
|
||||
@patch("crewai.events.utils.console_formatter.is_current_version_yanked")
|
||||
@patch("crewai.events.utils.console_formatter.is_newer_version_available")
|
||||
@patch.dict("os.environ", {"CI": ""})
|
||||
def test_yanked_warning_shows_when_version_is_yanked(
|
||||
self, mock_check: MagicMock, mock_yanked: MagicMock
|
||||
) -> None:
|
||||
"""Test yanked warning panel shows when current version is yanked."""
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
|
||||
mock_check.return_value = (False, "1.0.0", "1.0.0")
|
||||
mock_yanked.return_value = (True, "critical bug")
|
||||
|
||||
formatter = ConsoleFormatter(verbose=True)
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
formatter._show_version_update_message_if_needed()
|
||||
assert mock_print.call_count == 2
|
||||
panel = mock_print.call_args_list[0][0][0]
|
||||
assert "Yanked Version" in panel.title
|
||||
assert "critical bug" in str(panel.renderable)
|
||||
|
||||
@patch("crewai.events.utils.console_formatter.is_current_version_yanked")
|
||||
@patch("crewai.events.utils.console_formatter.is_newer_version_available")
|
||||
@patch.dict("os.environ", {"CI": ""})
|
||||
def test_yanked_warning_shows_without_reason(
|
||||
self, mock_check: MagicMock, mock_yanked: MagicMock
|
||||
) -> None:
|
||||
"""Test yanked warning panel shows even without a reason."""
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
|
||||
mock_check.return_value = (False, "1.0.0", "1.0.0")
|
||||
mock_yanked.return_value = (True, "")
|
||||
|
||||
formatter = ConsoleFormatter(verbose=True)
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
formatter._show_version_update_message_if_needed()
|
||||
assert mock_print.call_count == 2
|
||||
panel = mock_print.call_args_list[0][0][0]
|
||||
assert "Yanked Version" in panel.title
|
||||
assert "Reason:" not in str(panel.renderable)
|
||||
|
||||
@patch("crewai.events.utils.console_formatter.is_current_version_yanked")
|
||||
@patch("crewai.events.utils.console_formatter.is_newer_version_available")
|
||||
@patch.dict("os.environ", {"CI": ""})
|
||||
def test_both_update_and_yanked_warning_show(
|
||||
self, mock_check: MagicMock, mock_yanked: MagicMock
|
||||
) -> None:
|
||||
"""Test both update and yanked panels show when applicable."""
|
||||
from crewai.events.utils.console_formatter import ConsoleFormatter
|
||||
|
||||
mock_check.return_value = (True, "1.0.0", "2.0.0")
|
||||
mock_yanked.return_value = (True, "security issue")
|
||||
|
||||
formatter = ConsoleFormatter(verbose=True)
|
||||
with patch.object(formatter.console, "print") as mock_print:
|
||||
formatter._show_version_update_message_if_needed()
|
||||
assert mock_print.call_count == 4
|
||||
|
||||
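# Editor's note: a minimal sketch (not the shipped implementation) of the
# display logic the tests above pin down. The helper names come from the
# patches above; the import paths and panel titles are assumptions. Each
# warning prints a Rich panel followed by a spacer line, which is what makes
# the expected call counts 2 (one warning) and 4 (both warnings).
import os

from rich.console import Console
from rich.panel import Panel

from crewai.cli.version import is_current_version_yanked, is_newer_version_available


def show_version_warnings_sketch(console: Console, verbose: bool) -> None:
    if os.environ.get("CI"):  # stay quiet in CI pipelines
        return
    is_yanked, reason = is_current_version_yanked()
    if is_yanked:
        body = f"Reason: {reason}" if reason else "This release was yanked from PyPI."
        console.print(Panel(body, title="Yanked Version Warning"))
        console.print()  # spacer line, counted by the tests
    has_update, current, latest = is_newer_version_available()
    if has_update and verbose:
        console.print(Panel(f"crewAI {current} -> {latest}", title="Update Available"))
        console.print()
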
@@ -990,3 +990,134 @@ def test_anthropic_agent_kickoff_structured_output_with_tools():
    assert result.pydantic.result == 42, f"Expected result 42 but got {result.pydantic.result}"
    assert result.pydantic.operation, "Operation should not be empty"
    assert result.pydantic.explanation, "Explanation should not be empty"


@pytest.mark.vcr()
def test_anthropic_cached_prompt_tokens():
    """
    Test that Anthropic correctly extracts and tracks cached_prompt_tokens
    from cache_read_input_tokens. Uses cache_control to enable prompt caching
    and sends the same large prompt twice so the second call hits the cache.
    """
    # Anthropic requires cache_control blocks and >=1024 tokens for caching
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = LLM(model="anthropic/claude-sonnet-4-5-20250929")

    def _ephemeral_user(text: str):
        return [{"type": "text", "text": text, "cache_control": {"type": "ephemeral"}}]

    # First call: creates the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": _ephemeral_user("Say hello in one word.")},
    ])

    # Second call: same system prompt should hit the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": _ephemeral_user("Say goodbye in one word.")},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.completion_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0


@pytest.mark.vcr()
def test_anthropic_streaming_cached_prompt_tokens():
    """
    Test that Anthropic streaming correctly extracts and tracks cached_prompt_tokens.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = LLM(model="anthropic/claude-sonnet-4-5-20250929", stream=True)

    def _ephemeral_user(text: str):
        return [{"type": "text", "text": text, "cache_control": {"type": "ephemeral"}}]

    # First call: creates the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": _ephemeral_user("Say hello in one word.")},
    ])

    # Second call: same system prompt should hit the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": _ephemeral_user("Say goodbye in one word.")},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0


@pytest.mark.vcr()
def test_anthropic_cached_prompt_tokens_with_tools():
    """
    Test that Anthropic correctly tracks cached_prompt_tokens when tools are used.
    The large system prompt should be cached across tool-calling requests.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant that uses tools. {padding}"

    def get_weather(location: str) -> str:
        return f"The weather in {location} is sunny and 72°F"

    tools = [
        {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name"
                    }
                },
                "required": ["location"],
            },
        }
    ]

    llm = LLM(model="anthropic/claude-sonnet-4-5-20250929")

    def _ephemeral_user(text: str):
        return [{"type": "text", "text": text, "cache_control": {"type": "ephemeral"}}]

    # First call with tool: creates the cache
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": _ephemeral_user("What is the weather in Tokyo?")},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    # Second call with same system prompt + tools: should hit the cache
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": _ephemeral_user("What is the weather in Paris?")},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0

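# Editor's note: a hedged sketch of the usage extraction these Anthropic tests
# exercise. The Messages API reports cache hits in
# `usage.cache_read_input_tokens` (named in the docstrings above); the tracker
# attribute names below are assumptions based on the assertions, not the
# actual crewAI internals.
def _track_anthropic_usage_sketch(tracker, usage) -> None:
    tracker.prompt_tokens += getattr(usage, "input_tokens", 0) or 0
    tracker.completion_tokens += getattr(usage, "output_tokens", 0) or 0
    # Cache reads are what surface as cached_prompt_tokens in the summary
    tracker.cached_prompt_tokens += getattr(usage, "cache_read_input_tokens", 0) or 0
    tracker.total_tokens = tracker.prompt_tokens + tracker.completion_tokens
    tracker.successful_requests += 1
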
@@ -102,7 +102,6 @@ def test_azure_tool_use_conversation_flow():
    # Verify that the API was called
    assert mock_complete.called


@pytest.mark.usefixtures("mock_azure_credentials")
def test_azure_completion_module_is_imported():
    """
@@ -42,65 +42,6 @@ def test_gemini_completion_is_used_when_gemini_provider():
    assert llm.provider == "gemini"
    assert llm.model == "gemini-2.0-flash-001"


def test_gemini_tool_use_conversation_flow():
    """
    Test that the Gemini completion properly handles tool use conversation flow
    """
    from unittest.mock import Mock, patch
    from crewai.llms.providers.gemini.completion import GeminiCompletion

    # Create GeminiCompletion instance
    completion = GeminiCompletion(model="gemini-2.0-flash-001")

    # Mock tool function
    def mock_weather_tool(location: str) -> str:
        return f"The weather in {location} is sunny and 75°F"

    available_functions = {"get_weather": mock_weather_tool}

    # Mock the Google Gemini client responses
    with patch.object(completion.client.models, 'generate_content') as mock_generate:
        # Mock function call in response
        mock_function_call = Mock()
        mock_function_call.name = "get_weather"
        mock_function_call.args = {"location": "San Francisco"}

        mock_part = Mock()
        mock_part.function_call = mock_function_call

        mock_content = Mock()
        mock_content.parts = [mock_part]

        mock_candidate = Mock()
        mock_candidate.content = mock_content

        mock_response = Mock()
        mock_response.candidates = [mock_candidate]
        mock_response.text = "Based on the weather data, it's a beautiful day in San Francisco with sunny skies and 75°F temperature."
        mock_response.usage_metadata = Mock()
        mock_response.usage_metadata.prompt_token_count = 100
        mock_response.usage_metadata.candidates_token_count = 50
        mock_response.usage_metadata.total_token_count = 150

        mock_generate.return_value = mock_response

        # Test the call
        messages = [{"role": "user", "content": "What's the weather like in San Francisco?"}]
        result = completion.call(
            messages=messages,
            available_functions=available_functions
        )

        # Verify the tool was executed and returned the result
        assert result == "The weather in San Francisco is sunny and 75°F"

        # Verify that the API was called
        assert mock_generate.called


def test_gemini_completion_module_is_imported():
    """
    Test that the completion module is properly imported when using Google provider
@@ -1114,3 +1055,97 @@ def test_gemini_structured_output_preserves_json_with_stop_word_patterns():
    assert "Action:" in result.action_taken
    assert "Observation:" in result.observation_result
    assert "Final Answer:" in result.final_answer


@pytest.mark.vcr()
def test_gemini_cached_prompt_tokens():
    """
    Test that Gemini correctly extracts and tracks cached_prompt_tokens
    from cached_content_token_count in the usage metadata.
    Sends two calls with the same large prompt to trigger caching.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = LLM(model="google/gemini-2.5-flash")

    # First call
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say hello in one word."},
    ])

    # Second call: same system prompt
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say goodbye in one word."},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.completion_tokens > 0
    assert usage.successful_requests == 2
    # cached_prompt_tokens should be populated (may be 0 if Gemini
    # doesn't cache for this particular request, but the field should exist)
    assert usage.cached_prompt_tokens >= 0


@pytest.mark.vcr()
def test_gemini_cached_prompt_tokens_with_tools():
    """
    Test that Gemini correctly tracks cached_prompt_tokens when tools are used.
    The large system prompt should be cached across tool-calling requests.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant that uses tools. {padding}"

    def get_weather(location: str) -> str:
        return f"The weather in {location} is sunny and 72°F"

    tools = [
        {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name"
                    }
                },
                "required": ["location"],
            },
        }
    ]

    llm = LLM(model="google/gemini-2.5-flash")

    # First call with tool
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Tokyo?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    # Second call with same system prompt + tools
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Paris?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.successful_requests == 2
    # cached_prompt_tokens should be populated (may be 0 if Gemini
    # doesn't cache for this particular request, but the field should exist)
    assert usage.cached_prompt_tokens >= 0

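# Editor's note: a hedged sketch of the Gemini-side extraction. usage_metadata
# exposes `cached_content_token_count` (named in the docstring above), which
# may be None when nothing was cached; that is why the assertions only require
# cached_prompt_tokens >= 0. Tracker attribute names are assumptions.
def _track_gemini_usage_sketch(tracker, usage_metadata) -> None:
    tracker.prompt_tokens += usage_metadata.prompt_token_count or 0
    tracker.completion_tokens += usage_metadata.candidates_token_count or 0
    # None is coalesced to 0, so the summary field always exists
    tracker.cached_prompt_tokens += usage_metadata.cached_content_token_count or 0
    tracker.successful_requests += 1
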
@@ -1,6 +1,7 @@
import os
import sys
import types
from typing import Any
from unittest.mock import patch, MagicMock
import openai
import pytest
@@ -1578,3 +1579,379 @@ def test_openai_structured_output_preserves_json_with_stop_word_patterns():
    assert "Action:" in result.action_taken
    assert "Observation:" in result.observation_result
    assert "Final Answer:" in result.final_answer


@pytest.mark.vcr()
def test_openai_completions_cached_prompt_tokens():
    """
    Test that the Chat Completions API correctly extracts and tracks
    cached_prompt_tokens from prompt_tokens_details.cached_tokens.
    Sends the same large prompt twice so the second call hits the cache.
    """
    # Build a large system prompt to trigger prompt caching (>1024 tokens)
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = OpenAICompletion(model="gpt-4.1")

    # First call: creates the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say hello in one word."},
    ])

    # Second call: same system prompt should hit the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say goodbye in one word."},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.completion_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0

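# Editor's note: a hedged sketch of the OpenAI extraction under test. For Chat
# Completions the cached count lives at `usage.prompt_tokens_details.cached_tokens`;
# for the Responses API it is `usage.input_tokens_details.cached_tokens` (both
# paths are named in the docstrings). The helper name is illustrative.
def _extract_openai_cached_tokens_sketch(usage) -> int:
    details = getattr(usage, "prompt_tokens_details", None) or getattr(
        usage, "input_tokens_details", None
    )
    return (getattr(details, "cached_tokens", 0) or 0) if details else 0
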
@pytest.mark.vcr()
def test_openai_responses_api_cached_prompt_tokens():
    """
    Test that the Responses API correctly extracts and tracks
    cached_prompt_tokens from input_tokens_details.cached_tokens.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = OpenAICompletion(model="gpt-4.1", api="responses")

    # First call: creates the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say hello in one word."},
    ])

    # Second call: same system prompt should hit the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say goodbye in one word."},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.completion_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0


@pytest.mark.vcr()
def test_openai_streaming_cached_prompt_tokens():
    """
    Test that streaming Chat Completions API correctly extracts and tracks
    cached_prompt_tokens.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant. {padding}"

    llm = OpenAICompletion(model="gpt-4.1", stream=True)

    # First call: creates the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say hello in one word."},
    ])

    # Second call: same system prompt should hit the cache
    llm.call([
        {"role": "system", "content": system_msg},
        {"role": "user", "content": "Say goodbye in one word."},
    ])

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0


@pytest.mark.vcr()
def test_openai_completions_cached_prompt_tokens_with_tools():
    """
    Test that the Chat Completions API correctly tracks cached_prompt_tokens
    when tools are used. The large system prompt should be cached across calls.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant that uses tools. {padding}"

    def get_weather(location: str) -> str:
        return f"The weather in {location} is sunny and 72°F"

    tools = [
        {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name"
                    }
                },
                "required": ["location"],
                "additionalProperties": False,
            },
        }
    ]

    llm = OpenAICompletion(model="gpt-4.1")

    # First call with tool: creates the cache
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Tokyo?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    # Second call with same system prompt + tools: should hit the cache
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Paris?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.prompt_tokens > 0
    assert usage.successful_requests == 2
    # The second call should have cached prompt tokens
    assert usage.cached_prompt_tokens > 0


@pytest.mark.vcr()
def test_openai_responses_api_cached_prompt_tokens_with_tools():
    """
    Test that the Responses API correctly tracks cached_prompt_tokens
    when function tools are used.
    """
    padding = "This is padding text to ensure the prompt is large enough for caching. " * 80
    system_msg = f"You are a helpful assistant that uses tools. {padding}"

    def get_weather(location: str) -> str:
        return f"The weather in {location} is sunny and 72°F"

    tools = [
        {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name"
                    }
                },
                "required": ["location"],
            },
        }
    ]

    llm = OpenAICompletion(model="gpt-4.1", api="responses")

    # First call with tool
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Tokyo?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    # Second call: same system prompt + tools should hit cache
    llm.call(
        [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": "What is the weather in Paris?"},
        ],
        tools=tools,
        available_functions={"get_weather": get_weather},
    )

    usage = llm.get_token_usage_summary()
    assert usage.total_tokens > 0
    assert usage.successful_requests == 2
    assert usage.cached_prompt_tokens > 0


def test_openai_streaming_returns_tool_calls_without_available_functions():
    """Test that streaming returns tool calls list when available_functions is None.

    This mirrors the non-streaming path where tool_calls are returned for
    the executor to handle. Reproduces the bug where streaming with tool
    calls would return empty text instead of tool_calls when
    available_functions was not provided (as the crew executor does).
    """
    llm = LLM(model="openai/gpt-4o-mini", stream=True)

    mock_chunk_1 = MagicMock()
    mock_chunk_1.choices = [MagicMock()]
    mock_chunk_1.choices[0].delta = MagicMock()
    mock_chunk_1.choices[0].delta.content = None
    mock_chunk_1.choices[0].delta.tool_calls = [MagicMock()]
    mock_chunk_1.choices[0].delta.tool_calls[0].index = 0
    mock_chunk_1.choices[0].delta.tool_calls[0].id = "call_abc123"
    mock_chunk_1.choices[0].delta.tool_calls[0].function = MagicMock()
    mock_chunk_1.choices[0].delta.tool_calls[0].function.name = "calculator"
    mock_chunk_1.choices[0].delta.tool_calls[0].function.arguments = '{"expr'
    mock_chunk_1.choices[0].finish_reason = None
    mock_chunk_1.usage = None
    mock_chunk_1.id = "chatcmpl-1"

    mock_chunk_2 = MagicMock()
    mock_chunk_2.choices = [MagicMock()]
    mock_chunk_2.choices[0].delta = MagicMock()
    mock_chunk_2.choices[0].delta.content = None
    mock_chunk_2.choices[0].delta.tool_calls = [MagicMock()]
    mock_chunk_2.choices[0].delta.tool_calls[0].index = 0
    mock_chunk_2.choices[0].delta.tool_calls[0].id = None
    mock_chunk_2.choices[0].delta.tool_calls[0].function = MagicMock()
    mock_chunk_2.choices[0].delta.tool_calls[0].function.name = None
    mock_chunk_2.choices[0].delta.tool_calls[0].function.arguments = 'ession": "1+1"}'
    mock_chunk_2.choices[0].finish_reason = None
    mock_chunk_2.usage = None
    mock_chunk_2.id = "chatcmpl-1"

    mock_chunk_3 = MagicMock()
    mock_chunk_3.choices = [MagicMock()]
    mock_chunk_3.choices[0].delta = MagicMock()
    mock_chunk_3.choices[0].delta.content = None
    mock_chunk_3.choices[0].delta.tool_calls = None
    mock_chunk_3.choices[0].finish_reason = "tool_calls"
    mock_chunk_3.usage = MagicMock()
    mock_chunk_3.usage.prompt_tokens = 10
    mock_chunk_3.usage.completion_tokens = 5
    mock_chunk_3.id = "chatcmpl-1"

    with patch.object(
        llm.client.chat.completions, "create", return_value=iter([mock_chunk_1, mock_chunk_2, mock_chunk_3])
    ):
        result = llm.call(
            messages=[{"role": "user", "content": "Calculate 1+1"}],
            tools=[{
                "type": "function",
                "function": {
                    "name": "calculator",
                    "description": "Calculate expression",
                    "parameters": {"type": "object", "properties": {"expression": {"type": "string"}}},
                },
            }],
            available_functions=None,
        )

    assert isinstance(result, list), f"Expected list of tool calls, got {type(result)}: {result}"
    assert len(result) == 1
    assert result[0]["function"]["name"] == "calculator"
    assert result[0]["function"]["arguments"] == '{"expression": "1+1"}'
    assert result[0]["id"] == "call_abc123"
    assert result[0]["type"] == "function"


@pytest.mark.asyncio
async def test_openai_async_streaming_returns_tool_calls_without_available_functions():
    """Test that async streaming returns tool calls list when available_functions is None.

    Same as the sync test but for the async path (_ahandle_streaming_completion).
    """
    llm = LLM(model="openai/gpt-4o-mini", stream=True)

    mock_chunk_1 = MagicMock()
    mock_chunk_1.choices = [MagicMock()]
    mock_chunk_1.choices[0].delta = MagicMock()
    mock_chunk_1.choices[0].delta.content = None
    mock_chunk_1.choices[0].delta.tool_calls = [MagicMock()]
    mock_chunk_1.choices[0].delta.tool_calls[0].index = 0
    mock_chunk_1.choices[0].delta.tool_calls[0].id = "call_abc123"
    mock_chunk_1.choices[0].delta.tool_calls[0].function = MagicMock()
    mock_chunk_1.choices[0].delta.tool_calls[0].function.name = "calculator"
    mock_chunk_1.choices[0].delta.tool_calls[0].function.arguments = '{"expr'
    mock_chunk_1.choices[0].finish_reason = None
    mock_chunk_1.usage = None
    mock_chunk_1.id = "chatcmpl-1"

    mock_chunk_2 = MagicMock()
    mock_chunk_2.choices = [MagicMock()]
    mock_chunk_2.choices[0].delta = MagicMock()
    mock_chunk_2.choices[0].delta.content = None
    mock_chunk_2.choices[0].delta.tool_calls = [MagicMock()]
    mock_chunk_2.choices[0].delta.tool_calls[0].index = 0
    mock_chunk_2.choices[0].delta.tool_calls[0].id = None
    mock_chunk_2.choices[0].delta.tool_calls[0].function = MagicMock()
    mock_chunk_2.choices[0].delta.tool_calls[0].function.name = None
    mock_chunk_2.choices[0].delta.tool_calls[0].function.arguments = 'ession": "1+1"}'
    mock_chunk_2.choices[0].finish_reason = None
    mock_chunk_2.usage = None
    mock_chunk_2.id = "chatcmpl-1"

    mock_chunk_3 = MagicMock()
    mock_chunk_3.choices = [MagicMock()]
    mock_chunk_3.choices[0].delta = MagicMock()
    mock_chunk_3.choices[0].delta.content = None
    mock_chunk_3.choices[0].delta.tool_calls = None
    mock_chunk_3.choices[0].finish_reason = "tool_calls"
    mock_chunk_3.usage = MagicMock()
    mock_chunk_3.usage.prompt_tokens = 10
    mock_chunk_3.usage.completion_tokens = 5
    mock_chunk_3.id = "chatcmpl-1"

    class MockAsyncStream:
        """Async iterator that mimics OpenAI's async streaming response."""

        def __init__(self, chunks: list[Any]) -> None:
            self._chunks = chunks
            self._index = 0

        def __aiter__(self) -> "MockAsyncStream":
            return self

        async def __anext__(self) -> Any:
            if self._index >= len(self._chunks):
                raise StopAsyncIteration
            chunk = self._chunks[self._index]
            self._index += 1
            return chunk

    async def mock_create(**kwargs: Any) -> MockAsyncStream:
        return MockAsyncStream([mock_chunk_1, mock_chunk_2, mock_chunk_3])

    with patch.object(
        llm.async_client.chat.completions, "create", side_effect=mock_create
    ):
        result = await llm.acall(
            messages=[{"role": "user", "content": "Calculate 1+1"}],
            tools=[{
                "type": "function",
                "function": {
                    "name": "calculator",
                    "description": "Calculate expression",
                    "parameters": {"type": "object", "properties": {"expression": {"type": "string"}}},
                },
            }],
            available_functions=None,
        )

    assert isinstance(result, list), f"Expected list of tool calls, got {type(result)}: {result}"
    assert len(result) == 1
    assert result[0]["function"]["name"] == "calculator"
    assert result[0]["function"]["arguments"] == '{"expression": "1+1"}'
    assert result[0]["id"] == "call_abc123"
    assert result[0]["type"] == "function"

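# Editor's note: a minimal sketch of the delta accumulation both streaming
# tests above rely on. Chunks are merged by tool-call index, argument
# fragments are concatenated, and when no available_functions mapping is
# given the accumulated calls are returned for the executor to handle. The
# helper name is illustrative, not the library's internal one.
def _accumulate_tool_calls_sketch(chunks) -> list[dict]:
    calls: dict[int, dict] = {}
    for chunk in chunks:
        for delta in chunk.choices[0].delta.tool_calls or []:
            call = calls.setdefault(
                delta.index,
                {"id": None, "type": "function", "function": {"name": "", "arguments": ""}},
            )
            if delta.id:
                call["id"] = delta.id
            if delta.function.name:
                call["function"]["name"] = delta.function.name
            if delta.function.arguments:
                # '{"expr' + 'ession": "1+1"}' reassembles the JSON arguments
                call["function"]["arguments"] += delta.function.arguments
    return [calls[i] for i in sorted(calls)]
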
@@ -157,6 +157,176 @@ class TestMultiStepFlows:

        assert execution_order == ["generate", "review", "finalize"]

    def test_chained_router_feedback_steps(self):
        """Test that a router outcome can trigger another router method.

        Regression test: @listen("outcome") combined with @human_feedback(emit=...)
        creates a method that is both a listener and a router. The flow must find
        and execute it when the upstream router emits the matching outcome.
        """
        execution_order: list[str] = []

        class ChainedRouterFlow(Flow):
            @start()
            @human_feedback(
                message="First review:",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            def draft(self):
                execution_order.append("draft")
                return "draft content"

            @listen("approved")
            @human_feedback(
                message="Final review:",
                emit=["publish", "revise"],
                llm="gpt-4o-mini",
            )
            def final_review(self, prev: HumanFeedbackResult):
                execution_order.append("final_review")
                return "final content"

            @listen("rejected")
            def on_rejected(self, prev: HumanFeedbackResult):
                execution_order.append("on_rejected")
                return "rejected"

            @listen("publish")
            def on_publish(self, prev: HumanFeedbackResult):
                execution_order.append("on_publish")
                return "published"

            @listen("revise")
            def on_revise(self, prev: HumanFeedbackResult):
                execution_order.append("on_revise")
                return "revised"

        flow = ChainedRouterFlow()

        with (
            patch.object(
                flow,
                "_request_human_feedback",
                side_effect=["looks good", "ship it"],
            ),
            patch.object(
                flow,
                "_collapse_to_outcome",
                side_effect=["approved", "publish"],
            ),
        ):
            result = flow.kickoff()

        assert execution_order == ["draft", "final_review", "on_publish"]
        assert result == "published"
        assert len(flow.human_feedback_history) == 2
        assert flow.human_feedback_history[0].outcome == "approved"
        assert flow.human_feedback_history[1].outcome == "publish"

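# Editor's note: a hedged sketch of the dispatch rule this regression test pins
# down. After a router emits an outcome, the flow must consider every method
# whose trigger list contains that outcome, including methods that are
# themselves routers, not only plain listeners. The attribute name below is an
# assumption about the decorator internals, shown purely for illustration.
def _listeners_for_outcome_sketch(flow, outcome: str) -> list:
    return [
        method
        for method in vars(type(flow)).values()
        if callable(method) and outcome in getattr(method, "__trigger_methods__", ())
    ]
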
    def test_chained_router_rejected_path(self):
        """Test that a start-router outcome routes to a non-router listener."""
        execution_order: list[str] = []

        class ChainedRouterFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            def draft(self):
                execution_order.append("draft")
                return "draft"

            @listen("approved")
            @human_feedback(
                message="Final:",
                emit=["publish", "revise"],
                llm="gpt-4o-mini",
            )
            def final_review(self, prev: HumanFeedbackResult):
                execution_order.append("final_review")
                return "final"

            @listen("rejected")
            def on_rejected(self, prev: HumanFeedbackResult):
                execution_order.append("on_rejected")
                return "rejected"

        flow = ChainedRouterFlow()

        with (
            patch.object(
                flow, "_request_human_feedback", return_value="bad"
            ),
            patch.object(
                flow, "_collapse_to_outcome", return_value="rejected"
            ),
        ):
            result = flow.kickoff()

        assert execution_order == ["draft", "on_rejected"]
        assert result == "rejected"
        assert len(flow.human_feedback_history) == 1
        assert flow.human_feedback_history[0].outcome == "rejected"

    def test_router_and_non_router_listeners_for_same_outcome(self):
        """Test that both router and non-router listeners fire for the same outcome."""
        execution_order: list[str] = []

        class MixedListenerFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            def draft(self):
                execution_order.append("draft")
                return "draft"

            @listen("approved")
            @human_feedback(
                message="Final:",
                emit=["publish", "revise"],
                llm="gpt-4o-mini",
            )
            def router_listener(self, prev: HumanFeedbackResult):
                execution_order.append("router_listener")
                return "final"

            @listen("approved")
            def plain_listener(self, prev: HumanFeedbackResult):
                execution_order.append("plain_listener")
                return "logged"

            @listen("publish")
            def on_publish(self, prev: HumanFeedbackResult):
                execution_order.append("on_publish")
                return "published"

        flow = MixedListenerFlow()

        with (
            patch.object(
                flow,
                "_request_human_feedback",
                side_effect=["approve it", "publish it"],
            ),
            patch.object(
                flow,
                "_collapse_to_outcome",
                side_effect=["approved", "publish"],
            ),
        ):
            flow.kickoff()

        assert "draft" in execution_order
        assert "router_listener" in execution_order
        assert "plain_listener" in execution_order
        assert "on_publish" in execution_order


class TestStateManagement:
    """Tests for state management with human feedback."""

@@ -2,13 +2,23 @@

from __future__ import annotations

import asyncio
from typing import Any
-from unittest.mock import MagicMock, patch
+from unittest.mock import AsyncMock, MagicMock, patch

import pytest
from pydantic import BaseModel, Field

from crewai.tools.base_tool import BaseTool
-from crewai.utilities.agent_utils import convert_tools_to_openai_schema, summarize_messages
+from crewai.utilities.agent_utils import (
+    _asummarize_chunks,
+    _estimate_token_count,
+    _extract_summary_tags,
+    _format_messages_for_summary,
+    _split_messages_into_chunks,
+    convert_tools_to_openai_schema,
+    summarize_messages,
+)

class CalculatorInput(BaseModel):
@@ -214,6 +224,17 @@ class TestConvertToolsToOpenaiSchema:
        assert max_results_prop["default"] == 10


def _make_mock_i18n() -> MagicMock:
    """Create a mock i18n with the new structured prompt keys."""
    mock_i18n = MagicMock()
    mock_i18n.slice.side_effect = lambda key: {
        "summarizer_system_message": "You are a precise assistant that creates structured summaries.",
        "summarize_instruction": "Summarize the conversation:\n{conversation}",
        "summary": "<summary>\n{merged_summary}\n</summary>\nContinue the task.",
    }.get(key, "")
    return mock_i18n


class TestSummarizeMessages:
    """Tests for summarize_messages function."""

@@ -229,26 +250,22 @@ class TestSummarizeMessages:

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
-        mock_llm.call.return_value = "Summarized conversation about image analysis."
-
-        mock_i18n = MagicMock()
-        mock_i18n.slice.side_effect = lambda key: {
-            "summarizer_system_message": "Summarize the following.",
-            "summarize_instruction": "Summarize: {group}",
-            "summary": "Summary: {merged_summary}",
-        }.get(key, "")
+        mock_llm.call.return_value = "<summary>Summarized conversation about image analysis.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
-            i18n=mock_i18n,
+            i18n=_make_mock_i18n(),
        )

-        assert len(messages) == 1
-        assert messages[0]["role"] == "user"
-        assert "files" in messages[0]
-        assert messages[0]["files"] == mock_files
+        # System message preserved + summary message = 2
+        assert len(messages) == 2
+        assert messages[0]["role"] == "system"
+        summary_msg = messages[1]
+        assert summary_msg["role"] == "user"
+        assert "files" in summary_msg
+        assert summary_msg["files"] == mock_files

    def test_merges_files_from_multiple_user_messages(self) -> None:
        """Test that files from multiple user messages are merged."""
@@ -264,20 +281,13 @@ class TestSummarizeMessages:

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
-        mock_llm.call.return_value = "Summarized conversation."
-
-        mock_i18n = MagicMock()
-        mock_i18n.slice.side_effect = lambda key: {
-            "summarizer_system_message": "Summarize the following.",
-            "summarize_instruction": "Summarize: {group}",
-            "summary": "Summary: {merged_summary}",
-        }.get(key, "")
+        mock_llm.call.return_value = "<summary>Summarized conversation.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
-            i18n=mock_i18n,
+            i18n=_make_mock_i18n(),
        )

        assert len(messages) == 1
@@ -297,20 +307,13 @@ class TestSummarizeMessages:

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
-        mock_llm.call.return_value = "A greeting exchange."
-
-        mock_i18n = MagicMock()
-        mock_i18n.slice.side_effect = lambda key: {
-            "summarizer_system_message": "Summarize the following.",
-            "summarize_instruction": "Summarize: {group}",
-            "summary": "Summary: {merged_summary}",
-        }.get(key, "")
+        mock_llm.call.return_value = "<summary>A greeting exchange.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
-            i18n=mock_i18n,
+            i18n=_make_mock_i18n(),
        )

        assert len(messages) == 1
@@ -327,21 +330,595 @@ class TestSummarizeMessages:

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
-        mock_llm.call.return_value = "Summary"
-
-        mock_i18n = MagicMock()
-        mock_i18n.slice.side_effect = lambda key: {
-            "summarizer_system_message": "Summarize.",
-            "summarize_instruction": "Summarize: {group}",
-            "summary": "Summary: {merged_summary}",
-        }.get(key, "")
+        mock_llm.call.return_value = "<summary>Summary</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
-            i18n=mock_i18n,
+            i18n=_make_mock_i18n(),
        )

        assert id(messages) == original_list_id
        assert len(messages) == 1

    def test_preserves_system_messages(self) -> None:
        """Test that system messages are preserved and not summarized."""
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "You are a research assistant."},
            {"role": "user", "content": "Find information about AI."},
            {"role": "assistant", "content": "I found several resources on AI."},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
        mock_llm.call.return_value = "<summary>User asked about AI, assistant found resources.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        assert len(messages) == 2
        assert messages[0]["role"] == "system"
        assert messages[0]["content"] == "You are a research assistant."
        assert messages[1]["role"] == "user"

    def test_formats_conversation_with_role_labels(self) -> None:
        """Test that the LLM receives role-labeled conversation text."""
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "System prompt."},
            {"role": "user", "content": "Hello there"},
            {"role": "assistant", "content": "Hi! How can I help?"},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
        mock_llm.call.return_value = "<summary>Greeting exchange.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        # Check what was passed to llm.call
        call_args = mock_llm.call.call_args[0][0]
        user_msg_content = call_args[1]["content"]
        assert "[USER]:" in user_msg_content
        assert "[ASSISTANT]:" in user_msg_content
        # System content should NOT appear in summarization input
        assert "System prompt." not in user_msg_content

    def test_extracts_summary_from_tags(self) -> None:
        """Test that <summary> tags are extracted from LLM response."""
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "Do something."},
            {"role": "assistant", "content": "Done."},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
        mock_llm.call.return_value = "Here is the summary:\n<summary>The extracted summary content.</summary>\nExtra text."

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        assert "The extracted summary content." in messages[0]["content"]

    def test_handles_tool_messages(self) -> None:
        """Test that tool messages are properly formatted in summarization."""
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "Search for Python."},
            {"role": "assistant", "content": None, "tool_calls": [
                {"function": {"name": "web_search", "arguments": '{"query": "Python"}'}}
            ]},
            {"role": "tool", "content": "Python is a programming language.", "name": "web_search"},
            {"role": "assistant", "content": "Python is a programming language."},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000
        mock_llm.call.return_value = "<summary>User searched for Python info.</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        # Verify the conversation text sent to LLM contains tool labels
        call_args = mock_llm.call.call_args[0][0]
        user_msg_content = call_args[1]["content"]
        assert "[TOOL_RESULT (web_search)]:" in user_msg_content

    def test_only_system_messages_no_op(self) -> None:
        """Test that only system messages results in no-op (no summarization)."""
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "system", "content": "Additional system instructions."},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 1000

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        # No LLM call should have been made
        mock_llm.call.assert_not_called()
        # System messages should remain untouched
        assert len(messages) == 2
        assert messages[0]["content"] == "You are a helpful assistant."
        assert messages[1]["content"] == "Additional system instructions."


class TestFormatMessagesForSummary:
    """Tests for _format_messages_for_summary helper."""

    def test_skips_system_messages(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "System prompt"},
            {"role": "user", "content": "Hello"},
        ]
        result = _format_messages_for_summary(messages)
        assert "System prompt" not in result
        assert "[USER]: Hello" in result

    def test_formats_user_and_assistant(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "Question"},
            {"role": "assistant", "content": "Answer"},
        ]
        result = _format_messages_for_summary(messages)
        assert "[USER]: Question" in result
        assert "[ASSISTANT]: Answer" in result

    def test_formats_tool_messages(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "tool", "content": "Result data", "name": "search_tool"},
        ]
        result = _format_messages_for_summary(messages)
        assert "[TOOL_RESULT (search_tool)]:" in result
        assert "Result data" in result

    def test_handles_none_content_with_tool_calls(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "assistant", "content": None, "tool_calls": [
                {"function": {"name": "calculator", "arguments": "{}"}}
            ]},
        ]
        result = _format_messages_for_summary(messages)
        assert "[Called tools: calculator]" in result

    def test_handles_none_content_without_tool_calls(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "assistant", "content": None},
        ]
        result = _format_messages_for_summary(messages)
        assert "[ASSISTANT]:" in result

    def test_handles_multimodal_content(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": [
                {"type": "text", "text": "Describe this image"},
                {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
            ]},
        ]
        result = _format_messages_for_summary(messages)
        assert "[USER]: Describe this image" in result

    def test_empty_messages(self) -> None:
        result = _format_messages_for_summary([])
        assert result == ""

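# Editor's note: a sketch of what _format_messages_for_summary evidently does,
# reconstructed from the assertions above (role labels, tool-result labels,
# multimodal text parts, and a "[Called tools: ...]" placeholder for
# content-less tool calls). The "_sketch" suffix marks it as an illustration,
# not the shipped helper.
def _format_messages_for_summary_sketch(messages: list[dict]) -> str:
    lines: list[str] = []
    for msg in messages:
        role = msg.get("role", "")
        if role == "system":
            continue  # system prompts are preserved, never summarized
        content = msg.get("content")
        if isinstance(content, list):  # multimodal: keep only the text parts
            content = " ".join(p.get("text", "") for p in content if p.get("type") == "text")
        if content is None:
            names = [tc["function"]["name"] for tc in msg.get("tool_calls", [])]
            content = f"[Called tools: {', '.join(names)}]" if names else ""
        if role == "tool":
            lines.append(f"[TOOL_RESULT ({msg.get('name', '')})]: {content}")
        else:
            lines.append(f"[{role.upper()}]: {content}")
    return "\n".join(lines)
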
class TestExtractSummaryTags:
    """Tests for _extract_summary_tags helper."""

    def test_extracts_content_from_tags(self) -> None:
        text = "Preamble\n<summary>The actual summary.</summary>\nPostamble"
        assert _extract_summary_tags(text) == "The actual summary."

    def test_handles_multiline_content(self) -> None:
        text = "<summary>\nLine 1\nLine 2\nLine 3\n</summary>"
        result = _extract_summary_tags(text)
        assert "Line 1" in result
        assert "Line 2" in result
        assert "Line 3" in result

    def test_falls_back_when_no_tags(self) -> None:
        text = "Just a plain summary without tags."
        assert _extract_summary_tags(text) == text

    def test_handles_empty_string(self) -> None:
        assert _extract_summary_tags("") == ""

    def test_extracts_first_match(self) -> None:
        text = "<summary>First</summary> text <summary>Second</summary>"
        assert _extract_summary_tags(text) == "First"

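# Editor's note: the tag-extraction behavior pinned above admits a one-regex
# sketch: take the first <summary>...</summary> block and fall back to the raw
# text when no tags are present.
import re


def _extract_summary_tags_sketch(text: str) -> str:
    match = re.search(r"<summary>(.*?)</summary>", text, re.DOTALL)
    return match.group(1).strip() if match else text
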
class TestSplitMessagesIntoChunks:
    """Tests for _split_messages_into_chunks helper."""

    def test_single_chunk_when_under_limit(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "Hello"},
            {"role": "assistant", "content": "Hi"},
        ]
        chunks = _split_messages_into_chunks(messages, max_tokens=1000)
        assert len(chunks) == 1
        assert len(chunks[0]) == 2

    def test_splits_at_message_boundaries(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "A" * 100},  # ~25 tokens
            {"role": "assistant", "content": "B" * 100},  # ~25 tokens
            {"role": "user", "content": "C" * 100},  # ~25 tokens
        ]
        # max_tokens=30 should cause splits
        chunks = _split_messages_into_chunks(messages, max_tokens=30)
        assert len(chunks) == 3

    def test_excludes_system_messages(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "System prompt"},
            {"role": "user", "content": "Hello"},
        ]
        chunks = _split_messages_into_chunks(messages, max_tokens=1000)
        assert len(chunks) == 1
        # The system message should not be in any chunk
        for chunk in chunks:
            for msg in chunk:
                assert msg.get("role") != "system"

    def test_empty_messages(self) -> None:
        chunks = _split_messages_into_chunks([], max_tokens=1000)
        assert chunks == []

    def test_only_system_messages(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "system", "content": "System prompt"},
        ]
        chunks = _split_messages_into_chunks(messages, max_tokens=1000)
        assert chunks == []

    def test_handles_none_content(self) -> None:
        messages: list[dict[str, Any]] = [
            {"role": "assistant", "content": None},
            {"role": "user", "content": "Follow up"},
        ]
        chunks = _split_messages_into_chunks(messages, max_tokens=1000)
        assert len(chunks) == 1
        assert len(chunks[0]) == 2

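# Editor's note: a greedy chunking sketch consistent with the boundary tests
# above. System messages are dropped, and a chunk is flushed once adding the
# next message would exceed max_tokens, so every split lands on a message
# boundary. Again an illustration, not the shipped implementation.
def _split_messages_into_chunks_sketch(
    messages: list[dict], max_tokens: int
) -> list[list[dict]]:
    chunks: list[list[dict]] = []
    current: list[dict] = []
    used = 0
    for msg in messages:
        if msg.get("role") == "system":
            continue  # system prompts are preserved elsewhere
        tokens = len(msg.get("content") or "") // 4  # chars/4 heuristic
        if current and used + tokens > max_tokens:
            chunks.append(current)
            current, used = [], 0
        current.append(msg)
        used += tokens
    if current:
        chunks.append(current)
    return chunks
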
class TestEstimateTokenCount:
    """Tests for _estimate_token_count helper."""

    def test_empty_string(self) -> None:
        assert _estimate_token_count("") == 0

    def test_short_string(self) -> None:
        assert _estimate_token_count("hello") == 1  # 5 // 4 = 1

    def test_longer_string(self) -> None:
        assert _estimate_token_count("a" * 100) == 25  # 100 // 4 = 25

    def test_approximation_is_conservative(self) -> None:
        # For English text, actual token count is typically lower than char/4
        text = "The quick brown fox jumps over the lazy dog."
        estimated = _estimate_token_count(text)
        assert estimated > 0
        assert estimated == len(text) // 4

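# Editor's note: the estimator is pinned exactly by the tests above, so the
# sketch is one line: a deliberately conservative chars/4 heuristic.
def _estimate_token_count_sketch(text: str) -> int:
    return len(text) // 4
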
class TestParallelSummarization:
    """Tests for parallel chunk summarization via asyncio."""

    def _make_messages_for_n_chunks(self, n: int) -> list[dict[str, Any]]:
        """Build a message list that will produce exactly *n* chunks.

        Each message has 400 chars (~100 tokens). With max_tokens=100 returned
        by the mock LLM, each message lands in its own chunk.
        """
        msgs: list[dict[str, Any]] = []
        for i in range(n):
            msgs.append({"role": "user", "content": f"msg-{i} " + "x" * 400})
        return msgs

    def test_multiple_chunks_use_acall(self) -> None:
        """When there are multiple chunks, summarize_messages should use
        llm.acall (parallel) instead of llm.call (sequential)."""
        messages = self._make_messages_for_n_chunks(3)

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 100  # force multiple chunks
        mock_llm.acall = AsyncMock(
            side_effect=[
                "<summary>Summary chunk 1</summary>",
                "<summary>Summary chunk 2</summary>",
                "<summary>Summary chunk 3</summary>",
            ]
        )

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        # acall should have been awaited once per chunk
        assert mock_llm.acall.await_count == 3
        # sync call should NOT have been used for chunk summarization
        mock_llm.call.assert_not_called()

    def test_single_chunk_uses_sync_call(self) -> None:
        """When there is only one chunk, summarize_messages should use
        the sync llm.call path (no async overhead)."""
        messages: list[dict[str, Any]] = [
            {"role": "user", "content": "Short message"},
            {"role": "assistant", "content": "Short reply"},
        ]

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 100_000
        mock_llm.call.return_value = "<summary>Short summary</summary>"

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        mock_llm.call.assert_called_once()

    def test_parallel_results_preserve_order(self) -> None:
        """Summaries must appear in the same order as the original chunks,
        regardless of which async call finishes first."""
        messages = self._make_messages_for_n_chunks(3)

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 100

        # Simulate varying latencies — chunk 2 finishes before chunk 0
        async def _delayed_acall(msgs: Any, **kwargs: Any) -> str:
            user_content = msgs[1]["content"]
            if "msg-0" in user_content:
                await asyncio.sleep(0.05)
                return "<summary>Summary-A</summary>"
            elif "msg-1" in user_content:
                return "<summary>Summary-B</summary>"  # fastest
            else:
                await asyncio.sleep(0.02)
                return "<summary>Summary-C</summary>"

        mock_llm.acall = _delayed_acall

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        # The final summary message should have A, B, C in order
        summary_content = messages[-1]["content"]
        pos_a = summary_content.index("Summary-A")
        pos_b = summary_content.index("Summary-B")
        pos_c = summary_content.index("Summary-C")
        assert pos_a < pos_b < pos_c

    def test_asummarize_chunks_returns_ordered_results(self) -> None:
        """Direct test of the async helper _asummarize_chunks."""
        chunk_a: list[dict[str, Any]] = [{"role": "user", "content": "Chunk A"}]
        chunk_b: list[dict[str, Any]] = [{"role": "user", "content": "Chunk B"}]

        mock_llm = MagicMock()
        mock_llm.acall = AsyncMock(
            side_effect=[
                "<summary>Result A</summary>",
                "<summary>Result B</summary>",
            ]
        )

        results = asyncio.run(
            _asummarize_chunks(
                chunks=[chunk_a, chunk_b],
                llm=mock_llm,
                callbacks=[],
                i18n=_make_mock_i18n(),
            )
        )

        assert len(results) == 2
        assert results[0]["content"] == "Result A"
        assert results[1]["content"] == "Result B"

    @patch("crewai.utilities.agent_utils.is_inside_event_loop", return_value=True)
    def test_works_inside_existing_event_loop(self, _mock_loop: Any) -> None:
        """When called from inside a running event loop (e.g. a Flow),
        the ThreadPoolExecutor fallback should still work."""
        messages = self._make_messages_for_n_chunks(2)

        mock_llm = MagicMock()
        mock_llm.get_context_window_size.return_value = 100
        mock_llm.acall = AsyncMock(
            side_effect=[
                "<summary>Flow summary 1</summary>",
                "<summary>Flow summary 2</summary>",
            ]
        )

        summarize_messages(
            messages=messages,
            llm=mock_llm,
            callbacks=[],
            i18n=_make_mock_i18n(),
        )

        assert mock_llm.acall.await_count == 2
        # Verify the merged summary made it into messages
        assert "Flow summary 1" in messages[-1]["content"]
        assert "Flow summary 2" in messages[-1]["content"]

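# Editor's note: a hedged sketch of the parallel path these tests drive. One
# acall is issued per chunk and fanned out with asyncio.gather, which returns
# results in submission order even when later chunks finish first; a single
# chunk takes the sync llm.call path instead. `prompt_for` is a hypothetical
# callable that builds the summarize-instruction messages for a chunk.
async def _asummarize_chunks_parallel_sketch(chunks, llm, prompt_for) -> list[str]:
    import asyncio

    responses = await asyncio.gather(
        *(llm.acall(prompt_for(chunk)) for chunk in chunks)
    )
    # Strip the <summary> tags from each response, preserving chunk order
    return [_extract_summary_tags_sketch(r) for r in responses]
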
def _build_long_conversation() -> list[dict[str, Any]]:
|
||||
"""Build a multi-turn conversation that produces multiple chunks at max_tokens=200.
|
||||
|
||||
Each non-system message is ~100-140 estimated tokens (400-560 chars),
|
||||
so a max_tokens of 200 yields roughly 3 chunks from 6 messages.
|
||||
"""
|
||||
return [
|
||||
{
|
||||
"role": "system",
|
||||
"content": "You are a helpful research assistant.",
|
||||
},
|
||||
{
|
||||
"role": "user",
|
||||
"content": (
|
||||
"Tell me about the history of the Python programming language. "
|
||||
"Who created it, when was it first released, and what were the "
|
||||
"main design goals? Please provide a detailed overview covering "
|
||||
"the major milestones from its inception through Python 3."
|
||||
),
|
||||
},
|
||||
{
|
||||
"role": "assistant",
|
||||
"content": (
|
||||
"Python was created by Guido van Rossum and first released in 1991. "
|
||||
"The main design goals were code readability and simplicity. Key milestones: "
|
||||
"Python 1.0 (1994) introduced functional programming tools like lambda and map. "
|
||||
"Python 2.0 (2000) added list comprehensions and garbage collection. "
|
||||
"Python 3.0 (2008) was a major backward-incompatible release that fixed "
|
||||
"fundamental design flaws. Python 2 reached end-of-life in January 2020."
|
||||
),
|
||||
},
|
||||
{
|
||||
"role": "user",
|
||||
"content": (
|
||||
"What about the async/await features? When were they introduced "
|
||||
"and how do they compare to similar features in JavaScript and C#? "
|
||||
"Also explain the Global Interpreter Lock and its implications."
|
||||
),
|
||||
},
|
||||
{
|
||||
"role": "assistant",
|
||||
"content": (
|
||||
"Async/await was introduced in Python 3.5 (PEP 492, 2015). "
|
||||
"Unlike JavaScript which is single-threaded by design, Python's asyncio "
|
||||
"is an opt-in framework. C# introduced async/await in 2012 (C# 5.0) and "
|
||||
"was a major inspiration for Python's implementation. "
|
||||
"The GIL (Global Interpreter Lock) is a mutex that protects access to "
|
||||
"Python objects, preventing multiple threads from executing Python bytecodes "
|
||||
"simultaneously. This means CPU-bound multithreaded programs don't benefit "
|
||||
"from multiple cores. PEP 703 proposes making the GIL optional in CPython."
|
||||
),
|
||||
},
|
||||
{
|
||||
"role": "user",
|
||||
"content": (
|
||||
"Explain the Python package ecosystem. How does pip work, what is PyPI, "
|
||||
"and what are virtual environments? Compare pip with conda and uv."
|
||||
),
|
||||
},
|
||||
{
|
||||
"role": "assistant",
|
||||
"content": (
|
||||
"PyPI (Python Package Index) is the official repository hosting 400k+ packages. "
|
||||
"pip is the standard package installer that downloads from PyPI. "
|
||||
"Virtual environments (venv) create isolated Python installations to avoid "
|
||||
"dependency conflicts between projects. conda is a cross-language package manager "
|
||||
"popular in data science that can manage non-Python dependencies. "
|
||||
"uv is a new Rust-based tool that is 10-100x faster than pip and aims to replace "
|
||||
"pip, pip-tools, and virtualenv with a single unified tool."
|
||||
),
|
||||
},
|
||||
]
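The chunk-count arithmetic in the docstring follows from the rough heuristic of about 4 characters per token; under that assumption a 400-560 char message estimates to 100-140 tokens, and a 200-token budget packs only one or two messages per chunk. A small sketch of that estimate (the chars-divided-by-4 estimator is an assumption for illustration, not a quote of the library's tokenizer):

def estimate_tokens(text: str) -> int:
    # Common rough heuristic: ~4 characters per token.
    return len(text) // 4

# A 480-char message -> ~120 tokens, so at most one fits under a
# 200-token chunk budget with room to spare; six such messages
# therefore split into roughly three chunks.
assert estimate_tokens("x" * 480) == 120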


class TestParallelSummarizationVCR:
    """VCR-backed integration tests for parallel summarization.

    These tests use a real LLM but patch get_context_window_size to force
    multiple chunks, exercising the asyncio.gather + acall parallel path.

    To record cassettes:
        PYTEST_VCR_RECORD_MODE=all uv run pytest lib/crewai/tests/utilities/test_agent_utils.py::TestParallelSummarizationVCR -v
    """

    @pytest.mark.vcr()
    def test_parallel_summarize_openai(self) -> None:
        """Test that parallel summarization with gpt-4o-mini produces a valid summary."""
        from crewai.llm import LLM
        from crewai.utilities.i18n import I18N

        llm = LLM(model="gpt-4o-mini", temperature=0)
        i18n = I18N()
        messages = _build_long_conversation()

        original_system = messages[0]["content"]

        # Patch get_context_window_size to return 200 — forces multiple chunks
        with patch.object(type(llm), "get_context_window_size", return_value=200):
            # Verify we actually get multiple chunks with this window size
            non_system = [m for m in messages if m.get("role") != "system"]
            chunks = _split_messages_into_chunks(non_system, max_tokens=200)
            assert len(chunks) > 1, f"Expected multiple chunks, got {len(chunks)}"

            summarize_messages(
                messages=messages,
                llm=llm,
                callbacks=[],
                i18n=i18n,
            )

        # System message preserved
        assert messages[0]["role"] == "system"
        assert messages[0]["content"] == original_system

        # Summary produced as a user message
        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert len(summary_msg["content"]) > 0

    @pytest.mark.vcr()
    def test_parallel_summarize_preserves_files(self) -> None:
        """Test that file references survive parallel summarization."""
        from crewai.llm import LLM
        from crewai.utilities.i18n import I18N

        llm = LLM(model="gpt-4o-mini", temperature=0)
        i18n = I18N()
        messages = _build_long_conversation()

        mock_file = MagicMock()
        messages[1]["files"] = {"report.pdf": mock_file}

        with patch.object(type(llm), "get_context_window_size", return_value=200):
            summarize_messages(
                messages=messages,
                llm=llm,
                callbacks=[],
                i18n=i18n,
            )

        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert "files" in summary_msg
        assert "report.pdf" in summary_msg["files"]

284 lib/crewai/tests/utilities/test_summarize_integration.py Normal file
@@ -0,0 +1,284 @@
"""
Integration tests for structured context compaction (summarize_messages).
"""

from __future__ import annotations

from typing import Any
from unittest.mock import MagicMock

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.llm import LLM
from crewai.task import Task
from crewai.utilities.agent_utils import summarize_messages
from crewai.utilities.i18n import I18N


def _build_conversation_messages(
    *, include_system: bool = True, include_files: bool = False
) -> list[dict[str, Any]]:
    """Build a realistic multi-turn conversation for summarization tests."""
    messages: list[dict[str, Any]] = []

    if include_system:
        messages.append(
            {
                "role": "system",
                "content": (
                    "You are a research assistant specializing in AI topics. "
                    "Your goal is to find accurate, up-to-date information."
                ),
            }
        )

    user_msg: dict[str, Any] = {
        "role": "user",
        "content": (
            "Research the latest developments in large language models. "
            "Focus on architecture improvements and training techniques."
        ),
    }
    if include_files:
        user_msg["files"] = {"reference.pdf": MagicMock()}
    messages.append(user_msg)

    messages.append(
        {
            "role": "assistant",
            "content": (
                "I'll research the latest developments in large language models. "
                "Based on my knowledge, recent advances include:\n"
                "1. Mixture of Experts (MoE) architectures\n"
                "2. Improved attention mechanisms like Flash Attention\n"
                "3. Better training data curation techniques\n"
                "4. Constitutional AI and RLHF improvements"
            ),
        }
    )

    messages.append(
        {
            "role": "user",
            "content": "Can you go deeper on the MoE architectures? What are the key papers?",
        }
    )

    messages.append(
        {
            "role": "assistant",
            "content": (
                "Key papers on Mixture of Experts:\n"
                "- Switch Transformers (Google, 2021) - simplified MoE routing\n"
                "- GShard - scaling to 600B parameters\n"
                "- Mixtral (Mistral AI) - open-source MoE model\n"
                "The main advantage is computational efficiency: "
                "only a subset of experts is activated per token."
            ),
        }
    )

    return messages


class TestSummarizeDirectOpenAI:
    """Test direct summarize_messages calls with OpenAI."""

    @pytest.mark.vcr()
    def test_summarize_direct_openai(self) -> None:
        """Test summarize_messages with gpt-4o-mini preserves system messages."""
        llm = LLM(model="gpt-4o-mini", temperature=0)
        i18n = I18N()
        messages = _build_conversation_messages(include_system=True)

        original_system_content = messages[0]["content"]

        summarize_messages(
            messages=messages,
            llm=llm,
            callbacks=[],
            i18n=i18n,
        )

        # System message should be preserved
        assert len(messages) >= 2
        assert messages[0]["role"] == "system"
        assert messages[0]["content"] == original_system_content

        # Summary should be a user message with <summary> block
        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert len(summary_msg["content"]) > 0
        assert "<summary>" in summary_msg["content"]
        assert "</summary>" in summary_msg["content"]


class TestSummarizeDirectAnthropic:
    """Test direct summarize_messages calls with Anthropic."""

    @pytest.mark.vcr()
    def test_summarize_direct_anthropic(self) -> None:
        """Test summarize_messages with claude-3-5-haiku."""
        llm = LLM(model="anthropic/claude-3-5-haiku-latest", temperature=0)
        i18n = I18N()
        messages = _build_conversation_messages(include_system=True)

        summarize_messages(
            messages=messages,
            llm=llm,
            callbacks=[],
            i18n=i18n,
        )

        assert len(messages) >= 2
        assert messages[0]["role"] == "system"
        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert len(summary_msg["content"]) > 0
        assert "<summary>" in summary_msg["content"]
        assert "</summary>" in summary_msg["content"]


class TestSummarizeDirectGemini:
    """Test direct summarize_messages calls with Gemini."""

    @pytest.mark.vcr()
    def test_summarize_direct_gemini(self) -> None:
        """Test summarize_messages with gemini-2.0-flash."""
        llm = LLM(model="gemini/gemini-2.0-flash", temperature=0)
        i18n = I18N()
        messages = _build_conversation_messages(include_system=True)

        summarize_messages(
            messages=messages,
            llm=llm,
            callbacks=[],
            i18n=i18n,
        )

        assert len(messages) >= 2
        assert messages[0]["role"] == "system"
        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert len(summary_msg["content"]) > 0
        assert "<summary>" in summary_msg["content"]
        assert "</summary>" in summary_msg["content"]


class TestSummarizeDirectAzure:
    """Test direct summarize_messages calls with Azure."""

    @pytest.mark.vcr()
    def test_summarize_direct_azure(self) -> None:
        """Test summarize_messages with azure/gpt-4o-mini."""
        llm = LLM(model="azure/gpt-4o-mini", temperature=0)
        i18n = I18N()
        messages = _build_conversation_messages(include_system=True)

        summarize_messages(
            messages=messages,
            llm=llm,
            callbacks=[],
            i18n=i18n,
        )

        assert len(messages) >= 2
        assert messages[0]["role"] == "system"
        summary_msg = messages[-1]
        assert summary_msg["role"] == "user"
        assert len(summary_msg["content"]) > 0
        assert "<summary>" in summary_msg["content"]
        assert "</summary>" in summary_msg["content"]


class TestCrewKickoffCompaction:
    """Test compaction triggered via Crew.kickoff() with small context window."""

    @pytest.mark.vcr()
    def test_crew_kickoff_compaction_openai(self) -> None:
        """Test that compaction is triggered during kickoff with small context_window_size."""
        llm = LLM(model="gpt-4o-mini", temperature=0)
        # Force a very small context window to trigger compaction
        llm.context_window_size = 500

        agent = Agent(
            role="Researcher",
            goal="Find information about Python programming",
            backstory="You are an expert researcher.",
            llm=llm,
            verbose=False,
            max_iter=2,
        )

        task = Task(
            description="What is Python? Give a brief answer.",
            expected_output="A short description of Python.",
            agent=agent,
        )

        crew = Crew(agents=[agent], tasks=[task], verbose=False)

        # This may or may not trigger compaction depending on actual response sizes.
        # The test verifies the code path doesn't crash.
        result = crew.kickoff()
        assert result is not None
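Forcing context_window_size to 500 works because compaction is gated on estimated prompt size relative to the model's window. A hedged sketch of that kind of gate (the 0.85 threshold and the chars/4 estimate are illustrative assumptions, not values read from the library):

from typing import Any

def should_compact(
    messages: list[dict[str, Any]], window_tokens: int, threshold: float = 0.85
) -> bool:
    # Estimate prompt size with the rough chars/4 heuristic and compact
    # once it crowds the context window.
    estimated = sum(len(str(m.get("content", ""))) for m in messages) // 4
    return estimated > int(window_tokens * threshold)

# With window_tokens=500, nearly any multi-turn history trips the gate,
# which is why the test pins context_window_size to 500.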


class TestAgentExecuteTaskCompaction:
    """Test compaction triggered via Agent.execute_task()."""

    @pytest.mark.vcr()
    def test_agent_execute_task_compaction(self) -> None:
        """Test that Agent.execute_task() works with small context_window_size."""
        llm = LLM(model="gpt-4o-mini", temperature=0)
        llm.context_window_size = 500

        agent = Agent(
            role="Writer",
            goal="Write concise content",
            backstory="You are a skilled writer.",
            llm=llm,
            verbose=False,
            max_iter=2,
        )

        task = Task(
            description="Write one sentence about the sun.",
            expected_output="A single sentence about the sun.",
            agent=agent,
        )

        result = agent.execute_task(task=task)
        assert result is not None


class TestSummarizePreservesFiles:
    """Test that files are preserved through real summarization."""

    @pytest.mark.vcr()
    def test_summarize_preserves_files_integration(self) -> None:
        """Test that file references survive a real summarization call."""
        llm = LLM(model="gpt-4o-mini", temperature=0)
        i18n = I18N()
        messages = _build_conversation_messages(
            include_system=True, include_files=True
        )

        summarize_messages(
            messages=messages,
            llm=llm,
            callbacks=[],
            i18n=i18n,
        )

        # System message preserved
        assert messages[0]["role"] == "system"

        # Files should be on the summary message with <summary> block
        summary_msg = messages[-1]
        assert "<summary>" in summary_msg["content"]
        assert "</summary>" in summary_msg["content"]
        assert "files" in summary_msg
        assert "reference.pdf" in summary_msg["files"]

156 uv.lock generated
@@ -1295,7 +1295,7 @@ requires-dist = [
{ name = "json5", specifier = "~=0.10.0" },
{ name = "jsonref", specifier = "~=1.1.0" },
{ name = "litellm", marker = "extra == 'litellm'", specifier = ">=1.74.9,<3" },
{ name = "mcp", specifier = "~=1.23.1" },
{ name = "mcp", specifier = "~=1.26.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = "~=0.1.94" },
{ name = "openai", specifier = ">=1.83.0,<3" },
{ name = "openpyxl", specifier = "~=3.1.5" },
@@ -1311,7 +1311,7 @@ requires-dist = [
{ name = "pyjwt", specifier = ">=2.9.0,<3" },
{ name = "python-dotenv", specifier = "~=1.1.1" },
{ name = "qdrant-client", extras = ["fastembed"], marker = "extra == 'qdrant'", specifier = "~=1.14.3" },
{ name = "regex", specifier = "~=2024.9.11" },
{ name = "regex", specifier = "~=2026.1.15" },
{ name = "tiktoken", marker = "extra == 'embeddings'", specifier = "~=0.8.0" },
{ name = "tokenizers", specifier = "~=0.20.3" },
{ name = "tomli", specifier = "~=2.0.2" },
@@ -3777,7 +3777,7 @@ wheels = [
[[package]]
name = "mcp"
version = "1.23.3"
version = "1.26.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -3795,9 +3795,9 @@ dependencies = [
{ name = "typing-inspection" },
{ name = "uvicorn", marker = "sys_platform != 'emscripten'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a7/a4/d06a303f45997e266f2c228081abe299bbcba216cb806128e2e49095d25f/mcp-1.23.3.tar.gz", hash = "sha256:b3b0da2cc949950ce1259c7bfc1b081905a51916fcd7c8182125b85e70825201", size = 600697, upload-time = "2025-12-09T16:04:37.351Z" }
sdist = { url = "https://files.pythonhosted.org/packages/fc/6d/62e76bbb8144d6ed86e202b5edd8a4cb631e7c8130f3f4893c3f90262b10/mcp-1.26.0.tar.gz", hash = "sha256:db6e2ef491eecc1a0d93711a76f28dec2e05999f93afd48795da1c1137142c66", size = 608005, upload-time = "2026-01-24T19:40:32.468Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/c6/13c1a26b47b3f3a3b480783001ada4268917c9f42d78a079c336da2e75e5/mcp-1.23.3-py3-none-any.whl", hash = "sha256:32768af4b46a1b4f7df34e2bfdf5c6011e7b63d7f1b0e321d0fdef4cd6082031", size = 231570, upload-time = "2025-12-09T16:04:35.56Z" },
{ url = "https://files.pythonhosted.org/packages/fd/d9/eaa1f80170d2b7c5ba23f3b59f766f3a0bb41155fbc32a69adfa1adaaef9/mcp-1.26.0-py3-none-any.whl", hash = "sha256:904a21c33c25aa98ddbeb47273033c435e595bbacfdb177f4bd87f6dceebe1ca", size = 233615, upload-time = "2026-01-24T19:40:30.652Z" },
]
[[package]]
@@ -6792,71 +6792,91 @@ wheels = [
[[package]]
name = "regex"
version = "2024.9.11"
version = "2026.1.15"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/38/148df33b4dbca3bd069b963acab5e0fa1a9dbd6820f8c322d0dd6faeff96/regex-2024.9.11.tar.gz", hash = "sha256:6c188c307e8433bcb63dc1915022deb553b4203a70722fc542c363bf120a01fd", size = 399403, upload-time = "2024-09-11T19:00:09.814Z" }
sdist = { url = "https://files.pythonhosted.org/packages/0b/86/07d5056945f9ec4590b518171c4254a5925832eb727b56d3c38a7476f316/regex-2026.1.15.tar.gz", hash = "sha256:164759aa25575cbc0651bef59a0b18353e54300d79ace8084c818ad8ac72b7d5", size = 414811, upload-time = "2026-01-14T23:18:02.775Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/63/12/497bd6599ce8a239ade68678132296aec5ee25ebea45fc8ba91aa60fceec/regex-2024.9.11-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1494fa8725c285a81d01dc8c06b55287a1ee5e0e382d8413adc0a9197aac6408", size = 482488, upload-time = "2024-09-11T18:56:55.331Z" },
{ url = "https://files.pythonhosted.org/packages/c1/24/595ddb9bec2a9b151cdaf9565b0c9f3da9f0cb1dca6c158bc5175332ddf8/regex-2024.9.11-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0e12c481ad92d129c78f13a2a3662317e46ee7ef96c94fd332e1c29131875b7d", size = 287443, upload-time = "2024-09-11T18:56:58.531Z" },
{ url = "https://files.pythonhosted.org/packages/69/a8/b2fb45d9715b1469383a0da7968f8cacc2f83e9fbbcd6b8713752dd980a6/regex-2024.9.11-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:16e13a7929791ac1216afde26f712802e3df7bf0360b32e4914dca3ab8baeea5", size = 284561, upload-time = "2024-09-11T18:57:00.655Z" },
{ url = "https://files.pythonhosted.org/packages/88/87/1ce4a5357216b19b7055e7d3b0efc75a6e426133bf1e7d094321df514257/regex-2024.9.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:46989629904bad940bbec2106528140a218b4a36bb3042d8406980be1941429c", size = 783177, upload-time = "2024-09-11T18:57:01.958Z" },
{ url = "https://files.pythonhosted.org/packages/3c/65/b9f002ab32f7b68e7d1dcabb67926f3f47325b8dbc22cc50b6a043e1d07c/regex-2024.9.11-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a906ed5e47a0ce5f04b2c981af1c9acf9e8696066900bf03b9d7879a6f679fc8", size = 823193, upload-time = "2024-09-11T18:57:04.06Z" },
{ url = "https://files.pythonhosted.org/packages/22/91/8339dd3abce101204d246e31bc26cdd7ec07c9f91598472459a3a902aa41/regex-2024.9.11-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e9a091b0550b3b0207784a7d6d0f1a00d1d1c8a11699c1a4d93db3fbefc3ad35", size = 809950, upload-time = "2024-09-11T18:57:05.805Z" },
{ url = "https://files.pythonhosted.org/packages/cb/19/556638aa11c2ec9968a1da998f07f27ec0abb9bf3c647d7c7985ca0b8eea/regex-2024.9.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ddcd9a179c0a6fa8add279a4444015acddcd7f232a49071ae57fa6e278f1f71", size = 782661, upload-time = "2024-09-11T18:57:07.881Z" },
{ url = "https://files.pythonhosted.org/packages/d1/e9/7a5bc4c6ef8d9cd2bdd83a667888fc35320da96a4cc4da5fa084330f53db/regex-2024.9.11-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6b41e1adc61fa347662b09398e31ad446afadff932a24807d3ceb955ed865cc8", size = 772348, upload-time = "2024-09-11T18:57:09.494Z" },
{ url = "https://files.pythonhosted.org/packages/f1/0b/29f2105bfac3ed08e704914c38e93b07c784a6655f8a015297ee7173e95b/regex-2024.9.11-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ced479f601cd2f8ca1fd7b23925a7e0ad512a56d6e9476f79b8f381d9d37090a", size = 697460, upload-time = "2024-09-11T18:57:11.595Z" },
{ url = "https://files.pythonhosted.org/packages/71/3a/52ff61054d15a4722605f5872ad03962b319a04c1ebaebe570b8b9b7dde1/regex-2024.9.11-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:635a1d96665f84b292e401c3d62775851aedc31d4f8784117b3c68c4fcd4118d", size = 769151, upload-time = "2024-09-11T18:57:14.358Z" },
{ url = "https://files.pythonhosted.org/packages/97/07/37e460ab5ca84be8e1e197c3b526c5c86993dcc9e13cbc805c35fc2463c1/regex-2024.9.11-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:c0256beda696edcf7d97ef16b2a33a8e5a875affd6fa6567b54f7c577b30a137", size = 777478, upload-time = "2024-09-11T18:57:16.397Z" },
{ url = "https://files.pythonhosted.org/packages/65/7b/953075723dd5ab00780043ac2f9de667306ff9e2a85332975e9f19279174/regex-2024.9.11-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:3ce4f1185db3fbde8ed8aa223fc9620f276c58de8b0d4f8cc86fd1360829edb6", size = 845373, upload-time = "2024-09-11T18:57:17.938Z" },
{ url = "https://files.pythonhosted.org/packages/40/b8/3e9484c6230b8b6e8f816ab7c9a080e631124991a4ae2c27a81631777db0/regex-2024.9.11-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:09d77559e80dcc9d24570da3745ab859a9cf91953062e4ab126ba9d5993688ca", size = 845369, upload-time = "2024-09-11T18:57:20.091Z" },
{ url = "https://files.pythonhosted.org/packages/b7/99/38434984d912edbd2e1969d116257e869578f67461bd7462b894c45ed874/regex-2024.9.11-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7a22ccefd4db3f12b526eccb129390942fe874a3a9fdbdd24cf55773a1faab1a", size = 773935, upload-time = "2024-09-11T18:57:21.652Z" },
{ url = "https://files.pythonhosted.org/packages/ab/67/43174d2b46fa947b7b9dfe56b6c8a8a76d44223f35b1d64645a732fd1d6f/regex-2024.9.11-cp310-cp310-win32.whl", hash = "sha256:f745ec09bc1b0bd15cfc73df6fa4f726dcc26bb16c23a03f9e3367d357eeedd0", size = 261624, upload-time = "2024-09-11T18:57:23.777Z" },
{ url = "https://files.pythonhosted.org/packages/c4/2a/4f9c47d9395b6aff24874c761d8d620c0232f97c43ef3cf668c8b355e7a7/regex-2024.9.11-cp310-cp310-win_amd64.whl", hash = "sha256:01c2acb51f8a7d6494c8c5eafe3d8e06d76563d8a8a4643b37e9b2dd8a2ff623", size = 274020, upload-time = "2024-09-11T18:57:25.27Z" },
{ url = "https://files.pythonhosted.org/packages/86/a1/d526b7b6095a0019aa360948c143aacfeb029919c898701ce7763bbe4c15/regex-2024.9.11-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2cce2449e5927a0bf084d346da6cd5eb016b2beca10d0013ab50e3c226ffc0df", size = 482483, upload-time = "2024-09-11T18:57:26.694Z" },
{ url = "https://files.pythonhosted.org/packages/32/d9/bfdd153179867c275719e381e1e8e84a97bd186740456a0dcb3e7125c205/regex-2024.9.11-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3b37fa423beefa44919e009745ccbf353d8c981516e807995b2bd11c2c77d268", size = 287442, upload-time = "2024-09-11T18:57:28.133Z" },
{ url = "https://files.pythonhosted.org/packages/33/c4/60f3370735135e3a8d673ddcdb2507a8560d0e759e1398d366e43d000253/regex-2024.9.11-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:64ce2799bd75039b480cc0360907c4fb2f50022f030bf9e7a8705b636e408fad", size = 284561, upload-time = "2024-09-11T18:57:30.83Z" },
{ url = "https://files.pythonhosted.org/packages/b1/51/91a5ebdff17f9ec4973cb0aa9d37635efec1c6868654bbc25d1543aca4ec/regex-2024.9.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4cc92bb6db56ab0c1cbd17294e14f5e9224f0cc6521167ef388332604e92679", size = 791779, upload-time = "2024-09-11T18:57:32.461Z" },
{ url = "https://files.pythonhosted.org/packages/07/4a/022c5e6f0891a90cd7eb3d664d6c58ce2aba48bff107b00013f3d6167069/regex-2024.9.11-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d05ac6fa06959c4172eccd99a222e1fbf17b5670c4d596cb1e5cde99600674c4", size = 832605, upload-time = "2024-09-11T18:57:34.01Z" },
{ url = "https://files.pythonhosted.org/packages/ac/1c/3793990c8c83ca04e018151ddda83b83ecc41d89964f0f17749f027fc44d/regex-2024.9.11-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:040562757795eeea356394a7fb13076ad4f99d3c62ab0f8bdfb21f99a1f85664", size = 818556, upload-time = "2024-09-11T18:57:36.363Z" },
{ url = "https://files.pythonhosted.org/packages/e9/5c/8b385afbfacb853730682c57be56225f9fe275c5bf02ac1fc88edbff316d/regex-2024.9.11-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6113c008a7780792efc80f9dfe10ba0cd043cbf8dc9a76ef757850f51b4edc50", size = 792808, upload-time = "2024-09-11T18:57:38.493Z" },
{ url = "https://files.pythonhosted.org/packages/9b/8b/a4723a838b53c771e9240951adde6af58c829fb6a6a28f554e8131f53839/regex-2024.9.11-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8e5fb5f77c8745a60105403a774fe2c1759b71d3e7b4ca237a5e67ad066c7199", size = 781115, upload-time = "2024-09-11T18:57:41.4Z" },
{ url = "https://files.pythonhosted.org/packages/83/5f/031a04b6017033d65b261259c09043c06f4ef2d4eac841d0649d76d69541/regex-2024.9.11-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:54d9ff35d4515debf14bc27f1e3b38bfc453eff3220f5bce159642fa762fe5d4", size = 778155, upload-time = "2024-09-11T18:57:43.608Z" },
{ url = "https://files.pythonhosted.org/packages/fd/cd/4660756070b03ce4a66663a43f6c6e7ebc2266cc6b4c586c167917185eb4/regex-2024.9.11-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:df5cbb1fbc74a8305b6065d4ade43b993be03dbe0f8b30032cced0d7740994bd", size = 784614, upload-time = "2024-09-11T18:57:45.219Z" },
{ url = "https://files.pythonhosted.org/packages/93/8d/65b9bea7df120a7be8337c415b6d256ba786cbc9107cebba3bf8ff09da99/regex-2024.9.11-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:7fb89ee5d106e4a7a51bce305ac4efb981536301895f7bdcf93ec92ae0d91c7f", size = 853744, upload-time = "2024-09-11T18:57:46.907Z" },
{ url = "https://files.pythonhosted.org/packages/96/a7/fba1eae75eb53a704475baf11bd44b3e6ccb95b316955027eb7748f24ef8/regex-2024.9.11-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:a738b937d512b30bf75995c0159c0ddf9eec0775c9d72ac0202076c72f24aa96", size = 855890, upload-time = "2024-09-11T18:57:49.264Z" },
{ url = "https://files.pythonhosted.org/packages/45/14/d864b2db80a1a3358534392373e8a281d95b28c29c87d8548aed58813910/regex-2024.9.11-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e28f9faeb14b6f23ac55bfbbfd3643f5c7c18ede093977f1df249f73fd22c7b1", size = 781887, upload-time = "2024-09-11T18:57:51.619Z" },
{ url = "https://files.pythonhosted.org/packages/4d/a9/bfb29b3de3eb11dc9b412603437023b8e6c02fb4e11311863d9bf62c403a/regex-2024.9.11-cp311-cp311-win32.whl", hash = "sha256:18e707ce6c92d7282dfce370cd205098384b8ee21544e7cb29b8aab955b66fa9", size = 261644, upload-time = "2024-09-11T18:57:53.334Z" },
{ url = "https://files.pythonhosted.org/packages/c7/ab/1ad2511cf6a208fde57fafe49829cab8ca018128ab0d0b48973d8218634a/regex-2024.9.11-cp311-cp311-win_amd64.whl", hash = "sha256:313ea15e5ff2a8cbbad96ccef6be638393041b0a7863183c2d31e0c6116688cf", size = 274033, upload-time = "2024-09-11T18:57:55.605Z" },
{ url = "https://files.pythonhosted.org/packages/6e/92/407531450762bed778eedbde04407f68cbd75d13cee96c6f8d6903d9c6c1/regex-2024.9.11-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:b0d0a6c64fcc4ef9c69bd5b3b3626cc3776520a1637d8abaa62b9edc147a58f7", size = 483590, upload-time = "2024-09-11T18:57:57.793Z" },
{ url = "https://files.pythonhosted.org/packages/8e/a2/048acbc5ae1f615adc6cba36cc45734e679b5f1e4e58c3c77f0ed611d4e2/regex-2024.9.11-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:49b0e06786ea663f933f3710a51e9385ce0cba0ea56b67107fd841a55d56a231", size = 288175, upload-time = "2024-09-11T18:57:59.671Z" },
{ url = "https://files.pythonhosted.org/packages/8a/ea/909d8620329ab710dfaf7b4adee41242ab7c9b95ea8d838e9bfe76244259/regex-2024.9.11-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5b513b6997a0b2f10e4fd3a1313568e373926e8c252bd76c960f96fd039cd28d", size = 284749, upload-time = "2024-09-11T18:58:01.855Z" },
{ url = "https://files.pythonhosted.org/packages/ca/fa/521eb683b916389b4975337873e66954e0f6d8f91bd5774164a57b503185/regex-2024.9.11-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee439691d8c23e76f9802c42a95cfeebf9d47cf4ffd06f18489122dbb0a7ad64", size = 795181, upload-time = "2024-09-11T18:58:03.985Z" },
{ url = "https://files.pythonhosted.org/packages/28/db/63047feddc3280cc242f9c74f7aeddc6ee662b1835f00046f57d5630c827/regex-2024.9.11-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a8f877c89719d759e52783f7fe6e1c67121076b87b40542966c02de5503ace42", size = 835842, upload-time = "2024-09-11T18:58:05.74Z" },
{ url = "https://files.pythonhosted.org/packages/e3/94/86adc259ff8ec26edf35fcca7e334566c1805c7493b192cb09679f9c3dee/regex-2024.9.11-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:23b30c62d0f16827f2ae9f2bb87619bc4fba2044911e2e6c2eb1af0161cdb766", size = 823533, upload-time = "2024-09-11T18:58:07.427Z" },
{ url = "https://files.pythonhosted.org/packages/29/52/84662b6636061277cb857f658518aa7db6672bc6d1a3f503ccd5aefc581e/regex-2024.9.11-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85ab7824093d8f10d44330fe1e6493f756f252d145323dd17ab6b48733ff6c0a", size = 797037, upload-time = "2024-09-11T18:58:09.879Z" },
{ url = "https://files.pythonhosted.org/packages/c3/2a/cd4675dd987e4a7505f0364a958bc41f3b84942de9efaad0ef9a2646681c/regex-2024.9.11-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8dee5b4810a89447151999428fe096977346cf2f29f4d5e29609d2e19e0199c9", size = 784106, upload-time = "2024-09-11T18:58:11.55Z" },
{ url = "https://files.pythonhosted.org/packages/6f/75/3ea7ec29de0bbf42f21f812f48781d41e627d57a634f3f23947c9a46e303/regex-2024.9.11-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:98eeee2f2e63edae2181c886d7911ce502e1292794f4c5ee71e60e23e8d26b5d", size = 782468, upload-time = "2024-09-11T18:58:13.552Z" },
{ url = "https://files.pythonhosted.org/packages/d3/67/15519d69b52c252b270e679cb578e22e0c02b8dd4e361f2b04efcc7f2335/regex-2024.9.11-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:57fdd2e0b2694ce6fc2e5ccf189789c3e2962916fb38779d3e3521ff8fe7a822", size = 790324, upload-time = "2024-09-11T18:58:15.268Z" },
{ url = "https://files.pythonhosted.org/packages/9c/71/eff77d3fe7ba08ab0672920059ec30d63fa7e41aa0fb61c562726e9bd721/regex-2024.9.11-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:d552c78411f60b1fdaafd117a1fca2f02e562e309223b9d44b7de8be451ec5e0", size = 860214, upload-time = "2024-09-11T18:58:17.583Z" },
{ url = "https://files.pythonhosted.org/packages/81/11/e1bdf84a72372e56f1ea4b833dd583b822a23138a616ace7ab57a0e11556/regex-2024.9.11-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:a0b2b80321c2ed3fcf0385ec9e51a12253c50f146fddb2abbb10f033fe3d049a", size = 859420, upload-time = "2024-09-11T18:58:19.898Z" },
{ url = "https://files.pythonhosted.org/packages/ea/75/9753e9dcebfa7c3645563ef5c8a58f3a47e799c872165f37c55737dadd3e/regex-2024.9.11-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:18406efb2f5a0e57e3a5881cd9354c1512d3bb4f5c45d96d110a66114d84d23a", size = 787333, upload-time = "2024-09-11T18:58:21.699Z" },
{ url = "https://files.pythonhosted.org/packages/bc/4e/ba1cbca93141f7416624b3ae63573e785d4bc1834c8be44a8f0747919eca/regex-2024.9.11-cp312-cp312-win32.whl", hash = "sha256:e464b467f1588e2c42d26814231edecbcfe77f5ac414d92cbf4e7b55b2c2a776", size = 262058, upload-time = "2024-09-11T18:58:23.452Z" },
{ url = "https://files.pythonhosted.org/packages/6e/16/efc5f194778bf43e5888209e5cec4b258005d37c613b67ae137df3b89c53/regex-2024.9.11-cp312-cp312-win_amd64.whl", hash = "sha256:9e8719792ca63c6b8340380352c24dcb8cd7ec49dae36e963742a275dfae6009", size = 273526, upload-time = "2024-09-11T18:58:25.191Z" },
{ url = "https://files.pythonhosted.org/packages/93/0a/d1c6b9af1ff1e36832fe38d74d5c5bab913f2bdcbbd6bc0e7f3ce8b2f577/regex-2024.9.11-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:c157bb447303070f256e084668b702073db99bbb61d44f85d811025fcf38f784", size = 483376, upload-time = "2024-09-11T18:58:27.11Z" },
{ url = "https://files.pythonhosted.org/packages/a4/42/5910a050c105d7f750a72dcb49c30220c3ae4e2654e54aaaa0e9bc0584cb/regex-2024.9.11-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4db21ece84dfeefc5d8a3863f101995de646c6cb0536952c321a2650aa202c36", size = 288112, upload-time = "2024-09-11T18:58:28.78Z" },
{ url = "https://files.pythonhosted.org/packages/8d/56/0c262aff0e9224fa7ffce47b5458d373f4d3e3ff84e99b5ff0cb15e0b5b2/regex-2024.9.11-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:220e92a30b426daf23bb67a7962900ed4613589bab80382be09b48896d211e92", size = 284608, upload-time = "2024-09-11T18:58:30.498Z" },
{ url = "https://files.pythonhosted.org/packages/b9/54/9fe8f9aec5007bbbbce28ba3d2e3eaca425f95387b7d1e84f0d137d25237/regex-2024.9.11-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eb1ae19e64c14c7ec1995f40bd932448713d3c73509e82d8cd7744dc00e29e86", size = 795337, upload-time = "2024-09-11T18:58:32.665Z" },
{ url = "https://files.pythonhosted.org/packages/b2/e7/6b2f642c3cded271c4f16cc4daa7231be544d30fe2b168e0223724b49a61/regex-2024.9.11-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f47cd43a5bfa48f86925fe26fbdd0a488ff15b62468abb5d2a1e092a4fb10e85", size = 835848, upload-time = "2024-09-11T18:58:34.337Z" },
{ url = "https://files.pythonhosted.org/packages/cd/9e/187363bdf5d8c0e4662117b92aa32bf52f8f09620ae93abc7537d96d3311/regex-2024.9.11-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9d4a76b96f398697fe01117093613166e6aa8195d63f1b4ec3f21ab637632963", size = 823503, upload-time = "2024-09-11T18:58:36.17Z" },
{ url = "https://files.pythonhosted.org/packages/f8/10/601303b8ee93589f879664b0cfd3127949ff32b17f9b6c490fb201106c4d/regex-2024.9.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0ea51dcc0835eea2ea31d66456210a4e01a076d820e9039b04ae8d17ac11dee6", size = 797049, upload-time = "2024-09-11T18:58:38.225Z" },
{ url = "https://files.pythonhosted.org/packages/ef/1c/ea200f61ce9f341763f2717ab4daebe4422d83e9fd4ac5e33435fd3a148d/regex-2024.9.11-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b7aaa315101c6567a9a45d2839322c51c8d6e81f67683d529512f5bcfb99c802", size = 784144, upload-time = "2024-09-11T18:58:40.605Z" },
{ url = "https://files.pythonhosted.org/packages/d8/5c/d2429be49ef3292def7688401d3deb11702c13dcaecdc71d2b407421275b/regex-2024.9.11-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c57d08ad67aba97af57a7263c2d9006d5c404d721c5f7542f077f109ec2a4a29", size = 782483, upload-time = "2024-09-11T18:58:42.58Z" },
{ url = "https://files.pythonhosted.org/packages/12/d9/cbc30f2ff7164f3b26a7760f87c54bf8b2faed286f60efd80350a51c5b99/regex-2024.9.11-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:f8404bf61298bb6f8224bb9176c1424548ee1181130818fcd2cbffddc768bed8", size = 790320, upload-time = "2024-09-11T18:58:44.5Z" },
{ url = "https://files.pythonhosted.org/packages/19/1d/43ed03a236313639da5a45e61bc553c8d41e925bcf29b0f8ecff0c2c3f25/regex-2024.9.11-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:dd4490a33eb909ef5078ab20f5f000087afa2a4daa27b4c072ccb3cb3050ad84", size = 860435, upload-time = "2024-09-11T18:58:47.014Z" },
{ url = "https://files.pythonhosted.org/packages/34/4f/5d04da61c7c56e785058a46349f7285ae3ebc0726c6ea7c5c70600a52233/regex-2024.9.11-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:eee9130eaad130649fd73e5cd92f60e55708952260ede70da64de420cdcad554", size = 859571, upload-time = "2024-09-11T18:58:48.974Z" },
{ url = "https://files.pythonhosted.org/packages/12/7f/8398c8155a3c70703a8e91c29532558186558e1aea44144b382faa2a6f7a/regex-2024.9.11-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6a2644a93da36c784e546de579ec1806bfd2763ef47babc1b03d765fe560c9f8", size = 787398, upload-time = "2024-09-11T18:58:51.05Z" },
{ url = "https://files.pythonhosted.org/packages/58/3a/f5903977647a9a7e46d5535e9e96c194304aeeca7501240509bde2f9e17f/regex-2024.9.11-cp313-cp313-win32.whl", hash = "sha256:e997fd30430c57138adc06bba4c7c2968fb13d101e57dd5bb9355bf8ce3fa7e8", size = 262035, upload-time = "2024-09-11T18:58:53.526Z" },
{ url = "https://files.pythonhosted.org/packages/ff/80/51ba3a4b7482f6011095b3a036e07374f64de180b7d870b704ed22509002/regex-2024.9.11-cp313-cp313-win_amd64.whl", hash = "sha256:042c55879cfeb21a8adacc84ea347721d3d83a159da6acdf1116859e2427c43f", size = 273510, upload-time = "2024-09-11T18:58:55.263Z" },
{ url = "https://files.pythonhosted.org/packages/ea/d2/e6ee96b7dff201a83f650241c52db8e5bd080967cb93211f57aa448dc9d6/regex-2026.1.15-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4e3dd93c8f9abe8aa4b6c652016da9a3afa190df5ad822907efe6b206c09896e", size = 488166, upload-time = "2026-01-14T23:13:46.408Z" },
{ url = "https://files.pythonhosted.org/packages/23/8a/819e9ce14c9f87af026d0690901b3931f3101160833e5d4c8061fa3a1b67/regex-2026.1.15-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:97499ff7862e868b1977107873dd1a06e151467129159a6ffd07b66706ba3a9f", size = 290632, upload-time = "2026-01-14T23:13:48.688Z" },
{ url = "https://files.pythonhosted.org/packages/d5/c3/23dfe15af25d1d45b07dfd4caa6003ad710dcdcb4c4b279909bdfe7a2de8/regex-2026.1.15-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0bda75ebcac38d884240914c6c43d8ab5fb82e74cde6da94b43b17c411aa4c2b", size = 288500, upload-time = "2026-01-14T23:13:50.503Z" },
{ url = "https://files.pythonhosted.org/packages/c6/31/1adc33e2f717df30d2f4d973f8776d2ba6ecf939301efab29fca57505c95/regex-2026.1.15-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7dcc02368585334f5bc81fc73a2a6a0bbade60e7d83da21cead622faf408f32c", size = 781670, upload-time = "2026-01-14T23:13:52.453Z" },
{ url = "https://files.pythonhosted.org/packages/23/ce/21a8a22d13bc4adcb927c27b840c948f15fc973e21ed2346c1bd0eae22dc/regex-2026.1.15-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:693b465171707bbe882a7a05de5e866f33c76aa449750bee94a8d90463533cc9", size = 850820, upload-time = "2026-01-14T23:13:54.894Z" },
{ url = "https://files.pythonhosted.org/packages/6c/4f/3eeacdf587a4705a44484cd0b30e9230a0e602811fb3e2cc32268c70d509/regex-2026.1.15-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b0d190e6f013ea938623a58706d1469a62103fb2a241ce2873a9906e0386582c", size = 898777, upload-time = "2026-01-14T23:13:56.908Z" },
{ url = "https://files.pythonhosted.org/packages/79/a9/1898a077e2965c35fc22796488141a22676eed2d73701e37c73ad7c0b459/regex-2026.1.15-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5ff818702440a5878a81886f127b80127f5d50563753a28211482867f8318106", size = 791750, upload-time = "2026-01-14T23:13:58.527Z" },
{ url = "https://files.pythonhosted.org/packages/4c/84/e31f9d149a178889b3817212827f5e0e8c827a049ff31b4b381e76b26e2d/regex-2026.1.15-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f052d1be37ef35a54e394de66136e30fa1191fab64f71fc06ac7bc98c9a84618", size = 782674, upload-time = "2026-01-14T23:13:59.874Z" },
{ url = "https://files.pythonhosted.org/packages/d2/ff/adf60063db24532add6a1676943754a5654dcac8237af024ede38244fd12/regex-2026.1.15-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6bfc31a37fd1592f0c4fc4bfc674b5c42e52efe45b4b7a6a14f334cca4bcebe4", size = 767906, upload-time = "2026-01-14T23:14:01.298Z" },
{ url = "https://files.pythonhosted.org/packages/af/3e/e6a216cee1e2780fec11afe7fc47b6f3925d7264e8149c607ac389fd9b1a/regex-2026.1.15-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:3d6ce5ae80066b319ae3bc62fd55a557c9491baa5efd0d355f0de08c4ba54e79", size = 774798, upload-time = "2026-01-14T23:14:02.715Z" },
{ url = "https://files.pythonhosted.org/packages/0f/98/23a4a8378a9208514ed3efc7e7850c27fa01e00ed8557c958df0335edc4a/regex-2026.1.15-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:1704d204bd42b6bb80167df0e4554f35c255b579ba99616def38f69e14a5ccb9", size = 845861, upload-time = "2026-01-14T23:14:04.824Z" },
{ url = "https://files.pythonhosted.org/packages/f8/57/d7605a9d53bd07421a8785d349cd29677fe660e13674fa4c6cbd624ae354/regex-2026.1.15-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:e3174a5ed4171570dc8318afada56373aa9289eb6dc0d96cceb48e7358b0e220", size = 755648, upload-time = "2026-01-14T23:14:06.371Z" },
{ url = "https://files.pythonhosted.org/packages/6f/76/6f2e24aa192da1e299cc1101674a60579d3912391867ce0b946ba83e2194/regex-2026.1.15-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:87adf5bd6d72e3e17c9cb59ac4096b1faaf84b7eb3037a5ffa61c4b4370f0f13", size = 836250, upload-time = "2026-01-14T23:14:08.343Z" },
{ url = "https://files.pythonhosted.org/packages/11/3a/1f2a1d29453299a7858eab7759045fc3d9d1b429b088dec2dc85b6fa16a2/regex-2026.1.15-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:e85dc94595f4d766bd7d872a9de5ede1ca8d3063f3bdf1e2c725f5eb411159e3", size = 779919, upload-time = "2026-01-14T23:14:09.954Z" },
{ url = "https://files.pythonhosted.org/packages/c0/67/eab9bc955c9dcc58e9b222c801e39cff7ca0b04261792a2149166ce7e792/regex-2026.1.15-cp310-cp310-win32.whl", hash = "sha256:21ca32c28c30d5d65fc9886ff576fc9b59bbca08933e844fa2363e530f4c8218", size = 265888, upload-time = "2026-01-14T23:14:11.35Z" },
{ url = "https://files.pythonhosted.org/packages/1d/62/31d16ae24e1f8803bddb0885508acecaec997fcdcde9c243787103119ae4/regex-2026.1.15-cp310-cp310-win_amd64.whl", hash = "sha256:3038a62fc7d6e5547b8915a3d927a0fbeef84cdbe0b1deb8c99bbd4a8961b52a", size = 277830, upload-time = "2026-01-14T23:14:12.908Z" },
{ url = "https://files.pythonhosted.org/packages/e5/36/5d9972bccd6417ecd5a8be319cebfd80b296875e7f116c37fb2a2deecebf/regex-2026.1.15-cp310-cp310-win_arm64.whl", hash = "sha256:505831646c945e3e63552cc1b1b9b514f0e93232972a2d5bedbcc32f15bc82e3", size = 270376, upload-time = "2026-01-14T23:14:14.782Z" },
{ url = "https://files.pythonhosted.org/packages/d0/c9/0c80c96eab96948363d270143138d671d5731c3a692b417629bf3492a9d6/regex-2026.1.15-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:1ae6020fb311f68d753b7efa9d4b9a5d47a5d6466ea0d5e3b5a471a960ea6e4a", size = 488168, upload-time = "2026-01-14T23:14:16.129Z" },
{ url = "https://files.pythonhosted.org/packages/17/f0/271c92f5389a552494c429e5cc38d76d1322eb142fb5db3c8ccc47751468/regex-2026.1.15-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:eddf73f41225942c1f994914742afa53dc0d01a6e20fe14b878a1b1edc74151f", size = 290636, upload-time = "2026-01-14T23:14:17.715Z" },
{ url = "https://files.pythonhosted.org/packages/a0/f9/5f1fd077d106ca5655a0f9ff8f25a1ab55b92128b5713a91ed7134ff688e/regex-2026.1.15-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1e8cd52557603f5c66a548f69421310886b28b7066853089e1a71ee710e1cdc1", size = 288496, upload-time = "2026-01-14T23:14:19.326Z" },
{ url = "https://files.pythonhosted.org/packages/b5/e1/8f43b03a4968c748858ec77f746c286d81f896c2e437ccf050ebc5d3128c/regex-2026.1.15-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5170907244b14303edc5978f522f16c974f32d3aa92109fabc2af52411c9433b", size = 793503, upload-time = "2026-01-14T23:14:20.922Z" },
{ url = "https://files.pythonhosted.org/packages/8d/4e/a39a5e8edc5377a46a7c875c2f9a626ed3338cb3bb06931be461c3e1a34a/regex-2026.1.15-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2748c1ec0663580b4510bd89941a31560b4b439a0b428b49472a3d9944d11cd8", size = 860535, upload-time = "2026-01-14T23:14:22.405Z" },
{ url = "https://files.pythonhosted.org/packages/dc/1c/9dce667a32a9477f7a2869c1c767dc00727284a9fa3ff5c09a5c6c03575e/regex-2026.1.15-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:2f2775843ca49360508d080eaa87f94fa248e2c946bbcd963bb3aae14f333413", size = 907225, upload-time = "2026-01-14T23:14:23.897Z" },
{ url = "https://files.pythonhosted.org/packages/a4/3c/87ca0a02736d16b6262921425e84b48984e77d8e4e572c9072ce96e66c30/regex-2026.1.15-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d9ea2604370efc9a174c1b5dcc81784fb040044232150f7f33756049edfc9026", size = 800526, upload-time = "2026-01-14T23:14:26.039Z" },
{ url = "https://files.pythonhosted.org/packages/4b/ff/647d5715aeea7c87bdcbd2f578f47b415f55c24e361e639fe8c0cc88878f/regex-2026.1.15-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0dcd31594264029b57bf16f37fd7248a70b3b764ed9e0839a8f271b2d22c0785", size = 773446, upload-time = "2026-01-14T23:14:28.109Z" },
{ url = "https://files.pythonhosted.org/packages/af/89/bf22cac25cb4ba0fe6bff52ebedbb65b77a179052a9d6037136ae93f42f4/regex-2026.1.15-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:c08c1f3e34338256732bd6938747daa3c0d5b251e04b6e43b5813e94d503076e", size = 783051, upload-time = "2026-01-14T23:14:29.929Z" },
{ url = "https://files.pythonhosted.org/packages/1e/f4/6ed03e71dca6348a5188363a34f5e26ffd5db1404780288ff0d79513bce4/regex-2026.1.15-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:e43a55f378df1e7a4fa3547c88d9a5a9b7113f653a66821bcea4718fe6c58763", size = 854485, upload-time = "2026-01-14T23:14:31.366Z" },
{ url = "https://files.pythonhosted.org/packages/d9/9a/8e8560bd78caded8eb137e3e47612430a05b9a772caf60876435192d670a/regex-2026.1.15-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:f82110ab962a541737bd0ce87978d4c658f06e7591ba899192e2712a517badbb", size = 762195, upload-time = "2026-01-14T23:14:32.802Z" },
{ url = "https://files.pythonhosted.org/packages/38/6b/61fc710f9aa8dfcd764fe27d37edfaa023b1a23305a0d84fccd5adb346ea/regex-2026.1.15-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:27618391db7bdaf87ac6c92b31e8f0dfb83a9de0075855152b720140bda177a2", size = 845986, upload-time = "2026-01-14T23:14:34.898Z" },
{ url = "https://files.pythonhosted.org/packages/fd/2e/fbee4cb93f9d686901a7ca8d94285b80405e8c34fe4107f63ffcbfb56379/regex-2026.1.15-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:bfb0d6be01fbae8d6655c8ca21b3b72458606c4aec9bbc932db758d47aba6db1", size = 788992, upload-time = "2026-01-14T23:14:37.116Z" },
{ url = "https://files.pythonhosted.org/packages/ed/14/3076348f3f586de64b1ab75a3fbabdaab7684af7f308ad43be7ef1849e55/regex-2026.1.15-cp311-cp311-win32.whl", hash = "sha256:b10e42a6de0e32559a92f2f8dc908478cc0fa02838d7dbe764c44dca3fa13569", size = 265893, upload-time = "2026-01-14T23:14:38.426Z" },
{ url = "https://files.pythonhosted.org/packages/0f/19/772cf8b5fc803f5c89ba85d8b1870a1ca580dc482aa030383a9289c82e44/regex-2026.1.15-cp311-cp311-win_amd64.whl", hash = "sha256:e9bf3f0bbdb56633c07d7116ae60a576f846efdd86a8848f8d62b749e1209ca7", size = 277840, upload-time = "2026-01-14T23:14:39.785Z" },
{ url = "https://files.pythonhosted.org/packages/78/84/d05f61142709474da3c0853222d91086d3e1372bcdab516c6fd8d80f3297/regex-2026.1.15-cp311-cp311-win_arm64.whl", hash = "sha256:41aef6f953283291c4e4e6850607bd71502be67779586a61472beacb315c97ec", size = 270374, upload-time = "2026-01-14T23:14:41.592Z" },
{ url = "https://files.pythonhosted.org/packages/92/81/10d8cf43c807d0326efe874c1b79f22bfb0fb226027b0b19ebc26d301408/regex-2026.1.15-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:4c8fcc5793dde01641a35905d6731ee1548f02b956815f8f1cab89e515a5bdf1", size = 489398, upload-time = "2026-01-14T23:14:43.741Z" },
{ url = "https://files.pythonhosted.org/packages/90/b0/7c2a74e74ef2a7c32de724658a69a862880e3e4155cba992ba04d1c70400/regex-2026.1.15-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:bfd876041a956e6a90ad7cdb3f6a630c07d491280bfeed4544053cd434901681", size = 291339, upload-time = "2026-01-14T23:14:45.183Z" },
{ url = "https://files.pythonhosted.org/packages/19/4d/16d0773d0c818417f4cc20aa0da90064b966d22cd62a8c46765b5bd2d643/regex-2026.1.15-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9250d087bc92b7d4899ccd5539a1b2334e44eee85d848c4c1aef8e221d3f8c8f", size = 289003, upload-time = "2026-01-14T23:14:47.25Z" },
{ url = "https://files.pythonhosted.org/packages/c6/e4/1fc4599450c9f0863d9406e944592d968b8d6dfd0d552a7d569e43bceada/regex-2026.1.15-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c8a154cf6537ebbc110e24dabe53095e714245c272da9c1be05734bdad4a61aa", size = 798656, upload-time = "2026-01-14T23:14:48.77Z" },
{ url = "https://files.pythonhosted.org/packages/b2/e6/59650d73a73fa8a60b3a590545bfcf1172b4384a7df2e7fe7b9aab4e2da9/regex-2026.1.15-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:8050ba2e3ea1d8731a549e83c18d2f0999fbc99a5f6bd06b4c91449f55291804", size = 864252, upload-time = "2026-01-14T23:14:50.528Z" },
{ url = "https://files.pythonhosted.org/packages/6e/ab/1d0f4d50a1638849a97d731364c9a80fa304fec46325e48330c170ee8e80/regex-2026.1.15-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf065240704cb8951cc04972cf107063917022511273e0969bdb34fc173456c", size = 912268, upload-time = "2026-01-14T23:14:52.952Z" },
{ url = "https://files.pythonhosted.org/packages/dd/df/0d722c030c82faa1d331d1921ee268a4e8fb55ca8b9042c9341c352f17fa/regex-2026.1.15-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c32bef3e7aeee75746748643667668ef941d28b003bfc89994ecf09a10f7a1b5", size = 803589, upload-time = "2026-01-14T23:14:55.182Z" },
{ url = "https://files.pythonhosted.org/packages/66/23/33289beba7ccb8b805c6610a8913d0131f834928afc555b241caabd422a9/regex-2026.1.15-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:d5eaa4a4c5b1906bd0d2508d68927f15b81821f85092e06f1a34a4254b0e1af3", size = 775700, upload-time = "2026-01-14T23:14:56.707Z" },
{ url = "https://files.pythonhosted.org/packages/e7/65/bf3a42fa6897a0d3afa81acb25c42f4b71c274f698ceabd75523259f6688/regex-2026.1.15-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:86c1077a3cc60d453d4084d5b9649065f3bf1184e22992bd322e1f081d3117fb", size = 787928, upload-time = "2026-01-14T23:14:58.312Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f5/13bf65864fc314f68cdd6d8ca94adcab064d4d39dbd0b10fef29a9da48fc/regex-2026.1.15-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:2b091aefc05c78d286657cd4db95f2e6313375ff65dcf085e42e4c04d9c8d410", size = 858607, upload-time = "2026-01-14T23:15:00.657Z" },
{ url = "https://files.pythonhosted.org/packages/a3/31/040e589834d7a439ee43fb0e1e902bc81bd58a5ba81acffe586bb3321d35/regex-2026.1.15-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:57e7d17f59f9ebfa9667e6e5a1c0127b96b87cb9cede8335482451ed00788ba4", size = 763729, upload-time = "2026-01-14T23:15:02.248Z" },
{ url = "https://files.pythonhosted.org/packages/9b/84/6921e8129687a427edf25a34a5594b588b6d88f491320b9de5b6339a4fcb/regex-2026.1.15-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:c6c4dcdfff2c08509faa15d36ba7e5ef5fcfab25f1e8f85a0c8f45bc3a30725d", size = 850697, upload-time = "2026-01-14T23:15:03.878Z" },
{ url = "https://files.pythonhosted.org/packages/8a/87/3d06143d4b128f4229158f2de5de6c8f2485170c7221e61bf381313314b2/regex-2026.1.15-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:cf8ff04c642716a7f2048713ddc6278c5fd41faa3b9cab12607c7abecd012c22", size = 789849, upload-time = "2026-01-14T23:15:06.102Z" },
{ url = "https://files.pythonhosted.org/packages/77/69/c50a63842b6bd48850ebc7ab22d46e7a2a32d824ad6c605b218441814639/regex-2026.1.15-cp312-cp312-win32.whl", hash = "sha256:82345326b1d8d56afbe41d881fdf62f1926d7264b2fc1537f99ae5da9aad7913", size = 266279, upload-time = "2026-01-14T23:15:07.678Z" },
{ url = "https://files.pythonhosted.org/packages/f2/36/39d0b29d087e2b11fd8191e15e81cce1b635fcc845297c67f11d0d19274d/regex-2026.1.15-cp312-cp312-win_amd64.whl", hash = "sha256:4def140aa6156bc64ee9912383d4038f3fdd18fee03a6f222abd4de6357ce42a", size = 277166, upload-time = "2026-01-14T23:15:09.257Z" },
{ url = "https://files.pythonhosted.org/packages/28/32/5b8e476a12262748851fa8ab1b0be540360692325975b094e594dfebbb52/regex-2026.1.15-cp312-cp312-win_arm64.whl", hash = "sha256:c6c565d9a6e1a8d783c1948937ffc377dd5771e83bd56de8317c450a954d2056", size = 270415, upload-time = "2026-01-14T23:15:10.743Z" },
{ url = "https://files.pythonhosted.org/packages/f8/2e/6870bb16e982669b674cce3ee9ff2d1d46ab80528ee6bcc20fb2292efb60/regex-2026.1.15-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:e69d0deeb977ffe7ed3d2e4439360089f9c3f217ada608f0f88ebd67afb6385e", size = 489164, upload-time = "2026-01-14T23:15:13.962Z" },
{ url = "https://files.pythonhosted.org/packages/dc/67/9774542e203849b0286badf67199970a44ebdb0cc5fb739f06e47ada72f8/regex-2026.1.15-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:3601ffb5375de85a16f407854d11cca8fe3f5febbe3ac78fb2866bb220c74d10", size = 291218, upload-time = "2026-01-14T23:15:15.647Z" },
{ url = "https://files.pythonhosted.org/packages/b2/87/b0cda79f22b8dee05f774922a214da109f9a4c0eca5da2c9d72d77ea062c/regex-2026.1.15-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:4c5ef43b5c2d4114eb8ea424bb8c9cec01d5d17f242af88b2448f5ee81caadbc", size = 288895, upload-time = "2026-01-14T23:15:17.788Z" },
{ url = "https://files.pythonhosted.org/packages/3b/6a/0041f0a2170d32be01ab981d6346c83a8934277d82c780d60b127331f264/regex-2026.1.15-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:968c14d4f03e10b2fd960f1d5168c1f0ac969381d3c1fcc973bc45fb06346599", size = 798680, upload-time = "2026-01-14T23:15:19.342Z" },
{ url = "https://files.pythonhosted.org/packages/58/de/30e1cfcdbe3e891324aa7568b7c968771f82190df5524fabc1138cb2d45a/regex-2026.1.15-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:56a5595d0f892f214609c9f76b41b7428bed439d98dc961efafdd1354d42baae", size = 864210, upload-time = "2026-01-14T23:15:22.005Z" },
{ url = "https://files.pythonhosted.org/packages/64/44/4db2f5c5ca0ccd40ff052ae7b1e9731352fcdad946c2b812285a7505ca75/regex-2026.1.15-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0bf650f26087363434c4e560011f8e4e738f6f3e029b85d4904c50135b86cfa5", size = 912358, upload-time = "2026-01-14T23:15:24.569Z" },
{ url = "https://files.pythonhosted.org/packages/79/b6/e6a5665d43a7c42467138c8a2549be432bad22cbd206f5ec87162de74bd7/regex-2026.1.15-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18388a62989c72ac24de75f1449d0fb0b04dfccd0a1a7c1c43af5eb503d890f6", size = 803583, upload-time = "2026-01-14T23:15:26.526Z" },
{ url = "https://files.pythonhosted.org/packages/e7/53/7cd478222169d85d74d7437e74750005e993f52f335f7c04ff7adfda3310/regex-2026.1.15-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:6d220a2517f5893f55daac983bfa9fe998a7dbcaee4f5d27a88500f8b7873788", size = 775782, upload-time = "2026-01-14T23:15:29.352Z" },
{ url = "https://files.pythonhosted.org/packages/ca/b5/75f9a9ee4b03a7c009fe60500fe550b45df94f0955ca29af16333ef557c5/regex-2026.1.15-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:c9c08c2fbc6120e70abff5d7f28ffb4d969e14294fb2143b4b5c7d20e46d1714", size = 787978, upload-time = "2026-01-14T23:15:31.295Z" },
{ url = "https://files.pythonhosted.org/packages/72/b3/79821c826245bbe9ccbb54f6eadb7879c722fd3e0248c17bfc90bf54e123/regex-2026.1.15-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:7ef7d5d4bd49ec7364315167a4134a015f61e8266c6d446fc116a9ac4456e10d", size = 858550, upload-time = "2026-01-14T23:15:33.558Z" },
{ url = "https://files.pythonhosted.org/packages/4a/85/2ab5f77a1c465745bfbfcb3ad63178a58337ae8d5274315e2cc623a822fa/regex-2026.1.15-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:6e42844ad64194fa08d5ccb75fe6a459b9b08e6d7296bd704460168d58a388f3", size = 763747, upload-time = "2026-01-14T23:15:35.206Z" },
{ url = "https://files.pythonhosted.org/packages/6d/84/c27df502d4bfe2873a3e3a7cf1bdb2b9cc10284d1a44797cf38bed790470/regex-2026.1.15-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:cfecdaa4b19f9ca534746eb3b55a5195d5c95b88cac32a205e981ec0a22b7d31", size = 850615, upload-time = "2026-01-14T23:15:37.523Z" },
{ url = "https://files.pythonhosted.org/packages/7d/b7/658a9782fb253680aa8ecb5ccbb51f69e088ed48142c46d9f0c99b46c575/regex-2026.1.15-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:08df9722d9b87834a3d701f3fca570b2be115654dbfd30179f30ab2f39d606d3", size = 789951, upload-time = "2026-01-14T23:15:39.582Z" },
{ url = "https://files.pythonhosted.org/packages/fc/2a/5928af114441e059f15b2f63e188bd00c6529b3051c974ade7444b85fcda/regex-2026.1.15-cp313-cp313-win32.whl", hash = "sha256:d426616dae0967ca225ab12c22274eb816558f2f99ccb4a1d52ca92e8baf180f", size = 266275, upload-time = "2026-01-14T23:15:42.108Z" },
{ url = "https://files.pythonhosted.org/packages/4f/16/5bfbb89e435897bff28cf0352a992ca719d9e55ebf8b629203c96b6ce4f7/regex-2026.1.15-cp313-cp313-win_amd64.whl", hash = "sha256:febd38857b09867d3ed3f4f1af7d241c5c50362e25ef43034995b77a50df494e", size = 277145, upload-time = "2026-01-14T23:15:44.244Z" },
{ url = "https://files.pythonhosted.org/packages/56/c1/a09ff7392ef4233296e821aec5f78c51be5e91ffde0d163059e50fd75835/regex-2026.1.15-cp313-cp313-win_arm64.whl", hash = "sha256:8e32f7896f83774f91499d239e24cebfadbc07639c1494bb7213983842348337", size = 270411, upload-time = "2026-01-14T23:15:45.858Z" },
{ url = "https://files.pythonhosted.org/packages/3c/38/0cfd5a78e5c6db00e6782fdae70458f89850ce95baa5e8694ab91d89744f/regex-2026.1.15-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:ec94c04149b6a7b8120f9f44565722c7ae31b7a6d2275569d2eefa76b83da3be", size = 492068, upload-time = "2026-01-14T23:15:47.616Z" },
{ url = "https://files.pythonhosted.org/packages/50/72/6c86acff16cb7c959c4355826bbf06aad670682d07c8f3998d9ef4fee7cd/regex-2026.1.15-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:40c86d8046915bb9aeb15d3f3f15b6fd500b8ea4485b30e1bbc799dab3fe29f8", size = 292756, upload-time = "2026-01-14T23:15:49.307Z" },
{ url = "https://files.pythonhosted.org/packages/4e/58/df7fb69eadfe76526ddfce28abdc0af09ffe65f20c2c90932e89d705153f/regex-2026.1.15-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:726ea4e727aba21643205edad8f2187ec682d3305d790f73b7a51c7587b64bdd", size = 291114, upload-time = "2026-01-14T23:15:51.484Z" },
{ url = "https://files.pythonhosted.org/packages/ed/6c/a4011cd1cf96b90d2cdc7e156f91efbd26531e822a7fbb82a43c1016678e/regex-2026.1.15-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1cb740d044aff31898804e7bf1181cc72c03d11dfd19932b9911ffc19a79070a", size = 807524, upload-time = "2026-01-14T23:15:53.102Z" },
{ url = "https://files.pythonhosted.org/packages/1d/25/a53ffb73183f69c3e9f4355c4922b76d2840aee160af6af5fac229b6201d/regex-2026.1.15-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:05d75a668e9ea16f832390d22131fe1e8acc8389a694c8febc3e340b0f810b93", size = 873455, upload-time = "2026-01-14T23:15:54.956Z" },
{ url = "https://files.pythonhosted.org/packages/66/0b/8b47fc2e8f97d9b4a851736f3890a5f786443aa8901061c55f24c955f45b/regex-2026.1.15-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:d991483606f3dbec93287b9f35596f41aa2e92b7c2ebbb935b63f409e243c9af", size = 915007, upload-time = "2026-01-14T23:15:57.041Z" },
{ url = "https://files.pythonhosted.org/packages/c2/fa/97de0d681e6d26fabe71968dbee06dd52819e9a22fdce5dac7256c31ed84/regex-2026.1.15-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:194312a14819d3e44628a44ed6fea6898fdbecb0550089d84c403475138d0a09", size = 812794, upload-time = "2026-01-14T23:15:58.916Z" },
{ url = "https://files.pythonhosted.org/packages/22/38/e752f94e860d429654aa2b1c51880bff8dfe8f084268258adf9151cf1f53/regex-2026.1.15-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:fe2fda4110a3d0bc163c2e0664be44657431440722c5c5315c65155cab92f9e5", size = 781159, upload-time = "2026-01-14T23:16:00.817Z" },
{ url = "https://files.pythonhosted.org/packages/e9/a7/d739ffaef33c378fc888302a018d7f81080393d96c476b058b8c64fd2b0d/regex-2026.1.15-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:124dc36c85d34ef2d9164da41a53c1c8c122cfb1f6e1ec377a1f27ee81deb794", size = 795558, upload-time = "2026-01-14T23:16:03.267Z" },
{ url = "https://files.pythonhosted.org/packages/3e/c4/542876f9a0ac576100fc73e9c75b779f5c31e3527576cfc9cb3009dcc58a/regex-2026.1.15-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:a1774cd1981cd212506a23a14dba7fdeaee259f5deba2df6229966d9911e767a", size = 868427, upload-time = "2026-01-14T23:16:05.646Z" },
{ url = "https://files.pythonhosted.org/packages/fc/0f/d5655bea5b22069e32ae85a947aa564912f23758e112cdb74212848a1a1b/regex-2026.1.15-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:b5f7d8d2867152cdb625e72a530d2ccb48a3d199159144cbdd63870882fb6f80", size = 769939, upload-time = "2026-01-14T23:16:07.542Z" },
{ url = "https://files.pythonhosted.org/packages/20/06/7e18a4fa9d326daeda46d471a44ef94201c46eaa26dbbb780b5d92cbfdda/regex-2026.1.15-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:492534a0ab925d1db998defc3c302dae3616a2fc3fe2e08db1472348f096ddf2", size = 854753, upload-time = "2026-01-14T23:16:10.395Z" },
{ url = "https://files.pythonhosted.org/packages/3b/67/dc8946ef3965e166f558ef3b47f492bc364e96a265eb4a2bb3ca765c8e46/regex-2026.1.15-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:c661fc820cfb33e166bf2450d3dadbda47c8d8981898adb9b6fe24e5e582ba60", size = 799559, upload-time = "2026-01-14T23:16:12.347Z" },
{ url = "https://files.pythonhosted.org/packages/a5/61/1bba81ff6d50c86c65d9fd84ce9699dd106438ee4cdb105bf60374ee8412/regex-2026.1.15-cp313-cp313t-win32.whl", hash = "sha256:99ad739c3686085e614bf77a508e26954ff1b8f14da0e3765ff7abbf7799f952", size = 268879, upload-time = "2026-01-14T23:16:14.049Z" },
{ url = "https://files.pythonhosted.org/packages/e9/5e/cef7d4c5fb0ea3ac5c775fd37db5747f7378b29526cc83f572198924ff47/regex-2026.1.15-cp313-cp313t-win_amd64.whl", hash = "sha256:32655d17905e7ff8ba5c764c43cb124e34a9245e45b83c22e81041e1071aee10", size = 280317, upload-time = "2026-01-14T23:16:15.718Z" },
{ url = "https://files.pythonhosted.org/packages/b4/52/4317f7a5988544e34ab57b4bde0f04944c4786128c933fb09825924d3e82/regex-2026.1.15-cp313-cp313t-win_arm64.whl", hash = "sha256:b2a13dd6a95e95a489ca242319d18fc02e07ceb28fa9ad146385194d95b3c829", size = 271551, upload-time = "2026-01-14T23:16:17.533Z" },
]
[[package]]