Pinecone Native Integration #485
base: main
Conversation
GitGuardian id | GitGuardian status | Secret | Commit | Filename
---|---|---|---|---
- | - | Generic High Entropy Secret | 4570e65 | agentops/llms/pinecone_test.py
- | - | Generic High Entropy Secret | 5b0a5b1 | agentops/llms/pinecone_test.py
- | - | Generic High Entropy Secret | 5b0a5b1 | agentops/llms/pinecone_test.py
- | - | Generic High Entropy Secret | 4570e65 | agentops/llms/pinecone_test.py
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secrets safely; follow best practices for secret storage.
- Revoke and rotate these secrets.
- If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and to ease remediation.
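As a concrete illustration of the first remediation practice, credentials can be read from the environment at runtime instead of being hardcoded. A minimal sketch (the variable name PINECONE_API_KEY matches the one used in this PR's test notebook; the helper itself is illustrative, not part of the PR):

```python
import os

def get_api_key(var_name: str = "PINECONE_API_KEY") -> str:
    """Read a credential from the environment instead of hardcoding it."""
    key = os.environ.get(var_name)
    if key is None:
        raise RuntimeError(
            f"{var_name} is not set; export it or load it from a .env file"
        )
    return key
```

Combined with a pre-commit secret scanner, this keeps real keys out of files like pinecone_test.py entirely.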
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Amazing-- this is actually the first vector DB integration PR we've seen so far. Will take a look soon @monami44. In the meantime-- can you resolve the merge conflict?
Hey @areibman, I just resolved the merge conflict. Also, here is a video walkthrough I sent to Adam when I submitted the PR. Hit me up if you want anything else changed.
Super cool! @the-praxs and I can take a look
provider.delete_assistant(pc, "test-assistant")
print("Assistant deleted")

os.remove("test_data.txt")
TL;DR: tests should not write to a physical file, but should simulate doing so by writing to memory.
This is a dangerous practice... in the rare but not impossible event of a collision it can lead to race conditions, and it certainly pollutes the host directory. I suggest either the standard pyfakefs or tempfile (mkstemp and friends) to avoid file collisions at the very least.
Both provisioning and teardown belong in a fixture or a provisioner (if I remember correctly, pyfakefs already handles this for you).
Added pyfakefs in PR #490
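The fixture-based provisioning and teardown suggested above can be sketched with the standard tempfile module (a minimal illustration, not the PR's actual test code; with pytest, the built-in tmp_path fixture handles creation and cleanup automatically):

```python
import os
import tempfile

def write_test_document(content: str) -> str:
    """Provision a scratch file inside an isolated temp directory.

    Returns the file path; the caller (or a fixture) owns cleanup,
    so no files collide with or pollute the host working directory.
    """
    temp_dir = tempfile.mkdtemp()
    file_path = os.path.join(temp_dir, "test_document.txt")
    with open(file_path, "w", encoding="utf-8") as f:
        f.write(content)
    return file_path

# With pytest, the same idea as a fixture-managed path:
# def test_upload(tmp_path):
#     doc = tmp_path / "test_document.txt"
#     doc.write_text("The sky is blue")
#     ...  # tmp_path is created and removed by pytest itself
```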
Files selected (1)
Files ignored (0)
agentops/llms/__init__.py
Outdated
"assistant.list_assistants", | ||
"assistant.Assistant.chat", | ||
"assistant.Assistant.chat_completions" | ||
), | ||
}, | ||
"mistralai": { | ||
"1.0.1": ("chat.complete", "chat.stream"), | ||
}, |
ℹ️ Logic Error
Ensure Proper Integration of New Methods into Tracking Logic
The addition of the new methods assistant.Assistant.chat and assistant.Assistant.chat_completions to the LlmTracker class is intended to enhance tracking capabilities. However, the current implementation does not ensure that these methods are correctly integrated into the tracking logic. Ensure that the tracking mechanism is updated to handle these new methods appropriately, including any necessary data collection or processing logic.
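One common way to satisfy this is to patch each registered method with a wrapper that records the call before delegating. A hypothetical sketch of the pattern only -- the class and method names here are stand-ins, not the actual AgentOps internals:

```python
import functools

class LlmTracker:
    """Toy tracker: wraps methods so every call is recorded."""

    def __init__(self):
        self.events = []

    def _wrap(self, obj, method_name):
        original = getattr(obj, method_name)

        @functools.wraps(original)
        def patched(*args, **kwargs):
            result = original(*args, **kwargs)
            # Record the call so newly registered methods are actually tracked
            self.events.append((method_name, result))
            return result

        setattr(obj, method_name, patched)

class Assistant:
    def chat(self, text):
        return f"echo: {text}"

tracker = LlmTracker()
assistant = Assistant()
tracker._wrap(assistant, "chat")
assistant.chat("hi")
```

The point of the review comment is that merely listing a method name is not enough; something equivalent to `_wrap` must run for each new entry.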
Files selected (1)
Files ignored (0)
agentops/llms/__init__.py
Outdated
else:
    logger.warning(
        f"Only Pinecone>=2.0.0 supported. v{module_version} found."
    )

if api == "mistralai":
    module_version = version(api)
ℹ️ Performance Improvement
Add Conditional Check to Prevent Unnecessary Logging
The current implementation logs a warning when an unsupported version of Pinecone is detected. However, it would be beneficial to include a conditional check to prevent unnecessary logging if the version is already supported. This can help reduce log noise and improve performance by avoiding redundant operations.
from packaging.version import Version  # packaging is already a pinned dependency

def override_api(self):
    module_version = version(api)
    # Compare parsed versions; comparing raw strings is lexicographic
    if Version(module_version) < Version("2.0.0"):
        logger.warning(
            f"Only Pinecone>=2.0.0 supported. v{module_version} found."
        )
    if api == "mistralai":
        module_version = version(api)
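A caution on the version check itself: raw string comparison of version numbers is lexicographic and misorders multi-digit components, so packaging.version.Version (packaging is a pinned dependency of this project) is the robust choice. A stdlib-only sketch of the pitfall, with a naive numeric parser for illustration:

```python
def parse_version(v: str) -> tuple:
    """Naive numeric parse; packaging.version.Version is the robust choice."""
    return tuple(int(part) for part in v.split("."))

# String comparison misorders multi-digit components:
assert ("10.0.0" < "2.0.0") is True        # lexicographic ordering, wrong
# Numeric tuples compare correctly:
assert parse_version("10.0.0") > parse_version("2.0.0")
```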
@monami44 I would request the following modifications:
Files selected (6)
Files ignored (1)
pyproject.toml
Outdated
[project]
name = "agentops"
-version = "0.3.14"
+version = "0.3.17"
authors = [
  { name="Alex Reibman", email="[email protected]" },
  { name="Shawn Qiu", email="[email protected]" },
ℹ️ Version Update
Verify Changes for Version Bump to 0.3.17
The version bump from 0.3.15rc1 to 0.3.17 indicates a stable release. Ensure that all changes between these versions are thoroughly tested and documented, especially if there are any breaking changes or significant new features.
pyproject.toml
Outdated
"requests>=2.0.0,<3.0.0", | ||
"psutil==5.9.8", | ||
"packaging==23.2", | ||
"termcolor==2.4.0", | ||
"termcolor>=2.3.0", # 2.x.x tolerant | ||
"PyYAML>=5.3,<7.0" | ||
] | ||
[project.optional-dependencies] |
ℹ️ Dependency Management
Ensure Compatibility with Version Range Change for termcolor
The change from a fixed version to a version range for termcolor allows for greater flexibility in dependency management. However, ensure that the new version range is compatible with the rest of the project and does not introduce any breaking changes.
name = "aiohappyeyeballs" | ||
version = "2.4.3" | ||
description = "Happy Eyeballs for asyncio" | ||
optional = false | ||
python-versions = ">=3.8" | ||
files = [ | ||
{file = "aiohappyeyeballs-2.4.3-py3-none-any.whl", hash = "sha256:8a7a83727b2756f394ab2895ea0765a0a8c475e3c71e98d43d76f22b4b435572"}, |
ℹ️ Dependency Management
Review Mandatory Dependency Change for aiohappyeyeballs
The change from optional to mandatory for the aiohappyeyeballs package indicates a shift in its necessity for the project. Ensure that this package is indeed required for all environments and that its inclusion does not introduce unnecessary dependencies or conflicts.
Commitable Code Suggestion:
-optional = true
+optional = false
assistant = pc_instance.assistant.Assistant(assistant_name=assistant_name)
response = assistant.chat_completions(messages=message_objects, stream=stream, model=model)

# Debug logging
print(f"Debug - Raw response: {response}")

# Initialize completion text
completion_text = ""

# Extract message content - handle both dictionary and object responses
if isinstance(response, dict):
    # Handle dictionary response
    choices = response.get("choices", [])
    if choices and isinstance(choices[0], dict):
        message = choices[0].get("message", {})
        if isinstance(message, dict):
            completion_text = message.get("content", "")
else:
    # Handle object response
    try:
        if hasattr(response, 'choices') and response.choices:
            if hasattr(response.choices[0], 'message'):
                completion_text = response.choices[0].message.content
    except AttributeError:
        # If we can't access attributes, convert to string
        completion_text = str(response)

# If still empty, try alternative extraction methods
if not completion_text:
    try:
        # Try to access as string representation
        completion_text = str(response)
        # If it's just an empty string or 'None', use the full response
        if not completion_text or completion_text == 'None':
            completion_text = f"Full response: {response}"
    except:
        completion_text = "Error extracting response content"

# Create completion message for event logging
completion_message = {
    "role": "assistant",
    "content": completion_text
}

# Create LLMEvent
llm_event = LLMEvent(
    init_timestamp=init_timestamp,
    prompt=messages[-1]["content"] if messages else "",
    completion=completion_message,
    model=response.get("model", "unknown") if isinstance(response, dict) else "unknown",
    params=kwargs,
    returns=response,
    prompt_tokens=response.get("usage", {}).get("prompt_tokens") if isinstance(response, dict) else None,
    completion_tokens=response.get("usage", {}).get("completion_tokens") if isinstance(response, dict) else None
)

if session:
    llm_event.session_id = session.session_id
    if "event_counts" in session.__dict__:
        session.event_counts["llms"] += 1

self._safe_record(session, llm_event)

# Return the complete text
return completion_text

except Exception as e:
    print(f"Debug - Exception in chat_completions: {str(e)}")
    error_event = ErrorEvent(
        trigger_event=LLMEvent(
            init_timestamp=init_timestamp,
            prompt=messages[-1]["content"] if messages else "",
            completion="",
            model=model or "unknown",
            params=kwargs
        ),
        exception=e,
        details=kwargs
ℹ️ Performance Improvement
Optimize Response Content Extraction
The current implementation of extracting the completion_text from the response object involves multiple checks and conversions, which can be optimized. Consider using a more direct approach to extract the content, reducing the complexity and potential for errors.
+ # Extract message content - handle both dictionary and object responses
+ if isinstance(response, dict):
+ # Directly access the message content if available
+ completion_text = response.get("choices", [{}])[0].get("message", {}).get("content", "")
+ else:
+ # Handle object response
+ try:
+ completion_text = response.choices[0].message.content
+ except (AttributeError, IndexError):
+ # If we can't access attributes, convert to string
+ completion_text = str(response)
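The suggested extraction can be exercised standalone. A minimal runnable sketch (SimpleNamespace stands in for the SDK's response object -- an assumption, not the real type; note the guard against an empty choices list, which the chained .get version above would not survive):

```python
from types import SimpleNamespace

def extract_completion_text(response):
    """Pull the assistant message content from a dict- or object-shaped response."""
    if isinstance(response, dict):
        # Fall back to a one-element placeholder so empty choices don't raise IndexError
        choices = response.get("choices") or [{}]
        return (choices[0].get("message") or {}).get("content", "")
    try:
        return response.choices[0].message.content
    except (AttributeError, IndexError):
        return str(response)

dict_resp = {"choices": [{"message": {"content": "hello"}}]}
obj_resp = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="world"))]
)
```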
" 'name': 'test-assistant',\n", | ||
" 'status': 'Ready',\n", | ||
" 'updated_at': '2024-11-11T11:11:37.675110926Z'}\n", | ||
"\n", | ||
"Checking assistant status...\n", | ||
"Assistant status: None\n", | ||
"\n", | ||
"Updating assistant...\n", | ||
"Updated assistant: {'created_at': '2024-11-11T11:11:35.667889532Z',\n", | ||
" 'instructions': 'Updated instructions for testing.',\n", | ||
" 'metadata': {},\n", | ||
" 'name': 'test-assistant',\n", | ||
" 'status': 'Ready',\n", | ||
" 'updated_at': '2024-11-11T11:11:41.564726136Z'}\n", | ||
"\n", | ||
"Uploading file...\n", | ||
"File upload: {'id': '63a1c983-9906-4008-ab5e-af9fd0e8f100', 'name': 'test_document.txt', 'metadata': None, 'created_on': '2024-11-11T11:11:42.389279875Z', 'updated_on': '2024-11-11T11:11:58.319674163Z', 'status': 'Available', 'percent_done': 1.0, 'signed_url': None}\n", | ||
"\n", | ||
"Waiting for file processing...\n", | ||
"\n", | ||
"Testing chat completions...\n", | ||
"Debug - Raw response: {'choices': [{'finish_reason': 'stop',\n", | ||
" 'index': 0,\n", | ||
" 'message': {'content': 'The uploaded file mentions the following '\n", | ||
" 'facts about nature and science:\\n'\n", | ||
" '1. The sky is blue.\\n'\n", | ||
" '2. Water boils at 100 degrees Celsius.\\n'\n", | ||
" '3. The Earth orbits around the Sun [1, '\n", | ||
" 'pp. 1].\\n'\n", | ||
" '\\n'\n", | ||
" 'References:\\n'\n", | ||
" '1. '\n", | ||
" '[test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \\n',\n", | ||
" 'role': '\"assistant\"'}}],\n", | ||
" 'id': '000000000000000024387da14e422955',\n", | ||
" 'model': 'gpt-4o-2024-05-13',\n", | ||
" 'usage': {'completion_tokens': 48, 'prompt_tokens': 412, 'total_tokens': 460}}\n", | ||
"Chat completion response: The uploaded file mentions the following facts about nature and science:\n", | ||
"1. The sky is blue.\n", | ||
"2. Water boils at 100 degrees Celsius.\n", | ||
"3. The Earth orbits around the Sun [1, pp. 1].\n", | ||
"\n", | ||
"References:\n", | ||
"1. [test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \n", | ||
"\n", | ||
"\n", | ||
"Deleting uploaded file...\n", | ||
"File deletion response: None\n", | ||
"\n", | ||
"Cleaning up...\n", | ||
"Assistant deleted\n" | ||
] | ||
}, | ||
{ | ||
"name": "stderr", | ||
"output_type": "stream", | ||
"text": [ | ||
"🖇 AgentOps: Session Stats - \u001b[1mDuration:\u001b[0m 51.8s | \u001b[1mCost:\u001b[0m $0.00 | \u001b[1mLLMs:\u001b[0m 2 | \u001b[1mTools:\u001b[0m 0 | \u001b[1mActions:\u001b[0m 16 | \u001b[1mErrors:\u001b[0m 0 | \u001b[1mVectors:\u001b[0m 0\n", | ||
"🖇 AgentOps: \u001b[34m\u001b[34mSession Replay: https://app.agentops.ai/drilldown?session_id=b0ceab6a-c0f2-4ca1-82fa-36941bc12aff\u001b[0m\u001b[0m\n" | ||
] | ||
}, | ||
{ | ||
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"\n", | ||
"Assistant tests completed successfully!\n" | ||
] | ||
} | ||
], | ||
"source": [ | ||
"\n", | ||
"if __name__ == \"__main__\":\n", | ||
" agentops.init(default_tags=[\"pinecone-assistant-test\"])\n", | ||
" test_assistant_operations()\n" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "agentops-F4gm0d-M-py3.11", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.11.5" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |
ℹ️ Performance Improvement
Optimize File Processing Wait with Exponential Backoff
The current implementation of the test_assistant_operations
function uses a fixed sleep interval to wait for file processing. This can lead to unnecessary delays if the file is processed quickly or excessive polling if it takes longer. Implementing an exponential backoff strategy can optimize the waiting time and reduce unnecessary load on the system.
{ | |
"cells": [ | |
{ | |
"cell_type": "markdown", | |
"id": "9c559018", | |
"metadata": {}, | |
"source": [ | |
"# Pinecone Assistant Test Notebook" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 1, | |
"id": "f91eaebb", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stderr", | |
"output_type": "stream", | |
"text": [ | |
"/Users/maksymliamin/Library/Caches/pypoetry/virtualenvs/agentops-F4gm0d-M-py3.11/lib/python3.11/site-packages/pinecone/data/index.py:1: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n", | |
" from tqdm.autonotebook import tqdm\n" | |
] | |
}, | |
{ | |
"data": { | |
"text/plain": [ | |
"True" | |
] | |
}, | |
"execution_count": 1, | |
"metadata": {}, | |
"output_type": "execute_result" | |
} | |
], | |
"source": [ | |
"\n", | |
"import agentops\n", | |
"from dotenv import load_dotenv\n", | |
"from pinecone import Pinecone\n", | |
"from pinecone_plugins.assistant.models.chat import Message\n", | |
"import time\n", | |
"import tempfile\n", | |
"import os\n", | |
"\n", | |
"# Load environment variables\n", | |
"load_dotenv()\n" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "5b7ff832", | |
"metadata": {}, | |
"source": [ | |
"## Define Assistant Operations Test Function" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 2, | |
"id": "e20e5aca", | |
"metadata": {}, | |
"outputs": [], | |
"source": [ | |
"\n", | |
"def test_assistant_operations():\n", | |
" \"\"\"Test Pinecone Assistant operations using in-memory or temporary file handling\"\"\"\n", | |
" # Initialize Pinecone and Provider\n", | |
" pc = Pinecone(api_key=os.getenv(\"PINECONE_API_KEY\"))\n", | |
" provider = agentops.llms.PineconeProvider(pc)\n", | |
" \n", | |
" try:\n", | |
" # List existing assistants\n", | |
" print(\"\\nListing assistants...\")\n", | |
" assistants = provider.list_assistants(pc)\n", | |
" print(f\"Current assistants: {assistants}\")\n", | |
" \n", | |
" # Create a new assistant\n", | |
" print(\"\\nCreating assistant...\")\n", | |
" assistant = provider.create_assistant(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" instructions=\"You are a helpful assistant for testing purposes.\"\n", | |
" )\n", | |
" print(f\"Created assistant: {assistant}\")\n", | |
" \n", | |
" # Check assistant status\n", | |
" print(\"\\nChecking assistant status...\")\n", | |
" status = provider.get_assistant(pc, \"test-assistant\")\n", | |
" print(f\"Assistant status: {status}\")\n", | |
" \n", | |
" # Update assistant\n", | |
" print(\"\\nUpdating assistant...\")\n", | |
" updated = provider.update_assistant(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" instructions=\"Updated instructions for testing.\"\n", | |
" )\n", | |
" print(f\"Updated assistant: {updated}\")\n", | |
" \n", | |
" # Create in-memory file-like object with test data\n", | |
" test_data_content = \"\"\"\n", | |
" This is a test document containing specific information.\n", | |
" The document discusses important facts:\n", | |
" 1. The sky is blue\n", | |
" 2. Water boils at 100 degrees Celsius\n", | |
" 3. The Earth orbits around the Sun\n", | |
" \n", | |
" This information should be retrievable by the assistant.\n", | |
" \"\"\"\n", | |
" \n", | |
" # Create a proper temporary file with content\n", | |
" temp_dir = tempfile.mkdtemp() # Create a temporary directory\n", | |
" file_path = os.path.join(temp_dir, 'test_document.txt') # Create a path with explicit filename\n", | |
" \n", | |
" with open(file_path, 'w', encoding='utf-8') as f:\n", | |
" f.write(test_data_content)\n", | |
"\n", | |
" # Upload file using the explicit file path\n", | |
" print(\"\\nUploading file...\")\n", | |
" file_upload = provider.upload_file(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" file_path=file_path\n", | |
" )\n", | |
" print(f\"File upload: {file_upload}\")\n", | |
" \n", | |
" # Wait for file processing (check status until ready)\n", | |
" print(\"\\nWaiting for file processing...\")\n", | |
" max_retries = 10\n", | |
" for _ in range(max_retries):\n", | |
" file_status = provider.describe_file(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" file_id=file_upload[\"id\"]\n", | |
" )\n", | |
" if file_status.get(\"status\") == \"Available\":\n", | |
" break\n", | |
" print(\"File still processing, waiting...\")\n", | |
" time.sleep(2)\n", | |
" \n", | |
" # Test chat with OpenAI-compatible interface\n", | |
" print(\"\\nTesting chat completions...\")\n", | |
" chat_completion = provider.chat_completions(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" messages=[\n", | |
" {\"role\": \"user\", \"content\": \"What facts are mentioned in the uploaded file about nature and science?\"}\n", | |
" ]\n", | |
" )\n", | |
" print(f\"Chat completion response: {chat_completion}\")\n", | |
" \n", | |
" # Delete uploaded file\n", | |
" print(\"\\nDeleting uploaded file...\")\n", | |
" delete_response = provider.delete_file(\n", | |
" pc,\n", | |
" assistant_name=\"test-assistant\",\n", | |
" file_id=file_upload[\"id\"]\n", | |
" )\n", | |
" print(f\"File deletion response: {delete_response}\")\n", | |
" \n", | |
" # Clean up\n", | |
" print(\"\\nCleaning up...\")\n", | |
" os.remove(file_path) # Remove the temporary file\n", | |
" os.rmdir(temp_dir) # Remove the temporary directory\n", | |
" # Delete assistant\n", | |
" provider.delete_assistant(pc, \"test-assistant\")\n", | |
" print(\"Assistant deleted\")\n", | |
"\n", | |
" except Exception as e:\n", | |
" print(f\"Error during testing: {e}\")\n", | |
" agentops.end_session(end_state=\"Fail\")\n", | |
" return\n", | |
" \n", | |
" agentops.end_session(end_state=\"Success\")\n", | |
" print(\"\\nAssistant tests completed successfully!\")\n" | |
] | |
}, | |
{ | |
"cell_type": "markdown", | |
"id": "8e1c07ce", | |
"metadata": {}, | |
"source": [ | |
"## Run the Assistant Operations Test Function" | |
] | |
}, | |
{ | |
"cell_type": "code", | |
"execution_count": 3, | |
"id": "2cb94c2e", | |
"metadata": {}, | |
"outputs": [ | |
{ | |
"name": "stderr", | |
"output_type": "stream", | |
"text": [ | |
"🖇 AgentOps: WARNING: agentops is out of date. Please update with the command: 'pip install --upgrade agentops'\n", | |
"🖇 AgentOps: \u001b[34m\u001b[34mSession Replay: https://app.agentops.ai/drilldown?session_id=b0ceab6a-c0f2-4ca1-82fa-36941bc12aff\u001b[0m\u001b[0m\n" | |
] | |
}, | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"\n", | |
"Listing assistants...\n", | |
"Current assistants: []\n", | |
"\n", | |
"Creating assistant...\n", | |
"Created assistant: {'created_at': '2024-11-11T11:11:35.667889532Z',\n", | |
" 'instructions': 'You are a helpful assistant for testing purposes.',\n", | |
" 'metadata': {},\n", | |
" 'name': 'test-assistant',\n", | |
" 'status': 'Ready',\n", | |
" 'updated_at': '2024-11-11T11:11:37.675110926Z'}\n", | |
"\n", | |
"Checking assistant status...\n", | |
"Assistant status: None\n", | |
"\n", | |
"Updating assistant...\n", | |
"Updated assistant: {'created_at': '2024-11-11T11:11:35.667889532Z',\n", | |
" 'instructions': 'Updated instructions for testing.',\n", | |
" 'metadata': {},\n", | |
" 'name': 'test-assistant',\n", | |
" 'status': 'Ready',\n", | |
" 'updated_at': '2024-11-11T11:11:41.564726136Z'}\n", | |
"\n", | |
"Uploading file...\n", | |
"File upload: {'id': '63a1c983-9906-4008-ab5e-af9fd0e8f100', 'name': 'test_document.txt', 'metadata': None, 'created_on': '2024-11-11T11:11:42.389279875Z', 'updated_on': '2024-11-11T11:11:58.319674163Z', 'status': 'Available', 'percent_done': 1.0, 'signed_url': None}\n", | |
"\n", | |
"Waiting for file processing...\n", | |
"\n", | |
"Testing chat completions...\n", | |
"Debug - Raw response: {'choices': [{'finish_reason': 'stop',\n", | |
" 'index': 0,\n", | |
" 'message': {'content': 'The uploaded file mentions the following '\n", | |
" 'facts about nature and science:\\n'\n", | |
" '1. The sky is blue.\\n'\n", | |
" '2. Water boils at 100 degrees Celsius.\\n'\n", | |
" '3. The Earth orbits around the Sun [1, '\n", | |
" 'pp. 1].\\n'\n", | |
" '\\n'\n", | |
" 'References:\\n'\n", | |
" '1. '\n", | |
" '[test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \\n',\n",
" 'role': '\"assistant\"'}}],\n",
" 'id': '000000000000000024387da14e422955',\n",
" 'model': 'gpt-4o-2024-05-13',\n",
" 'usage': {'completion_tokens': 48, 'prompt_tokens': 412, 'total_tokens': 460}}\n",
"Chat completion response: The uploaded file mentions the following facts about nature and science:\n",
"1. The sky is blue.\n",
"2. Water boils at 100 degrees Celsius.\n",
"3. The Earth orbits around the Sun [1, pp. 1].\n",
"\n",
"References:\n",
"1. [test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \n",
"\n",
"\n",
"Deleting uploaded file...\n",
"File deletion response: None\n",
"\n",
"Cleaning up...\n",
"Assistant deleted\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"🖇 AgentOps: Session Stats - \u001b[1mDuration:\u001b[0m 51.8s | \u001b[1mCost:\u001b[0m $0.00 | \u001b[1mLLMs:\u001b[0m 2 | \u001b[1mTools:\u001b[0m 0 | \u001b[1mActions:\u001b[0m 16 | \u001b[1mErrors:\u001b[0m 0 | \u001b[1mVectors:\u001b[0m 0\n",
"🖇 AgentOps: \u001b[34m\u001b[34mSession Replay: https://app.agentops.ai/drilldown?session_id=b0ceab6a-c0f2-4ca1-82fa-36941bc12aff\u001b[0m\u001b[0m\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Assistant tests completed successfully!\n"
]
}
],
"source": [
"\n",
"if __name__ == \"__main__\":\n",
"    agentops.init(default_tags=[\"pinecone-assistant-test\"])\n",
"    test_assistant_operations()\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agentops-F4gm0d-M-py3.11",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
max_retries = 10
wait_time = 2
for attempt in range(max_retries):
    file_status = provider.describe_file(
        pc,
        assistant_name="test-assistant",
        file_id=file_upload["id"]
    )
    if file_status.get("status") == "Available":
        break
    print(f"File still processing, waiting {wait_time} seconds...")
    time.sleep(wait_time)
    wait_time *= 2  # Exponential backoff
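The suggested polling loop can also be factored into a small reusable helper. A minimal sketch, assuming a generic `check` callable — the `wait_until_available` name and its parameters are illustrative, not part of this PR:

```python
import time

def wait_until_available(check, max_retries=10, base_wait=2):
    """Poll check() with exponential backoff until it returns True.

    Returns True once the resource is available, or False if all
    retries are exhausted.
    """
    wait_time = base_wait
    for _ in range(max_retries):
        if check():
            return True
        time.sleep(wait_time)
        wait_time *= 2  # double the delay after each failed poll
    return False
```

In the notebook this could wrap the `provider.describe_file(...)` status check, passing a lambda that returns whether `status` equals `"Available"`.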
ℹ️ Error Handling
Use a Logging Framework for Error Handling
The test_assistant_operations function currently prints errors to the console. For better error tracking and debugging, consider logging errors through a logging framework such as Python's logging module. This allows more flexible error management and easier integration with monitoring tools.
except Exception as e:
    logging.error(f"Error during testing: {e}")
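A minimal sketch of what routing errors through `logging` could look like — the `run_with_session` wrapper and logger name are illustrative assumptions, not code from this PR:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(levelname)s %(name)s: %(message)s")
logger = logging.getLogger("pinecone_assistant_test")

def run_with_session(operation):
    """Run operation(), logging any failure with a full traceback."""
    try:
        operation()
    except Exception:
        # exc_info=True attaches the traceback to the log record
        logger.error("Error during testing", exc_info=True)
        return False
    return True
```

The body of `test_assistant_operations` could then be passed in as `operation`, with `agentops.end_session` called with `"Success"` or `"Fail"` based on the boolean result.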
"\n", | |
"Creating assistant...\n", | |
"Created assistant: {'created_at': '2024-11-11T11:11:35.667889532Z',\n", | |
" 'instructions': 'You are a helpful assistant for testing purposes.',\n", | |
" 'metadata': {},\n", | |
" 'name': 'test-assistant',\n", | |
" 'status': 'Ready',\n", | |
" 'updated_at': '2024-11-11T11:11:37.675110926Z'}\n", | |
"\n", | |
"Checking assistant status...\n", | |
"Assistant status: None\n", | |
"\n", | |
"Updating assistant...\n", | |
"Updated assistant: {'created_at': '2024-11-11T11:11:35.667889532Z',\n", | |
" 'instructions': 'Updated instructions for testing.',\n", | |
" 'metadata': {},\n", | |
" 'name': 'test-assistant',\n", | |
" 'status': 'Ready',\n", | |
" 'updated_at': '2024-11-11T11:11:41.564726136Z'}\n", | |
"\n", | |
"Uploading file...\n", | |
"File upload: {'id': '63a1c983-9906-4008-ab5e-af9fd0e8f100', 'name': 'test_document.txt', 'metadata': None, 'created_on': '2024-11-11T11:11:42.389279875Z', 'updated_on': '2024-11-11T11:11:58.319674163Z', 'status': 'Available', 'percent_done': 1.0, 'signed_url': None}\n", | |
"\n", | |
"Waiting for file processing...\n", | |
"\n", | |
"Testing chat completions...\n", | |
"Debug - Raw response: {'choices': [{'finish_reason': 'stop',\n", | |
" 'index': 0,\n", | |
" 'message': {'content': 'The uploaded file mentions the following '\n", | |
" 'facts about nature and science:\\n'\n", | |
" '1. The sky is blue.\\n'\n", | |
" '2. Water boils at 100 degrees Celsius.\\n'\n", | |
" '3. The Earth orbits around the Sun [1, '\n", | |
" 'pp. 1].\\n'\n", | |
" '\\n'\n", | |
" 'References:\\n'\n", | |
" '1. '\n", | |
" '[test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \\n',\n", | |
" 'role': '\"assistant\"'}}],\n", | |
" 'id': '000000000000000024387da14e422955',\n", | |
" 'model': 'gpt-4o-2024-05-13',\n", | |
" 'usage': {'completion_tokens': 48, 'prompt_tokens': 412, 'total_tokens': 460}}\n", | |
"Chat completion response: The uploaded file mentions the following facts about nature and science:\n", | |
"1. The sky is blue.\n", | |
"2. Water boils at 100 degrees Celsius.\n", | |
"3. The Earth orbits around the Sun [1, pp. 1].\n", | |
"\n", | |
"References:\n", | |
"1. [test_document.txt](https://storage.googleapis.com/knowledge-prod-files/2ae32459-b2ad-4fce-a3b7-ac0413a7b798%2F9f246797-8b7b-49d2-89c8-e79c234850b4%2F63a1c983-9906-4008-ab5e-af9fd0e8f100.txt?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=ke-prod-1%40pc-knowledge-prod.iam.gserviceaccount.com%2F20241111%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20241111T111208Z&X-Goog-Expires=3600&X-Goog-SignedHeaders=host&response-content-disposition=inline&response-content-type=text%2Fplain&X-Goog-Signature=272842cff3e11390f927a9aff2ac9b299a986bf5ad5c5bb616a3fdddf7fb60f20bffab1bb838bc9761297bbe8360c21af64f5428340d2d4d172cf5d58958aa78c1db99034cf937e0a14688a63ca71f20378f1608dc1b0473fa0cf497cf9ee1eb58e34c2073e0399215e99646efc80fc18c21fbfdac4ff2e777d0c85f0b1f90b47537f9b057e8de89a8eaa0e747a89fce0456de293a69d689f543b60b63e8b1e877f8f338cada78105cc8a068cc4a09c418b61939186768baa942386de0c494f8b22054e28aef441e02f8afaeea6678e1a9078522bcab58fd982f5e9ac3ed47dc6d5fb11423133e44c6636594fd1978dc721e6b081b4dd7d468120bcc1179e8b4) \n", | |
"\n", | |
"\n", | |
"Deleting uploaded file...\n", | |
"File deletion response: None\n", | |
"\n", | |
"Cleaning up...\n", | |
"Assistant deleted\n" | |
] | |
}, | |
{ | |
"name": "stderr", | |
"output_type": "stream", | |
"text": [ | |
"🖇 AgentOps: Session Stats - \u001b[1mDuration:\u001b[0m 51.8s | \u001b[1mCost:\u001b[0m $0.00 | \u001b[1mLLMs:\u001b[0m 2 | \u001b[1mTools:\u001b[0m 0 | \u001b[1mActions:\u001b[0m 16 | \u001b[1mErrors:\u001b[0m 0 | \u001b[1mVectors:\u001b[0m 0\n", | |
"🖇 AgentOps: \u001b[34m\u001b[34mSession Replay: https://app.agentops.ai/drilldown?session_id=b0ceab6a-c0f2-4ca1-82fa-36941bc12aff\u001b[0m\u001b[0m\n" | |
] | |
}, | |
{ | |
"name": "stdout", | |
"output_type": "stream", | |
"text": [ | |
"\n", | |
"Assistant tests completed successfully!\n" | |
] | |
} | |
], | |
"source": [ | |
"\n", | |
"if __name__ == \"__main__\":\n", | |
" agentops.init(default_tags=[\"pinecone-assistant-test\"])\n", | |
" test_assistant_operations()\n" | |
] | |
} | |
], | |
"metadata": { | |
"kernelspec": { | |
"display_name": "agentops-F4gm0d-M-py3.11", | |
"language": "python", | |
"name": "python3" | |
}, | |
"language_info": { | |
"codemirror_mode": { | |
"name": "ipython", | |
"version": 3 | |
}, | |
"file_extension": ".py", | |
"mimetype": "text/x-python", | |
"name": "python", | |
"nbconvert_exporter": "python", | |
"pygments_lexer": "ipython3", | |
"version": "3.11.5" | |
} | |
}, | |
"nbformat": 4, | |
"nbformat_minor": 5 | |
} | |
" query=\"Tell me about the tech company Apple\",\n", | ||
" documents=[\n", | ||
" {\"id\": \"vec1\", \"text\": \"Apple is a popular fruit known for its sweetness.\"},\n", | ||
" {\"id\": \"vec2\", \"text\": \"Apple Inc. has revolutionized the tech industry with its iPhone.\"}\n", | ||
" ],\n", | ||
" top_n=2,\n", | ||
" return_documents=True\n", | ||
" )\n", | ||
" for result in rerank_result.data:\n", | ||
" print(f\"Score: {result.score:.4f} | Document: {result.document['text'][:100]}...\")\n", | ||
"\n", | ||
" except Exception as e:\n", | ||
" print(f\"Error in inference operations: {e}\")\n", | ||
" agentops.end_session(end_state=\"Fail\")\n", | ||
" return\n", | ||
" \n", | ||
" agentops.end_session(end_state=\"Success\")\n", | ||
" print(\"\\nInference test completed successfully!\")\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "05cc2216", | ||
"metadata": {}, | ||
"source": [ | ||
"# Execute the test" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 5, | ||
"id": "0c7dfaaf", | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
"name": "stderr", | ||
"output_type": "stream", | ||
"text": [ | ||
"🖇 AgentOps: AgentOps has already been initialized. If you are trying to start a session, call agentops.start_session() instead.\n" | ||
] | ||
}, | ||
{ | ||
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"\n", | ||
"Testing Pinecone Inference API operations...\n" | ||
] | ||
}, | ||
{ | ||
"name": "stderr", | ||
"output_type": "stream", | ||
"text": [ | ||
"🖇 AgentOps: WARNING: agentops is out of date. Please update with the command: 'pip install --upgrade agentops'\n", | ||
"🖇 AgentOps: Could not end session - no sessions detected\n" | ||
] | ||
}, | ||
{ | ||
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"Generated 2 embeddings\n", | ||
"Embedding dimension: 1024\n", | ||
"Score: 0.2187 | Document: Apple Inc. has revolutionized the tech industry with its iPhone....\n", | ||
"Score: 0.0113 | Document: Apple is a popular fruit known for its sweetness....\n", | ||
"\n", | ||
"Inference test completed successfully!\n" | ||
] | ||
} | ||
], | ||
"source": [ | ||
"\n", | ||
"if __name__ == \"__main__\":\n", | ||
" agentops.init(default_tags=[\"pinecone-inference-test\"])\n", | ||
" test_inference_operations()\n" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "agentops-F4gm0d-M-py3.11", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.11.5" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |
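The rerank scores above (0.2187 vs 0.0113) come from the bge-reranker-v2-m3 cross-encoder, which reads the query and each document together. A cheaper first-pass signal is cosine similarity between the bi-encoder embeddings the notebook already generates; the sketch below illustrates the metric on toy 3-dimensional vectors rather than the real 1024-dimensional multilingual-e5-large outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for query and document embeddings:
query_vec = [1.0, 0.0, 1.0]
doc_fruit = [1.0, 1.0, 0.0]
doc_tech  = [1.0, 0.0, 0.9]

# doc_tech points nearly the same way as the query, so it scores higher.
print(cosine(query_vec, doc_tech))
print(cosine(query_vec, doc_fruit))  # → 0.5
```

Unlike the cross-encoder, this metric never sees the query and document jointly, which is why rerankers are typically applied on top of an embedding-based retrieval pass rather than replaced by it.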
ℹ️ Performance Improvement
Optimize Pinecone Client Initialization
The current implementation initializes the Pinecone client and provider within the test function. This can lead to repeated initialization if the function is called multiple times, which is inefficient. Consider initializing these components outside the function to improve performance.
+ # Initialize Pinecone outside the function
+ pc = Pinecone(api_key=os.getenv("PINECONE_API_KEY"))
+ provider = agentops.llms.PineconeProvider(pc)
+
def test_inference_operations():
"""Test Pinecone's Inference API operations"""
print("\nTesting Pinecone Inference API operations...")
- # Initialize Pinecone
- pc = Pinecone(api_key=os.getenv("PINECONE_API_KEY"))
- provider = agentops.llms.PineconeProvider(pc)
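One way to realize this suggestion without a bare module-level global is a cached factory, so every test that asks for a client with the same API key reuses the same instance. The get_client name and the placeholder object below are illustrative only — in the notebook the body would return Pinecone(api_key=api_key):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_client(api_key):
    """Build (or reuse) one client per API key.

    lru_cache memoizes on the argument, so repeated calls with the
    same key skip re-initialization entirely. A plain object() stands
    in for the real Pinecone client to keep the sketch self-contained.
    """
    return object()

a = get_client("key-1")
b = get_client("key-1")  # cache hit: same object as `a`
c = get_client("key-2")  # different key: new client
print(a is b, a is c)    # → True False
```

Compared with initializing at import time, the factory keeps construction lazy and makes the dependency explicit in each test function.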
ℹ️ Error Handling
Enhance Error Logging
The current error handling in the test_inference_operations function only prints the error message. Consider logging the error using a logging framework to ensure that errors are recorded in a structured manner, which can be useful for debugging and monitoring.
except Exception as e:
- print(f"Error in inference operations: {e}")
+ logging.error(f"Error in inference operations: {e}")
agentops.end_session(end_state="Fail")
return
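A minimal, self-contained sketch of this change: configure the logging module once, then route the except-block through it so failures carry a level and can be redirected by handlers. run_step and failing_step are illustrative names, not part of the notebook:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("pinecone-tests")

def run_step(step):
    """Run one test step; log and report failure instead of printing."""
    try:
        step()
    except Exception as e:  # mirrors the except block in the notebook
        log.error("Error in inference operations: %s", e)
        return False
    return True

def failing_step():
    raise ValueError("boom")

ok = run_step(failing_step)
print(ok)  # → False
```

Because log.error goes through handlers rather than stdout, the same failures can later be shipped to a file or monitoring backend without touching the test code.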
{file = "certifi-2024.8.30.tar.gz", hash = "sha256:bec941d2aa8195e248a60b31ff9f0558284cf01a52591ceda73ea9afffd69fd9"}, | ||
] | ||
|
[[package]] | ||
name = "cffi" | ||
version = "1.17.1" | ||
description = "Foreign Function Interface for Python calling C code." | ||
optional = false | ||
python-versions = ">=3.8" | ||
files = [ | ||
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"}, | ||
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"}, | ||
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"}, | ||
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"}, | ||
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"}, | ||
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"}, | ||
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"}, | ||
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"}, | ||
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"}, | ||
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"}, | ||
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"}, | ||
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"}, | ||
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"}, | ||
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"}, | ||
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"}, | ||
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"}, | ||
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"}, | ||
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"}, | ||
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"}, | ||
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"}, | ||
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"}, | ||
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"}, | ||
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"}, | ||
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"}, | ||
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"}, | ||
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"}, | ||
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"}, | ||
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"}, | ||
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"}, | ||
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"}, | ||
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"}, | ||
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"}, | ||
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"}, | ||
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"}, | ||
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"}, | ||
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"}, | ||
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"}, | ||
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"}, | ||
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"}, | ||
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"}, | ||
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"}, | ||
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"}, | ||
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"}, | ||
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"}, | ||
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"}, | ||
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"}, | ||
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"}, | ||
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"}, | ||
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"}, | ||
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"}, | ||
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"}, | ||
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"}, | ||
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"}, | ||
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"}, | ||
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"}, | ||
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"}, | ||
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"}, | ||
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"}, | ||
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"}, | ||
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"}, | ||
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"}, | ||
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"}, | ||
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"}, | ||
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"}, | ||
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"}, | ||
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"}, | ||
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"}, | ||
] | ||
[package.dependencies]
pycparser = "*"
[[package]]
name = "charset-normalizer"
version = "3.4.0"
ℹ️ Dependency Management
Assess Impact of Adding cffi as a Mandatory Dependency
The addition of the cffi package as a mandatory dependency should be carefully evaluated. cffi is a C Foreign Function Interface for Python, and its inclusion might introduce platform-specific issues or require additional setup steps. Ensure that its inclusion is justified and documented for all environments.
Hey @areibman @the-praxs @teocns ! I've updated to match the new version, created example notebooks, applied LLMEvent to the Pinecone assistant chat function, and added tests with temporary files as asked. Please tell me if anything else needs changes. Cheers, |
I will look in a while. Thanks a lot! |
Linting is failing so please resolve that. |
Files selected (3)
Files ignored (2)
Files selected (0)
Files ignored (2)
Most of the modifications are required in the Exception blocks, where a pprint of the arguments can help the user understand what's causing the failure.
The other changes are needed for code consistency and efficiency.
Also - please do the linting and resolve the errors in the CI tests.
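For the pprint suggestion, a minimal sketch of what such an exception handler could look like (the function name, logger setup, and kwargs shape are all assumptions for illustration, not the PR's actual code):

```python
import logging
from pprint import pformat

logger = logging.getLogger(__name__)


def query_index(index, **kwargs):
    """Illustrative wrapper: log a readable dump of the call arguments on failure."""
    try:
        return index.query(**kwargs)
    except Exception as e:
        # pformat renders nested kwargs as an indented, readable block
        logger.error("Pinecone query failed: %s\nkwargs:\n%s", e, pformat(kwargs))
        raise
```

Re-raising keeps any existing ErrorEvent flow intact while still surfacing the context in the console.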
@dataclass
class VectorEvent(Event):
    """Event class for vector operations"""
    event_type: str = "action"
Incorrect type annotation and value. This should be EventType with the value as EventType.VECTOR.value.
Since that event type is missing from the EventType class, adding a VECTOR enum member will resolve the issue.
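A minimal sketch of the suggested fix (the real EventType enum and Event base class live in the AgentOps codebase; the members and fields shown here are assumptions):

```python
from dataclasses import dataclass
from enum import Enum


class EventType(Enum):
    ACTION = "action"
    VECTOR = "vector"  # new member proposed in this review


@dataclass
class Event:
    event_type: str


@dataclass
class VectorEvent(Event):
    """Event class for vector operations."""
    # Use the enum's value instead of the hard-coded "action" string
    event_type: str = EventType.VECTOR.value
```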
        self._safe_record(session, event)
        return response
    except Exception as e:
        error_event = ErrorEvent(
It would be good to use pprint and print the Exception in the console, so the user can see what's causing it.
"query": kwargs.get("query") | ||
}) | ||
except Exception as e: | ||
details["error"] = str(e) |
Same as the other Exception block, pprint will look good!
        self._safe_record(session, event)
        return response
    except Exception as e:
        error_event = ErrorEvent(
Same as the other Exception block: pprint
        result = orig(*args, **kwargs)
        return self.handle_response(result, event_kwargs, init_timestamp, session=session)
    except Exception as e:
        # Create ActionEvent for the error
You know what I mean :)
        response = pc_instance.assistant.create_assistant(**kwargs)
        return self._handle_assistant_response(response, "create_assistant", kwargs, init_timestamp, session)
    except Exception as e:
        error_event = ErrorEvent(
:)
response = assistant.chat_completions(messages=message_objects, stream=stream, model=model)

# Debug logging
print(f"Debug - Raw response: {response}")
Replace with logger.debug to ensure consistent logging.
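A sketch of the suggested replacement, assuming the module-level logger pattern used elsewhere in the codebase (function name is illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def handle_chat_response(response):
    """Route debug output through logging instead of print, so it can be
    silenced or redirected by logging configuration."""
    logger.debug("Raw response: %r", response)
    return response
```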
    return completion_text

except Exception as e:
    print(f"Debug - Exception in chat_completions: {str(e)}")
Replace with logger.debug to ensure consistent logging.
Pinecone Integration with Comprehensive Testing Suite
Overview
This PR introduces a comprehensive Pinecone integration along with extensive testing coverage across vector operations, RAG implementations, inference capabilities, and assistant functionalities.
Key Features
PineconeProvider Class
Testing Suites
RAG Pipeline Test (pinecone_rag_test.py)
Inference API Test (pinecone_inference_test.py)
Assistant API Test (pinecone_assistant_test.py)
Implementation Details
Testing Instructions
Ensure that the required environment variables are set:
PINECONE_API_KEY
OPENAI_API_KEY (for RAG testing)
Run individual test suites with the following commands:
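The environment-variable step above could be made explicit with a small preflight helper (illustrative; only the variable names come from this description):

```python
import os
from typing import List


def check_env(require_openai: bool = False) -> List[str]:
    """Return the names of any required environment variables that are unset."""
    required = ["PINECONE_API_KEY"]
    if require_openai:  # the RAG suite also calls OpenAI
        required.append("OPENAI_API_KEY")
    return [name for name in required if not os.environ.get(name)]
```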
Notes
Future Improvements
For more information, refer to the Pinecone Documentation.