
Autogen latency v2 #30

Closed
wants to merge 15 commits into from

Conversation

raghavdixit99
Owner

📥 Pull Request

📘 Description
Briefly describe the changes you've made.

🧪 Testing
Describe the tests you performed to validate your changes.

HowieG and others added 15 commits October 30, 2024 13:37
…AgentOps-AI#479)

In this feature we're adding the pop up chat of entelligence inside the agentops repository
* make start_session non blocking

* fix build issue

* bump version

* callback for start session now that it is async

* fix callback

* unpin dependencies (AgentOps-AI#434)

Co-authored-by: Howard Gil <[email protected]>

* bump version number

* wip

* change autogen getting run every time

* remove prints

* remove more prints

* suppress warnings

* exponential retry to close data

* removed event counter

* fix requests mock

* remove print

* fixed more tests

* removed inits from test_agent; does not require a client

* create requests fixture

* Scope fixtures

* black run

* remove bad files

* add spaces back

* revert session threading changes

* remove callback

* revert session changes

* revert session

* fix test

* update tests

* update tox to install local build instead of pypi

* replace http client with requests so requests_mock works properly

* fixed multiple sessions

* fix tool recorder

* removed test logs, fixed request count tests

* set fixture scopes

* Fixed missing async tests failing in tox, updated tox

* fixed missing pass in tests

* fixed timing

* created rc branch

---------

Co-authored-by: Shawn Qiu <[email protected]>
Co-authored-by: Howard Gil <[email protected]>
* Updated README and assets with new v2 screenshots

* updated font size on dashboard banner
Add Hover Badges and Update Badge Style for Social Links
…#382)

* add initial files for support

* working sync client

* stream not working

* updated examples notebook for current testing

* fix for `delta.content` and cleanup

* cleanup again

* cleanup and add tool event

* structure examples notebook

* add contextual answers tracking

* cleanup example notebook

* create testing file

* clean example notebook again

* rename examples directory

* updated docs page

* wrap `chunk.choices[0].delta.content` in `str(...)`

* update doc

---------

Co-authored-by: reibs <[email protected]>
…Ops-AI#374)

* add mistral support

* linting

* fix typo

* add tests

* add examples notebook

* linting

* fix langchain typo in pyproject.toml (updated to 0.2.14)

* fix mistralai import and `undo_override` function

* add mistral to readme

* fix typo

* modified self.llm_event to llm_event

* refactoring

* black

* rename examples directory

* fix merge

* init merge

* updated model name so that tokencost will recognize this as a mistral model

* black lint

---------

Co-authored-by: reibs <[email protected]>
@raghavdixit99
Owner Author

Entelligence AI Bot Icon Entelligence AI Bot v4

Summary

Purpose:
Integrate AI21 and Mistral providers, improve session management, error handling, and logging, and update documentation and examples.

Key Changes:

  • New Feature: Support for AI21 and Mistral providers, introduction of class-level session object for HTTP requests, and new Jupyter notebooks for AI21 and Mistral integrations.
  • Refactor: Refactored session initialization and logging configuration, replaced http.client with requests for HTTP requests, and updated project version to 0.3.15rc1.
  • Documentation: Updated social media and documentation links with badge-style images, added new documentation for AI21 integration, and added new images and updated existing ones for documentation.
  • Test: Added new test cases for AI21 and Mistral providers, and updated existing tests for compatibility with new changes.

Impact:
Enhanced AgentOps project with new provider support, improved performance and reliability for HTTP requests, and provided comprehensive documentation and examples for users.


github-actions bot commented Nov 9, 2024

Entelligence AI Bot Icon Entelligence AI Bot v4

Walkthrough

This update enhances the AgentOps project by integrating AI21 and Mistral providers, adding new examples and documentation for these integrations. The README file has been updated with badge-style links for better visual appeal. Several new images have been added to the documentation, and existing ones have been updated. The codebase has been refactored to improve session management, error handling, and logging. Additionally, the update includes new test cases and adjustments to existing ones to ensure compatibility with the latest changes. The version has been incremented to 0.3.15rc1.

Changes

File(s) Summary
README.md Updated social media and documentation links with badge-style images for improved visual presentation.
agentops/__init__.py, agentops/client.py Refactored session initialization and logging configuration. Improved error handling and logging for client initialization.
agentops/helpers.py Replaced http.client with requests for HTTP requests. Added a new function to format duration.
agentops/http_client.py Introduced a class-level session object for HTTP requests to improve performance and reliability.
agentops/llms/__init__.py, agentops/llms/ai21.py, agentops/llms/mistral.py Added support for AI21 and Mistral providers, including methods for handling responses and overriding default behavior.
agentops/session.py Set default session end state to "Indeterminate".
docs/images/external/app_screenshots/* Added new images and updated existing ones for documentation purposes.
docs/snippets/github-stars.mdx, docs/v1/integrations/ai21.mdx, docs/v1/integrations/langchain.mdx, docs/v1/introduction.mdx, docs/v1/quickstart.mdx Updated GitHub star count and added new documentation for AI21 integration.
docs/v1/scripts/entelligence.js, docs/v1/styles/styles.css Added new script for Entelligence chat and updated styles for better responsiveness.
examples/ai21_examples/ai21_examples.ipynb, examples/mistral_examples/mistral_example.ipynb Added new Jupyter notebooks demonstrating the use of AI21 and Mistral integrations.
pyproject.toml Updated project version to 0.3.15rc1 and added pytest configuration for asyncio.
tests/core_manual_tests/*, tests/test_*.py Added new test cases for AI21 and Mistral providers, updated existing tests for compatibility with new changes.
tox.ini Updated Python environment list and dependencies for testing.


github-actions bot commented Nov 9, 2024

Entelligence AI Bot Icon Entelligence AI Bot v4

Files selected (24)
  • README.md
    - agentops/__init__.py
    - agentops/client.py
    - agentops/helpers.py
    - agentops/host_env.py
    - agentops/http_client.py
    - agentops/llms/__init__.py
    - agentops/llms/ai21.py
    - agentops/llms/mistral.py
    - agentops/session.py
    - docs/snippets/github-stars.mdx
    - docs/v1/integrations/ai21.mdx
    - docs/v1/integrations/langchain.mdx
    - docs/v1/introduction.mdx
    - docs/v1/quickstart.mdx
    - docs/v1/scripts/entelligence.js
    - docs/v1/styles/styles.css
    - examples/ai21_examples/ai21_examples.ipynb
    - examples/mistral_examples/mistral_example.ipynb
    - pyproject.toml
    - tests/core_manual_tests/api_server/server.py
    - tests/core_manual_tests/providers/ai21_canary.py
    - tests/core_manual_tests/providers/anthropic_canary.py
    - tests/core_manual_tests/providers/mistral_canary.py
Files ignored (18)
  • docs/images/external/app_screenshots/chat-viewer.png
    - docs/images/external/app_screenshots/dashboard-banner.png
    - docs/images/external/app_screenshots/dashboard_banner.png
    - docs/images/external/app_screenshots/overview-charts.png
    - docs/images/external/app_screenshots/overview.png
    - docs/images/external/app_screenshots/session-drilldown-graphs.png
    - docs/images/external/app_screenshots/session-drilldown-metadata.png
    - docs/images/external/app_screenshots/session-overview.png
    - docs/images/external/app_screenshots/session-replay.png
    - tests/test_agent.py
    - tests/test_canary.py
    - tests/test_events.py
    - tests/test_pre_init.py
    - tests/test_record_action.py
    - tests/test_record_tool.py
    - tests/test_session.py
    - tests/test_teardown.py
    - tox.ini
Instructions

Emoji Descriptions:

  • ⚠️ Potential Issue - May require further investigation.
  • 🔒 Security Vulnerability - Fix to ensure system safety.
  • 💻 Code Improvement - Suggestions to enhance code quality.
  • 🔨 Refactor Suggestion - Recommendations for restructuring code.
  • ℹ️ Others - General comments and information.

Interact with the Bot:

  • Send a message or request using the format:

    @Entelligence.AI + *your message*

    • Example: @Entelligence.AI Can you suggest improvements for this code?
  • Execute a command using the format:

    @Entelligence.AI + *"/command"*

Available Commands:

  • /updateCommit: Apply the suggested changes and commit them.
  • /updateGuideline: Modify an existing guideline.
  • /addGuideline: Introduce a new guideline.

Tips for Using @Entelligence.AI Effectively:

  • Specific Queries: For the best results, be specific with your requests. Example: @Entelligence.AI summarize the changes in this PR.
  • Focused Discussions: Tag @Entelligence.AI directly on specific code lines or files for detailed feedback.
  • Managing Reviews: Use review comments for targeted discussions on code snippets, and PR comments for broader queries about the entire PR.

Need More Help?

  • Visit our documentation for detailed guides on using Entelligence.AI.
  • Join our community to connect with others, request features, and share feedback.
  • Follow us for updates on new features and improvements.

Comment on lines 29 to 35
```diff
 Attributes:
     init_timestamp (float): The timestamp for when the session started, represented as seconds since the epoch.
     end_timestamp (float, optional): The timestamp for when the session ended, represented as seconds since the epoch. This is only set after end_session is called.
-    end_state (str, optional): The final state of the session. Suggested: "Success", "Fail", "Indeterminate"
+    end_state (str, optional): The final state of the session. Suggested: "Success", "Fail", "Indeterminate". Defaults to "Indeterminate".
     end_state_reason (str, optional): The reason for ending the session.
 """
```

Entelligence AI Bot Icon Entelligence AI Bot v4

💻 Code Consistency

Ensure Consistent Use of Default end_state

The docstring now mentions that end_state defaults to "Indeterminate". Ensure that this change is consistently applied across the codebase, especially in places where end_state is checked or utilized, to prevent any logic discrepancies.

Comment on lines 42 to 48
```diff
         host_env: Optional[dict] = None,
     ):
         self.end_timestamp = None
-        self.end_state: Optional[str] = None
+        self.end_state: Optional[str] = "Indeterminate"
         self.session_id = session_id
         self.init_timestamp = get_ISO_time()
         self.tags: List[str] = tags or []
```

Entelligence AI Bot Icon Entelligence AI Bot v4

ℹ️ Logic Error

Review Default Value for end_state

The default value for end_state is set to "Indeterminate" in the constructor. Ensure that this default value aligns with the intended logic of the application. If the session's end state should be explicitly set by the application logic, consider initializing it to None and setting it only when the session ends.

Commitable Code Suggestion:
Suggested change
```diff
-        self.end_state: Optional[str] = "Indeterminate"
+        self.end_state: Optional[str] = None
```
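The trade-off the reviewer is pointing at can be sketched as follows. This is a hypothetical, simplified `Session` (class shape and method names are illustrative, not the actual AgentOps implementation) that keeps `end_state` as `None` until `end_session()` applies the `"Indeterminate"` default, so a still-running session remains distinguishable from one that ended indeterminately:

```python
from typing import List, Optional


class Session:
    """Illustrative sketch only: defer the "Indeterminate" default to end time."""

    def __init__(self, session_id: str, tags: Optional[List[str]] = None):
        self.session_id = session_id
        self.tags: List[str] = tags or []
        self.end_timestamp: Optional[str] = None
        self.end_state: Optional[str] = None  # genuinely unset while running

    def end_session(self, end_state: str = "Indeterminate") -> None:
        # The default is applied here, at end time, not at construction time
        self.end_state = end_state


s = Session("abc-123")
assert s.end_state is None      # still running: state is unset, not "Indeterminate"
s.end_session()
assert s.end_state == "Indeterminate"
```

With the PR's constructor-level default, the first assertion above would fail, which is exactly the ambiguity the comment warns about.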

Comment on lines +1 to +251
```python
        return response

    def override(self):
        self._override_completion()
        self._override_completion_async()
        self._override_answer()
        self._override_answer_async()

    def _override_completion(self):
        from ai21.clients.studio.resources.chat import ChatCompletions

        global original_create
        original_create = ChatCompletions.create

        def patched_function(*args, **kwargs):
            # Call the original function with its original arguments
            init_timestamp = get_ISO_time()
            session = kwargs.get("session", None)
            if "session" in kwargs.keys():
                del kwargs["session"]
            result = original_create(*args, **kwargs)
            return self.handle_response(result, kwargs, init_timestamp, session=session)

        # Override the original method with the patched one
        ChatCompletions.create = patched_function

    def _override_completion_async(self):
        from ai21.clients.studio.resources.chat import AsyncChatCompletions

        global original_create_async
        original_create_async = AsyncChatCompletions.create

        async def patched_function(*args, **kwargs):
            # Call the original function with its original arguments
            init_timestamp = get_ISO_time()
            session = kwargs.get("session", None)
            if "session" in kwargs.keys():
                del kwargs["session"]
            result = await original_create_async(*args, **kwargs)
            return self.handle_response(result, kwargs, init_timestamp, session=session)

        # Override the original method with the patched one
        AsyncChatCompletions.create = patched_function

    def _override_answer(self):
        from ai21.clients.studio.resources.studio_answer import StudioAnswer

        global original_answer
        original_answer = StudioAnswer.create

        def patched_function(*args, **kwargs):
            # Call the original function with its original arguments
            init_timestamp = get_ISO_time()

            session = kwargs.get("session", None)
            if "session" in kwargs.keys():
                del kwargs["session"]
            result = original_answer(*args, **kwargs)
            return self.handle_response(result, kwargs, init_timestamp, session=session)

        StudioAnswer.create = patched_function

    def _override_answer_async(self):
        from ai21.clients.studio.resources.studio_answer import AsyncStudioAnswer

        global original_answer_async
        original_answer_async = AsyncStudioAnswer.create

        async def patched_function(*args, **kwargs):
            # Call the original function with its original arguments
            init_timestamp = get_ISO_time()

            session = kwargs.get("session", None)
            if "session" in kwargs.keys():
                del kwargs["session"]
            result = await original_answer_async(*args, **kwargs)
            return self.handle_response(result, kwargs, init_timestamp, session=session)

        AsyncStudioAnswer.create = patched_function

    def undo_override(self):
        if (
            self.original_create is not None
            and self.original_create_async is not None
            and self.original_answer is not None
            and self.original_answer_async is not None
        ):
            from ai21.clients.studio.resources.chat import (
                ChatCompletions,
                AsyncChatCompletions,
            )
            from ai21.clients.studio.resources.studio_answer import (
                StudioAnswer,
                AsyncStudioAnswer,
            )

            ChatCompletions.create = self.original_create
            AsyncChatCompletions.create = self.original_create_async
            StudioAnswer.create = self.original_answer
            AsyncStudioAnswer.create = self.original_answer_async
```
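The override/undo_override pair above is plain attribute monkey-patching: save the original callable, swap in a wrapper, and restore the saved reference to undo. A minimal, self-contained sketch of the same idea on a stand-in class (all names here are illustrative, not AI21's API):

```python
# Stand-in for the provider class being instrumented
class ChatCompletions:
    @staticmethod
    def create(prompt: str) -> str:
        return f"completion for: {prompt}"


original_create = ChatCompletions.create  # save a reference for undo


def patched_create(*args, **kwargs):
    kwargs.pop("session", None)            # strip the instrumentation-only kwarg
    result = original_create(*args, **kwargs)
    return result.upper()                  # stand-in for handle_response()


ChatCompletions.create = patched_create    # override
assert ChatCompletions.create("hi") == "COMPLETION FOR: HI"

ChatCompletions.create = original_create   # undo_override
assert ChatCompletions.create("hi") == "completion for: hi"
```

The same caveat from the real code applies: the saved originals must all be non-None before restoring, or a partial undo can leave the class in a mixed state.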

Entelligence AI Bot Icon Entelligence AI Bot v4

ℹ️ Logic Error

Incorrect Use of getattr for Tool Calls

The getattr function is incorrectly used to access choice.delta.tool_calls. The current implementation will always return None because the first argument should be an object, not a string. This could lead to missing tool call events in the accumulated delta.

```diff
-                if getattr("choice.delta", "tool_calls", None):
+                if getattr(choice.delta, "tool_calls", None):
```
Commitable Code Suggestion:
Suggested change
```python
import inspect
import pprint
from typing import Optional

from agentops.llms.instrumented_provider import InstrumentedProvider
from agentops.time_travel import fetch_completion_override_from_time_travel_cache

from ..event import ErrorEvent, LLMEvent, ActionEvent, ToolEvent
from ..session import Session
from ..log_config import logger
from ..helpers import check_call_stack_for_agent_id, get_ISO_time
from ..singleton import singleton


@singleton
class AI21Provider(InstrumentedProvider):

    original_create = None
    original_create_async = None
    original_answer = None
    original_answer_async = None

    def __init__(self, client):
        super().__init__(client)
        self._provider_name = "AI21"

    def handle_response(
        self, response, kwargs, init_timestamp, session: Optional[Session] = None
    ):
        """Handle responses for AI21"""
        from ai21.stream.stream import Stream
        from ai21.stream.async_stream import AsyncStream
        from ai21.models.chat.chat_completion_chunk import ChatCompletionChunk
        from ai21.models.chat.chat_completion_response import ChatCompletionResponse
        from ai21.models.responses.answer_response import AnswerResponse

        llm_event = LLMEvent(init_timestamp=init_timestamp, params=kwargs)
        action_event = ActionEvent(init_timestamp=init_timestamp, params=kwargs)
        if session is not None:
            llm_event.session_id = session.session_id

        def handle_stream_chunk(chunk: ChatCompletionChunk):
            # We take the first ChatCompletionChunk and accumulate the deltas
            # from all subsequent chunks to build one full chat completion
            if llm_event.returns is None:
                llm_event.returns = chunk
                # Manually setting content to empty string to avoid error
                llm_event.returns.choices[0].delta.content = ""

            try:
                accumulated_delta = llm_event.returns.choices[0].delta
                llm_event.agent_id = check_call_stack_for_agent_id()
                llm_event.model = kwargs["model"]
                llm_event.prompt = [
                    message.model_dump() for message in kwargs["messages"]
                ]

                # NOTE: We assume for completion only choices[0] is relevant
                choice = chunk.choices[0]

                if choice.delta.content:
                    accumulated_delta.content += choice.delta.content

                if choice.delta.role:
                    accumulated_delta.role = choice.delta.role

                if getattr(choice.delta, "tool_calls", None):
                    accumulated_delta.tool_calls += ToolEvent(logs=choice.delta.tools)

                if choice.finish_reason:
                    # Streaming is done. Record LLMEvent
                    llm_event.returns.choices[0].finish_reason = choice.finish_reason
                    llm_event.completion = {
                        "role": accumulated_delta.role,
                        "content": accumulated_delta.content,
                    }
                    llm_event.prompt_tokens = chunk.usage.prompt_tokens
                    llm_event.completion_tokens = chunk.usage.completion_tokens
                    llm_event.end_timestamp = get_ISO_time()
                    self._safe_record(session, llm_event)
            except Exception as e:
                self._safe_record(
                    session, ErrorEvent(trigger_event=llm_event, exception=e)
                )
                kwargs_str = pprint.pformat(kwargs)
                chunk = pprint.pformat(chunk)
                logger.warning(
                    f"Unable to parse a chunk for LLM call. Skipping upload to AgentOps\n"
                    f"chunk:\n {chunk}\n"
                    f"kwargs:\n {kwargs_str}\n"
                )

        # if the response is a generator, decorate the generator
        # For synchronous Stream
        if isinstance(response, Stream):
            def generator():
                for chunk in response:
                    handle_stream_chunk(chunk)
                    yield chunk
            return generator()

        # For asynchronous AsyncStream
        if isinstance(response, AsyncStream):
            async def async_generator():
                async for chunk in response:
                    handle_stream_chunk(chunk)
                    yield chunk
            return async_generator()

        # Handle object responses
        try:
            if isinstance(response, ChatCompletionResponse):
                llm_event.returns = response
                llm_event.agent_id = check_call_stack_for_agent_id()
                llm_event.model = kwargs["model"]
                llm_event.prompt = [
                    message.model_dump() for message in kwargs["messages"]
                ]
                llm_event.prompt_tokens = response.usage.prompt_tokens
                llm_event.completion = response.choices[0].message.model_dump()
                llm_event.completion_tokens = response.usage.completion_tokens
                llm_event.end_timestamp = get_ISO_time()
                self._safe_record(session, llm_event)
            elif isinstance(response, AnswerResponse):
                action_event.returns = response
                action_event.agent_id = check_call_stack_for_agent_id()
                action_event.action_type = "Contextual Answers"
                action_event.logs = [
                    {"context": kwargs["context"], "question": kwargs["question"]},
                    response.model_dump() if response.model_dump() else None,
                ]
                action_event.end_timestamp = get_ISO_time()
                self._safe_record(session, action_event)
        except Exception as e:
            self._safe_record(session, ErrorEvent(trigger_event=llm_event, exception=e))
            kwargs_str = pprint.pformat(kwargs)
            response = pprint.pformat(response)
            logger.warning(
                f"Unable to parse response for LLM call. Skipping upload to AgentOps\n"
                f"response:\n {response}\n"
                f"kwargs:\n {kwargs_str}\n"
            )
        return response
```
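The bug is easy to reproduce in isolation: passing a string as the first argument to `getattr` looks up the attribute on the `str` object itself, so the default is always returned. A minimal demonstration, where `SimpleNamespace` stands in for the chunk delta:

```python
from types import SimpleNamespace

# Stand-in for chunk.choices[0].delta
delta = SimpleNamespace(content="hi", tool_calls=[{"name": "search"}])

# Bug: the first argument is the literal string "delta", and str has no
# "tool_calls" attribute, so the default None is always returned.
assert getattr("delta", "tool_calls", None) is None

# Fix: pass the object itself.
assert getattr(delta, "tool_calls", None) == [{"name": "search"}]
```

Because the buggy form always evaluates falsy, the `tool_calls` branch would silently never run, which matches the reviewer's "missing tool call events" observation.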

🔒 Security Suggestion

Sanitize Sensitive Data Before Logging

The current implementation logs potentially sensitive information, such as kwargs and chunk, in the event of an error. Consider sanitizing or redacting sensitive data before logging to prevent exposure of sensitive information.
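One possible shape for this suggestion is a small redaction helper applied to `kwargs` before they reach the logger. The key list and masking format below are illustrative assumptions, not AgentOps code:

```python
import logging

# Hypothetical deny-list; extend with whatever fields your payloads carry
SENSITIVE_KEYS = {"api_key", "authorization", "jwt", "x-agentops-api-key"}


def redact(payload: dict) -> dict:
    """Return a copy of payload with sensitive values masked before logging."""
    return {
        k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else v
        for k, v in payload.items()
    }


kwargs = {"model": "jamba-1.5", "api_key": "sk-secret"}
logging.getLogger(__name__).warning(
    "Unable to parse a chunk for LLM call. kwargs: %s", redact(kwargs)
)
assert redact(kwargs)["api_key"] == "***REDACTED***"
assert redact(kwargs)["model"] == "jamba-1.5"
```

This only handles top-level keys; nested payloads would need a recursive variant.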

Comment on lines 91 to 118
```diff
         header=None,
     ) -> Response:
         result = Response()
         try:
-            # Create request session with retries configured
-            request_session = requests.Session()
-            request_session.mount(url, HTTPAdapter(max_retries=retry_config))
-
-            if api_key is not None:
-                JSON_HEADER["X-Agentops-Api-Key"] = api_key
-
-            if parent_key is not None:
-                JSON_HEADER["X-Agentops-Parent-Key"] = parent_key
-
-            if jwt is not None:
-                JSON_HEADER["Authorization"] = f"Bearer {jwt}"
-
-            res = request_session.post(
-                url, data=payload, headers=JSON_HEADER, timeout=20
-            )
+            session = cls.get_session()
+
+            # Update headers for this request
+            headers = dict(session.headers)
+            if api_key:
+                headers["X-Agentops-Api-Key"] = api_key
+            if parent_key:
+                headers["X-Agentops-Parent-Key"] = parent_key
+            if jwt:
+                headers["Authorization"] = f"Bearer {jwt}"
 
             try:
                 res = session.post(url, data=payload, headers=headers, timeout=20)
                 result.parse(res)
 
             except requests.exceptions.Timeout:
                 result.code = 408
                 result.status = HttpStatus.TIMEOUT
                 raise ApiServerException(
                     "Could not reach API server - connection timed out"
                 )
 
             except requests.exceptions.HTTPError as e:
                 try:
                     result.parse(e.response)
```

Entelligence AI Bot Icon Entelligence AI Bot v4

🔒 Security Suggestion

Redact Sensitive Headers in Logs

Ensure that sensitive headers such as API keys and JWT tokens are not logged or exposed in error messages. Consider using a logging filter to redact these values from logs to prevent accidental exposure.
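One possible shape for such a filter, using the standard `logging` module. The regex and header names below are illustrative assumptions, not AgentOps code:

```python
import logging
import re


class RedactAuthFilter(logging.Filter):
    """Mask bearer tokens and AgentOps API keys in log messages."""

    # Matches "Bearer <token>" and "X-Agentops-...Key: <value>" style leaks
    PATTERN = re.compile(
        r"(Bearer\s+)[A-Za-z0-9._-]+"
        r"|(X-Agentops-[A-Za-z-]*Key['\"]?[:=]\s*)\S+"
    )

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = self.PATTERN.sub(
            lambda m: (m.group(1) or m.group(2)) + "***", str(record.msg)
        )
        return True  # keep the record, just with masked values


rec = logging.LogRecord(
    "agentops", logging.WARNING, "", 0,
    "request failed, headers: Authorization: Bearer eyJhbGci.secret", None, None,
)
RedactAuthFilter().filter(rec)
assert "eyJhbGci.secret" not in rec.getMessage()
assert "Bearer ***" in rec.getMessage()
```

Attaching an instance via `logger.addFilter(RedactAuthFilter())` applies the masking to everything that logger emits.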

Comment on lines 47 to 57


```diff
 def get_installed_packages():
 
     try:
         return {
             # TODO: test
             # TODO: add to opt out
             "Installed Packages": {
-                dist.metadata["Name"]: dist.version
+                dist.metadata.get("Name"): dist.metadata.get("Version")
                 for dist in importlib.metadata.distributions()
             }
         }
```

Entelligence AI Bot Icon Entelligence AI Bot v4

ℹ️ Logic Error

Handle None Values in Metadata

The change from dist.metadata["Name"] to dist.metadata.get("Name") and similarly for Version is a good practice to avoid KeyError if the metadata is missing. However, this change introduces a potential issue where None values could be included in the dictionary if the metadata is not present. This could lead to unexpected behavior when processing the installed packages list.

```diff
 def get_installed_packages():
     try:
         return {
             "Installed Packages": {
-                dist.metadata.get("Name"): dist.metadata.get("Version")
-                for dist in importlib.metadata.distributions()
+                name: version
+                for dist in importlib.metadata.distributions()
+                if (name := dist.metadata.get("Name")) is not None
+                and (version := dist.metadata.get("Version")) is not None
             }
         }
```
Commitable Code Suggestion:
Suggested change
```python
def get_installed_packages():
    try:
        return {
            "Installed Packages": {
                name: version
                for dist in importlib.metadata.distributions()
                if (name := dist.metadata.get("Name")) is not None
                and (version := dist.metadata.get("Version")) is not None
            }
        }
```
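The suggested comprehension can be exercised directly. This runnable restatement (the fix in isolation, not the PR's actual file) confirms that distributions with missing metadata are dropped rather than emitted as `None` keys or values:

```python
import importlib.metadata


def get_installed_packages() -> dict:
    # Walrus-guarded comprehension: skip any distribution whose metadata
    # lacks a Name or Version instead of inserting None into the mapping.
    return {
        "Installed Packages": {
            name: version
            for dist in importlib.metadata.distributions()
            if (name := dist.metadata.get("Name")) is not None
            and (version := dist.metadata.get("Version")) is not None
        }
    }


pkgs = get_installed_packages()["Installed Packages"]
assert None not in pkgs                                  # no None keys survive
assert all(isinstance(v, str) for v in pkgs.values())    # every version is a string
```

Note the walrus operator (`:=`) requires Python 3.8+, which matters if the package still supports older interpreters.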

Comment on lines 370 to 518
Streaming

```python
import os

from mistralai import Mistral
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)

message = client.chat.stream(
    messages=[
        {
            "role": "user",
            "content": "Tell me something cool about streaming agents",
        }
    ],
    model="open-mistral-nemo",
)

response = ""
for event in message:
    if event.data.choices[0].finish_reason == "stop":
        print("\n")
        print(response)
        print("\n")
    else:
        response += event.text

agentops.end_session('Success')
```

Async

```python
import asyncio
import os

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.complete_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async agents",
            }
        ],
        model="open-mistral-nemo",
    )
    print(message.choices[0].message.content)


asyncio.run(main())
```

Async Streaming

```python
import asyncio
import os

from mistralai import Mistral

client = Mistral(
    # This is the default and can be omitted
    api_key=os.environ.get("MISTRAL_API_KEY"),
)


async def main() -> None:
    message = await client.chat.stream_async(
        messages=[
            {
                "role": "user",
                "content": "Tell me something interesting about async streaming agents",
            }
        ],
        model="open-mistral-nemo",
    )

    response = ""
    async for event in message:
        if event.data.choices[0].finish_reason == "stop":
            print("\n")
            print(response)
            print("\n")
        else:
            response += event.text


asyncio.run(main())
```
</details>

Entelligence AI Bot Icon Entelligence AI Bot v4

ℹ️ Performance Improvement

Optimize HTTP Requests with Class-Level Session

The introduction of a class-level session object for HTTP requests in agentops/http_client.py is a significant improvement. This change reduces the overhead of creating a new session for each request, enhancing performance and reliability. Ensure that the session is properly closed when no longer needed to prevent resource leaks.
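One way to address the closing concern is to register the shared session for cleanup at interpreter exit. A sketch under the assumption that `atexit`-based cleanup is acceptable (retry settings here are illustrative, not the project's actual `retry_config`):

```python
import atexit

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


class HttpClient:
    """Sketch of the class-level session pattern, with explicit cleanup."""

    _session = None

    @classmethod
    def get_session(cls) -> requests.Session:
        if cls._session is None:
            cls._session = requests.Session()
            adapter = HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.1))
            cls._session.mount("https://", adapter)
            # Close the pooled connections at interpreter exit to avoid leaks
            atexit.register(cls._session.close)
        return cls._session


# The same Session object (and its connection pool) is reused across calls
assert HttpClient.get_session() is HttpClient.get_session()
```

Because the pool is shared, `requests.Session` is thread-safe for simple use, but mutating `session.headers` per request (rather than copying them, as the PR does) would be a race.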

🔒 Security Suggestion

Secure Handling of Sensitive Information

In the agentops/__init__.py and agentops/client.py, ensure that sensitive information such as API keys is not logged. Consider using environment variables or secure vaults to manage sensitive data securely.

ℹ️ Logic Error

Clarify Default Session End State

In the agentops/session.py, setting the default session end state to 'Indeterminate' could lead to confusion if not properly documented or handled. Ensure that the logic for determining the session end state is robust and clearly documented to avoid misinterpretation.

Comment on lines 122 to 133
```python
                result.status = Response.get_status(e.response.status_code)
                result.body = {"error": str(e)}
                raise ApiServerException(f"HTTPError: {e}")

            except requests.exceptions.RequestException as e:
                result.body = {"error": str(e)}
                raise ApiServerException(f"RequestException: {e}")

        # Handle error status codes
        if result.code == 401:
            raise ApiServerException(
                f"API server: invalid API key: {api_key}. Find your API key at https://app.agentops.ai/settings/projects"
```

Entelligence AI Bot Icon Entelligence AI Bot v4

ℹ️ Error Handling

Comprehensive Error Handling for HTTP Requests

The refactored error handling in the post method now includes a catch-all for requests.exceptions.RequestException, which is a good practice to handle unexpected request errors. Ensure that the error messages logged are informative but do not expose sensitive information.
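The ordering of the `except` clauses is what makes the catch-all safe: `requests.exceptions.RequestException` is the base class of `Timeout`, `HTTPError`, and `ConnectionError`, so it must come last or it would shadow the specific handlers. A network-free sketch of that ordering (the `raise` stands in for `session.post(...)`; function name and return shape are illustrative):

```python
import requests


def post_with_fallback(exc: Exception) -> dict:
    """Mirror the PR's handler ordering: most specific first, catch-all last."""
    try:
        raise exc                                   # stand-in for session.post(...)
    except requests.exceptions.Timeout:
        return {"code": 408, "error": "connection timed out"}
    except requests.exceptions.HTTPError as e:
        return {"code": 0, "error": f"HTTPError: {e}"}
    except requests.exceptions.RequestException as e:
        # Base class of Timeout/HTTPError/ConnectionError: anything else lands here
        return {"code": 0, "error": f"RequestException: {e}"}


assert post_with_fallback(requests.exceptions.Timeout())["code"] == 408
assert "RequestException" in post_with_fallback(
    requests.exceptions.ConnectionError("dns failure")
)["error"]
```

Note the error bodies embed `str(e)` only; per the reviewer's point, they should never embed headers or keys.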

Commitable Code Suggestion:
Suggested change
result.status = Response.get_status(e.response.status_code)
result.body = {"error": str(e)}
raise ApiServerException(f"HTTPError: {e}")
except requests.exceptions.RequestException as e:
result.body = {"error": str(e)}
raise ApiServerException(f"RequestException: {e}")
# Handle error status codes
if result.code == 401:
raise ApiServerException(
f"API server: invalid API key: {api_key}. Find your API key at https://app.agentops.ai/settings/projects"
+ except requests.exceptions.RequestException as e:
+ result.body = {"error": str(e)}
+ raise ApiServerException(f"RequestException: {e}")
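On the "do not expose sensitive information" point, a sketch of what sanitized re-raising could look like — the `sanitize` helper and the `ApiServerException` stub are illustrative assumptions; only the header name and the catch-all pattern come from the diff:

```python
from typing import Optional
import requests

class ApiServerException(Exception):
    """Stand-in stub for agentops' exception type."""

def sanitize(message: str, secret: Optional[str]) -> str:
    """Redact a secret before it reaches logs or exception text."""
    return message.replace(secret, "***") if secret else message

def safe_post(url: str, payload: dict, api_key: str) -> requests.Response:
    try:
        res = requests.post(
            url,
            json=payload,
            headers={"X-Agentops-Api-Key": api_key},
            timeout=20,
        )
        res.raise_for_status()
        return res
    except requests.exceptions.RequestException as e:
        # Catch-all for request errors; strip the key before re-raising,
        # so the exception text is safe to log verbatim.
        raise ApiServerException(sanitize(str(e), api_key)) from e
```

Routing every error string through one redaction helper keeps the messages informative while guaranteeing the key itself never appears in them.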

Comment on lines 57 to 88


class HttpClient:
_session = None # Class-level session object

@classmethod
def get_session(cls) -> requests.Session:
if cls._session is None:
cls._session = requests.Session()
# Configure session defaults
adapter = HTTPAdapter(
max_retries=retry_config,
pool_connections=1, # Assuming api.agentops.ai is the only host
pool_maxsize=100, # Maximum number of connections to save in the pool
)
cls._session.mount("http://", adapter)
cls._session.mount("https://", adapter)
cls._session.headers.update(
{
"Content-Type": "application/json; charset=UTF-8",
"Accept": "*/*",
"User-Agent": "AgentOps-Client",
}
)
return cls._session

@staticmethod
@classmethod
def post(
cls,
url: str,
payload: bytes,
api_key: Optional[str] = None,
Entelligence AI Bot v4

ℹ️ Performance Improvement

Class-level Session Object for Improved Performance

The introduction of a class-level session object in HttpClient improves performance by reusing the same session for multiple requests, reducing the overhead of creating a new session each time. This change is beneficial for applications making frequent HTTP requests.

Committable Code Suggestion:
Suggested change
class HttpClient:
_session = None # Class-level session object
@classmethod
def get_session(cls) -> requests.Session:
if cls._session is None:
cls._session = requests.Session()
# Configure session defaults
adapter = HTTPAdapter(
max_retries=retry_config,
pool_connections=1, # Assuming api.agentops.ai is the only host
pool_maxsize=100, # Maximum number of connections to save in the pool
)
cls._session.mount("http://", adapter)
cls._session.mount("https://", adapter)
cls._session.headers.update(
{
"Content-Type": "application/json; charset=UTF-8",
"Accept": "*/*",
"User-Agent": "AgentOps-Client",
}
)
return cls._session
@staticmethod
@classmethod
def post(
cls,
url: str,
payload: bytes,
api_key: Optional[str] = None,
+ @classmethod
+ def get_session(cls) -> requests.Session:
+ if cls._session is None:
+ cls._session = requests.Session()
+ # Configure session defaults
+ adapter = HTTPAdapter(
+ max_retries=retry_config,
+ pool_connections=1, # Assuming api.agentops.ai is the only host
+ pool_maxsize=100, # Maximum number of connections to save in the pool
+ )
+ cls._session.mount("http://", adapter)
+ cls._session.mount("https://", adapter)
+ cls._session.headers.update(
+ {
+ "Content-Type": "application/json; charset=UTF-8",
+ "Accept": "*/*",
+ "User-Agent": "AgentOps-Client",
+ }
+ )
+ return cls._session
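One caveat with the lazy initializer above: two threads can race past the `if cls._session is None` check and each build a session. A lock-protected variant is sketched below — the class name and double-checked locking are an assumption for illustration, not part of the diff:

```python
import threading
import requests
from requests.adapters import HTTPAdapter

class ThreadSafeHttpClient:
    """Sketch: the diff's get_session() with initialization made thread-safe."""

    _session = None
    _lock = threading.Lock()

    @classmethod
    def get_session(cls) -> requests.Session:
        if cls._session is None:          # fast path once initialized
            with cls._lock:
                if cls._session is None:  # re-check inside the lock
                    session = requests.Session()
                    adapter = HTTPAdapter(
                        pool_connections=1,  # single host: api.agentops.ai
                        pool_maxsize=100,    # connections kept in the pool
                    )
                    session.mount("http://", adapter)
                    session.mount("https://", adapter)
                    session.headers.update({
                        "Content-Type": "application/json; charset=UTF-8",
                        "Accept": "*/*",
                        "User-Agent": "AgentOps-Client",
                    })
                    cls._session = session
        return cls._session
```

The race in the original is benign (an extra Session is built and leaked, not corrupted), so the lock is belt-and-braces rather than a correctness fix.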

Comment on lines 142 to 176

return result

@staticmethod
@classmethod
def get(
cls,
url: str,
api_key: Optional[str] = None,
jwt: Optional[str] = None,
header=None,
) -> Response:
result = Response()
try:
# Create request session with retries configured
request_session = requests.Session()
request_session.mount(url, HTTPAdapter(max_retries=retry_config))

if api_key is not None:
JSON_HEADER["X-Agentops-Api-Key"] = api_key
session = cls.get_session()

if jwt is not None:
JSON_HEADER["Authorization"] = f"Bearer {jwt}"

res = request_session.get(url, headers=JSON_HEADER, timeout=20)
# Update headers for this request
headers = dict(session.headers)
if api_key:
headers["X-Agentops-Api-Key"] = api_key
if jwt:
headers["Authorization"] = f"Bearer {jwt}"

try:
res = session.get(url, headers=headers, timeout=20)
result.parse(res)

except requests.exceptions.Timeout:
result.code = 408
result.status = HttpStatus.TIMEOUT
raise ApiServerException(
"Could not reach API server - connection timed out"
)

except requests.exceptions.HTTPError as e:
try:
result.parse(e.response)
Entelligence AI Bot v4

ℹ️ Logic Error

Ensure Correct Header Management in Class-level Session

The get method now uses a class-level session object, which is a positive change. However, ensure that the session headers are correctly updated for each request to avoid unintended header persistence across different requests.

Committable Code Suggestion:
Suggested change
return result
@staticmethod
@classmethod
def get(
cls,
url: str,
api_key: Optional[str] = None,
jwt: Optional[str] = None,
header=None,
) -> Response:
result = Response()
try:
# Create request session with retries configured
request_session = requests.Session()
request_session.mount(url, HTTPAdapter(max_retries=retry_config))
if api_key is not None:
JSON_HEADER["X-Agentops-Api-Key"] = api_key
session = cls.get_session()
if jwt is not None:
JSON_HEADER["Authorization"] = f"Bearer {jwt}"
res = request_session.get(url, headers=JSON_HEADER, timeout=20)
# Update headers for this request
headers = dict(session.headers)
if api_key:
headers["X-Agentops-Api-Key"] = api_key
if jwt:
headers["Authorization"] = f"Bearer {jwt}"
try:
res = session.get(url, headers=headers, timeout=20)
result.parse(res)
except requests.exceptions.Timeout:
result.code = 408
result.status = HttpStatus.TIMEOUT
raise ApiServerException(
"Could not reach API server - connection timed out"
)
except requests.exceptions.HTTPError as e:
try:
result.parse(e.response)
# Update headers for this request
headers = dict(session.headers)
if api_key:
headers["X-Agentops-Api-Key"] = api_key
if jwt:
headers["Authorization"] = f"Bearer {jwt}"
try:
res = session.get(url, headers=headers, timeout=20)
result.parse(res)
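The header-persistence concern is exactly what the `headers = dict(session.headers)` copy in the diff addresses: per-request headers passed via `headers=` apply to that call only, and the shared `Session` object is never mutated. A small sketch of that pattern (the helper name is illustrative; the header names come from the diff):

```python
from typing import Optional
import requests

session = requests.Session()
session.headers.update({"User-Agent": "AgentOps-Client"})

def request_headers(
    session: requests.Session,
    api_key: Optional[str] = None,
    jwt: Optional[str] = None,
) -> dict:
    """Build the merged header dict for one request, mirroring the diff.

    Copying session.headers means credentials are added per call and can
    never persist on the shared session across requests.
    """
    headers = dict(session.headers)
    if api_key:
        headers["X-Agentops-Api-Key"] = api_key
    if jwt:
        headers["Authorization"] = f"Bearer {jwt}"
    return headers
```

Contrast this with the old code's mutation of the module-level `JSON_HEADER` dict, where a key set for one request silently leaked into every subsequent one.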


7 participants