(docs) Document StandardLoggingPayload Spec (#7201)
* add slp spec to docs

* docs slp

* test slp enforcement
ishaan-jaff authored Dec 12, 2024
1 parent 431c86c commit 621c713
Showing 4 changed files with 195 additions and 85 deletions.
85 changes: 1 addition & 84 deletions docs/my-website/docs/proxy/logging.md
@@ -113,90 +113,7 @@ Removes any field with `user_api_key_*` from metadata.

Found under `kwargs["standard_logging_object"]`. This is a standard payload, logged for every response.

```python
class StandardLoggingPayload(TypedDict):
    id: str
    trace_id: str  # Trace multiple LLM calls belonging to same overall request (e.g. fallbacks/retries)
    call_type: str
    response_cost: float
    response_cost_failure_debug_info: Optional[
        StandardLoggingModelCostFailureDebugInformation
    ]
    status: StandardLoggingPayloadStatus
    total_tokens: int
    prompt_tokens: int
    completion_tokens: int
    startTime: float  # Note: making this camelCase was a mistake, everything should be snake case
    endTime: float
    completionStartTime: float  # time the first token of the LLM response is returned (for streaming responses)
    response_time: float  # time the LLM takes to respond (for streaming uses time to first token)
    model_map_information: StandardLoggingModelInformation
    model: str
    model_id: Optional[str]
    model_group: Optional[str]
    api_base: str
    metadata: StandardLoggingMetadata
    cache_hit: Optional[bool]
    cache_key: Optional[str]
    saved_cache_cost: float
    request_tags: list
    end_user: Optional[str]
    requester_ip_address: Optional[str]
    messages: Optional[Union[str, list, dict]]
    response: Optional[Union[str, list, dict]]
    error_str: Optional[str]
    model_parameters: dict
    hidden_params: StandardLoggingHiddenParams


class StandardLoggingHiddenParams(TypedDict):
    model_id: Optional[str]
    cache_key: Optional[str]
    api_base: Optional[str]
    response_cost: Optional[str]
    additional_headers: Optional[StandardLoggingAdditionalHeaders]


class StandardLoggingAdditionalHeaders(TypedDict, total=False):
    x_ratelimit_limit_requests: int
    x_ratelimit_limit_tokens: int
    x_ratelimit_remaining_requests: int
    x_ratelimit_remaining_tokens: int


class StandardLoggingMetadata(StandardLoggingUserAPIKeyMetadata):
    """
    Specific metadata k,v pairs logged to integration for easier cost tracking
    """

    spend_logs_metadata: Optional[
        dict
    ]  # special param to log k,v pairs to spendlogs for a call
    requester_ip_address: Optional[str]
    requester_metadata: Optional[dict]


class StandardLoggingModelInformation(TypedDict):
    model_map_key: str
    model_map_value: Optional[ModelInfo]


StandardLoggingPayloadStatus = Literal["success", "failure"]


class StandardLoggingModelCostFailureDebugInformation(TypedDict, total=False):
    """
    Debug information, if cost tracking fails.
    Avoid logging sensitive information like response or optional params
    """

    error_str: Required[str]
    traceback_str: Required[str]
    model: str
    cache_hit: Optional[bool]
    custom_llm_provider: Optional[str]
    base_model: Optional[str]
    call_type: str
    custom_pricing: Optional[bool]
```

[👉 **Standard Logging Payload Specification**](./logging_spec)

## Langfuse

114 changes: 114 additions & 0 deletions docs/my-website/docs/proxy/logging_spec.md
@@ -0,0 +1,114 @@

# StandardLoggingPayload Specification

Found under `kwargs["standard_logging_object"]`. This is a standard payload, logged for every successful and failed response.
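For example, a custom callback can read this object from `kwargs` and forward whichever fields it needs. A minimal sketch, assuming litellm's `CustomLogger` interface (method names may differ across versions):

```python
from litellm.integrations.custom_logger import CustomLogger


class MyCustomLogger(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        # The standard payload is attached to kwargs for every response
        payload = kwargs.get("standard_logging_object") or {}
        print(payload.get("id"), payload.get("model"), payload.get("response_cost"))

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        payload = kwargs.get("standard_logging_object") or {}
        print(payload.get("status"), payload.get("error_str"))
```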

## StandardLoggingPayload

| Field | Type | Description |
|-------|------|-------------|
| `id` | `str` | Unique identifier |
| `trace_id` | `str` | Trace ID linking multiple LLM calls that belong to the same overall request (e.g. fallbacks/retries) |
| `call_type` | `str` | Type of call |
| `response_cost` | `float` | Cost of the response in USD ($) |
| `response_cost_failure_debug_info` | `StandardLoggingModelCostFailureDebugInformation` | Debug information if cost tracking fails |
| `status` | `StandardLoggingPayloadStatus` | Status of the payload |
| `total_tokens` | `int` | Total number of tokens |
| `prompt_tokens` | `int` | Number of prompt tokens |
| `completion_tokens` | `int` | Number of completion tokens |
| `startTime` | `float` | Start time of the call |
| `endTime` | `float` | End time of the call |
| `completionStartTime` | `float` | Timestamp when the first token of the response is returned (for streaming responses) |
| `response_time` | `float` | Total response time. If streaming, this is the time to first token |
| `model_map_information` | `StandardLoggingModelInformation` | Model mapping information |
| `model` | `str` | Model name sent in request |
| `model_id` | `Optional[str]` | Model ID of the deployment used |
| `model_group` | `Optional[str]` | `model_group` used for the request |
| `api_base` | `str` | LLM API base URL |
| `metadata` | `StandardLoggingMetadata` | Metadata information |
| `cache_hit` | `Optional[bool]` | Whether cache was hit |
| `cache_key` | `Optional[str]` | Optional cache key |
| `saved_cache_cost` | `float` | Cost saved by cache |
| `request_tags` | `list` | List of request tags |
| `end_user` | `Optional[str]` | Optional end user identifier |
| `requester_ip_address` | `Optional[str]` | Optional requester IP address |
| `messages` | `Optional[Union[str, list, dict]]` | Messages sent in the request |
| `response` | `Optional[Union[str, list, dict]]` | LLM response |
| `error_str` | `Optional[str]` | Optional error string |
| `error_information` | `Optional[StandardLoggingPayloadErrorInformation]` | Optional error information |
| `model_parameters` | `dict` | Model parameters |
| `hidden_params` | `StandardLoggingHiddenParams` | Hidden parameters |

## StandardLoggingUserAPIKeyMetadata

| Field | Type | Description |
|-------|------|-------------|
| `user_api_key_hash` | `Optional[str]` | Hash of the litellm virtual key |
| `user_api_key_alias` | `Optional[str]` | Alias of the API key |
| `user_api_key_org_id` | `Optional[str]` | Organization ID associated with the key |
| `user_api_key_team_id` | `Optional[str]` | Team ID associated with the key |
| `user_api_key_user_id` | `Optional[str]` | User ID associated with the key |
| `user_api_key_team_alias` | `Optional[str]` | Team alias associated with the key |

## StandardLoggingMetadata

Inherits from `StandardLoggingUserAPIKeyMetadata` and adds:

| Field | Type | Description |
|-------|------|-------------|
| `spend_logs_metadata` | `Optional[dict]` | Key-value pairs for spend logging |
| `requester_ip_address` | `Optional[str]` | Requester's IP address |
| `requester_metadata` | `Optional[dict]` | Additional requester metadata |
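
A request can attach `spend_logs_metadata` by passing it under `metadata` in the request body. A hedged sketch using the OpenAI SDK pointed at a proxy (the key, base URL, model, and metadata values below are placeholders):

```python
import openai

# Placeholder key / proxy URL — substitute your own
client = openai.OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
    # extra_body fields are forwarded to the proxy alongside the OpenAI params
    extra_body={
        "metadata": {
            "spend_logs_metadata": {"job_id": "job-123", "team": "analytics"}
        }
    },
)
```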

## StandardLoggingAdditionalHeaders

| Field | Type | Description |
|-------|------|-------------|
| `x_ratelimit_limit_requests` | `int` | Rate limit for requests |
| `x_ratelimit_limit_tokens` | `int` | Rate limit for tokens |
| `x_ratelimit_remaining_requests` | `int` | Remaining requests in rate limit |
| `x_ratelimit_remaining_tokens` | `int` | Remaining tokens in rate limit |

## StandardLoggingHiddenParams

| Field | Type | Description |
|-------|------|-------------|
| `model_id` | `Optional[str]` | Optional model ID |
| `cache_key` | `Optional[str]` | Optional cache key |
| `api_base` | `Optional[str]` | Optional API base URL |
| `response_cost` | `Optional[str]` | Optional response cost |
| `additional_headers` | `Optional[StandardLoggingAdditionalHeaders]` | Additional headers |

## StandardLoggingModelInformation

| Field | Type | Description |
|-------|------|-------------|
| `model_map_key` | `str` | Model map key |
| `model_map_value` | `Optional[ModelInfo]` | Optional model information |

## StandardLoggingModelCostFailureDebugInformation

| Field | Type | Description |
|-------|------|-------------|
| `error_str` | `str` | Error string |
| `traceback_str` | `str` | Traceback string |
| `model` | `str` | Model name |
| `cache_hit` | `Optional[bool]` | Whether cache was hit |
| `custom_llm_provider` | `Optional[str]` | Optional custom LLM provider |
| `base_model` | `Optional[str]` | Optional base model |
| `call_type` | `str` | Call type |
| `custom_pricing` | `Optional[bool]` | Whether custom pricing was used |

## StandardLoggingPayloadErrorInformation

| Field | Type | Description |
|-------|------|-------------|
| `error_code` | `Optional[str]` | Optional error code (e.g. "429") |
| `error_class` | `Optional[str]` | Optional error class (e.g. "RateLimitError") |
| `llm_provider` | `Optional[str]` | LLM provider that returned the error (e.g. "openai") |

## StandardLoggingPayloadStatus

A literal type with two possible values:
- `"success"`
- `"failure"`
2 changes: 1 addition & 1 deletion docs/my-website/sidebars.js
@@ -102,7 +102,7 @@ const sidebars = {
{
type: "category",
label: "Logging, Alerting, Metrics",
items: ["proxy/logging", "proxy/team_logging","proxy/alerting", "proxy/prometheus",],
items: ["proxy/logging", "proxy/logging_spec", "proxy/team_logging","proxy/alerting", "proxy/prometheus"],
},
{
type: "category",
79 changes: 79 additions & 0 deletions tests/documentation_tests/test_standard_logging_payload.py
@@ -0,0 +1,79 @@
import os
import re
import sys

from typing import get_type_hints

sys.path.insert(
    0, os.path.abspath("../..")
)  # Adds the parent directory to the system path

from litellm.types.utils import StandardLoggingPayload


def get_all_fields(type_dict, prefix=""):
    """Recursively get all fields from TypedDict and its nested types"""
    fields = set()

    # Get type hints for the TypedDict
    hints = get_type_hints(type_dict)

    for field_name, field_type in hints.items():
        full_field_name = f"{prefix}{field_name}" if prefix else field_name
        fields.add(full_field_name)

        # Check if the field type is another TypedDict we should process
        if hasattr(field_type, "__annotations__"):
            nested_fields = get_all_fields(field_type)
            fields.update(nested_fields)
    return fields


def test_standard_logging_payload_documentation():
    # Get all fields from StandardLoggingPayload and its nested types
    all_fields = get_all_fields(StandardLoggingPayload)

    print("All fields in StandardLoggingPayload: ")
    for _field in all_fields:
        print(_field)

    # Read the documentation
    docs_path = "../../docs/my-website/docs/proxy/logging_spec.md"

    try:
        with open(docs_path, "r", encoding="utf-8") as docs_file:
            content = docs_file.read()

        # Extract documented fields from the table
        doc_field_pattern = re.compile(r"\|\s*`([^`]+?)`\s*\|")
        documented_fields = set(doc_field_pattern.findall(content))

        # Clean up documented fields (remove whitespace)
        documented_fields = {field.strip() for field in documented_fields}

print("\n\nDocumented fields: ")
for _field in documented_fields:
print(_field)

# Compare and find undocumented fields
undocumented_fields = all_fields - documented_fields

print("\n\nUndocumented fields: ")
for _field in undocumented_fields:
print(_field)

if undocumented_fields:
raise Exception(
f"\nFields not documented in 'StandardLoggingPayload': {undocumented_fields}"
)

print(
f"All {len(all_fields)} fields are documented in 'StandardLoggingPayload'"
)

except FileNotFoundError:
raise Exception(
f"Documentation file not found at {docs_path}. Please ensure the documentation exists."
)
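
Since `docs_path` is relative (`../../docs/...`), this check presumably has to be run from `tests/documentation_tests/`, e.g. `pytest test_standard_logging_payload.py -s`; the exact invocation and working directory are an assumption.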
