feat(docs): update to include pezzo proxy
arielweinberger committed Nov 24, 2023
1 parent f55e6fb commit 5f16125
Showing 10 changed files with 166 additions and 159 deletions.
27 changes: 25 additions & 2 deletions docs/client/request-caching.mdx
@@ -11,8 +11,10 @@ Utilizing caching can sometimes reduce your development costs and execution time

## Usage

To enable caching, simply set `cache: enabled` in the Pezzo Options parameter. Here is an example:
To enable caching, simply set the `X-Pezzo-Cache-Enabled: true` header. Here is an example:

<Tabs>
<Tab title="Node.js">
```ts
const response = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
@@ -23,9 +25,30 @@ const response = await openai.chat.completions.create({
}
]
}, {
cache: true
headers: {
"X-Pezzo-Cache-Enabled": "true",
}
});

```
</Tab>
<Tab title="Python">
```py
chat_completion = openai.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": "Tell me 5 fun facts about yourself",
}
],
extra_headers={
"X-Pezzo-Cache-Enabled": "true"
}
)
```
</Tab>
</Tabs>
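
Because HTTP header values must be strings, passing a raw boolean can trip up type checking. One way to keep this tidy is a small helper — a sketch only, where `pezzoCacheHeaders` is a hypothetical name, not part of any SDK:

```typescript
// Hypothetical helper (not part of the Pezzo SDK): builds the per-request
// header bag that enables caching on the Pezzo proxy.
function pezzoCacheHeaders(enabled: boolean): Record<string, string> {
  // Header values must be strings, so the flag is serialized explicitly.
  return enabled ? { "X-Pezzo-Cache-Enabled": "true" } : {};
}
```

You would then pass `{ headers: pezzoCacheHeaders(true) }` as the request options, exactly as in the Node.js tab above.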

## Cached Requests in the Console

@@ -1,5 +1,5 @@
---
title: "Running With Docker Compose"
title: "Docker Compose"
description: "Learn how to run the full Pezzo stack locally with Docker Compose."
---

113 changes: 53 additions & 60 deletions docs/introduction/tutorial-observability/overview.mdx
@@ -3,30 +3,6 @@ title: "Tutorial: Observability"
description: "In just a few lines of code, monitor your AI operations seamlessly."
---

<Tip>
This tutorial is for users who want to use Pezzo for observability and monitoring only.

If you also want to use Pezzo to manage your prompts, version control and prompt deployment, check out the [Prompt Management tutorial](/introduction/tutorial-prompt-management/overview).
</Tip>

**Prefer a video tutorial?** We've prepared a 5-minute video for you! If you want to see the code example, [it's available on Codesandbox](https://codesandbox.io/p/sandbox/pezzo-example-observability-6d2qp6?file=%2Fsrc%2Fapp.ts%3A1%2C1).


<div style={{
position: "relative",
paddingBottom: "62.5%",
height: 0
}}>
<iframe
src="https://www.loom.com/embed/dd2276e912834b6c8d67f1355e449cae?sid=d6cea226-90bc-4586-ab6c-7150369b06c9"
frameborder="0"
webkitallowfullscreen
mozallowfullscreen
allowfullscreen
style={{ position: "absolute", top: 0, left: 0, width: "100%", height: "100%" }}
></iframe>
</div>

## What you'll learn

You're going to learn how to easily use Pezzo to supercharge your AI operations with monitoring and observability. It takes just a few lines of code!
@@ -37,31 +13,25 @@ You're going to learn how to easily use Pezzo to supercharge your AI operations
Cloud](https://app.pezzo.ai).
</Note>

## Install dependencies

Install the Pezzo Client and the OpenAI SDK:

```bash
npm i @pezzo/client openai
```

## Making calls to OpenAI
## Using Pezzo with OpenAI

Here is a code example:

```ts app.ts
import { Pezzo, PezzoOpenAI } from "@pezzo/client";
<Tabs>
<Tab title="Node.js">
```ts
import OpenAI from "openai";

// Initialize the Pezzo client
const pezzo = new Pezzo({
apiKey: "<Your Pezzo API key>",
projectId: "<Your Pezzo project ID>",
environment: "Production",
const openai = new OpenAI({
baseURL: "https://proxy.pezzo.ai/openai/v1",
defaultHeaders: {
"X-Pezzo-Api-Key": "<Your API Key>",
"X-Pezzo-Project-Id": "<Your Project ID>",
"X-Pezzo-Environment": "Production",
}
});

// Initialize the OpenAI client
const openai = new PezzoOpenAI(pezzo);

async function main() {
// Make calls to the OpenAI API as you normally would!
const completion = await openai.chat.completions.create(
@@ -71,37 +41,48 @@ async function main() {
messages: [
{
role: "user",
content: "Tell me {numFacts} fun facts about {topic}",
content: "Tell me 5 fun facts about yourself",
},
],
},
{
variables: {
// You can define variables that will be interpolated during execution.
numFacts: 3,
topic: "Artificial Intelligence",
},
properties: {
// You can optionally specify custom properties that will be associated with the request.
someProperty: "someValue",
},
}
);
}

main();

```
</Tab>
<Tab title="Python">
```py
import openai

openai.base_url = "https://proxy.pezzo.ai/openai/v1"
openai.default_headers = {
"X-Pezzo-Api-Key": "<Your API Key>",
"X-Pezzo-Project-Id": "<Your Project ID>",
"X-Pezzo-Environment": "Production"
}

chat_completion = openai.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": "Tell me 5 fun facts about yourself",
}
]
)
```
</Tab>
</Tabs>

**Let's explain what's going on here:**

- First, we initialize the Pezzo client and the OpenAI client. We pass the Pezzo client to the OpenAI client so it can use it to fetch the prompt.
- Then, we make a call to the OpenAI API as we normally would.
- (Optional) We specify additional parameters in the second argument, these are `variables` and `properties`.
We import and instantiate the OpenAI client with a few additional parameters. First, the `baseURL`, which tells the OpenAI client to proxy requests through Pezzo.

The request will go directly to OpenAI and the response will be reported to Pezzo.
Then, we set a few default headers that will be included in every request made to OpenAI. These are:
- `X-Pezzo-Api-Key` - Your Pezzo API key. You can find it in your Organization page in [Pezzo Cloud](https://app.pezzo.ai).
- `X-Pezzo-Project-Id` - The ID of the project you want to use. You can find it anywhere in [Pezzo Cloud](https://app.pezzo.ai).
- `X-Pezzo-Environment` - The name of the environment to use. By default, every Pezzo project is created with a `Production` environment.
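
To avoid hard-coding credentials, the three headers can be assembled from environment variables. The sketch below assumes variable names like `PEZZO_API_KEY` — they are illustrative, not an official convention:

```typescript
// Sketch: build the OpenAI client options for the Pezzo proxy from env vars.
// Header names and the base URL come from the docs above; the env var names
// are assumptions for illustration.
interface PezzoProxyOptions {
  baseURL: string;
  defaultHeaders: Record<string, string>;
}

function pezzoProxyOptions(env: Record<string, string | undefined>): PezzoProxyOptions {
  const apiKey = env.PEZZO_API_KEY;
  const projectId = env.PEZZO_PROJECT_ID;
  if (!apiKey || !projectId) {
    throw new Error("PEZZO_API_KEY and PEZZO_PROJECT_ID must be set");
  }
  return {
    baseURL: "https://proxy.pezzo.ai/openai/v1",
    defaultHeaders: {
      "X-Pezzo-Api-Key": apiKey,
      "X-Pezzo-Project-Id": projectId,
      // Falls back to the Production environment every project starts with.
      "X-Pezzo-Environment": env.PEZZO_ENVIRONMENT ?? "Production",
    },
  };
}
```

You could then call `new OpenAI(pezzoProxyOptions(process.env))` instead of inlining the values.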

## Monitoring Requests

@@ -112,3 +93,15 @@ If you want to learn more about Pezzo's observability feature, check out the [Ob
<Frame caption="Pezzo Project Requests View">
<img src="/platform/observability/requests-view.png" />
</Frame>

# Next Steps

<CardGroup cols={2}>
<Card
title="Request Caching"
icon="bolt"
href="/client/request-caching"
>
Save up to 90% of your AI costs with Pezzo's request caching.
</Card>
</CardGroup>
19 changes: 1 addition & 18 deletions docs/introduction/tutorial-prompt-management/overview.mdx
@@ -8,23 +8,6 @@ This tutorial is for users who want to manage their AI operations in Pezzo end-t
If you wish to only use Pezzo for monitoring and observability, check out the [Observability Tutorial](/introduction/tutorial-observability/overview).
</Tip>

**Prefer a video tutorial?** We've prepared a 5-minute video for you! If you want to see the code example, [it's available on Codesandbox](https://codesandbox.io/p/sandbox/pezzo-example-prompt-management-qv5f86?file=%2Fsrc%2Fapp.ts%3A1%2C1).

<div style={{
position: "relative",
paddingBottom: "62.5%",
height: 0
}}>
<iframe
src="https://www.loom.com/embed/4d7fe830a03943c190a22fbe4704926f?sid=d0312d90-9b34-46d2-bdbd-938911e2ac76"
frameborder="0"
webkitallowfullscreen
mozallowfullscreen
allowfullscreen
style={{ position: "absolute", top: 0, left: 0, width: "100%", height: "100%" }}
></iframe>
</div>

## What you'll learn
You're going to learn how to manage your AI prompts with Pezzo, so you can streamline delivery and collaborate with your team. This includes:

@@ -149,7 +132,7 @@ npm i @pezzo/client openai

Here is a code example:

```ts app.ts
```ts Node.js
import { Pezzo, PezzoOpenAI } from "@pezzo/client";

// Initialize the Pezzo client
17 changes: 5 additions & 12 deletions docs/introduction/what-is-pezzo.mdx
@@ -22,24 +22,17 @@ Pezzo is a powerful open-source toolkit designed to streamline the process of AI
# Next Steps
<CardGroup cols={2}>
<Card
title="Observability"
title="Tutorial: Observability & Monitoring"
icon="eye"
href="/platform/observability/overview"
href="/introduction/tutorial-observability/overview"
>
Learn about Pezzo's robust observability features.
Learn about Pezzo's robust observability & monitoring features.
</Card>
<Card
title="Prompt Management"
title="Tutorial: Prompt Management"
icon="wrench"
href="/platform/prompt-management/overview"
href="/introduction/tutorial-prompt-management/overview"
>
Learn how you can streamline your AI delivery with Pezzo.
</Card>
<Card
title="Recipe: OpenAI With Pezzo"
icon="code"
href="/client/integrations/openai"
>
Get started with Pezzo and OpenAI in 5 minutes.
</Card>
</CardGroup>
22 changes: 6 additions & 16 deletions docs/mint.json
@@ -36,35 +36,25 @@
"group": "Getting Started",
"pages": [
"introduction/what-is-pezzo",
"introduction/tutorial-prompt-management/overview",
"introduction/tutorial-observability/overview",
"introduction/docker-compose"
"introduction/tutorial-prompt-management/overview"
]
},
{
"group": "Observability",
"group": "Features",
"pages": [
"platform/proxy/overview",
"platform/observability/overview",
"platform/observability/requests",
"platform/observability/metrics"
]
},
{
"group": "Prompt Management",
"pages": [
"platform/prompt-management/overview",
"client/request-caching",
"platform/prompt-management/environments",
"platform/prompt-management/prompt-editor",
"platform/prompt-management/versioning-and-deployments"
]
},
{
"group": "Pezzo SDK",
"group": "Deployment",
"pages": [
"client/pezzo-client-node",
"client/pezzo-client-python",
"client/integrations/openai",
"client/request-caching"
"deployment/docker-compose"
]
},
{
19 changes: 0 additions & 19 deletions docs/platform/observability/metrics.mdx

This file was deleted.

43 changes: 40 additions & 3 deletions docs/platform/observability/overview.mdx
@@ -1,10 +1,47 @@
---
title: 'Observability Overview'
sidebarTitle: 'Overview'
title: 'Monitoring & Observability'
---
Pezzo enables you to easily observe your Generative AI operations. This includes:

- **Traces and Requests** - Pezzo automatically traces your operations and requests, giving you a detailed view of their execution.
- **Advanced Filtering** - Filter your traces and requests by any field, including custom fields (e.g. user ID, correlation ID, etc).
- **Metrics** - Pezzo automatically collects metrics from your LLM calls (execution time, cost, error rate, and more) and provides you with detailed dashboards.
- **Alerts** *(Coming Soon)* - Define alerts on your metrics and get notified when something goes wrong.

Observe your Generative AI operations with Pezzo.

# Requests View

Every project on Pezzo has a dedicated Requests view. This view shows you all the requests that have been made to your project.

<Frame caption="Pezzo Project Requests View">
<img src="/platform/observability/requests-view.png" />
</Frame>

## Filters

You can filter requests by various criteria. For example, you can filter by the status of the request, by date/time, or by the cost of execution.

<Frame caption="Filters">
<img src="/platform/observability/filters.png" />
</Frame>

## Inspector

You can inspect the details of any request by clicking on it. This will open the Inspector view.

<Frame caption="Request Inspector">
<img src="/platform/observability/request-inspector.png" />
</Frame>

# Prompt Metrics

<Tip>
If you feel that there are metrics that are missing, please let us know by [creating an issue on GitHub](https://github.com/pezzolabs/pezzo/issues).
</Tip>

If you manage your prompts with Pezzo, you can view insights and metrics for a specific prompt.

<Frame caption="Prompt Metrics">
<img src="/platform/observability/prompt-metrics.png" />
</Frame>
