Merge branch 'docs' into dev
ksopyla committed Jul 12, 2024
2 parents ef5c8d2 + 3f718a6 commit 9157e7c
Showing 9 changed files with 159 additions and 89 deletions.
8 changes: 6 additions & 2 deletions docs/_data/navigation.yml
Original file line number Diff line number Diff line change
@@ -20,14 +20,18 @@ docs:
url: /docs/quick-start-guide/
- title: "Project Setup"
url: /docs/how-to-setup-llm-proxy-project/
- title: "Demo"
url: /docs/demo/
- title: Advanced Settings
children:
- title: "Input/output capturing"
- title: "Transaction capturing"
url: /docs/storing-transactions/
- title: "REST API"
url: /docs/backend-rest-api
- title: "Supported Models"
url: /docs/supported-gen-ai-models
- title: "Gen AI API integrations"
url: /docs/gen-ai-api-integrations-with-list-of-examples/
- title: Deployment Cookbook
children:
- title: "Deployment recipes"
@@ -49,7 +49,7 @@ docs:
- title: "Login Page"
url: /docs/login-page
- title: "Organization Dashboard"
url: /docs/organization-dashboard
url: /docs/organization-dashboard/
- title: "Project Dashboard"
url: /docs/project-dashboard
- title: "Transactions View"
4 changes: 2 additions & 2 deletions docs/_docs/00-introduction.md
@@ -1,7 +1,7 @@
---
title: "Introduction"
permalink: /docs/introduction/
excerpt: "Intorduction, key features and philosophy of Prompt Sail"
excerpt: "Introduction, key features and philosophy of Prompt Sail"
last_modified_at: 2023-12-28T14:48:05+01:00
redirect_from:
- /theme-setup/
@@ -35,7 +35,7 @@ There are two options to run the Prompt Sail docker containers:
* [pull the images from Github Container Repository (ghcr.io)](/docs/quick-start-guide/#pull-and-run-the-docker-images-from-ghcr).


In the next page you will:
On the next page you will:

* learn how to run Prompt Sail on your local machine
* make your first API call to OpenAI
36 changes: 20 additions & 16 deletions docs/_docs/01-quick-start-guide.md
@@ -1,7 +1,7 @@
---
title: "Quick Start Guide"
permalink: /docs/quick-start-guide/
excerpt: "How build docker images and run Prompt Sail on your local machine and make your first API call."
excerpt: "How to build docker images and run Prompt Sail on your local machine and make your first API call."
last_modified_at: 2023-12-28T15:18:35+01:00
redirect_from:
- /theme-setup/
@@ -14,9 +14,9 @@ toc_sticky: true

## Run Prompt Sail on your local machine

Prompt Sail is build as a set of docker containers. One for backend (promptsail-backend) and one for frontend (promptsail-ui).
Prompt Sail is built as a set of docker containers. One for the backend (promptsail-backend) and one for the frontend (promptsail-ui).

- **promptsail-backend** is a proxy that sits between your LLM framework of choice (LangChain, OpenAI python lib etc) and LLM provider API. You change `api_base` to point to Prompt Sail `proxy_url` and then it will captures and logs all your prompts and responses.
- **promptsail-backend** is a proxy that sits between your LLM framework of choice (LangChain, OpenAI python lib etc) and LLM provider API. You change `api_base` to point to Prompt Sail `proxy_url` and then it will capture and log all your prompts and responses.
- **promptsail-ui** is a user interface that allows you to view, search and analyze all transactions (prompts and responses).


@@ -29,7 +29,7 @@ There are two options to run the Prompt Sail docker containers:
### Build the Docker images from the source code


Building from source will give you the latest version of the code with the newest features. However, please note that there might be uncaught bugs that could affect the stability of the application.
Building from the source code will give you the latest version of the code with the newest features. However, please note that there might be uncaught bugs that could affect the stability of the application.
{: .notice--warning}


@@ -73,7 +73,7 @@

```
docker-compose -f docker-compose.yml up
```

If you want to run the dev version of the images, you can pull the `dev-release` tag insted of `latest`. More on image tagging strategy and deployments you will find at [Deployment Cookbook - Local Deployment](/docs/deploy-promptsail-local#pull-and-run-the-docker-images-from-ghcr) section.
If you want to run the dev version of the images, you can pull the `dev-release` tag instead of `latest`. More on the image tagging strategy and deployments can be found in the [Deployment Cookbook - Local Deployment](/docs/deploy-promptsail-local#pull-and-run-the-docker-images-from-ghcr) section.


All the environment variables in [docker-compose.yml](https://github.com/PromptSail/prompt_sail/blob/main/docker-compose.yml) are set to default, non-production values; it is recommended to change them to your own.
@@ -111,7 +111,7 @@ The MongoDB database should be running at [http://localhost:27017/](http://localhost:27017/)

Mongo-Express acts as a web-based MongoDB admin interface. It should be accessible at [http://localhost:8081/](http://localhost:8081/).
- Default login credentials: `admin`:`pass`
- It is not necessary to use Mongo-Express to run Prompt Sail, but it can be helpful for debugging and monitoring the database.
- It is not necessary to use Mongo-Express to run Prompt Sail, but it can help debug and monitor the database.


**All the settings** can be changed in the appropriate `docker-compose` files:
@@ -123,10 +123,10 @@

## Create your first project and add at least one AI provider

In the UI, go to your [Organization's dasboard](/docs/organization-dashboard/). Using the [Add new project](/docs/how-to-setup-llm-proxy-project/) form, create your first project and add at least one AI provider.
In the Prompt Sail UI, go to your [Organization's dashboard](/docs/organization-dashboard/). Using the [Add new project](/docs/how-to-setup-llm-proxy-project/) form, create your first project and add at least one AI provider.


## Make your first API call
## Make your first Gen AI API call

### OpenAI Chat model example

@@ -157,12 +157,16 @@

```python
openai_key = os.getenv("OPENAI_API_KEY")
openai_org_id = os.getenv("OPENAI_ORG_ID")
```

Make an API call to OpenAI via Prompt Sail without tagging.
What is and where to get **api_base** [see here](https://promptsail.github.io/prompt_sail/docs/storing-transactions/)

To make an API call to OpenAI via Prompt Sail, you will need a `proxy_url`. This can be obtained in the Prompt Sail UI, under the AI Providers tab in your [Project's Dashboard](/docs/project-dashboard). Before continuing, make sure that OpenAI is in your project's AI providers list; if not, you will need to add it first.

**Once you have the auto-generated `proxy_url`, replace the `api_base` address with it in your code.**

You can learn more about `proxy_url` [here](https://promptsail.github.io/prompt_sail/docs/storing-transactions/).

```python

api_base = "http://localhost:8000/project1/openai/"
api_base = "http://localhost:8000/projectxyz/openai/"

ps_client = OpenAI(
base_url=api_base,
    # ... (remaining lines collapsed in this diff view) ...

pprint(response.choices[0].message)
```
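Because the proxy is transparent, the same request can also be sent without the OpenAI client library. The following stdlib-only sketch is an illustration, not from the docs: the `/chat/completions` path and model name follow OpenAI's chat-completions convention, and `projectxyz` is a placeholder project slug.

```python
import json
import os
import urllib.request

# Hypothetical proxy_url copied from the AI Providers tab (replace with yours):
api_base = "http://localhost:8000/projectxyz/openai/"


def chat_via_proxy(prompt: str) -> dict:
    """Send an OpenAI-format chat completion request through the Prompt Sail proxy."""
    payload = {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }
    request = urllib.request.Request(
        api_base.rstrip("/") + "/chat/completions",  # the proxy forwards this to the provider
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.getenv('OPENAI_API_KEY', '')}",
        },
    )
    with urllib.request.urlopen(request) as response:  # requires the proxy to be running
        return json.load(response)
```

The proxy only changes where the request is sent, so the payload and headers stay exactly as the provider expects them.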


Make an API call to OpenAI via Prompt Sail adding some tags for the transaction.
How structure of **api_base** for passing tags looks like, [see here](https://promptsail.github.io/prompt_sail/docs/storing-transactions/)
It is also possible to tag Gen AI API calls by passing tags in the `proxy_url`. To simplify this process, use the proxy URL generator found in the AI Providers tab of your [Project Dashboard](/docs/project-dashboard). This tool is available for every AI provider you've added.

More about the structure of `proxy_url` (aka `api_base`) for passing tags can be found [here](/docs/storing-transactions/).
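As a sketch, a tagged `proxy_url` of this shape can be assembled as follows (the project name and tag values are hypothetical; use the generator for your real URL):

```python
# Hypothetical base proxy_url and tags:
base = "http://localhost:8000/projectxyz/openai/"
tags = ["user_123", "marketing-dept", "few-shot"]

# Tags are passed as a comma-separated query parameter; target_path is left
# empty so the client library can fill in the request path:
tagged_api_base = f"{base}?tags={','.join(tags)}&target_path="

print(tagged_api_base)
```

Pass `tagged_api_base` as `base_url` to the OpenAI client, exactly as in the untagged example above.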


```python
# (tagged-call example collapsed in this diff view)
```

@@ -222,6 +228,4 @@
## More examples

You can find more examples as jupyter notebooks in the repository folder [prompt_sail/examples](https://github.com/PromptSail/prompt_sail/tree/docs/examples).

All tested integraion are documented in [LLM Integration](/docs/llm-integrations/) section.
You can find more examples as Jupyter notebooks in the repository folder [prompt_sail/examples](https://github.com/PromptSail/prompt_sail/tree/docs/examples). All tested integrations are also documented in the [Gen AI API integrations](/docs/gen-ai-api-integrations-with-list-of-examples/) section.
4 changes: 2 additions & 2 deletions docs/_docs/02-how-to-setup-llm-proxy-project.md
@@ -28,9 +28,9 @@
1. **AI Providers (Select Provider)**: This functionality allows users to choose from a list of available AI providers to integrate into their project. The dropdown list typically contains a variety of AI providers offering different services or functionalities.
2. **Deployment name**: The deployment name is a unique identifier for the deployment of your AI model or service within the project. It is used to create a proxy URL for a project.
3. **Api base URL**: The API base URL is the root URL for the API endpoints provided by your AI service or model. It serves as the starting point for accessing various functionalities and resources offered by the AI provider.
4. **Proxy URL**: The proxy URL serves as an intermediary between the project and the AI provider's API. It is generated automatically based on the deployment name and project slug. After filled in neccsesary information copy this link to used it
4. **Proxy URL**: The proxy URL serves as an intermediary between the project and the AI provider's API. It is generated automatically based on the deployment name and project slug. After filling in the necessary information, copy this link to use it.

After entering the necessary information, click Add AI Provider to add it to the project. **You need to add at least one Ai Provider**
After entering the necessary information, click Add AI Provider to add it to the project. **You need to add at least one AI Provider**

## Complete Project Creation

27 changes: 27 additions & 0 deletions docs/_docs/03-demo.md
@@ -0,0 +1,27 @@
---
title: "Demo"
permalink: /docs/demo/
excerpt: "What the Prompt Sail demo is and how to use it"
last_modified_at: 2024-07-10T12:46:35+01:00
redirect_from:
- /theme-setup/
toc: true
toc_sticky: true
---


## Prompt Sail Demo


The Demo provides insight into Prompt Sail's user interface. It allows you to explore the layout of the organization's and project's dashboards. You can also find out what details of stored transactions (AI API calls) are logged and what statistics are provided for transactions, projects and the organization.

Experience the functionalities of Prompt Sail firsthand by utilizing our [**FREE DEMO**](https://try-promptsail.azurewebsites.net/).
{: .notice--warning}

**No installation or account creation is required, making it a straightforward process.**

In the Demo, you also have the opportunity to test the logging of your LLM API calls. The easiest way to proceed with this is to follow the instructions in the section [Make your first LLM API call](https://promptsail.github.io/prompt_sail/docs/quick-start-guide/#make-your-first-api-call).


**Warning**: The Prompt Sail demo is public, so avoid including any sensitive information in your prompts when testing the Demo. Each new deployment of [**Demo page**](https://try-promptsail.azurewebsites.net/) resets the database, causing projects and transactions in the demo to be regularly deleted.
{: .notice--warning}
@@ -14,37 +14,40 @@ toc_sticky: true

Prompt Sail stores transactions by acting as a proxy for libraries and capturing the request and response data.

All the magic happens when you replace **api_base**(or similar parameter) which originaly points to your LLM provider endpoint by ours **proxy_url**. Thanks to this substitution we can bypass your request and grab response transparently.
In Prompt Sail, a transaction refers to a single Gen AI model call with its full context. It includes metadata and the request arguments, as well as the input and output of the AI API call.
{: .notice--warning}

All the magic happens when you replace `api_base` (or a similar parameter), which originally points to your LLM provider endpoint, with our `proxy_url`. Thanks to this substitution, we can pass your request through and capture the response transparently.

Before you start using Prompt Sail as a proxy, you need to configure the `project` and add `ai-providers` via UI, those information eventaully will be used to create your unique **proxy_url**.

In one project you can have multiple AI deployments, each with its own **proxy_url**.
Before you start using Prompt Sail as a proxy, you need to configure the *Project* and add an *AI Provider* via the UI; this information will eventually be used to create your unique `proxy_url`.

In one project you can have multiple *AI Providers* (aka *AI Deployments*), each with its `proxy_url`.


### The **proxy_url** structure is as follows:

### The `proxy_url` structure is as follows:

```
http://localhost:8000/project_slug/deployment_name/
```

where:
* **project_slug** is a slugified project name, configured in the UI while creating a project
* **deployment_name** is a slugified AI deployment name, configured in the project settings with the target AI provider api url eg. https://api.openai.com/v1/, you can configure multiple AI deployments for a single project
* `project_slug` is a slugified project name, configured in the UI while creating a project
* `deployment_name` is a slugified AI deployment name, configured in the project settings with the target AI provider *API Base URL* eg. for OpenAI: https://api.openai.com/v1/. (Note that you can configure multiple *AI Deployments* for a single project.)
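A minimal sketch of how these two parts compose into a `proxy_url` (the names below are made up for illustration):

```python
# Hypothetical slugified names, as configured in the UI:
project_slug = "hello-world"
deployment_name = "openai"

proxy_url = f"http://localhost:8000/{project_slug}/{deployment_name}/"
print(proxy_url)  # http://localhost:8000/hello-world/openai/
```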

Through the **proxy_url**, it is also possible to tag transactions.
Through the `proxy_url`, it is also possible to tag transactions.

### The **proxy_url** structure for passing the tags is as follows:
### The `proxy_url` structure for passing the tags is as follows:

```
http://localhost:8000/project_slug/deployment_name/?tags=tag1,tag2,tag3&target_path=
```

where:
* **tags** is a comma-separated list of tags. This is optional and can be used to tag a transaction eg. with a specific user_id,
* `tags` is a comma-separated list of tags. This is optional and can be used to tag a transaction eg. with a specific user_id,
department_name, prompting_technique etc. Tags can help you filter and analyze transactions in the UI.
* **target_path** is required in proxy url when tags are added to it and is used for capturing the target path of particular requests. If you send requests by Python libraries, target_path should be empty (like this: target_path=). In such cases, it will be filled by external Python packages (eg. Langchain, OpenAI).
* `target_path` is required in the proxy URL when tags are added to it and is used for capturing the target path of particular requests. If you send requests via Python libraries, `target_path` should be empty (like this: `target_path=`). In such cases, it will be filled by external Python packages (eg. Langchain, OpenAI).
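Following the structure above, a tagged `proxy_url` can be assembled like this (the project, deployment, and tag values are hypothetical):

```python
# Hypothetical values, as configured in the UI:
project_slug = "hello-world"
deployment_name = "openai"
tags = ["user_123", "marketing", "chain-of-thought"]

# Comma-separated tags plus an empty target_path for Python client libraries:
proxy_url = (
    f"http://localhost:8000/{project_slug}/{deployment_name}/"
    f"?tags={','.join(tags)}&target_path="
)
print(proxy_url)
# http://localhost:8000/hello-world/openai/?tags=user_123,marketing,chain-of-thought&target_path=
```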


The proxy makes a call to the configured AI API on your behalf and logs the request and response data in the database.