# Ollama Python Library

The Ollama Python library provides the easiest way to integrate your Python 3 project with [Ollama](https://github.com/jmorganca/ollama).

## Getting Started

Requires Python 3.8 or higher.

```sh
pip install ollama
```

A global default client is provided for convenience; it can be used in the same way as the synchronous client described below.

```python
import ollama
response = ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
```
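The non-streaming call returns the full response. As a minimal sketch (assuming the response exposes the same `message`/`content` fields accessed in the streaming examples below), the generated text can be printed directly:

```python
import ollama

response = ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
# Assumes the reply is available under 'message' -> 'content',
# mirroring the fields read in the streaming examples.
print(response['message']['content'])
```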

Streaming is also supported with the global client by setting `stream=True`:

```python
import ollama

message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in ollama.chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)
```


### Using the Synchronous Client

```python
from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
response = Client().chat(model='llama2', messages=[message])
```

Response streaming can be enabled by setting `stream=True`; the call then returns a Python generator where each part is an object in the stream.

```python
from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in Client().chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)
```

### Using the Asynchronous Client

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])

asyncio.run(chat())
```
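In this sketch the awaited response is not printed; assuming it has the same shape as the synchronous response, the generated text is available as `response['message']['content']`.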

As with the synchronous client, setting `stream=True` makes the call return a Python asynchronous generator.

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
```
