# Ollama Python Library

The Ollama Python library provides the easiest way to integrate your Python 3 project with [Ollama](https://github.com/jmorganca/ollama).

## Getting Started

Requires Python 3.8 or higher.

```sh
pip install ollama
```

```python
from ollama import chat

response = chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
print(response['message']['content'])
```

### Using the Synchronous Client

```python
from ollama import Client

message = {'role': 'user', 'content': 'Why is the sky blue?'}
Client().chat(model='llama2', messages=[message])
```
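
The client can also be pointed at a specific Ollama server rather than the default local instance. A minimal sketch, assuming the client's `host` keyword argument and an Ollama server listening at the given address:

```python
from ollama import Client

# Hypothetical address; point `host` at wherever your Ollama server is running.
client = Client(host='http://localhost:11434')
message = {'role': 'user', 'content': 'Why is the sky blue?'}
response = client.chat(model='llama2', messages=[message])
```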

Response streaming can be enabled by setting `stream=True`, which changes the function to return a Python generator where each part is an object in the stream.

```python
from ollama import Client

message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in Client().chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)
```
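
Each streamed part carries an incremental chunk of the reply, so concatenating `part['message']['content']` across parts reconstructs the complete response.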

### Using the Asynchronous Client

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  await AsyncClient().chat(model='llama2', messages=[message])

asyncio.run(chat())
```

Similar to the synchronous client, setting `stream=True` changes the function to return a Python asynchronous generator.

```python
import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
```