diff --git a/README.md b/README.md
index e69de29..e2279a3 100644
--- a/README.md
+++ b/README.md
@@ -0,0 +1,80 @@
+# Ollama Python Library
+
+The Ollama Python library provides the easiest way to integrate your Python 3 project with [Ollama](https://github.com/jmorganca/ollama).
+
+## Getting Started
+
+Requires Python 3.8 or higher.
+
+```sh
+pip install ollama
+```
+
+```python
+from ollama import chat
+
+response = chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
+print(response['message']['content'])
+```
+
+### Using the Synchronous Client
+
+```python
+from ollama import Client
+
+message = {'role': 'user', 'content': 'Why is the sky blue?'}
+response = Client().chat(model='llama2', messages=[message])
+print(response['message']['content'])
+```
+
+Response streaming can be enabled by setting `stream=True`, modifying function calls to return a Python generator where each part is an object in the stream.
+
+```python
+from ollama import Client
+
+message = {'role': 'user', 'content': 'Why is the sky blue?'}
+for part in Client().chat(model='llama2', messages=[message], stream=True):
+  print(part['message']['content'], end='', flush=True)
+```
+
+### Using the Asynchronous Client
+
+```python
+import asyncio
+from ollama import AsyncClient
+
+async def chat():
+  message = {'role': 'user', 'content': 'Why is the sky blue?'}
+  response = await AsyncClient().chat(model='llama2', messages=[message])
+  print(response['message']['content'])
+
+asyncio.run(chat())
+```
+
+Similar to the synchronous client, setting `stream=True` modifies function calls to return a Python asynchronous generator.
+
+```python
+import asyncio
+from ollama import AsyncClient
+
+async def chat():
+  message = {'role': 'user', 'content': 'Why is the sky blue?'}
+  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
+    print(part['message']['content'], end='', flush=True)
+
+asyncio.run(chat())
+```
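+
+### Custom Clients
+
+A client can also be pointed at a non-default server address. The sketch below assumes the `Client` constructor accepts a `host` parameter and that Ollama's default address is `http://localhost:11434`:
+
+```python
+from ollama import Client
+
+# Point the client at a specific Ollama server. The host value below is
+# assumed to be the server's default address.
+client = Client(host='http://localhost:11434')
+response = client.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
+print(response['message']['content'])
+```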