# ollama


The goal of ollama is to wrap the Ollama API and provide infrastructure for use within {gptstudio}.

## Installation

You can install the development version of ollama like so:

```r
pak::pak("calderonsamuel/ollama")
```

## Prerequisites

You are responsible for installing Ollama and configuring networking. We recommend using the official Docker image, which greatly simplifies this process.

The following command downloads the default Ollama image and runs an "ollama" container, exposing port 11434:

```sh
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
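Before the API can answer requests, the container needs at least one model available. As a sketch (assuming the container is named `ollama` as above, and using `llama2` purely as an example model name):

```sh
# Pull a model inside the running container ("llama2" is an example name)
docker exec -it ollama ollama pull llama2

# Optionally, chat with it interactively to confirm it works
docker exec -it ollama ollama run llama2
```

Models are stored in the `ollama` volume mounted above, so they survive container restarts.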

By default, this package uses http://localhost:11434 as the API host URL. Although we provide methods to change this, only do so if you are absolutely sure of what it means.
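Before using the package, you can check that the server is reachable at that host. For example, assuming the Docker container above is running:

```sh
# Lists locally available models; an empty "models" array means the
# server is up but no model has been pulled yet
curl http://localhost:11434/api/tags
```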

## Example

This is a basic example which shows you how to solve a common problem:

```r
library(ollama)
## basic example code
```
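The package's own functions are not documented here yet. As an illustration of the kind of request it wraps, here is a direct call to the Ollama HTTP API using {httr2}; this is a sketch only, not the package's actual interface, and `llama2` is an example model name that must already be pulled:

```r
library(httr2)

# Sketch of a direct call to Ollama's /api/generate endpoint.
# Assumes the server from the Docker step is running on localhost:11434
# and that the example model "llama2" has been pulled.
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model  = "llama2",
    prompt = "Why is the sky blue?",
    stream = FALSE  # return one JSON object instead of a stream
  )) |>
  req_perform()

# The generated text is in the "response" field of the JSON body
resp_body_json(resp)$response
```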