docs: improve syntax highlighting in code blocks (ollama#8854)
fyvri authored Feb 7, 2025
1 parent abb8dd5 commit b901a71
Showing 16 changed files with 158 additions and 127 deletions.
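The change is mechanical: each fenced code block gains a language identifier (`shell`, `powershell`, `json`) so GitHub can apply syntax highlighting, while tags that mislabel non-shell content are dropped. A representative README.md hunk looks like this (illustrative reconstruction of the pattern; the actual hunks follow):

````diff
 ### Linux

-```
+```shell
 curl -fsSL https://ollama.com/install.sh | sh
 ```
````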
44 changes: 23 additions & 21 deletions README.md
@@ -18,7 +18,7 @@ Get up and running with large language models.

### Linux

-```
+```shell
curl -fsSL https://ollama.com/install.sh | sh
```

@@ -42,7 +42,7 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla

To run and chat with [Llama 3.2](https://ollama.com/library/llama3.2):

-```
+```shell
ollama run llama3.2
```

@@ -92,13 +92,13 @@ Ollama supports importing GGUF models in the Modelfile:

2. Create the model in Ollama

-```
+```shell
ollama create example -f Modelfile
```

3. Run the model

-```
+```shell
ollama run example
```

@@ -110,7 +110,7 @@ See the [guide](docs/import.md) on importing models for more information.

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama3.2` model:

-```
+```shell
ollama pull llama3.2
```

@@ -145,27 +145,27 @@ For more information on working with a Modelfile, see the [Modelfile](docs/model

`ollama create` is used to create a model from a Modelfile.

-```
+```shell
ollama create mymodel -f ./Modelfile
```

### Pull a model

-```
+```shell
ollama pull llama3.2
```

> This command can also be used to update a local model. Only the diff will be pulled.
### Remove a model

-```
+```shell
ollama rm llama3.2
```

### Copy a model

-```
+```shell
ollama cp llama3.2 my-model
```

@@ -184,37 +184,39 @@ I'm a basic program that prints the famous "Hello, world!" message to the consol

```
ollama run llava "What's in this image? /Users/jmorgan/Desktop/smile.png"
-The image features a yellow smiley face, which is likely the central focus of the picture.
```

+> **Output**: The image features a yellow smiley face, which is likely the central focus of the picture.
### Pass the prompt as an argument

-```
-$ ollama run llama3.2 "Summarize this file: $(cat README.md)"
-Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
-```
+```shell
+ollama run llama3.2 "Summarize this file: $(cat README.md)"
+```
+
+> **Output**: Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
### Show model information

-```
+```shell
ollama show llama3.2
```

### List models on your computer

-```
+```shell
ollama list
```

### List which models are currently loaded

-```
+```shell
ollama ps
```

### Stop a model which is currently running

-```
+```shell
ollama stop llama3.2
```

@@ -230,13 +232,13 @@ See the [developer guide](https://github.com/ollama/ollama/blob/main/docs/develo

Next, start the server:

-```
+```shell
./ollama serve
```

Finally, in a separate shell, run a model:

-```
+```shell
./ollama run llama3.2
```

@@ -246,7 +248,7 @@ Ollama has a REST API for running and managing models.

### Generate a response

-```
+```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2",
"prompt":"Why is the sky blue?"
@@ -255,7 +257,7 @@ curl http://localhost:11434/api/generate -d '{

### Chat with a model

-```
+```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3.2",
"messages": [
3 changes: 2 additions & 1 deletion api/examples/README.md
@@ -2,9 +2,10 @@

Run the examples in this directory with:

-```
+```shell
go run example_name/main.go
```

## Chat - Chat with a model
- [chat/main.go](chat/main.go)

2 changes: 1 addition & 1 deletion app/README.md
@@ -17,6 +17,6 @@ If you want to build the installer, you'll need to install
In the top directory of this repo, run the following powershell script
to build the ollama CLI, ollama app, and ollama installer.

-```
+```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1
```
33 changes: 17 additions & 16 deletions docs/api.md
@@ -31,7 +31,7 @@ Certain endpoints stream responses as JSON objects. Streaming can be disabled by

## Generate a completion

-```shell
+```
POST /api/generate
```
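In docs/api.md the same commit runs in the opposite direction for endpoint signatures: lines like `POST /api/generate` are HTTP routes rather than shell commands, so the misleading `shell` tag is dropped, while streamed JSON responses later in the file gain a `json` tag. A sketch of the pattern (illustrative reconstruction):

````diff
-```shell
+```
 POST /api/generate
 ```
````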

@@ -485,7 +485,7 @@ A single JSON object is returned:

## Generate a chat completion

-```shell
+```
POST /api/chat
```

@@ -878,6 +878,7 @@ curl http://localhost:11434/api/chat -d '{
```

##### Response

```json
{
"model": "llama3.2",
@@ -924,7 +925,7 @@ A single JSON object is returned:

## Create a Model

-```shell
+```
POST /api/create
```

@@ -1020,7 +1021,7 @@ curl http://localhost:11434/api/create -d '{

A stream of JSON objects is returned:

-```
+```json
{"status":"quantizing F16 model to Q4_K_M"}
{"status":"creating new layer sha256:667b0c1932bc6ffc593ed1d03f895bf2dc8dc6df21db3042284a6f4416b06a29"}
{"status":"using existing layer sha256:11ce4ee3e170f6adebac9a991c22e22ab3f8530e154ee669954c4bc73061c258"}
@@ -1051,7 +1052,7 @@ curl http://localhost:11434/api/create -d '{

A stream of JSON objects is returned:

-```
+```json
{"status":"parsing GGUF"}
{"status":"using existing layer sha256:432f310a77f4650a88d0fd59ecdd7cebed8d684bafea53cbff0473542964f0c3"}
{"status":"writing manifest"}
@@ -1118,7 +1119,7 @@ Return 200 OK if the blob exists, 404 Not Found if it does not.

## Push a Blob

-```shell
+```
POST /api/blobs/:digest
```

@@ -1142,7 +1143,7 @@ Return 201 Created if the blob was successfully created, 400 Bad Request if the

## List Local Models

-```shell
+```
GET /api/tags
```

@@ -1195,7 +1196,7 @@ A single JSON object will be returned.

## Show Model Information

-```shell
+```
POST /api/show
```

@@ -1261,7 +1262,7 @@ curl http://localhost:11434/api/show -d '{

## Copy a Model

-```shell
+```
POST /api/copy
```

@@ -1284,7 +1285,7 @@ Returns a 200 OK if successful, or a 404 Not Found if the source model doesn't e

## Delete a Model

-```shell
+```
DELETE /api/delete
```

@@ -1310,7 +1311,7 @@ Returns a 200 OK if successful, 404 Not Found if the model to be deleted doesn't

## Pull a Model

-```shell
+```
POST /api/pull
```

@@ -1382,7 +1383,7 @@ if `stream` is set to false, then the response is a single JSON object:

## Push a Model

-```shell
+```
POST /api/push
```

@@ -1447,7 +1448,7 @@ If `stream` is set to `false`, then the response is a single JSON object:

## Generate Embeddings

-```shell
+```
POST /api/embed
```

@@ -1515,7 +1516,7 @@ curl http://localhost:11434/api/embed -d '{
```

## List Running Models
-```shell
+```
GET /api/ps
```

@@ -1562,7 +1563,7 @@ A single JSON object will be returned.

> Note: this endpoint has been superseded by `/api/embed`
-```shell
+```
POST /api/embeddings
```

@@ -1602,7 +1603,7 @@ curl http://localhost:11434/api/embeddings -d '{

## Version

-```shell
+```
GET /api/version
```
