This repository uses Llama 3.1 with Ollama
Follow the steps below to get up and running with the Ollama application and the Llama 3.1 model.

Prerequisites
- Ollama application (v0.3.14 or higher)
- Python 3.x
- Internet connection for downloading the model
Download and Install Ollama
First, download the Ollama application from the official website: https://ollama.com
Install and run the application on your local machine.
Pull the Llama 3.1 Model
Once the Ollama app is running, pull the Llama 3.1 model by running the following command in your terminal:
ollama pull llama3.1
(Alternatively, ollama run llama3.1 will download the model automatically if it is not already installed, then start an interactive session.)
Run the Llama 3.1 Model
To verify that the model is properly installed, list the available models:
ollama list
You should see llama3.1 in the output. To start the model and chat with it interactively in your terminal, run:
ollama run llama3.1
When executing Python scripts that call the model, make sure the Ollama application is running in the background. Replace the example prompt with your own input to experiment with different outputs.
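As a minimal sketch of such a script, the snippet below sends a prompt to the local Ollama server over its REST API (by default at http://localhost:11434/api/generate). The helper names build_payload and generate are illustrative, not part of Ollama itself:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
    }


def generate(prompt: str, model: str = "llama3.1") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the generated text in "response".
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires the Ollama app running and the llama3.1 model pulled.
    print(generate("Explain what Ollama is in one sentence."))
```

Swap the prompt string for your own input; you can also install the official ollama Python package instead of calling the HTTP API directly.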
This project is licensed under the MIT License - see the LICENSE file for details.