The Llama4U Project


Llama4U is a privacy-focused AI assistant built with Ollama, LangChain, and Llama3. It is a completely free solution that can be hosted locally, while providing online capabilities in a responsible, user-controllable way.

APIs that have usage limitations, or that require keys registered to an online account, will not be added to this project.

Steps to run

  1. Host the llama3 model from Ollama on your computer.
  2. Clone this repository.
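
Step 1 above can be sketched as follows, assuming Ollama is already installed (`llama3` is the model name in Ollama's registry):

```shell
# Pull the llama3 model from the Ollama registry (one-time download)
ollama pull llama3

# Start the Ollama server; it listens on 127.0.0.1:11434 by default
ollama serve
```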

There are two usage modes:

LangServe

  1. pip install -U langchain-cli && langchain serve
  2. By default, the server is hosted at 127.0.0.1 (localhost) on port 8000.
  3. Playground can be accessed at <host_ip>:<port>/llama4u/playground.
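
Besides the playground, a running LangServe server can be queried programmatically through its standard `/invoke` endpoint. A minimal sketch; the exact input schema depends on how the llama4u chain is defined, so the payload below is an assumption:

```shell
# POST to the LangServe invoke endpoint; the input schema is chain-specific
curl -X POST http://127.0.0.1:8000/llama4u/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello, Llama4U!"}'
```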

CLI

  1. cd app/ && pip install -e .
  2. llama4u
  3. Run llama4u --help for the full list of CLI options.
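
For context, a CLI entry point of this shape typically wraps an argument parser. A minimal, hypothetical sketch: the flag names below are illustrative assumptions, not llama4u's actual options.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Build a parser resembling a llama4u-style CLI (illustrative only)."""
    parser = argparse.ArgumentParser(
        prog="llama4u",
        description="Privacy-focused AI assistant",
    )
    # Flags below are assumptions for illustration, not the project's real options
    parser.add_argument("--host", default="127.0.0.1",
                        help="Ollama server host (assumed flag)")
    parser.add_argument("--port", type=int, default=11434,
                        help="Ollama server port (assumed flag)")
    parser.add_argument("-q", "--query",
                        help="Run a single query instead of an interactive chat")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```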

List of chat commands

  • /search: Perform online search using DuckDuckGo

Current motivations for the feature set

  • Perplexity AI
  • ChatGPT/GPT4o

System requirements

  • Powerful CPU, or an NVIDIA GPU with at least 8 GB of VRAM
  • Ubuntu 22.04
  • Works on WSL2 with NVIDIA CUDA

Use these steps to set up the NVIDIA CUDA drivers if your GPU is not being used:

# nvidia GPU setup for Ubuntu 22.04
curl -fSsL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub | sudo gpg --dearmor | sudo tee /usr/share/keyrings/nvidia-drivers.gpg > /dev/null 2>&1
echo 'deb [signed-by=/usr/share/keyrings/nvidia-drivers.gpg] https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/ /' | sudo tee /etc/apt/sources.list.d/nvidia-drivers.list
sudo apt update
sudo apt install cuda-toolkit-12-4
export PATH=/usr/local/cuda-12/bin:~/.local/bin:${PATH}
export CUDACXX=$(which nvcc)
if [ -z "$CUDACXX" ]; then
    echo "nvcc not found in PATH."
    exit 1
fi
echo $CUDACXX && $CUDACXX --version

Credits

  • Meta, for the open source Llama models
  • Ollama
  • LangChain community
