# Local AI Open Orca For Dummies

*(GIF showing the example in use)*

Welcome to Local AI Open Orca For Dummies, the simplest way to run a Large Language Model (LLM) locally on your machine! No complex setup required: just straightforward AI fun with OpenOrca.

P.S. This is a project by a frustrated developer who tried many complex approaches to running different LLMs locally and decided to make it easier for everyone.

## Installation

Install the `ctransformers` package. Choose the installation command based on your system and GPU availability:

- **No GPU acceleration:**

  ```shell
  pip install ctransformers
  ```

- **CUDA GPU acceleration:**

  ```shell
  pip install ctransformers[cuda]
  ```

- **AMD ROCm GPU acceleration (Linux only):**

  ```shell
  CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
  ```

- **Metal GPU acceleration (macOS only):**

  ```shell
  CT_METAL=1 pip install ctransformers --no-binary ctransformers
  ```

## Running the AI

Once you've installed the necessary packages, run `main.py` with `python main.py`.
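If you're curious what a script like `main.py` looks like under the hood, here is a minimal sketch using the `ctransformers` `AutoModelForCausalLM` API. The model file name, quantization choice, and prompt wording are assumptions (not taken from this repo); OpenOrca models expect the ChatML prompt format shown in the helper below.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the ChatML format OpenOrca models expect."""
    return (
        "<|im_start|>system\n"
        "You are a helpful assistant.<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    # Imported lazily so the prompt helper works without ctransformers installed.
    from ctransformers import AutoModelForCausalLM

    # Downloads the quantized GGUF weights on first run (several GB).
    llm = AutoModelForCausalLM.from_pretrained(
        "TheBloke/Mistral-7B-OpenOrca-GGUF",
        model_file="mistral-7b-openorca.Q4_K_M.gguf",  # assumed quantization level
        model_type="mistral",
    )
    print(llm(build_prompt("Why is the sky blue?"), max_new_tokens=256))
```

On machines with a supported GPU build of `ctransformers`, you can pass `gpu_layers=50` (or similar) to `from_pretrained` to offload layers to the GPU.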

That's it! You're now ready to explore the capabilities of AI running locally on your machine. Enjoy experimenting with OpenOrca and discovering the exciting possibilities of local AI.

## Acknowledgements

This project uses the Mistral-7B-OpenOrca-GGUF model provided by TheBloke on Hugging Face.

## License

This project is licensed under the MIT License - see the LICENSE file for details.