A demo Jupyter Notebook showcasing a simple local RAG (Retrieval Augmented Generation) pipeline to chat with your PDFs.


Chat with PDF locally with Ollama demo 🚀

If you have any questions or suggestions, please feel free to open an issue in this repository; I will do my best to respond.

Give this repo a star if you like it :) This project is a fork; shoutout to the original author, and please check out the original repository: https://github.com/tonykipkemboi/ollama_pdf_rag

Running the Streamlit application

  1. Clone the repo:

    git clone https://github.com/uddamvathanak/local_pdf_rag.git
    cd local_pdf_rag
  2. Install the dependencies:

    pip install -r requirements.txt
  3. Pull the Ollama models (an embedding model and a chat model):

    ollama pull nomic-embed-text
    ollama pull llama3.1 # can be any chat model you want to test
  4. Launch the app to start the Streamlit interface on localhost:

    streamlit run streamlit_app.py
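Under the hood, the retrieval half of a RAG pipeline like this boils down to: embed the PDF chunks, embed the question, and return the chunks most similar to the question before handing them to the chat model. A minimal sketch of that idea, using a toy bag-of-words embedding in place of the real nomic-embed-text model (the chunk texts and helper names here are made up for illustration, not taken from this repo's code):

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Toy word-count 'embedding' (stand-in for nomic-embed-text via Ollama)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


chunks = [
    "Ollama runs large language models locally.",
    "Streamlit builds simple web interfaces in Python.",
    "PDF chunks are embedded and stored in a vector database.",
]
print(retrieve("How do I run a model locally with Ollama?", chunks, k=1))
```

In the actual app, the toy `embed` would be replaced by calls to the pulled nomic-embed-text model, and the retrieved chunks would be inserted into the prompt sent to llama3.1.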
