
enabler: decide on strategy for local LLMs/embeddings #315

Open
ludwiktrammer opened this issue Jan 27, 2025 · 0 comments

Comments

@ludwiktrammer
Collaborator

The aim of this ticket is to prepare a document describing the Ragbits strategy for running LLMs/embeddings locally:

Do we want to directly support running models locally? If yes, which libraries should we use? If no, what is the alternative (for example, documenting how to use LiteLLM to connect to local models exposed via an HTTP API, as sketched below)?
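
For illustration, a minimal sketch of the LiteLLM alternative might look like this. The Ollama model names, the api_base URL, and the sample inputs are assumptions for demonstration only, not decided Ragbits conventions:

```python
import litellm

# Chat completion against a locally hosted model (assumed here to be
# served by Ollama on its default port; adjust model/api_base as needed).
response = litellm.completion(
    model="ollama/llama3",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)

# Embeddings follow the same pattern via litellm.embedding; the model
# name below is likewise an assumption.
embeddings = litellm.embedding(
    model="ollama/nomic-embed-text",
    api_base="http://localhost:11434",
    input=["text to embed"],
)
print(len(embeddings.data[0]["embedding"]))
```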

@ludwiktrammer ludwiktrammer added the enabler This enables development of other issue label Jan 27, 2025
@ludwiktrammer ludwiktrammer moved this to Backlog in ragbits Jan 27, 2025
@ludwiktrammer ludwiktrammer self-assigned this Jan 27, 2025
@mhordynski mhordynski moved this from Backlog to Ready in ragbits Jan 27, 2025
Labels
enabler This enables development of other issues
Projects
Status: Ready
Development

No branches or pull requests

1 participant