The aim of this ticket is to prepare a document describing Ragbits' strategy for running LLMs/embeddings locally:
Do we want to directly support running models locally? If yes: using which libraries? If no: what is the alternative (for example, documenting how to use LiteLLM to connect to local models exposed via an HTTP API)?
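To make the LiteLLM alternative concrete, here is a minimal sketch of what such documentation could show. It assumes a locally hosted model behind an OpenAI-compatible HTTP API (an Ollama server on `localhost:11434` in this example); the model name `ollama/llama3` and the helper names are illustrative, not part of any existing Ragbits API.

```python
def build_messages(prompt: str) -> list[dict]:
    """Format a single-turn prompt as OpenAI-style chat messages."""
    return [{"role": "user", "content": prompt}]


def local_completion(
    prompt: str,
    model: str = "ollama/llama3",
    api_base: str = "http://localhost:11434",
) -> str:
    """Send a chat request to a locally served model through LiteLLM.

    LiteLLM routes the request to the local HTTP endpoint given by
    `api_base`, so no cloud provider credentials are needed.
    """
    import litellm  # deferred import: only required when actually calling the model

    response = litellm.completion(
        model=model,
        api_base=api_base,
        messages=build_messages(prompt),
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Requires a running local model server (e.g. `ollama serve`).
    print(local_completion("Summarize RAG in one sentence."))
```

Under this approach, Ragbits itself would not bundle inference libraries; users point LiteLLM at whatever local server they run (Ollama, vLLM, llama.cpp's server, etc.), all of which expose OpenAI-compatible endpoints.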