Hi,
I want to add support for open-source LLM models like Llama2 using LangChain. The model would be self-hosted locally and would not require a GPU to run, although GPU support would make it reply faster. The goal is to reduce the cost of the ChatGPT API. What do you think of this idea?
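A minimal sketch of what such a pluggable backend layer could look like. All names here (`Backend`, `get_backend`, the pricing figures) are hypothetical and not part of any existing codebase; in a real implementation the `generate()` stub would wrap LangChain classes such as `LlamaCpp` for the local model and `ChatOpenAI` for the hosted API.

```python
# Hypothetical sketch: select between a paid hosted LLM and a free
# self-hosted one via a single config value. The generate() method is a
# stub; a real version would delegate to a LangChain LLM object.
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    cost_per_1k_tokens: float  # USD; 0.0 for a self-hosted model

    def generate(self, prompt: str) -> str:
        # Stub response; a real backend would invoke the model here.
        return f"[{self.name}] response to: {prompt}"


def get_backend(provider: str) -> Backend:
    """Pick a backend by config value, defaulting to the local model."""
    backends = {
        "openai": Backend("gpt-3.5-turbo", cost_per_1k_tokens=0.002),
        "local": Backend("llama2-7b (llama.cpp, CPU)", cost_per_1k_tokens=0.0),
    }
    return backends.get(provider, backends["local"])
```

With this shape, switching from the ChatGPT API to a local Llama2/Mistral model is a one-line config change, and the local path works on CPU (just slower than with a GPU).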
Sajid576 changed the title from "[Add support for open source LLM models like LLama2/Mistral ]" to "Add support for open source LLM models like LLama2/Mistral" on Mar 27, 2024.