Chatbot that remembers conversation history #64
Comments
See here for examples of using the chat engine.
Have played around with this a bit more in a new notebook here. I think 'context' mode basically retrieves a batch of relevant context from our database and then uses that to answer the question (i.e. the model is called once per chat message, essentially "here's a load of context, can you answer this?"). Overall, I think 'context' mode seems better.
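For illustration, a minimal sketch of what the 'context' chat mode described above might look like with llama-index (the `"data"` folder and the questions are placeholders, and the exact API may differ between versions):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Build an index over our documents ("data" is a placeholder path)
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# "context" mode retrieves relevant chunks from the index for every message
# and adds them to the prompt alongside the running chat history
chat_engine = index.as_chat_engine(chat_mode="context")

print(chat_engine.chat("What topics does the dataset cover?"))

# Follow-up question: the engine keeps the previous exchange in its history
print(chat_engine.chat("Can you expand on the first of those?"))
```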
Some examples of using the chat engine from #66. Will continue with the "context" engine as it seems the most consistent. ReAct seems to be quite volatile and doesn't always make the best decisions about whether or not to use the query engine, though it is noted here that this really depends on the quality of the LLM. We do get better performance from 13B models than 7B (quantized), so this could improve in the future if we have access to higher-quality quantized LLMs. While working on this, we noticed an issue with prompt creation in the chat engine. This has been fixed in this PR by @rwood-97 and me.
In the current llama-index setup, the conversation history is not tracked, so each question queries the database independently for an answer. It would be interesting to investigate how we can have a conversation with the data (multiple back-and-forth exchanges instead of a single question and answer).
Looking at the llama-index documentation, it looks like it has some ability to do this: https://gpt-index.readthedocs.io/en/latest/core_modules/query_modules/chat_engines/root.html. We would need to replace the `query_engine` calls with `chat_engine` calls, and would also need to play around with something like the ReAct Agent (llama-index has a few implemented), which decides how the chatbot interacts with the database during the conversation.
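As a rough sketch of the proposed change (the index construction and questions are placeholders, and the `chat_mode` values follow the linked docs but may have changed in newer releases):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # placeholder data folder
index = VectorStoreIndex.from_documents(documents)

# Current approach: a stateless query engine, one database lookup per question
query_engine = index.as_query_engine()
print(query_engine.query("How do I install the package?"))

# Possible replacement: a chat engine that tracks conversation history.
# "react" wraps the query engine in a ReAct agent that decides on each turn
# whether to query the database or answer from the conversation so far.
chat_engine = index.as_chat_engine(chat_mode="react", verbose=True)
print(chat_engine.chat("How do I install the package?"))
print(chat_engine.chat("And which Python versions does it support?"))  # uses history
```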