
Enable search prompt using ddg #16

Merged
merged 2 commits into from
Jun 6, 2024
Changes from all commits
30 changes: 15 additions & 15 deletions README.md
@@ -1,16 +1,26 @@
# The Llama4U Project
[![Python application](https://github.com/virajmalia/llama4u/actions/workflows/CI.yml/badge.svg)](https://github.com/virajmalia/llama4u/actions/workflows/CI.yml)

## Vision
Develop a free and open-source, fully-featured AI solution with agents.
Llama4U is a privacy-focused AI assistant developed using [Ollama][1], [LangChain][2] and [Llama3][3]. It is a completely free AI solution that can be hosted locally, while providing online capabilities in a responsible and user-controllable way.

#### *APIs that have usage limitations or require keys to be registered with an online account won't be added to this project.*

## Steps to run
1. Host the `llama3` model from [Ollama][1] on your computer.
2. Clone this repository.
3. `pip install -e .`
4. `llama4u`

Run `llama4u --help` for the full list of CLI options.

## List of chat commands

- `/search`: Perform an online search using DuckDuckGo
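
A minimal sketch of how such a `/search` command can be routed (function names and the exact summarization wording here are illustrative, not the project's code): strip the command prefix, pass the query to a search callable, and wrap the results into a summarization prompt for the model.

```python
def route_prompt(user_prompt, search_fn):
    """Rewrite a '/search' prompt into a summarization request.

    search_fn is any callable mapping a query string to search-result
    text (for example, a DuckDuckGo search wrapper).
    """
    if user_prompt.startswith("/search"):
        query = user_prompt[len("/search"):].strip()
        results = search_fn(query)
        return f"Summarize the following search results as if you are answering: {results}"
    # Prompts without the /search prefix pass through unchanged.
    return user_prompt

# Stubbed search function for demonstration:
print(route_prompt("/search llama models", lambda q: f"results for {q}"))
```

Separating the routing from the search backend keeps the chat loop testable without network access.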

## Current motivations for feature-set
- Perplexity AI
- ChatGPT/GPT4o

## Rule
- APIs that have usage limitations or require keys to be registered with an online account won't be added to this project.

## System requirements
- Powerful CPU or Nvidia GPU (>=8 GB VRAM)
- Ubuntu 22.04
@@ -32,16 +42,6 @@ fi
echo $CUDACXX && $CUDACXX --version
```

## Steps to run
1. Host `llama3` model from [Ollama][1] on your computer
2. `pip install -e .`
3. `llama4u`

`llama4u --help` for full CLI

## Description
Llama4U is an AI assistant developed using [Ollama][1], [LangChain][2] and [Llama3][3]. A completely free AI solution that can be hosted locally, while providing online capabilities in a responsible and user-controllable way.

## Credits
- Meta, for the open source Llama models
- Ollama
1 change: 0 additions & 1 deletion pyproject.toml
@@ -23,7 +23,6 @@ dependencies = [
"langchain-core",
"langchain-community",
"langchain-chroma",
"duckduckgo_search",
"termcolor"
]

9 changes: 9 additions & 0 deletions src/llama4u.py
@@ -3,6 +3,7 @@
from termcolor import colored
from langchain_community.chat_models.ollama import ChatOllama
from langchain_community.chat_message_histories.in_memory import ChatMessageHistory
from langchain_community.tools.ddg_search.tool import DuckDuckGoSearchRun
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from input.input import parse_arguments
@@ -46,9 +47,17 @@ def get_session_history(self, session_id):
    async def chat_session(self):
        """ Chat session with history """
        while True:
            # Get input
            print(colored('>>> ', 'yellow'), end="")
            user_prompt = input()

            # Redirect search queries
            if user_prompt.startswith("/search"):
                search_results = DuckDuckGoSearchRun().run(user_prompt.replace("/search", ""))
                user_prompt = \
                    f"Summarize the following search results as if you are answering:{search_results}"

            # Invoke chain
            response = self.with_msg_history.invoke(
                {"input": user_prompt},
                config={"configurable": {"session_id": "abc123"}},
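
The `get_session_history` hook named in the hunk header pairs each session id with its own message history. A minimal, dependency-free stand-in (class and variable names hypothetical) illustrates the lookup pattern:

```python
class SessionStore:
    """Map each session id to its own message history, created lazily."""

    def __init__(self):
        self.sessions = {}

    def get_session_history(self, session_id):
        # Return the existing history for this session, or start an empty one.
        return self.sessions.setdefault(session_id, [])

store = SessionStore()
history = store.get_session_history("abc123")
history.append(("human", "hello"))
# Repeated lookups with the same id return the same history object.
assert store.get_session_history("abc123") is history
```

LangChain's `RunnableWithMessageHistory` calls a factory of this shape with the configured session id (`"abc123"` in the diff), so consecutive `invoke` calls within a session share context.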
37 changes: 0 additions & 37 deletions src/sources/search.py

This file was deleted.
