Welcome to the "Agents Meet RAG" hackathon repository. It contains the collaborative work and resources developed during our hackathon event, which focuses on integrating LLM agents with retrieval-augmented generation (RAG) and on evaluating these systems.
- /agents: The core of agent development. It contains all the code and resources needed to build and refine our custom agents, including the agent implementations themselves and the modular tools that extend their query-processing capabilities (a minimal sketch of this pattern follows the list).
- /tests: Houses the pytest suite. These automated tests are crucial for ensuring the reliability and effectiveness of the agents: they build on the evaluators found in the /evaluations folder and assess agent performance across various scenarios (an illustrative example test also follows the list).
- /evaluations: The project's evaluation framework. This directory contains the metrics and methodologies used to assess agent performance; the evaluators can be invoked directly from the tests to measure aspects such as answer relevancy, context precision, and overall response quality.
- /data: Datasets and other related data resources for the project. Using this data is optional, but it provides valuable material for testing and refining both the agents and the evaluation methods.
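To give a concrete feel for the agent-plus-tools pattern behind /agents, here is a minimal sketch in Python. Every name in it (`Tool`, `SimpleAgent`, the stubbed `search` tool) is a hypothetical illustration, not this repository's actual API; it only shows the general shape of an agent dispatching queries to modular tools.

```python
# Hypothetical illustration of the agent-plus-tools pattern; none of
# these names come from this repository.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Tool:
    """A modular tool the agent can dispatch queries to."""
    name: str
    description: str
    run: Callable[[str], str]


@dataclass
class SimpleAgent:
    """Routes a query to the first tool whose name appears in it."""
    tools: list[Tool] = field(default_factory=list)

    def answer(self, query: str) -> str:
        for tool in self.tools:
            if tool.name in query.lower():
                return tool.run(query)
        return "No suitable tool found."


# Example usage with a stubbed retrieval tool.
search = Tool(
    name="search",
    description="Retrieve passages relevant to the query.",
    run=lambda q: f"Top passage for: {q}",
)
agent = SimpleAgent(tools=[search])
print(agent.answer("search: what is RAG?"))
```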
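Likewise, a test in /tests might call an evaluator from /evaluations roughly like the sketch below. The `answer_relevancy` function and its threshold are assumptions made for illustration; the real evaluators in /evaluations will differ.

```python
# Hypothetical pytest test; `answer_relevancy` and the 0.5 threshold are
# illustrative assumptions, not this repository's actual evaluator API.
import re

import pytest


def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))


def answer_relevancy(question: str, answer: str) -> float:
    """Stand-in metric: fraction of question words echoed in the answer."""
    q, a = _tokens(question), _tokens(answer)
    return len(q & a) / max(len(q), 1)


@pytest.mark.parametrize(
    "question,answer",
    [("What is RAG?", "RAG is retrieval-augmented generation.")],
)
def test_agent_answer_is_relevant(question, answer):
    # In the real suite, `answer` would come from an agent under test.
    assert answer_relevancy(question, answer) >= 0.5
```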
To dive into the project:
- Clone the repository.
- Explore the /agents, /tests, /evaluations, and /data directories to understand the existing structure and codebase.
- Delve into the explanation notebooks (found in each folder) for explanations and development ideas.
- Follow the setup guidelines in the notebooks to prepare your development environment.
Thank you for being part of the "Agents Meet RAG" hackathon. Happy coding and exploring!