optimize full_text_search_with_langchain
Signed-off-by: ChengZi <[email protected]>
1 parent 845eed2, commit b83d1b9
Showing 1 changed file with 45 additions and 52 deletions.
|
@@ -18,19 +18,18 @@ | |
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Using full-text search with LangChain and Milvus\n", | ||
"# Using Full-Text Search with LangChain and Milvus\n", | ||
"\n", | ||
"[Full-text search](https://milvus.io/docs/full-text-search.md#Full-Text-Search) retrieves documents with specific terms or phrases in text datasets and ranks results by relevance. It overcomes semantic search limitations to provide accurate, context-relevant results. Also, it simplifies vector searches, accepting raw text and automatically converting it into sparse embeddings without manual generation. By integrating full-text search with semantic-based dense vector search, you can enhance the accuracy and relevance of search results.\n", | ||
"[Full-text search](https://milvus.io/docs/full-text-search.md#Full-Text-Search) is a traditional method for retrieving documents that contain specific terms or phrases by directly matching keywords within the text. It ranks results based on relevance, typically determined by factors such as term frequency and proximity. While semantic search excels at understanding intent and context, full-text search provides precision for exact keyword matching, making it a valuable complementary tool. The BM25 algorithm is a popular ranking method for full-text search, particularly useful in Retrieval-Augmented Generation (RAG).\n", | ||
"\n", | ||
"BM25 is an important ranking algorithm in full-text search. Using the BM25 algorithm for relevance scoring, this feature is particularly valuable in retrieval-augmented generation (RAG) scenarios, where it prioritizes documents that closely match specific search terms. \n", | ||
"Since [Milvus 2.5](https://milvus.io/blog/introduce-milvus-2-5-full-text-search-powerful-metadata-filtering-and-more.md), full-text search is natively supported through the `Sparse-BM25` approach, by representing the BM25 algorithm as sparse vectors. Milvus accepts raw text as input and automatically converts it into sparse vectors stored in a specified field, eliminating the need for manual sparse embedding generation.\n", | ||
"\n", | ||
"Milvus 2.5 introduced the full-text search [feature](https://milvus.io/blog/introduce-milvus-2-5-full-text-search-powerful-metadata-filtering-and-more.md). As a further layer of framework, LangChain's Milvus integration has also launched this feature, making it easy to integrate full-text search into your application.\n", | ||
"LangChain's integration with Milvus has also introduced this feature, simplifying the process of incorporating full-text search into RAG applications. By combining full-text search with semantic search with dense vectors, you can achieve a hybrid approach that leverages both semantic context from dense embeddings and precise keyword relevance from word matching. This integration enhances the accuracy, relevance, and user experience of search systems.\n", | ||
"\n", | ||
"In this tutorial, we will show you how to use LangChain and Milvus to use full-text search into your application.\n", | ||
"This tutorial will show how to use LangChain and Milvus to implement full-text search in your application.\n", | ||
"\n", | ||
"> - Full text search is available in Milvus Standalone and Milvus Distributed but not Milvus Lite, although adding it to Milvus Lite is on the roadmap.\n", | ||
"> - Before reading this tutorial, you need to have a basic understanding of [full-text search](https://milvus.io/docs/full-text-search.md#Full-Text-Search). In addition, you also need to know the [basic usage](https://milvus.io/docs/basic_usage_langchain.md) of LangChain Milvus integration.\n", | ||
"\n" | ||
"> - Full-text search is available in Milvus Standalone and Milvus Distributed, but not in Milvus Lite, although it is on the roadmap for future inclusion. It will also be available in Zilliz Cloud (fully-managed Milvus) soon. Please reach out to [email protected] for more information.\n", | ||
"> - Before proceeding with this tutorial, ensure you have a basic understanding of [full-text search](https://milvus.io/docs/full-text-search.md#Full-Text-Search) and the [basic usage](https://milvus.io/docs/basic_usage_langchain.md) of LangChain Milvus integration." | ||
] | ||
}, | ||
{ | ||
|
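As background for the BM25 discussion in the cell above, here is a toy, self-contained sketch of the BM25 scoring idea (term-frequency saturation plus document-length normalization). It is an illustration only, not the Sparse-BM25 implementation inside Milvus, and the tiny corpus is invented for the example.

```python
# Toy BM25 scoring sketch; not Milvus's internal Sparse-BM25 code.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one tokenized document against a tokenized query."""
    n_docs = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n_docs
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)             # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
        freq = tf[term]
        score += idf * freq * (k1 + 1) / (
            freq + k1 * (1 - b + b * len(doc_terms) / avgdl)
        )
    return score

corpus = [["i", "like", "apple"], ["i", "like", "swimming"], ["i", "like", "dogs"]]
print(bm25_score(["apple"], corpus[0], corpus))  # the doc containing "apple" scores highest
```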
@@ -48,7 +47,7 @@ | |
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"! pip install --upgrade --quiet langchain langchain-core langchain-community langchain-text-splitters langchain-milvus langchain-openai langchain-voyageai bs4" | ||
"! pip install --upgrade --quiet langchain langchain-core langchain-community langchain-text-splitters langchain-milvus langchain-openai bs4 #langchain-voyageai" | ||
] | ||
}, | ||
{ | ||
|
@@ -72,12 +71,12 @@ | |
} | ||
}, | ||
"source": [ | ||
"We will use the models from OpenAI and VoyageAI. You should prepare the environment variables `OPENAI_API_KEY` from [OpenAI](https://platform.openai.com/docs/quickstart) and `VOYAGE_API_KEY` from [VoyageAI](https://docs.voyageai.com/docs/api-key-and-installation)." | ||
"We will use the models from OpenAI. You should prepare the environment variables `OPENAI_API_KEY` from [OpenAI](https://platform.openai.com/docs/quickstart)." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 2, | ||
"execution_count": 1, | ||
"metadata": { | ||
"collapsed": false, | ||
"jupyter": { | ||
|
@@ -91,8 +90,7 @@ | |
"source": [ | ||
"import os\n", | ||
"\n", | ||
"os.environ[\"OPENAI_API_KEY\"] = \"sk-***********\"\n", | ||
"os.environ[\"VOYAGE_API_KEY\"] = \"pa-***********\"" | ||
"os.environ[\"OPENAI_API_KEY\"] = \"sk-***********\"" | ||
] | ||
}, | ||
{ | ||
|
@@ -104,7 +102,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 3, | ||
"execution_count": 2, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
|
@@ -121,16 +119,16 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 4, | ||
"execution_count": 3, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from langchain_core.documents import Document\n", | ||
"\n", | ||
"docs = [\n", | ||
" Document(page_content=\"I like apple\", metadata={\"foo\": \"bar\"}),\n", | ||
" Document(page_content=\"I like banana\", metadata={\"foo\": \"baz\"}),\n", | ||
" Document(page_content=\"I like orange\", metadata={\"foo\": \"qux\"}),\n", | ||
" Document(page_content=\"I like apple\", metadata={\"category\": \"fruit\"}),\n", | ||
" Document(page_content=\"I like swimming\", metadata={\"category\": \"sport\"}),\n", | ||
" Document(page_content=\"I like dogs\", metadata={\"category\": \"pets\"}),\n", | ||
"]" | ||
] | ||
}, | ||
|
@@ -141,14 +139,14 @@ | |
"## Initialization with BM25 Function\n", | ||
"### Hybrid Search\n", | ||
"\n", | ||
"Unlike simply passing an embedding to the `VectorStore`, the Milvus VectorStore provides a `builtin_function` parameter. Through this parameter, you can pass an instance of the BM25 function.\n", | ||
"For full-text search Milvus VectorStore accepts a `builtin_function` parameter. Through this parameter, you can pass in an instance of the `BM25BuiltInFunction`. This is different than semantic search which usually passes dense embeddings to the `VectorStore`, \n", | ||
"\n", | ||
"Here is a simple example of combining OpenAI embeddings with the BM25 function from Milvus:" | ||
"Here is a simple example of hybrid search in Milvus with OpenAI dense embedding for semantic search and BM25 for full-text search:" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 5, | ||
"execution_count": 4, | ||
"metadata": { | ||
"collapsed": false, | ||
"jupyter": { | ||
|
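A minimal sketch of the setup the markdown cell above describes: one dense field from OpenAI embeddings plus one sparse field produced by the built-in BM25 function. The endpoint URI and the default field names are assumptions for illustration; full-text search needs Milvus Standalone or Distributed, not Milvus Lite.

```python
# Minimal hybrid setup sketch: dense embeddings + built-in BM25 (assumed URI and field names).
from langchain_core.documents import Document
from langchain_milvus import BM25BuiltInFunction, Milvus
from langchain_openai import OpenAIEmbeddings

URI = "http://localhost:19530"  # assumed Milvus Standalone endpoint

docs = [
    Document(page_content="I like apple", metadata={"category": "fruit"}),
    Document(page_content="I like swimming", metadata={"category": "sport"}),
    Document(page_content="I like dogs", metadata={"category": "pets"}),
]

vectorstore = Milvus.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(model="text-embedding-3-large"),
    builtin_function=BM25BuiltInFunction(),  # converts raw text into sparse BM25 vectors
    vector_field=["dense", "sparse"],        # "dense" for embeddings, "sparse" for BM25
    connection_args={"uri": URI},
    drop_old=True,
)
```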
@@ -198,7 +196,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 6, | ||
"execution_count": 5, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
|
@@ -207,16 +205,18 @@ | |
"['dense1', 'dense2', 'sparse']" | ||
] | ||
}, | ||
"execution_count": 6, | ||
"execution_count": 5, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
], | ||
"source": [ | ||
"from langchain_voyageai import VoyageAIEmbeddings\n", | ||
"# from langchain_voyageai import VoyageAIEmbeddings\n", | ||
"\n", | ||
"embedding1 = OpenAIEmbeddings(model=\"text-embedding-ada-002\")\n", | ||
"embedding2 = OpenAIEmbeddings(model=\"text-embedding-3-large\")\n", | ||
"# embedding2 = VoyageAIEmbeddings(model=\"voyage-3\") # You can also use embedding from other embedding model providers, e.g VoyageAIEmbeddings\n", | ||
"\n", | ||
"embedding1 = OpenAIEmbeddings(model=\"text-embedding-3-large\")\n", | ||
"embedding2 = VoyageAIEmbeddings(model=\"voyage-3\")\n", | ||
"\n", | ||
"vectorstore = Milvus.from_documents(\n", | ||
" documents=docs,\n", | ||
|
@@ -225,7 +225,7 @@ | |
" input_field_names=\"text\", output_field_names=\"sparse\"\n", | ||
" ),\n", | ||
" text_field=\"text\", # `text` is the input field name of BM25BuiltInFunction\n", | ||
" # `sparse` is the output field name of BM25BuiltInFunction, and `dense1` and `dense2` are the output field names of OpenAIEmbeddings and VoyageAIEmbeddings\n", | ||
" # `sparse` is the output field name of BM25BuiltInFunction, and `dense1` and `dense2` are the output field names of embedding1 and embedding2\n", | ||
" vector_field=[\"dense1\", \"dense2\", \"sparse\"],\n", | ||
" connection_args={\n", | ||
" \"uri\": URI,\n", | ||
|
@@ -241,7 +241,7 @@ | |
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"In this example, we have three vector fields. Among them, `sparse` is used as the output field for `BM25BuiltInFunction`, while the other two, `dense1` and `dense2`, are automatically assigned as the output fields for `OpenAIEmbeddings` and `VoyageAIEmbeddings`, respectively. \n", | ||
"In this example, we have three vector fields. Among them, `sparse` is used as the output field for `BM25BuiltInFunction`, while the other two, `dense1` and `dense2`, are automatically assigned as the output fields for the two `OpenAIEmbeddings` models. \n", | ||
"\n", | ||
"In this way, you can define multiple vector fields and assign different combinations of embeddings or functions to them, enabling hybrid search.\n" | ||
] | ||
|
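To complement the note above about multiple vector fields, here is a hedged sketch of querying such a store: langchain-milvus exposes rerank options (weighted or RRF) that fuse results across the defined fields at query time. The query string and weights below are made up for illustration.

```python
# Hedged query sketch across the dense1 / dense2 / sparse fields defined above.
query = "Do I like apples?"

# Weighted fusion: one weight per vector field, in field order (assumed values).
results = vectorstore.similarity_search(
    query, k=1, ranker_type="weighted", ranker_params={"weights": [0.5, 0.3, 0.2]}
)

# Reciprocal Rank Fusion is an alternative that needs no manual weights.
results_rrf = vectorstore.similarity_search(query, k=1, ranker_type="rrf")

print(results[0].page_content)
```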
@@ -255,16 +255,16 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 7, | ||
"execution_count": 6, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
"data": { | ||
"text/plain": [ | ||
"[Document(metadata={'foo': 'qux', 'pk': 454646931479251686}, page_content='I like orange')]" | ||
"[Document(metadata={'pk': 454646931479251755, 'category': 'fruit'}, page_content='I like apple')]" | ||
] | ||
}, | ||
"execution_count": 7, | ||
"execution_count": 6, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
|
@@ -293,7 +293,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 8, | ||
"execution_count": 7, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
|
@@ -302,7 +302,7 @@ | |
"['sparse']" | ||
] | ||
}, | ||
"execution_count": 8, | ||
"execution_count": 7, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
|
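For the BM25-only case whose `['sparse']` output appears above, a hedged sketch (reusing the `docs` and `URI` objects from earlier cells) is to pass `embedding=None` so the built-in BM25 function is the only source of vectors.

```python
# Full-text-search-only store sketch: no dense embeddings, only the BM25 sparse field.
from langchain_milvus import BM25BuiltInFunction, Milvus

vectorstore_fts = Milvus.from_documents(
    documents=docs,
    embedding=None,  # no dense embeddings at all
    builtin_function=BM25BuiltInFunction(output_field_names="sparse"),
    vector_field="sparse",
    connection_args={"uri": URI},
    drop_old=True,
)

print(vectorstore_fts.vector_fields)  # expected: ['sparse']
```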
@@ -341,7 +341,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 9, | ||
"execution_count": 8, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
|
@@ -381,16 +381,16 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 10, | ||
"execution_count": 9, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
"data": { | ||
"text/plain": [ | ||
"{'auto_id': True, 'description': '', 'fields': [{'name': 'text', 'description': '', 'type': <DataType.VARCHAR: 21>, 'params': {'max_length': 65535, 'enable_match': True, 'enable_analyzer': True, 'analyzer_params': {'tokenizer': 'standard', 'filter': ['lowercase', {'type': 'length', 'max': 40}, {'type': 'stop', 'stop_words': ['of', 'to']}]}}}, {'name': 'pk', 'description': '', 'type': <DataType.INT64: 5>, 'is_primary': True, 'auto_id': True}, {'name': 'dense', 'description': '', 'type': <DataType.FLOAT_VECTOR: 101>, 'params': {'dim': 1536}}, {'name': 'sparse', 'description': '', 'type': <DataType.SPARSE_FLOAT_VECTOR: 104>, 'is_function_output': True}, {'name': 'foo', 'description': '', 'type': <DataType.VARCHAR: 21>, 'params': {'max_length': 65535}}], 'enable_dynamic_field': False, 'functions': [{'name': 'bm25_function_7c99f463', 'description': '', 'type': <FunctionType.BM25: 1>, 'input_field_names': ['text'], 'output_field_names': ['sparse'], 'params': {}}]}" | ||
"{'auto_id': True, 'description': '', 'fields': [{'name': 'text', 'description': '', 'type': <DataType.VARCHAR: 21>, 'params': {'max_length': 65535, 'enable_match': True, 'enable_analyzer': True, 'analyzer_params': {'tokenizer': 'standard', 'filter': ['lowercase', {'type': 'length', 'max': 40}, {'type': 'stop', 'stop_words': ['of', 'to']}]}}}, {'name': 'pk', 'description': '', 'type': <DataType.INT64: 5>, 'is_primary': True, 'auto_id': True}, {'name': 'dense', 'description': '', 'type': <DataType.FLOAT_VECTOR: 101>, 'params': {'dim': 1536}}, {'name': 'sparse', 'description': '', 'type': <DataType.SPARSE_FLOAT_VECTOR: 104>, 'is_function_output': True}, {'name': 'category', 'description': '', 'type': <DataType.VARCHAR: 21>, 'params': {'max_length': 65535}}], 'enable_dynamic_field': False, 'functions': [{'name': 'bm25_function_bc8fe320', 'description': '', 'type': <FunctionType.BM25: 1>, 'input_field_names': ['text'], 'output_field_names': ['sparse'], 'params': {}}]}" | ||
] | ||
}, | ||
"execution_count": 10, | ||
"execution_count": 9, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
|
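The schema output above shows a standard tokenizer with lowercase, length, and stop-word filters. Below is a hedged sketch of how such an analyzer could be configured; the parameter names are inferred from that schema rather than from code visible in this diff, and the 1536-dimension embedding model is an assumption.

```python
# Hedged analyzer-customization sketch, reusing docs and URI from earlier cells.
from langchain_milvus import BM25BuiltInFunction, Milvus
from langchain_openai import OpenAIEmbeddings

analyzer_params = {
    "tokenizer": "standard",
    "filter": [
        "lowercase",                                    # lowercase all tokens
        {"type": "length", "max": 40},                  # drop overly long tokens
        {"type": "stop", "stop_words": ["of", "to"]},   # drop common stop words
    ],
}

vectorstore_custom = Milvus.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),  # 1536-dim, as in the schema
    builtin_function=BM25BuiltInFunction(
        output_field_names="sparse",
        enable_match=True,
        analyzer_params=analyzer_params,
    ),
    vector_field=["dense", "sparse"],
    connection_args={"uri": URI},
    drop_old=True,
)
```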
@@ -435,23 +435,16 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 11, | ||
"execution_count": 16, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
"name": "stderr", | ||
"output_type": "stream", | ||
"text": [ | ||
"USER_AGENT environment variable not set, consider setting it to identify your requests.\n" | ||
] | ||
}, | ||
{ | ||
"data": { | ||
"text/plain": [ | ||
"Document(metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.\\nAnother quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.\\nSelf-Reflection#')" | ||
] | ||
}, | ||
"execution_count": 11, | ||
"execution_count": 16, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
|
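The cell output above shows a chunk of Lilian Weng's agent post. A hedged sketch of how that document is typically loaded and split for the RAG section follows; only the URL comes from the output, and the loader and splitter settings are assumptions.

```python
# Hedged document-loading sketch for the RAG example (assumed splitter settings).
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(class_=("post-content", "post-title", "post-header"))
    ),
)
documents = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
splits = splitter.split_documents(documents)
splits[1]  # a chunk like the "Component One: Planning" excerpt shown above
```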
@@ -495,7 +488,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 12, | ||
"execution_count": 11, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
|
@@ -522,7 +515,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 13, | ||
"execution_count": 12, | ||
"metadata": { | ||
"collapsed": false, | ||
"jupyter": { | ||
|
@@ -586,7 +579,7 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 14, | ||
"execution_count": 13, | ||
"metadata": { | ||
"collapsed": false, | ||
"jupyter": { | ||
|
@@ -618,16 +611,16 @@ | |
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": 16, | ||
"execution_count": 15, | ||
"metadata": {}, | ||
"outputs": [ | ||
{ | ||
"data": { | ||
"text/plain": [ | ||
"'PAL (Program-aided Language models) and PoT (Program of Thoughts prompting) are approaches that involve using language models to generate programming language statements to solve natural language reasoning problems. This method offloads the solution step to a runtime, such as a Python interpreter, effectively decoupling complex computation and reasoning. PAL and PoT rely on language models with strong coding skills to perform these tasks.'" | ||
"'PAL (Program-aided Language models) and PoT (Program of Thoughts prompting) are approaches that involve using language models to generate programming language statements to solve natural language reasoning problems. This method offloads the solution step to a runtime, such as a Python interpreter, allowing for complex computation and reasoning to be handled externally. PAL and PoT rely on language models with strong coding skills to effectively perform these tasks.'" | ||
] | ||
}, | ||
"execution_count": 16, | ||
"execution_count": 15, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
|
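The final output above is the answer produced by the notebook's RAG chain over the hybrid Milvus store. The cells that build that chain are not visible in this diff, so the sketch below is only a plausible reconstruction; the prompt wording, model name, and question are assumptions.

```python
# Hedged RAG-chain sketch over the hybrid Milvus store built earlier (assumed prompt and model).
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

retriever = vectorstore.as_retriever()
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

prompt = PromptTemplate.from_template(
    "Use the following context to answer the question.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer:"
)

def format_docs(docs):
    # Concatenate retrieved documents into a single context string.
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What are PAL and PoT?")
```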