From 0d64653f1ba023c8d177ba8e5b767f5c8d108ebb Mon Sep 17 00:00:00 2001
From: Yehonal
Date: Wed, 21 Aug 2024 17:15:53 +0200
Subject: [PATCH] chore: Update AI-Memory setup instructions and file generation script

---
 README.md                        | 129 ++++++++++++++++++++++-----
 generate-files.sh                |   6 +-
 gpt-schema.dist.yml              |   8 +-
 gpt-values-override-conf.dist.sh |   7 +-
 4 files changed, 119 insertions(+), 31 deletions(-)

diff --git a/README.md b/README.md
index fb3fc89..b8435eb 100644
--- a/README.md
+++ b/README.md
@@ -1,36 +1,123 @@
-
 # AI-Memory
-**Elasticsearch API and GPT Model**
---------------------------------
-
-**Overview**
-------------
-
 This project utilizes an Elasticsearch API and a GPT model to store and manage a chronological repository of information about specific topics, activities, and interactions. The GPT model functions as an extended memory system, using Retrieval-Augmented Generation (RAG), to provide suggestions, manage tasks, and offer reminders.
 
-**Key Features**
-----------------
+## Features
 
 * **Chronological Tracking**: The model tracks the addition and modification of information, allowing it to understand the sequence of events or data entries.
 * **Information Retrieval**: The model can efficiently retrieve information from Elasticsearch using queries that might involve specific dates, topics, or statuses.
 * **Decision Making**: Based on retrieved data, the model generates reasoned responses that consider historical data.
 * **Assistant Capabilities**: The model provides suggestions, manages tasks, and offers reminders.
 
-**Usage**
---------
+## Getting Started
 
-* **Elasticsearch API**: The API is used to store and manage data.
-* **GPT Model**: The model is used to generate responses and provide suggestions, and can be interacted with using natural language inputs.
+This guide will help you set up and use the AI-Memory project, which utilizes an Elasticsearch API and a GPT model to store and manage a chronological repository of information.
+
+### 1. Install Elasticsearch
+
+To install Elasticsearch, follow the official [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html). You can choose between a self-hosted solution (free) and a cloud-managed one.
+
+### 2. Create the Index
+
+You need to create an index whose name combines the prefix `index-ai-memory-` with a suffix of your choice, set in the configuration file via the `AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX` variable. This can be done via the Elasticsearch REST API or Kibana.
+
+Example using curl:
+```sh
+curl -X PUT "localhost:9200/index-ai-memory-your_suffix?pretty"
+```
+
+### 3. Create an API Key
+
+You need to create an API key for your Elasticsearch index. This can be done via the Elasticsearch REST API or Kibana.
+
+Example using curl:
+```sh
+curl -X POST "localhost:9200/_security/api_key?pretty" -H 'Content-Type: application/json' -d'
+{
+  "name": "ai-memory-key",
+  "role_descriptors": {
+    "ai_memory_role": {
+      "cluster": ["all"],
+      "index": [
+        {
+          "names": ["index-ai-memory-*"],
+          "privileges": ["all"]
+        }
+      ]
+    }
+  }
+}
+'
+```
+
+### 4. Configure Environment Variables
+
+Copy the `gpt-values-override-conf.dist.sh` file, replacing `dist` with an ID of your choice, and set the needed values in the new file.
+
+Example:
+```sh
+cp gpt-values-override-conf.dist.sh gpt-values-override-conf.myid.sh
+```
+
+Edit `gpt-values-override-conf.myid.sh` to set your values:
+```sh
+export AI_MEMORY_ELASTIC_SEARCH_URL="https://your-elastic-search-url"
+export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="your_suffix"
+export AI_MEMORY_PERSONAL_NAME="Your Name"
+export AI_MEMORY_EXTRA_PERSONAL_INFO="Your additional info"
+```
+
+### 5. Generate Files
 
-**Guidelines**
--------------
+Run the `generate-files.sh` script. This will generate the necessary YAML and Markdown files for the GPT Builder. The script also checks whether the environment files are in sync with the dist file.
-* **Personal Info**: When searching or creating documents it refers to yourself.
-* **Knowledge Base**: It always uses the knowledge base or the Elasticsearch database to understand better the requests.
-* **Custom Mappings (experimental)**: It uses the `x-elasticsearch-type` property to configure custom mappings for the index, allowing for the specification of Elasticsearch data types for each field.
+```sh
+bash generate-files.sh
+```
+
+### 6. Use GPT Builder
+
+Once the files have been generated under the `/out` folder, go to ChatGPT (a Plus subscription is needed) and use the GPT Builder. Fill the "Instructions" box with the content of the generated `gpt-instructions.md` file and create a new action with the content of the generated `gpt-schema.yml` file.
+
+### 7. Set the API Key for the Action
+
+Set the API key for the action using the one generated from Elasticsearch. It's important to select the Authentication type `API Key`. The API key box should contain your key prefixed with `ApiKey ` (the `ApiKey ` prefix is fundamental), and the Auth Type should be set to `Custom`.
+
+### Using the GPT
+
+Once everything is ready, you can use the created GPT by asking it to store or read from its memory.
+
+#### Examples of Requests
+
+- **Store Information**:
+  ```
+  Store the following information: "Meeting with John on Monday at 10 AM."
+  ```
+
+- **Retrieve Information**:
+  ```
+  What meetings do I have scheduled for Monday?
+  ```
+
+- **Personal Information**:
+  ```
+  What is my name?
+  ```
+
+- **Extra Personal Information**:
+  ```
+  What languages do I speak?
+  ```
+
+## Components
+
+* **Elasticsearch API**: The API is used to store and manage data.
+* **GPT Model**: The model is used to generate responses and provide suggestions, and can be interacted with using natural language inputs.
 
-**License**
-----------
+### License
 
-This project is licensed under MIT license.
+This project is licensed under the MIT license.
diff --git a/generate-files.sh b/generate-files.sh
index c5c864b..8263831 100755
--- a/generate-files.sh
+++ b/generate-files.sh
@@ -76,10 +76,10 @@ process_conf_file() {
     fi
 
     # Replace placeholders in the files using envsubst
-    envsubst "out/gpt-schema.$my_id.yml"
-    envsubst "out/gpt-instructions.$my_id.md"
+    envsubst "out/$my_id.gpt-schema.yml"
+    envsubst "out/$my_id.gpt-instructions.md"
 
-    echo "Files gpt-schema.$my_id.yml and gpt-instructions.$my_id.md have been generated."
+    echo "Files $my_id.gpt-schema.yml and $my_id.gpt-instructions.md have been generated."
 }
 
 # Loop over all configuration files, skipping the .dist.sh file
diff --git a/gpt-schema.dist.yml b/gpt-schema.dist.yml
index 93304ea..ed6b83e 100644
--- a/gpt-schema.dist.yml
+++ b/gpt-schema.dist.yml
@@ -9,7 +9,7 @@ servers:
   - url: ${AI_MEMORY_ELASTIC_SEARCH_URL}
 
 paths:
-  /${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_doc/:
+  /index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_doc/:
     post:
       summary: Add a new document. The content field is always required.
      operationId: addDocument
@@ -40,7 +40,7 @@ paths:
           schema:
             $ref: "#/components/schemas/Error"
 
-  "/${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_update/{id}":
+  "/index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_update/{id}":
     post:
       summary: Update a document by ID
       operationId: updateDocument
@@ -70,7 +70,7 @@ paths:
           schema:
             $ref: "#/components/schemas/Error"
 
-  /${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_update_by_query:
+  /index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_update_by_query:
    post:
      summary: Bulk update documents by query
      operationId: bulkUpdateDocuments
@@ -162,7 +162,7 @@ paths:
           schema:
             $ref: "#/components/schemas/Error"
 
-  /${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_mapping:
+  /index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_mapping:
    put:
      summary: Configure the index with custom mappings
      operationId: configureIndex
diff --git a/gpt-values-override-conf.dist.sh b/gpt-values-override-conf.dist.sh
index 66bfaa1..422d4a9 100644
--- a/gpt-values-override-conf.dist.sh
+++ b/gpt-values-override-conf.dist.sh
@@ -6,9 +6,10 @@
 export AI_MEMORY_ELASTIC_SEARCH_URL="https://your-elastic-search-url"
 
 # Elasticsearch index name
-# This is the name of the index where the documents will be stored
-# Example: index-ai-memory-default
-export AI_MEMORY_ELASTIC_SEARCH_INDEX="index-ai-memory-default"
+# This is the suffix of the index where the documents will be stored
+# The final index name will be: index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}
+# Example: default
+export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="default"
 
 # Personal name
 # This is the name that will be used in the model's responses
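The index renaming introduced by this patch can be sanity-checked in plain shell. The sketch below derives the final index name from the configured suffix, mirroring the `index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}` convention the patch adds to both the config comments and the schema paths (the suffix value `default` is just the dist-file example):

```sh
# Derive the final Elasticsearch index name from the configured suffix,
# as documented in gpt-values-override-conf.dist.sh after this patch.
export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="default"
INDEX="index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}"
echo "$INDEX"
# prints: index-ai-memory-default
```

With the dist default this yields `index-ai-memory-default`, the same name the README's step 2 tells you to create via `curl -X PUT`.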