chore: Update AI-Memory setup instructions and file generation script
Yehonal committed Aug 21, 2024
1 parent a2f1e12 commit 0d64653
Showing 4 changed files with 119 additions and 31 deletions.
129 changes: 108 additions & 21 deletions README.md

# AI-Memory

**Elasticsearch API and GPT Model**

## Overview

This project uses an Elasticsearch API and a GPT model to store and manage a chronological repository of information about specific topics, activities, and interactions. The GPT model functions as an extended memory system via Retrieval-Augmented Generation (RAG) to provide suggestions, manage tasks, and offer reminders.

## Features

* **Chronological Tracking**: The model tracks the addition and modification of information, allowing it to understand the sequence of events or data entries.
* **Information Retrieval**: The model can efficiently retrieve information from Elasticsearch using queries that might involve specific dates, topics, or statuses.
* **Decision Making**: Based on retrieved data, the model generates reasoned responses that consider historical data.
* **Assistant Capabilities**: The model provides suggestions, manages tasks, and offers reminders.
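
In practice, chronological retrieval maps to a date-filtered Elasticsearch query. The field names below (`content`, `timestamp`) are illustrative only — the actual fields depend on how your index is mapped:

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "content": "meeting" } },
        { "range": { "timestamp": { "gte": "2024-08-01", "lte": "2024-08-31" } } }
      ]
    }
  },
  "sort": [{ "timestamp": "desc" }]
}
```

Sorting on `timestamp` is what gives the model its sense of event order when it reasons over the results.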

## Getting Started
This guide will help you set up and use the AI-Memory project, which utilizes an Elasticsearch API and a GPT model to store and manage a chronological repository of information.

### 1. Install Elasticsearch

To install Elasticsearch, follow the official [Elasticsearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html). You can choose between a self-hosted solution (for free) or a cloud-managed one.

### 2. Create the Index

You need to create an index with the prefix `index-ai-memory-` and a suffix that you can set in the configuration file under the `AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX` variable. This can be done via the Elasticsearch CLI or Kibana.

Example using Elasticsearch CLI:
```sh
curl -X PUT "localhost:9200/index-ai-memory-your_suffix?pretty"
```
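
If you want to control field types up front, you can create the index with an explicit mapping instead of relying on dynamic mapping. This is a sketch — the field names here are placeholders, not a mapping the project prescribes — sent as the JSON body of the `PUT` request above:

```json
{
  "mappings": {
    "properties": {
      "content":   { "type": "text" },
      "timestamp": { "type": "date" },
      "topic":     { "type": "keyword" }
    }
  }
}
```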

### 3. Create an API Key

You need to create an API key for your Elasticsearch index. This can be done via the Elasticsearch CLI or Kibana.

Example using Elasticsearch CLI:
```sh
curl -X POST "localhost:9200/_security/api_key?pretty" -H 'Content-Type: application/json' -d'
{
  "name": "ai-memory-key",
  "role_descriptors": {
    "ai_memory_role": {
      "cluster": ["all"],
      "index": [
        {
          "names": ["index-ai-memory-*"],
          "privileges": ["all"]
        }
      ]
    }
  }
}
'
```
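
On success, Elasticsearch returns the key material; the `encoded` value is typically what you paste into the GPT action later (as `ApiKey <encoded>`). The response has roughly this shape (all values here are placeholders):

```json
{
  "id": "example-id",
  "name": "ai-memory-key",
  "api_key": "example-secret",
  "encoded": "ZXhhbXBsZS1pZDpleGFtcGxlLXNlY3JldA=="
}
```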

### 4. Configure Environment Variables

Copy the `gpt-values-override-conf.dist.sh` file, replacing `dist` in the filename with an ID of your choice, then set the required values in the new file.

Example:
```sh
cp gpt-values-override-conf.dist.sh gpt-values-override-conf.myid.sh
```

Edit `gpt-values-override-conf.myid.sh` to set your values:
```sh
export AI_MEMORY_ELASTIC_SEARCH_URL="https://your-elastic-search-url"
export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="your_suffix"
export AI_MEMORY_PERSONAL_NAME="Your Name"
export AI_MEMORY_EXTRA_PERSONAL_INFO="Your additional info"
```
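
After editing the file, a quick sanity check helps catch unset values before generating anything. This is a hypothetical helper, not part of the repository; the inline `export` lines stand in for sourcing your override file:

```sh
# Stand-in for: . ./gpt-values-override-conf.myid.sh
export AI_MEMORY_ELASTIC_SEARCH_URL="https://your-elastic-search-url"
export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="your_suffix"
export AI_MEMORY_PERSONAL_NAME="Your Name"

# Fail fast if any required variable is empty or unset.
for var in AI_MEMORY_ELASTIC_SEARCH_URL \
           AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX \
           AI_MEMORY_PERSONAL_NAME; do
  eval "val=\${$var:-}"
  [ -n "$val" ] || { echo "missing: $var" >&2; exit 1; }
done
echo "configuration looks complete"
```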

### 5. Generate Files

Run the `generate-files.sh` script. It generates the YAML and Markdown files the GPT Builder needs, and it also warns when an environment file is out of sync with the dist file.

```sh
bash generate-files.sh
```

**Guidelines for the generated GPT:**

* **Personal Info**: when searching for or creating documents, personal references are treated as referring to you.
* **Knowledge Base**: the GPT always consults its knowledge base or the Elasticsearch database to better understand requests.
* **Custom Mappings (experimental)**: the `x-elasticsearch-type` property configures custom mappings for the index, allowing an Elasticsearch data type to be specified for each field.
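
For reference, the heavy lifting inside `generate-files.sh` is `envsubst`, which substitutes `${VAR}` placeholders in the `.dist` templates. A minimal illustration of the effect (using `sed` here so it runs without the gettext tools; the placeholder text is made up):

```sh
export AI_MEMORY_PERSONAL_NAME="Your Name"

# One placeholder, rendered the way envsubst would render it.
# Single quotes keep ${...} literal in the template string.
template='Personal name: ${AI_MEMORY_PERSONAL_NAME}'
rendered=$(printf '%s' "$template" | sed "s|\${AI_MEMORY_PERSONAL_NAME}|$AI_MEMORY_PERSONAL_NAME|")
echo "$rendered"   # Personal name: Your Name
```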

### 6. Use GPT Builder

Once the files have been generated under the `/out` folder, go to ChatGPT (a Plus subscription is needed) and use the GPT Builder. Fill the "instructions" box with the content of the generated instructions file (`<id>.gpt-instructions.md`) and create a new action with the content of the generated schema file (`<id>.gpt-schema.yml`).

### 7. Set API Key for the Action

Set the API key for the action using the one generated from Elasticsearch. Select the Authentication type `ApiKey`, set Auth to `Custom`, and enter `ApiKey <yourapikey>` in the API key box (the `ApiKey` prefix is required).

### Using the GPT

Once everything is ready, you can use the created GPT by asking it to store or read from its memory.

#### Examples of Requests

- **Store Information**:

  ```text
  Store the following information: "Meeting with John on Monday at 10 AM."
  ```

- **Retrieve Information**:

  ```text
  What meetings do I have scheduled for Monday?
  ```

- **Personal Information**:

  ```text
  What is my name?
  ```

- **Extra Personal Information**:

  ```text
  What languages do I speak?
  ```
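
Under the hood, a "store" request ends up as a document POSTed to the index via the `addDocument` action. The exact fields are whatever the GPT decides to send; a stored document might look roughly like this (field names illustrative):

```json
{
  "content": "Meeting with John on Monday at 10 AM.",
  "timestamp": "2024-08-19T10:00:00Z",
  "topic": "meetings"
}
```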

## Components

* **Elasticsearch API**: The API is used to store and manage data.
* **GPT Model**: The model is used to generate responses and provide suggestions, and can be interacted with using natural language inputs.

## License

This project is licensed under the MIT license.
6 changes: 3 additions & 3 deletions generate-files.sh
@@ -76,10 +76,10 @@ process_conf_file() {
fi

# Replace placeholders in the files using envsubst
-envsubst <gpt-schema.dist.yml >"out/gpt-schema.$my_id.yml"
-envsubst <gpt-instructions.dist.md >"out/gpt-instructions.$my_id.md"
+envsubst <gpt-schema.dist.yml >"out/$my_id.gpt-schema.yml"
+envsubst <gpt-instructions.dist.md >"out/$my_id.gpt-instructions.md"

-echo "Files gpt-schema.$my_id.yml and gpt-instructions.$my_id.md have been generated."
+echo "Files $my_id.gpt-schema.yml and $my_id.gpt-instructions.md have been generated."
}

# Loop over all configuration files, skipping the .dist.sh file
8 changes: 4 additions & 4 deletions gpt-schema.dist.yml
@@ -9,7 +9,7 @@ servers:
- url: ${AI_MEMORY_ELASTIC_SEARCH_URL}

paths:
-/${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_doc/:
+/index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_doc/:
post:
summary: Add a new document. The content field is always required.
operationId: addDocument
@@ -40,7 +40,7 @@ paths:
schema:
$ref: "#/components/schemas/Error"

-"/${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_update/{id}":
+"/index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_update/{id}":
post:
summary: Update a document by ID
operationId: updateDocument
@@ -70,7 +70,7 @@ paths:
schema:
$ref: "#/components/schemas/Error"

-/${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_update_by_query:
+/index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_update_by_query:
post:
summary: Bulk update documents by query
operationId: bulkUpdateDocuments
@@ -162,7 +162,7 @@ paths:
schema:
$ref: "#/components/schemas/Error"

-/${AI_MEMORY_ELASTIC_SEARCH_INDEX}/_mapping:
+/index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}/_mapping:
put:
summary: Configure the index with custom mappings
operationId: configureIndex
7 changes: 4 additions & 3 deletions gpt-values-override-conf.dist.sh
@@ -6,9 +6,10 @@
export AI_MEMORY_ELASTIC_SEARCH_URL="https://your-elastic-search-url"

# Elasticsearch index name
-# This is the name of the index where the documents will be stored
-# Example: index-ai-memory-default
-export AI_MEMORY_ELASTIC_SEARCH_INDEX="index-ai-memory-default"
+# This is the suffix of the index where the documents will be stored
+# The final index name will be: index-ai-memory-${AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX}
+# Example: default
+export AI_MEMORY_ELASTIC_SEARCH_INDEX_SUFFIX="default"

# Personal name
# This is the name that will be used in the model's responses
