diff --git a/langchain-rag-23ai/lab1/images/ignorewarning1.png b/langchain-rag-23ai/lab1/images/ignorewarning1.png new file mode 100644 index 0000000..bec4ce9 Binary files /dev/null and b/langchain-rag-23ai/lab1/images/ignorewarning1.png differ diff --git a/langchain-rag-23ai/lab1/rag1.md b/langchain-rag-23ai/lab1/rag1.md index a112c74..db03029 100644 --- a/langchain-rag-23ai/lab1/rag1.md +++ b/langchain-rag-23ai/lab1/rag1.md @@ -66,6 +66,8 @@ Let's import the libraries. ![Run code in Jupyter](images/selectcodecellrun.png) + You may get a warning message as shown above. You can ignore it and click **Run** to proceed. + ```python # Import libraries and modules @@ -251,10 +253,10 @@ Note: Embedding models are used to vectorize data. To learn more about embedding **Choose an LLM to generate your response** -You have 3 choices in this lab. Choose only one, but you can go back and choose a different one to see its response. Just go back to the LLM cell and run it. Note: For OpenAI, you need to get an API key from openai.com. +You have 3 choices in this lab. Choose only one, but you can go back and choose a different one to see its response. Just go back to the LLM cell and run it again. Note: For OpenAI, you CANNOT run this in the LiveLabs environment. The sample code is provided for informational purposes so that you can run this example in your own environment. * Choice 1 - OCI GenAI LLM with meta.llama-2.70b-chat -* Choice 2 - OpenAI ChatGPT 3.5 (Requires OpenAI API key) +* Choice 2 - OpenAI ChatGPT 3.5 (CANNOT be run in this LiveLabs environment) * Choice 3 - OCI GenAI LLM with Cohere Command Note: To learn more about using other LLMs and accessing LLMs with secure API keys, see the LiveLabs on LLMs for Oracle AI Vector Search. @@ -384,5 +386,5 @@ You may now [proceed to the next lab](#next). 
## Acknowledgements -* **Authors** - Vijay Balebail, Milton Wan, Doug Hood, Rajeev Rumale +* **Authors** - Vijay Balebail, Milton Wan, Douglas Hood, Rajeev Rumale * **Last Updated By/Date** - Milton Wan, April 2024 diff --git a/langchain-rag-23ai/prepare-env/images/copyocid.png b/langchain-rag-23ai/prepare-env/images/copyocid.png new file mode 100644 index 0000000..ec1621f Binary files /dev/null and b/langchain-rag-23ai/prepare-env/images/copyocid.png differ diff --git a/langchain-rag-23ai/prepare-env/images/pasteocid.png b/langchain-rag-23ai/prepare-env/images/pasteocid.png new file mode 100644 index 0000000..2d645c7 Binary files /dev/null and b/langchain-rag-23ai/prepare-env/images/pasteocid.png differ diff --git a/langchain-rag-23ai/prepare-env/prep-env.md b/langchain-rag-23ai/prepare-env/prep-env.md index 3b0aef8..b62c4c1 100644 --- a/langchain-rag-23ai/prepare-env/prep-env.md +++ b/langchain-rag-23ai/prepare-env/prep-env.md @@ -17,17 +17,21 @@ In this lab, you will: ## Task 1: Retrieve the Compartment OCID -1. On LiveLabs before launching remote desktop, copy the Compartment OCID and save it, then Launch Remote Desktop. +1. On LiveLabs, before launching the remote desktop, you should have copied and saved the Compartment OCID. ![LiveLabs launch lab](images/lllaunchlab.png) -2. Once your noVNC session has started open the **.env** file from the terminal with vi editor in the /home/oracle/AIdemo directory. +2. Once your noVNC session has started, copy your Compartment OCID to the noVNC Clipboard. + ![Copy OCID to the noVNC Clipboard](images/copyocid.png) + +3. Open the **.env** file in the /home/oracle/AIdemo directory from the terminal with the vi editor. ![vi editor](images/vienvpwd.png) 3. In vi editor, position the cursor right after the variable COMPARTMENT_OCID= -4. Hit **ESC** key in vi and type the **i** key to go to insert mode. Insert the OCID you copied to the variable COMPARTMENT_OCID. If there is already a value there, remove it and add yours as shown. +4. 
Press the **ESC** key in vi, then type the **i** key to enter insert mode. If there is already a value after COMPARTMENT_OCID=, delete it, then insert the OCID you copied to the Clipboard right after COMPARTMENT_OCID=. 5. Hit **ESC** key again, and type **:wq** to save the .env file.