Can I use Ollama locally without internet? #2152
Comments
Hi there, I tried to edit the code to disable the local model check. I am really interested in the "graph_store" feature with neo4j, but with my config I found that the data could be inserted into qdrant, yet not into neo4j. Do "vector_store" and "graph_store" conflict with each other? I also tried to use just "graph_store" with Ollama locally, as below: config = { ... but it failed to get the embeddings from OpenAI because there is no internet access. Finally, I tried using only "vector_store", as below, and it ran successfully: config = { ... BR
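For reference, this is the kind of fully local config I had in mind when combining the two (only a sketch, not the truncated configs above; the hosts, ports, and collection name are placeholders, and I am assuming the qdrant provider accepts host/port/embedding_model_dims keys as in the mem0-with-ollama docs example):

    config = {
        "vector_store": {
            "provider": "qdrant",
            "config": {
                "collection_name": "test",           # placeholder name
                "host": "localhost",
                "port": 6333,
                "embedding_model_dims": 768,         # nomic-embed-text produces 768-dim vectors
            },
        },
        "graph_store": {
            "provider": "neo4j",
            "config": {
                "url": "neo4j://10.100.xx.xx:7687",
                "username": "neo4j",
                "password": "neo4j",
            },
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "llama3.2:latest",
                "ollama_base_url": "http://10.100.xx.xx:11434",
            },
        },
        "embedder": {
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text:latest",
                "ollama_base_url": "http://10.100.xx.xx:11434",
            },
        },
        "version": "v1.1",
    }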
🚀 The feature
Hi there,
May I ask how I can use my Ollama model without internet access?
Steps:
1. I installed all the dependencies on my local Ubuntu server with internet access (Python packages, the neo4j Docker image, and my Python code).
2. I then copied the code and the Docker image to a new Ubuntu server without internet access and tried to run my test code:
from mem0 import Memory

# Every provider is local: neo4j for graph memory, Ollama for the LLM and the embeddings.
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j://10.100.xx.xx:7687",
            "username": "neo4j",
            "password": "neo4j"
        },
        "llm": {
            "provider": "ollama",
            "config": {
                "model": "llama3.2:latest",
                "temperature": 0.2,
                "max_tokens": 8000,
                "ollama_base_url": "http://10.100.xx.xx:11434",
            },
        }
    },
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.2:latest",
            "temperature": 0.2,
            "max_tokens": 8000,
            "ollama_base_url": "http://10.100.xx.xx:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "ollama_base_url": "http://10.100.xx.xx:11434",
        },
    },
    "version": "v1.1"
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")
memories = m.get_all(user_id="john")
print(memories)

# Memories for a second user, to exercise the graph relations.
m.delete_all(user_id="alice123")
m.add("I like going to hikes", user_id="alice123")
m.add("I love to play badminton", user_id="alice123")
m.add("I hate playing badminton", user_id="alice123")
m.add("My friend name is john and john has a dog named tommy", user_id="alice123")
m.add("My name is Alice", user_id="alice123")
m.add("John loves to hike and Harry loves to hike as well", user_id="alice123")
m.add("My friend name is kimi and kimi is a F1 fans.", user_id="alice123")
m.add("So, kimi and john are friends.", user_id="alice123")
friends_result = m.search("Who are my friends?", user_id="alice123")
print(friends_result)
The above code follows these two links:
https://docs.mem0.ai/open-source/graph_memory/overview (the "Advanced (Custom LLM)" tab) and https://docs.mem0.ai/examples/mem0-with-ollama
The llama3.2 and nomic-embed-text models have already been imported into Ollama:
(mem0) byxf@gpu:~/workspaces/images/mem0$ ollama list
NAME                       ID              SIZE      MODIFIED
llama3.2:latest            a80c4f17acd5    2.0 GB    2 months ago
nomic-embed-text:latest    0a109f422b47    274 MB    2 months ago
Traceback (most recent call last):
  File "/home/byxf/workspaces/images/mem0/mem0_with_neo4j.py", line 41, in <module>
    m = Memory.from_config(config)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/mem0/memory/main.py", line 63, in from_config
    return cls(config)
           ^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/mem0/memory/main.py", line 37, in __init__
    self.embedding_model = EmbedderFactory.create(self.config.embedder.provider, self.config.embedder.config)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/mem0/utils/factory.py", line 56, in create
    return embedder_instance(base_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/mem0/embeddings/ollama.py", line 32, in __init__
    self._ensure_model_exists()
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/mem0/embeddings/ollama.py", line 40, in _ensure_model_exists
    self.client.pull(self.config.model)
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/ollama/_client.py", line 421, in pull
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/ollama/_client.py", line 177, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/byxf/miniforge3/envs/mem0/lib/python3.12/site-packages/ollama/_client.py", line 122, in _request_raw
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: pull model manifest: Get "https://registry.ollama.ai/v2/library/bge-m3/manifests/latest": dial tcp: lookup registry.ollama.ai on 127.0.0.53:53: server misbehaving
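From the traceback, the failure comes from the Ollama embedder calling self.client.pull(...) inside _ensure_model_exists(), which always has to reach registry.ollama.ai. As a temporary user-side workaround I considered patching that method to a no-op before building Memory (an untested sketch; the class name OllamaEmbedding is my assumption based on the traceback path, and the graph/LLM side may need the same treatment):

    # Untested workaround sketch: skip the network-backed model check entirely
    # so no pull is attempted on the offline server.
    # Assumption: the embedder class in mem0/embeddings/ollama.py is OllamaEmbedding.
    from mem0 import Memory
    from mem0.embeddings.ollama import OllamaEmbedding

    OllamaEmbedding._ensure_model_exists = lambda self: None  # no-op, no pull

    m = Memory.from_config(config)  # config as defined above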
I would appreciate it if you could guide me on how to disable this model check, or could you please add such a feature?
BR
Kimi
Motivation, pitch
Some companies do not allow open-source models or software to be downloaded or installed directly on their Linux servers.
These open-source models and packages have to be downloaded and installed into a virtual environment on a test server with internet access (I use miniforge to create the Python virtual environment and install the dependencies), and that environment is then copied directly to the production Linux server, which has no internet access and cannot use apt or pip to install anything.
So, could you please just check whether the models already exist on the local disk and print a warning message if they do not, instead of pulling them? A rough idea of what I mean is sketched below.
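This is only an illustration of the requested behaviour, not the actual mem0 code; the return shape of client.list() differs between ollama-python versions (a plain dict in older releases, a typed response object in newer ones), so the sketch tries to handle both.

    import logging
    from ollama import Client

    logger = logging.getLogger(__name__)

    def model_exists_locally(client: Client, model: str) -> bool:
        """Check only the models already on disk; never call client.pull()."""
        response = client.list()
        # Older ollama-python returns a dict, newer versions a typed object.
        models = response["models"] if isinstance(response, dict) else response.models
        names = set()
        for m in models:
            if isinstance(m, dict):
                name = m.get("name") or m.get("model")
            else:
                name = getattr(m, "model", None)
            if name:
                names.add(name)
        if model in names:
            return True
        logger.warning("Ollama model %r not found locally; skipping pull (offline mode).", model)
        return False

    # Example usage (hypothetical host):
    # client = Client(host="http://10.100.xx.xx:11434")
    # model_exists_locally(client, "nomic-embed-text:latest")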
BR
Kimi