
Unable to run the main.py though following the README file #73

Open · Riverise opened this issue Mar 3, 2025 · 3 comments

Riverise commented Mar 3, 2025

I'm trying to run main.py with an OpenAI model as described in the "Running Experiments" section of the README, but it fails.

Preliminary work:

  • conda create -n hipporag python=3.10
  • conda activate hipporag
  • pip install hipporag
  • pip install -r requirements.txt
  • export CUDA_VISIBLE_DEVICES=0,1,2,3
  • export HF_HOME=my-path
  • export OPENAI_API_KEY=my-key

Running command:

python main.py --dataset sample --llm_base_url xxxx --llm_name gpt-4o-mini --embedding_name xxxx

Traceback Info:

/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
  param_schemas = callee.param_schemas()
/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
  param_schemas = callee.param_schemas()
Traceback (most recent call last):
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1764, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/transformers/processing_utils.py", line 33, in <module>
    from .image_utils import ChannelDimension, is_valid_image, is_vision_available
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/transformers/image_utils.py", line 58, in <module>
    from torchvision.transforms import InterpolationMode
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torchvision/__init__.py", line 10, in <module>
    from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils  # usort:skip
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torchvision/models/__init__.py", line 2, in <module>
    from .convnext import *
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torchvision/models/convnext.py", line 8, in <module>
    from ..ops.misc import Conv2dNormActivation, Permute
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torchvision/ops/__init__.py", line 1, in <module>
    from ._register_onnx_ops import _register_custom_op
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torchvision/ops/_register_onnx_ops.py", line 5, in <module>
    from torch.onnx import symbolic_opset11 as opset11
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torch/onnx/__init__.py", line 49, in <module>
    from ._internal.exporter import (  # usort:skip. needs to be last to avoid circular import
ImportError: cannot import name 'DiagnosticOptions' from 'torch.onnx._internal.exporter' (/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torch/onnx/_internal/exporter/__init__.py)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/HippoRAG2/main.py", line 5, in <module>
    from src.hipporag.HippoRAG import HippoRAG
  File "/workspace/HippoRAG2/src/hipporag/__init__.py", line 1, in <module>
    from .HippoRAG import HippoRAG
  File "/workspace/HippoRAG2/src/hipporag/HippoRAG.py", line 23, in <module>
    from .information_extraction.openie_vllm_offline import VLLMOfflineOpenIE
  File "/workspace/HippoRAG2/src/hipporag/information_extraction/openie_vllm_offline.py", line 9, in <module>
    from ..llm.vllm_offline import VLLMOffline
  File "/workspace/HippoRAG2/src/hipporag/llm/vllm_offline.py", line 27, in <module>
    from vllm import SamplingParams, LLM
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, CompilationConfig, ConfigFormat,
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/config.py", line 22, in <module>
    from vllm.model_executor.layers.quantization import (QUANTIZATION_METHODS,
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/model_executor/__init__.py", line 1, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/model_executor/parameter.py", line 7, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/__init__.py", line 1, in <module>
    from .communication_op import *
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/communication_op.py", line 6, in <module>
    from .parallel_state import get_tp_group
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 38, in <module>
    import vllm.distributed.kv_transfer.kv_transfer_agent as kv_transfer
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/kv_transfer/kv_transfer_agent.py", line 15, in <module>
    from vllm.distributed.kv_transfer.kv_connector.factory import (
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/kv_transfer/kv_connector/factory.py", line 3, in <module>
    from .base import KVConnectorBase
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/distributed/kv_transfer/kv_connector/base.py", line 14, in <module>
    from vllm.sequence import IntermediateTensors
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/sequence.py", line 16, in <module>
    from vllm.inputs import SingletonInputs, SingletonInputsAdapter
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/inputs/__init__.py", line 7, in <module>
    from .registry import (DummyData, InputContext, InputProcessingContext,
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/vllm/inputs/registry.py", line 8, in <module>
    from transformers import BatchFeature, PretrainedConfig, ProcessorMixin
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1754, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1766, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.processing_utils because of the following error (look up to see its traceback):
cannot import name 'DiagnosticOptions' from 'torch.onnx._internal.exporter' (/root/anaconda3/envs/hipporag22/lib/python3.10/site-packages/torch/onnx/_internal/exporter/__init__.py)

I've searched relevant posts and the PyTorch documentation for the cause of the error and a resolution, but still couldn't fix it. I suspect a version mismatch between torch and its dependencies, and I noticed there have been several commits to requirements.txt. Could you share the package versions that run main.py successfully, or explain how to fix the error? Thank you very much!
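For reference, the versions of the packages implicated in the traceback can be dumped with a small script (a generic sketch; report_versions is a hypothetical helper, not part of HippoRAG):

```python
from importlib.metadata import PackageNotFoundError, version

def report_versions(packages):
    """Return the installed version of each named package, or None if absent.

    Hypothetical diagnostic helper, not part of HippoRAG.
    """
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # not installed in this environment
    return found

if __name__ == "__main__":
    # The packages implicated in the traceback above.
    print(report_versions(["torch", "torchvision", "transformers", "onnxscript", "vllm"]))
```

Comparing this output against the versions pinned in a known-good requirements.txt quickly shows which package drifted.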

Installed packages:

Package Version
accelerate 1.4.0
aiohappyeyeballs 2.4.6
aiohttp 3.11.13
aiohttp-cors 0.7.0
aiosignal 1.3.2
airportsdata 20250224
alembic 1.14.1
annotated-types 0.7.0
anyio 4.8.0
astor 0.8.1
async-timeout 5.0.1
attrs 25.1.0
backoff 2.2.1
blake3 1.0.4
Brotli 1.0.9
cachetools 5.5.2
certifi 2025.1.31
charset-normalizer 3.3.2
click 8.1.8
cloudpickle 3.1.1
colorful 0.5.6
colorlog 6.9.0
compressed-tensors 0.8.1
datasets 2.21.0
depyf 0.18.0
dill 0.3.8
diskcache 5.6.3
distlib 0.3.9
distro 1.9.0
docker-pycreds 0.4.0
dspy 2.5.29
einops 0.8.1
eval_type_backport 0.2.2
exceptiongroup 1.2.2
fastapi 0.115.11
filelock 3.13.1
frozenlist 1.5.0
fsspec 2024.6.1
gguf 0.10.0
gitdb 4.0.12
GitPython 3.1.44
gmpy2 2.2.1
google-api-core 2.24.1
google-auth 2.38.0
googleapis-common-protos 1.68.0
greenlet 3.1.1
gritlm 1.0.2
grpcio 1.70.0
h11 0.14.0
httpcore 1.0.7
httptools 0.6.4
httpx 0.28.1
huggingface-hub 0.29.1
idna 3.10
igraph 0.11.8
importlib_metadata 8.6.1
interegular 0.3.3
Jinja2 3.1.5
jiter 0.8.2
joblib 1.4.2
json_repair 0.39.1
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
lark 1.2.2
litellm 1.51.0
lm-format-enforcer 0.10.11
magicattr 0.1.6
Mako 1.3.9
markdown-it-py 3.0.0
MarkupSafe 3.0.2
mdurl 0.1.2
mistral_common 1.5.3
mkl_fft 1.3.11
mkl_random 1.2.8
mkl-service 2.4.0
ml_dtypes 0.5.1
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.19.0
mteb 1.36.5
multidict 6.1.0
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.4.2
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-ml-py 12.570.86
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
onnx 1.17.0
onnxscript 0.2.1
openai 1.65.2
opencensus 0.11.4
opencensus-context 0.1.3
opencv-python-headless 4.11.0.86
optuna 4.2.1
outlines 0.1.11
outlines_core 0.1.26
packaging 24.2
pandas 2.2.3
partial-json-parser 0.2.1.1.post5
pillow 11.1.0
pip 25.0
platformdirs 4.3.6
polars 1.24.0
prometheus_client 0.21.1
prometheus-fastapi-instrumentator 7.0.2
propcache 0.3.0
proto-plus 1.26.0
protobuf 5.29.3
psutil 7.0.0
py-cpuinfo 9.0.0
py-spy 0.4.0
pyarrow 19.0.1
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycountry 24.6.1
pydantic 2.10.6
pydantic_core 2.27.2
Pygments 2.19.1
PySocks 1.7.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
pytrec-eval-terrier 0.5.6
pytz 2025.1
PyYAML 6.0.2
pyzmq 26.2.1
ray 2.43.0
referencing 0.36.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
rpds-py 0.23.1
rsa 4.9
safetensors 0.5.3
scikit-learn 1.6.1
scipy 1.15.2
sentence-transformers 3.4.1
sentencepiece 0.2.0
sentry-sdk 2.22.0
setproctitle 1.3.5
setuptools 75.8.0
six 1.17.0
smart-open 7.1.0
smmap 5.0.2
sniffio 1.3.1
SQLAlchemy 2.0.38
starlette 0.46.0
sympy 1.13.1
tenacity 9.0.0
texttable 1.7.0
threadpoolctl 3.5.0
tiktoken 0.9.0
tokenizers 0.20.3
torch 2.4.0
torchaudio 2.4.0
torchvision 0.19.0
tqdm 4.66.6
transformers 4.45.2
triton 3.0.0
typing_extensions 4.12.2
tzdata 2025.1
ujson 5.10.0
urllib3 2.3.0
uvicorn 0.34.0
uvloop 0.21.0
virtualenv 20.29.2
vllm 0.6.6.post1
wandb 0.19.7
watchfiles 1.0.4
websockets 15.0
wheel 0.45.1
wrapt 1.17.2
xformers 0.0.28.post3
xgrammar 0.1.14
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0

bernaljg (Collaborator) commented Mar 3, 2025

Hi, can you try again without running pip install -r requirements.txt? Everything HippoRAG needs is installed when you run pip install hipporag.

Riverise (Author) commented Mar 6, 2025

Thank you for your reply. I tried again without pip install -r requirements.txt, running only pip install hipporag, but still ran into an error.

Traceback (most recent call last):
  File "/home/HippoRAG/main.py", line 5, in <module>
    from src.hipporag.HippoRAG import HippoRAG
  File "/home/HippoRAG/src/hipporag/__init__.py", line 1, in <module>
    from .HippoRAG import HippoRAG
  File "/home/HippoRAG/src/hipporag/HippoRAG.py", line 10, in <module>
    from transformers import HfArgumentParser
  File "/root/anaconda3/envs/hip/lib/python3.10/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/root/anaconda3/envs/hip/lib/python3.10/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
    from .utils.versions import require_version, require_version_core
  File "/root/anaconda3/envs/hip/lib/python3.10/site-packages/transformers/utils/__init__.py", line 27, in <module>
    from .chat_template_utils import DocstringParsingException, TypeHintParsingException, get_json_schema
  File "/root/anaconda3/envs/hip/lib/python3.10/site-packages/transformers/utils/chat_template_utils.py", line 39, in <module>
    from torch import Tensor
  File "/root/anaconda3/envs/hip/lib/python3.10/site-packages/torch/__init__.py", line 367, in <module>
    from torch._C import *  # noqa: F403
ImportError: /root/anaconda3/envs/hip/lib/python3.10/site-packages/torch/lib/../../nvidia/cusparse/lib/libcusparse.so.12: undefined symbol: __nvJitLinkComplete_12_4, version libnvJitLink.so.12

My CUDA driver version is 12.4, my CUDA compilation tools are release 12.2 (V12.2.140), and the torch version is 2.5.1. I'm wondering how you can run the pipeline while I cannot. Thank you.
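An undefined __nvJitLinkComplete_12_4 symbol usually means the libnvJitLink that the dynamic loader resolves is older than the CUDA 12.4 one that libcusparse expects; that diagnosis is an assumption, not confirmed in this thread. One way to probe it with ctypes (probe_nvjitlink is a hypothetical helper):

```python
import ctypes
import ctypes.util

def probe_nvjitlink(symbol="__nvJitLinkComplete_12_4"):
    """Check whether the loader-visible libnvJitLink exports the symbol
    that libcusparse 12.4 needs. Hypothetical diagnostic, not part of HippoRAG.
    """
    path = ctypes.util.find_library("nvJitLink")
    if path is None:
        return "not found"
    try:
        lib = ctypes.CDLL(path)
    except OSError:
        return "not found"
    # Looking up a missing symbol on a CDLL raises AttributeError, so hasattr works.
    return "ok" if hasattr(lib, symbol) else "missing symbol"

if __name__ == "__main__":
    print(probe_nvjitlink())
```

"missing symbol" here would point at mixed nvidia-*-cu12 wheel versions rather than at HippoRAG itself.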

bernaljg (Collaborator) commented Mar 6, 2025

OK, actually, my earlier advice was for running HippoRAG as a package, but you want to reproduce our experiments. Sorry for the confusion.

To run main.py, you should create a new conda environment and run only pip install -r requirements.txt instead of pip install hipporag. That should give you the proper environment setup.

Please let me know if it still doesn't work, since the error messages suggest it could also be a CUDA issue.
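After installing into the fresh environment, a small script can confirm that the active interpreter actually matches the pins in requirements.txt (a generic sketch that assumes simple name==version pins; check_pins is a hypothetical helper, not part of the repository):

```python
import os
from importlib.metadata import PackageNotFoundError, version

def check_pins(lines):
    """Compare simple 'name==version' pins against the active environment.

    Returns {pin: status}, where status is 'ok', 'missing', or the mismatching
    installed version. Generic sketch: comments, markers, extras, and unpinned
    lines are ignored.
    """
    results = {}
    for raw in lines:
        line = raw.split("#", 1)[0].strip()
        if "==" not in line:
            continue
        name, _, pinned = line.partition("==")
        try:
            installed = version(name.strip())
        except PackageNotFoundError:
            results[line] = "missing"
            continue
        results[line] = "ok" if installed == pinned.strip() else installed
    return results

if __name__ == "__main__" and os.path.exists("requirements.txt"):
    with open("requirements.txt") as f:
        for pin, status in check_pins(f).items():
            print(f"{pin}: {status}")
```

Any line reported as "missing" or as a mismatching version is a candidate cause of import errors like the ones above.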
