
Error while importing unsloth in databricks #1294

Open
BurakaKrishna opened this issue Nov 15, 2024 · 2 comments

@BurakaKrishna

I tried installing unsloth on Databricks:

%%capture
!pip install unsloth "xformers==0.0.28.post2"
# Also get the latest nightly Unsloth!
!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

While importing:
from unsloth import FastLanguageModel

I got this error:

🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Unexpected internal error when monkey patching `PreTrainedModel.from_pretrained`: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
operator torchvision::nms does not exist
Unexpected internal error when monkey patching `Trainer.train`: Failed to import transformers.trainer because of the following error (look up to see its traceback):
Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_utils because of the following error (look up to see its traceback):
partially initialized module 'torchvision' has no attribute 'extension' (most likely due to a circular import)
Unsloth: If you want to finetune Gemma 2, upgrade flash-attn to version 2.6.3 or higher!
Newer versions support faster and less memory usage kernels for Gemma 2's attention softcapping!
To update flash-attn, do the below:

pip install --no-deps --upgrade "flash-attn>=2.6.3"
AttributeError: partially initialized module 'torchvision' has no attribute 'extension' (most likely due to a circular import)
File <command-1593331561314759>, line 1
----> 1 from unsloth import FastLanguageModel
File /databricks/python/lib/python3.11/site-packages/torchvision/_meta_registrations.py:18, in register_meta.<locals>.wrapper(fn)
     17 def wrapper(fn):
---> 18     if torchvision.extension._has_ops():
     19         get_meta_lib().impl(getattr(getattr(torch.ops.torchvision, op_name), overload_name), fn)
     20     return fn

Has anyone who has used unsloth in Databricks faced this issue?
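
A note on the first traceback: "operator torchvision::nms does not exist" usually means the installed torchvision wheel was built for a different torch release than the torch that is actually importable on the cluster, which is easy to end up with on a Databricks runtime that pins its own torch. The sketch below is a quick way to check this before importing unsloth; it is not Unsloth code, and the version pairing in the comments is an assumption based on the public torch/torchvision compatibility matrix.

# Hedged sketch: verify the torch / torchvision pairing before importing unsloth.
import torch
import torchvision  # may already fail here with the circular-import error from the traceback

print("torch:      ", torch.__version__)        # assumption: torch 2.2.x pairs with torchvision 0.17.x
print("torchvision:", torchvision.__version__)

# Exercise a compiled op directly; this raises the same "torchvision::nms" error
# if torchvision's C++ extension does not match the installed torch.
from torchvision.ops import nms
boxes = torch.tensor([[0.0, 0.0, 1.0, 1.0]])
scores = torch.tensor([1.0])
nms(boxes, scores, iou_threshold=0.5)
print("torchvision compiled ops work")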

@BurakaKrishna (Author)

I tried:

pip install -U torch==2.3.1
pip install torchvision==0.17.2

I was able to import torchvision, but got a new issue:

Unsloth: Will patch your computer to enable 2x faster free finetuning.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.5.0+cu121 with CUDA 1201 (you have 2.2.2+cu121)
    Python  3.11.10 (you have 3.11.0rc1)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
Unsloth: Your Flash Attention 2 installation seems to be broken?
A possible explanation is you have a new CUDA version which isn't
yet compatible with FA2? Please file a ticket to Unsloth or FA2.
We shall now use Xformers instead, which does not have any performance hits!
We found this negligible impact by benchmarking on 1x A100.
ImportError: Unsloth: You have torch = 2.2.2+cu121 but xformers = 0.0.28.post2.
Please install xformers < 0.0.26 for torch = 2.2.2+cu121.
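
A hedged reading of this second error: torchvision 0.17.2 is built against torch 2.2.x, so installing it after torch==2.3.1 downgrades torch to 2.2.2 (which matches the "you have 2.2.2+cu121" in the warning), while xformers 0.0.28.post2 is built against torch 2.5.0. A quick way to confirm what actually ended up installed before importing unsloth:

# Hedged sketch: print the torch / torchvision / xformers builds that pip left behind.
# The pairings in the comments are assumptions taken from the error messages above.
import torch
import torchvision
import xformers

print("torch:      ", torch.__version__)        # 2.2.2+cu121 after pinning torchvision==0.17.2
print("torchvision:", torchvision.__version__)  # 0.17.2 pairs with torch 2.2.x
print("xformers:   ", xformers.__version__)     # 0.0.28.post2 expects torch 2.5.x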

@danielhanchen (Contributor)

Try running wget -qO- https://raw.githubusercontent.com/unslothai/unsloth/main/unsloth/_auto_install.py | python - and then running the command it prints directly.
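
On Databricks, where this pipeline is usually run from a notebook cell rather than a terminal, one way to apply the suggestion is to capture the script's output and then run the install command it prints in a separate %pip cell. This is a sketch under the assumption that _auto_install.py only prints a pip install line matched to the detected torch/CUDA build:

# Hedged sketch: fetch Unsloth's _auto_install.py, run it, and show the command it prints
# so it can be inspected and then executed in its own cell (followed by a Python restart).
import subprocess
import urllib.request

URL = "https://raw.githubusercontent.com/unslothai/unsloth/main/unsloth/_auto_install.py"
script = urllib.request.urlopen(URL).read()

result = subprocess.run(["python", "-"], input=script, capture_output=True)
print(result.stdout.decode())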
