AttributeError: 'LlamaForCausalLM' object has no attribute 'update' #1297

Open
scigeek72 opened this issue Nov 15, 2024 · 1 comment
Labels: fixed - pending confirmation (Fixed, waiting for confirmation from poster)

Comments

scigeek72 commented Nov 15, 2024

Hi, I am trying to run one of your scripts for fine-tuning the Code Llama 34B model. Instead of running it on Colab, I have downloaded the file and am running it on an Azure VM (A100 with 80 GB GPU RAM). Here are the particulars:

Unsloth 2024.11.7: Fast Llama patching. Transformers = 4.46.2.
GPU: NVIDIA A100 80GB PCIe. Max memory: 79.151 GB. Platform = Linux. Pytorch: 2.5.1+cu124. CUDA = 8.0. CUDA Toolkit = 12.4.
Bfloat16 = TRUE. FA [Xformers = 0.0.28.post3. FA2 = False]

While running this line:

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "unsloth/codellama-34b-bnb-4bit",
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
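
For completeness, the variables referenced above are defined earlier in the script; the exact values are assumed here, but they are along these lines:

    max_seq_length = 2048   # assumed context length; the script may use a different value
    dtype = None            # None lets Unsloth auto-detect (bfloat16 on an A100)
    load_in_4bit = True     # matches the 4-bit bnb checkpoint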

I am getting the following error:

Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00,  1.05s/it]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/anaconda/envs/llm/lib/python3.12/site-packages/unsloth/models/loader.py", line 350, in from_pretrained
    model, tokenizer = dispatch_model.from_pretrained(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/anaconda/envs/llm/lib/python3.12/site-packages/unsloth/models/llama.py", line 1632, in from_pretrained
    model, tokenizer = model_patcher.post_patch(model, tokenizer)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/anaconda/envs/llm/lib/python3.12/site-packages/unsloth/models/llama.py", line 1815, in post_patch
    model, tokenizer = patch_model_and_tokenizer(model, tokenizer, downcast_rope = True)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/anaconda/envs/llm/lib/python3.12/site-packages/unsloth_zoo/patching_utils.py", line 161, in patch_model_and_tokenizer
    current_model.update({"unsloth_optimized" : True})
    ^^^^^^^^^^^^^^^^^^^^
  File "/anaconda/envs/llm/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'LlamaForCausalLM' object has no attribute 'update'
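
From the trace, the failing line applies dict-style .update() to the model object itself, but LlamaForCausalLM is a torch.nn.Module, whose __getattr__ only resolves registered parameters, buffers, and submodules and otherwise raises AttributeError; a plain dict, or a transformers PretrainedConfig, does have an update method. A minimal sketch of the mechanism, using an illustrative module that is not from unsloth_zoo:

    import torch.nn as nn

    class TinyModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(4, 4)

    m = TinyModel()
    _ = m.linear                           # resolves via nn.Module.__getattr__
    m.update({"unsloth_optimized": True})  # raises AttributeError: 'TinyModel'
                                           # object has no attribute 'update'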

I would appreciate it if you could help me fix this issue.
The script I am running as-is can be found here:
Thanks.

@scigeek72 changed the title from "Error while running FastLanguageModel while Fine Tuning codellama 34B model" to "Error while running FastLanguageModel when Fine Tuning codellama 34B model" on Nov 15, 2024
danielhanchen (Contributor) commented Nov 15, 2024

@scigeek72 Apologies for the issue; I just fixed it in the nightly branch! Please update Unsloth-Zoo via:

    pip uninstall unsloth-zoo -y && pip install git+https://github.com/unslothai/unsloth-zoo.git@nightly
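
After updating, re-running the failing load should no longer hit the AttributeError. A quick smoke test, with assumed values for the variables the original snippet references:

    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "unsloth/codellama-34b-bnb-4bit",
        max_seq_length = 2048,   # assumed value
        dtype = None,            # auto-detect; bfloat16 on an A100
        load_in_4bit = True,
    )
    print(type(model).__name__)  # should complete without the AttributeError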

@danielhanchen added the "fixed - pending confirmation" (Fixed, waiting for confirmation from poster) label on Nov 15, 2024
@danielhanchen changed the title from "Error while running FastLanguageModel when Fine Tuning codellama 34B model" to "AttributeError: 'LlamaForCausalLM' object has no attribute 'update'" on Nov 15, 2024