unexpected keyword argument tokenizer [FIXED] #1285
Unsloth always tracks the newest version of trl, and trl recently renamed the `tokenizer` parameter to `processing_class`, as mentioned in trl PR #2348.
Unsloth will likely add backward compatibility for this, I think.
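One way such backward compatibility could work is to inspect the trainer's signature and pass the tokenizer under whichever keyword the installed trl version accepts. This is a hypothetical sketch, not Unsloth's actual code; `build_trainer`, `OldTrainer`, and `NewTrainer` are made-up names for illustration:

```python
import inspect

def build_trainer(trainer_cls, tokenizer, **kwargs):
    """Pass the tokenizer under whichever keyword the trainer accepts:
    `processing_class` (newer trl) or `tokenizer` (older trl)."""
    params = inspect.signature(trainer_cls.__init__).parameters
    key = "processing_class" if "processing_class" in params else "tokenizer"
    kwargs[key] = tokenizer
    return trainer_cls(**kwargs)

# Dummy stand-ins for the old and new trl trainer signatures:
class OldTrainer:
    def __init__(self, tokenizer=None):
        self.tokenizer = tokenizer

class NewTrainer:
    def __init__(self, processing_class=None):
        self.processing_class = processing_class

old = build_trainer(OldTrainer, "tok")   # lands in .tokenizer
new = build_trainer(NewTrainer, "tok")   # lands in .processing_class
```

The same call then works regardless of which trl version is installed.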
Just fixed @avemio-digital @dame-cell! Please update Unsloth on your local machines.
This has been fixed in trl v0.12.1; make sure to update trl.
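The exact command was cut off in the original comment, but assuming the standard pip workflow, updating to the fixed trl release would look like:

```shell
pip install --upgrade "trl>=0.12.1"
```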
Please also note that
I used the ORPO Colab example with a Mistral model and I am getting this error. I am using the configs below:
```python
from trl import ORPOConfig, ORPOTrainer
from unsloth import is_bfloat16_supported

orpo_trainer = ORPOTrainer(
    model = model,
    train_dataset = dataset,
    tokenizer = tokenizer,  # this keyword raises the error on newer trl
    args = ORPOConfig(
        max_length = max_seq_length,
        max_prompt_length = max_seq_length // 2,
        max_completion_length = max_seq_length // 2,
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        beta = 0.1,
        logging_steps = 1,
        optim = "adamw_8bit",
        lr_scheduler_type = "linear",
        max_steps = 1500,  # Change to num_train_epochs = 1 for full training runs
        fp16 = not is_bfloat16_supported(),
        bf16 = is_bfloat16_supported(),
        output_dir = "outputs",
        report_to = "none",  # Use this for WandB etc
    ),
)
```
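For trl versions after the rename (v0.12+), the same trainer construction would pass the tokenizer as `processing_class` instead of `tokenizer`. This is a sketch of the one-keyword change, not verified against every trl release; `orpo_config` is a placeholder name for the same `ORPOConfig` shown above:

```python
orpo_trainer = ORPOTrainer(
    model = model,
    train_dataset = dataset,
    processing_class = tokenizer,  # renamed from `tokenizer` in newer trl
    args = orpo_config,            # placeholder: the same ORPOConfig as above
)
```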