(Windows, Python 3.9)
```
(labsession) D:\PYTHON\git_llmtuner\LLMTuner>pip3 install git+https://github.com/promptslab/LLMTuner
Collecting git+https://github.com/promptslab/LLMTuner
  Cloning https://github.com/promptslab/LLMTuner to c:\users\luxury\appdata\local\temp\pip-req-build-2sdv2jxy
  Running command git clone --filter=blob:none --quiet https://github.com/promptslab/LLMTuner 'C:\Users\luxury\AppData\Local\Temp\pip-req-build-2sdv2jxy'
  Resolved https://github.com/promptslab/LLMTuner to commit 470be41ad646d205973bd40b80cabe54d4934559
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\luxury\AppData\Local\Temp\pip-req-build-2sdv2jxy\setup.py", line 9, in <module>
          long_description=open('README.md').read(),
      UnicodeDecodeError: 'gbk' codec can't decode byte 0xa4 in position 2218: illegal multibyte sequence
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

[notice] A new release of pip is available: 23.3.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
```
The only workaround I found is to clone the repo locally and edit `setup.py`, changing the `long_description` line to:

```python
long_description=open('README.md', encoding='utf-8').read(),
```

Then install from CMD with:

```
python setup.py install
```
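For the package itself, a more locale-proof pattern for this read would be something like the sketch below (my suggestion, not the library's current code; the missing-file fallback is an extra assumption):

```python
# Sketch: locale-independent long_description read for setup.py.
from pathlib import Path

here = Path(__file__).resolve().parent if "__file__" in globals() else Path(".")
readme = here / "README.md"

# Explicit encoding avoids the GBK/locale trap on Windows; fall back to an
# empty string if README.md is absent (e.g. a stripped-down sdist).
long_description = readme.read_text(encoding="utf-8") if readme.exists() else ""
```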
After installing this way, I hit a further problem when using the package:
```
trainable params: 3,538,944 || all params: 245,273,856 || trainable%: 1.442854145857274
None
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[9], line 3
      1 tuner = Tuner(model, dataset)
----> 3 trained_model = tuner.fit()

File d:\PYTHON\ENV\labsession\lib\site-packages\llmtuner-0.1.0-py3.10.egg\llmtuner\tuner\whisper_tuner.py:30, in Tuner.fit(self)
     28     trainer.setup_trainer(self.training_args_dict)
     29 else:
---> 30     trainer.setup_trainer()
     32 # Start the training process
     33 trainer.start_training()

File d:\PYTHON\ENV\labsession\lib\site-packages\llmtuner-0.1.0-py3.10.egg\llmtuner\trainer\whisper_trainer.py:76, in WhisperModelTrainer.setup_trainer(self, training_args_dict)
     73 def setup_trainer(self, training_args_dict=None):
     74     # Define default arguments for training
---> 76     training_args = Seq2SeqTrainingArguments(**training_args_dict)
     77     # Customize the training arguments based on the type of model
     78     if self.model.is_peft_applied:
     79         # Settings specific to PEFT model

TypeError: transformers.training_args_seq2seq.Seq2SeqTrainingArguments() argument after ** must be a mapping, not NoneType
```
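The traceback shows the bug clearly: `setup_trainer` defaults `training_args_dict` to `None`, then unpacks it with `**`, and `**None` is a `TypeError` in Python regardless of the called function. A minimal reproduction without transformers, plus the defensive pattern the trainer could use (my suggestion, not the library's actual fix):

```python
# Minimal reproduction of the crash, independent of transformers.
def setup(**kwargs):
    return kwargs

training_args_dict = None

try:
    setup(**training_args_dict)     # argument after ** must be a mapping
except TypeError as exc:
    print(exc)

# Defensive pattern (assumption on my part): substitute an empty mapping
# when None is passed, so ** always receives a dict.
args = setup(**(training_args_dict or {}))
print(args)                         # {}
```

Note that even with this guard, `Seq2SeqTrainingArguments()` with no kwargs may still fail on older transformers releases that require `output_dir`, so in practice the caller probably needs to supply a real training-arguments dict; how that dict reaches `Tuner` depends on its constructor, which I haven't verified.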
The code is taken from the tutorial Colab, but this error stops me from proceeding any further.