
Error when trying to train a custom model #22

Open
silencesys opened this issue Jun 27, 2023 · 0 comments
@silencesys

Hi,

I noticed that you probably don't reply to issues very often, but I'll give it a try anyway.

I am getting this error:

2023-06-27 18:38:22,087 - INFO - {'morphlex_embeddings_file': 'data/extra/dmii.vectors', 'training_files': ('training_data/training.tsv',), 'test_file': '../tests.tsv', 'output_dir': './MODEL', 'run_name': None, 'adjust_lengths': 0, 'gpu': False, 'known_chars_file': './data/extra/characters_training.txt', 'known_tags_file': './data/extra/all_tags.txt', 'save_model': True, 'save_vocab': True, 'tagger': False, 'tagger_weight': 1.0, 'tagger_embedding': 'bert', 'tagger_ignore_e_x': True, 'lemmatizer': False, 'lemmatizer_weight': 1.0, 'lemmatizer_accept_char_rnn_last': False, 'lemmatizer_hidden_dim': 128, 'lemmatizer_num_layers': 1, 'lemmatizer_char_attention': True, 'lemmatizer_state_dict': None, 'tag_embedding_dim': 0, 'tag_embedding_dropout': 0.0, 'char_lstm_layers': 0, 'char_lstm_dim': 128, 'char_emb_dim': 64, 'morphlex_freeze': True, 'pretrained_word_embeddings_file': None, 'word_embedding_dim': 0, 'bert_encoder': None, 'main_lstm_layers': 1, 'main_lstm_dim': 128, 'emb_dropouts': 0.0, 'label_smoothing': 0.1, 'learning_rate': 5e-05, 'epochs': 20, 'batch_size': 16, 'optimizer': 'adam', 'scheduler': 'multiply'}
2023-06-27 18:38:22,089 - INFO - Using 1 CPU threads
2023-06-27 18:38:22,104 - INFO - EncodersDecoders(
  (encoders): ModuleDict()
  (decoders): ModuleDict()
)
2023-06-27 18:38:22,104 - INFO - Trainable parameters=0
2023-06-27 18:38:22,104 - INFO - Not trainable parameters=0
2023-06-27 18:38:22,105 - INFO - Label smoothing=0.1
2023-06-27 18:38:22,105 - INFO - Setting optmizer=adam
Traceback (most recent call last):
  File "/usr/local/bin/pos", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pos/cli.py", line 281, in train_and_tag
    optimizer = get_optimizer(model.parameters(), kwargs["optimizer"], kwargs["learning_rate"])
  File "/usr/local/lib/python3.10/dist-packages/pos/train.py", line 50, in get_optimizer
    return Adam(parameters, lr=lr)
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/adam.py", line 137, in __init__
    super(Adam, self).__init__(params, defaults)
  File "/usr/local/lib/python3.10/dist-packages/torch/optim/optimizer.py", line 61, in __init__
    raise ValueError("optimizer got an empty parameter list")
ValueError: optimizer got an empty parameter list
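
For reference, the ValueError at the bottom of the trace is the guard PyTorch's optimizer constructor applies when it receives no parameters, which is consistent with the Trainable parameters=0 line and the empty ModuleDicts in the log above. A minimal stand-alone sketch of that guard (hypothetical helper name, not the actual pos or torch source):

```python
def make_adam(parameters, lr=5e-5):
    # Mirror the check in torch.optim.Optimizer.__init__: an optimizer
    # over zero parameters has nothing to update, so it fails fast.
    params = list(parameters)
    if len(params) == 0:
        raise ValueError("optimizer got an empty parameter list")
    # Stand-in for constructing the real Adam object.
    return {"params": params, "lr": lr}

# An empty model (like the logged EncodersDecoders with empty encoder
# and decoder ModuleDicts) yields no parameters, reproducing the error:
try:
    make_adam(iter([]))
except ValueError as e:
    print(e)  # optimizer got an empty parameter list
```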

with this command:

!pos train-and-tag \
  --morphlex_embeddings_file data/extra/dmii.vectors \
  training_data/*.tsv \
  ../tests.tsv \
  ./MODEL

I have followed the instructions in the README. Are any steps missing, or am I doing something wrong? For context, I am running this on Google Colab.
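
One detail I noticed in the logged config: both 'tagger': False and 'lemmatizer': False, which would leave the EncodersDecoders module with no submodules and therefore no parameters to optimize. If the CLI exposes flags matching those kwargs (an assumption on my part; I have not verified the exact flag names against the click definitions), enabling at least one head might look like:

```shell
# Hypothetical: assumes --tagger corresponds to the 'tagger' kwarg
# shown in the logged configuration.
pos train-and-tag \
  --tagger \
  --morphlex_embeddings_file data/extra/dmii.vectors \
  training_data/*.tsv \
  ../tests.tsv \
  ./MODEL
```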
