Hi,
I'm trying to run the given Cross-Encoder example and I hit the error below at line 41 of my cell. I tried modifying the dimensions with no luck, and passing the input as two separate parameters did not work either.
Could you please tell me which parameter I should update to avoid the unpack error?
Thank you, much appreciated.
ValueError Traceback (most recent call last)
Cell In [54], line 41
37 print(input_len)
38 # print('model',model(model_input))
39
40 # [seq_len] -> [seq_len, vocab]
---> 41 logprobs = torch.nn.functional.log_softmax(model(model_input)[0], dim=-1).cpu()
42 # [seq_len, vocab] -> [continuation_len, vocab]
43 logprobs = logprobs[input_len-continuation_len:]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py:974, in GPTNeoForCausalLM.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
966 r"""
967 labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
968 Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
969 ``labels = input_ids`` Indices are selected in ``[-100, 0, ..., config.vocab_size]`` All labels set to
970 ``-100`` are ignored (masked), the loss is only computed for labels in ``[0, ..., config.vocab_size]``
971 """
972 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
--> 974 transformer_outputs = self.transformer(
975 input_ids,
976 past_key_values=past_key_values,
977 attention_mask=attention_mask,
978 token_type_ids=token_type_ids,
979 position_ids=position_ids,
980 head_mask=head_mask,
981 inputs_embeds=inputs_embeds,
982 use_cache=use_cache,
983 output_attentions=output_attentions,
984 output_hidden_states=output_hidden_states,
985 return_dict=return_dict,
986 )
987 hidden_states = transformer_outputs[0]
989 lm_logits = self.lm_head(hidden_states)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py:799, in GPTNeoModel.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
796 global_attention_mask = None
798 # Local causal attention mask
--> 799 batch_size, seq_length = input_shape
800 full_seq_length = seq_length + past_length
801 local_attention_mask = GPTNeoAttentionMixin.create_local_attention_mask(
802 batch_size, full_seq_length, self.config.window_size, device, attention_mask
803 )
ValueError: not enough values to unpack (expected 2, got 1)
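For context, the traceback shows where the unpack fails: GPTNeoModel.forward runs batch_size, seq_length = input_shape, so the model expects input_ids with two dimensions, (batch_size, sequence_length). A 1-D model_input of shape [seq_len] supplies only one value, hence "not enough values to unpack (expected 2, got 1)". Below is a minimal sketch of one way to adjust the failing cell, assuming model_input is a 1-D tensor of token IDs and model is the GPTNeoForCausalLM from the traceback (input_len and continuation_len are the variables from the original cell):

import torch

with torch.no_grad():
    # [seq_len] -> [1, seq_len]: add the batch dimension GPT-Neo expects
    batched_input = model_input.unsqueeze(0)

    # the first element of the model output is the logits, shaped [1, seq_len, vocab];
    # take the only batch element to get back to [seq_len, vocab]
    logits = model(batched_input)[0][0]

    # [seq_len, vocab] log-probabilities, then keep only the continuation positions
    logprobs = torch.nn.functional.log_softmax(logits, dim=-1).cpu()
    logprobs = logprobs[input_len - continuation_len:]

Alternatively, if model_input comes straight from a tokenizer called with return_tensors="pt", the returned input_ids are already shaped [1, seq_len], and only the extra [0] indexing on the logits is needed.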