This repository has been archived by the owner on Jul 4, 2023. It is now read-only.

Commit
Replaced .view with .reshape on Attention
Replaced .view with .reshape in the Attention module to avoid the error: RuntimeError: "view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead."
jmribeiro authored Dec 12, 2019
1 parent 46406d3 commit ec82cf1
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions torchnlp/nn/attention.py
@@ -61,9 +61,9 @@ def forward(self, query, context):
        query_len = context.size(1)

        if self.attention_type == "general":
-            query = query.view(batch_size * output_len, dimensions)
+            query = query.reshape(batch_size * output_len, dimensions)
            query = self.linear_in(query)
-            query = query.view(batch_size, output_len, dimensions)
+            query = query.reshape(batch_size, output_len, dimensions)

        # TODO: Include mask on PADDING_INDEX?
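The failure this commit fixes is reproducible outside the Attention module: `.view()` only reinterprets strides and therefore requires a compatibly-strided (typically contiguous) tensor, while `.reshape()` falls back to copying the data when the strides do not permit a zero-copy view. A minimal sketch (the tensor shapes here are illustrative, not taken from the repository):

```python
import torch

# Transposing produces a tensor that shares storage with the original
# but is no longer contiguous in memory.
x = torch.randn(4, 8).t()          # shape (8, 4), non-contiguous
assert not x.is_contiguous()

# .view() cannot reinterpret these strides and raises a RuntimeError.
view_failed = False
try:
    x.view(32)
except RuntimeError:
    view_failed = True

# .reshape() succeeds by copying into contiguous memory when needed.
y = x.reshape(32)
assert view_failed and y.shape == (32,)
```

The trade-off is that `.reshape()` may silently copy, whereas `.view()` guarantees zero-copy or an error; here the copy is the intended behavior, since the query tensor reaching `forward` is not guaranteed to be contiguous.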

