Why do we reshape the hidden tensor from RNN to FC layer


output = self.h2o(hidden.view(-1, self.hidden_size))
Why are we reshaping it, and can you please tell me the dimensions before and after the transformation?

For a single-layer, unidirectional RNN, the shape of the hidden tensor is (1, batch_size, hidden_size), i.e. a 3D tensor. [Docs]
The fully-connected layer that follows expects a 2D input of shape (batch_size, hidden_size).

Hence we reshape hidden to a 2D tensor before passing it to the fully-connected layer: hidden.view(-1, self.hidden_size) collapses the leading dimension of size 1, leaving a tensor of shape (batch_size, hidden_size).
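
For concreteness, here is a minimal runnable sketch of the shapes involved. The layer name h2o mirrors the question; the sizes are arbitrary assumptions chosen just for illustration:

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not from the original post)
batch_size, seq_len, input_size, hidden_size, output_size = 4, 7, 10, 20, 5

rnn = nn.RNN(input_size, hidden_size, num_layers=1, batch_first=True)
h2o = nn.Linear(hidden_size, output_size)  # the fully-connected layer

x = torch.randn(batch_size, seq_len, input_size)
output_seq, hidden = rnn(x)

print(hidden.shape)                  # torch.Size([1, 4, 20])  -> 3D
flat = hidden.view(-1, hidden_size)  # collapse the leading dim of size 1
print(flat.shape)                    # torch.Size([4, 20])     -> 2D
out = h2o(flat)                      # Linear now accepts the 2D input
print(out.shape)                     # torch.Size([4, 5])
```

Note that for a stacked or bidirectional RNN the leading dimension is num_layers * num_directions rather than 1, so view(-1, hidden_size) would mix layers and batch entries together; in that case you would first select the hidden state you want, e.g. hidden[-1].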