Difference between RNN and RNNCell in PyTorch?

Can anyone please explain the difference?

| `nn.RNNCell` | `nn.RNN` |
| --- | --- |
| Only a single layer of RNN | Can have multiple stacked layers of RNNs |
| Sequential input has to be passed step-by-step by us explicitly | A full sequence can be passed, which will be handled step-by-step implicitly |
| Does not support bidirectional RNNs | Supports bidirectional RNNs |
| If required, dropout has to be added as an explicit new layer | Dropout supported in-built |
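The difference in the table above can be seen directly in code. This is a minimal sketch (shapes chosen arbitrarily for illustration): `nn.RNN` consumes the whole sequence at once, while `nn.RNNCell` must be stepped through time manually.

```python
import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 5, 3, 2, 4
x = torch.randn(seq_len, batch, input_size)

# nn.RNN: pass the full sequence; time steps are handled internally
rnn = nn.RNN(input_size, hidden_size)
out, h_n = rnn(x)                  # out: (seq_len, batch, hidden_size)

# nn.RNNCell: a single layer, advanced one time step at a time in a loop
cell = nn.RNNCell(input_size, hidden_size)
h = torch.zeros(batch, hidden_size)
outputs = []
for t in range(seq_len):
    h = cell(x[t], h)              # one step of the recurrence
    outputs.append(h)
out_cell = torch.stack(outputs)    # (seq_len, batch, hidden_size)
```

Both `out` and `out_cell` end up with the same shape; the two modules simply differ in who writes the time loop.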

Thank you Gokul for replying!

To be precise, I am currently trying to model a piece of chemical equipment. I have created a numerical dataset using other simulation software; it is of size 800 × 2, where 2 is the number of features and 800 is the number of time instants at which they were collected. To do the dynamic mapping of the data, I am using an RNN.

The problem I am facing is understanding how to break this 2D dataset into the 3D shape (seq_len, batch_size, num_features) required by the RNN. Can you please explain the output of a DataLoader with an example of a 2D input dataset?

  1. Convert each sample in your dataset to a NumPy array of shape (800, 2)
  2. Combine all the samples into a single NumPy array of shape (num_samples, 800, 2)
  3. Save this dataset using numpy.save()
  4. Similarly, create and save the corresponding ground truth for the above data (I don’t know what that is in your case)
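The steps above can be sketched as follows. The number of samples, the random data, and the target shape are all hypothetical placeholders; substitute your simulated runs and whatever ground truth your task uses.

```python
import numpy as np

# Hypothetical: num_samples simulation runs, each 800 time steps x 2 features
num_samples, seq_len, n_features = 10, 800, 2

# Steps 1-2: stack per-run (800, 2) arrays into one (num_samples, 800, 2) array
samples = [np.random.randn(seq_len, n_features) for _ in range(num_samples)]
data = np.stack(samples)            # shape: (10, 800, 2)

# Step 3: save the dataset to disk
np.save("dataset.npy", data)

# Step 4: corresponding ground truth (placeholder shape; depends on your task)
targets = np.random.randn(num_samples, seq_len, 1)
np.save("targets.npy", targets)
```

The key point is that the third dimension the RNN needs comes from stacking multiple sequences, each of which is one of your 800 × 2 runs.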

To train:

  • While creating the PyTorch RNN, just set batch_first=True.
  • You can then shuffle your data, sample batches for each iteration, and pass them to the RNN.
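Putting the training advice together, here is a minimal sketch. The tensors stand in for the saved .npy files, and the hidden size, batch size, and linear output head are illustrative choices, not part of the original recipe.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for torch.from_numpy(np.load(...)) on the saved dataset/targets
data = torch.randn(10, 800, 2)      # (num_samples, seq_len, n_features)
targets = torch.randn(10, 800, 1)

dataset = TensorDataset(data, targets)
loader = DataLoader(dataset, batch_size=4, shuffle=True)  # shuffled batches

# batch_first=True -> inputs/outputs are shaped (batch, seq_len, features)
rnn = nn.RNN(input_size=2, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)             # maps hidden state to a 1-D prediction
optim = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))
loss_fn = nn.MSELoss()

for x, y in loader:                 # x: (batch, 800, 2)
    out, _ = rnn(x)                 # out: (batch, 800, 16)
    pred = head(out)                # (batch, 800, 1)
    loss = loss_fn(pred, y)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

With batch_first=True the DataLoader's natural (batch, seq_len, features) output can be fed to the RNN directly, with no transposing.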