Varying Accuracy during training

So I was training LeNet on the MNIST dataset, using RMSprop as the optimizer:

loss_fn = nn.CrossEntropyLoss()

optimizer = optim.RMSprop(net.parameters(), lr=0.05, alpha=0.95, weight_decay=0.0001)
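For context, a self-contained version of that setup might look like the sketch below. Only the `loss_fn` and `optimizer` lines come from the original post; the `LeNet` definition and the dummy training step are assumptions added to make it runnable:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical LeNet-style network for 28x28 MNIST inputs -- the original
# definition isn't shown in the thread, so this is an assumed stand-in.
class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 10),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

net = LeNet()
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.RMSprop(net.parameters(), lr=0.05, alpha=0.95, weight_decay=0.0001)

# One step on a dummy batch, just to confirm the setup runs end to end
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = loss_fn(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```

Note that `lr=0.05` is quite large for RMSprop (its default is `0.01`), which is consistent with the oscillations discussed below in the thread.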

This is what the training looks like:

Any idea why this might be happening?

This looks trivial, as the loss is in fact decreasing in the last few epochs.
As an experiment, could you please run it for more epochs so we can check whether it still fluctuates or converges properly?

Yes,
I tried it after factory resetting my runtime, with a smaller learning rate,
and it did not show any such oscillations while training.
But now I need to know how PyTorch initializes parameters that give 94% accuracy before training :sweat_smile:
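(For reference, PyTorch layers initialize their own weights at construction time, so a "fresh" network is already non-trivially initialized. The sketch below checks the default for `nn.Linear`, whose `reset_parameters()` uses Kaiming-uniform init; the layer size here is just an example:)

```python
import math
import torch
import torch.nn as nn

# nn.Linear.reset_parameters() calls kaiming_uniform_(weight, a=math.sqrt(5)),
# which works out to a uniform distribution on [-1/sqrt(fan_in), 1/sqrt(fan_in)].
layer = nn.Linear(256, 10)

fan_in = layer.in_features
bound = 1.0 / math.sqrt(fan_in)
print(layer.weight.abs().max().item() <= bound)  # True
```

Default initialization alone should not explain 94% accuracy before any training, though, which is why the pretrained-weights question below is worth ruling out.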

This looks strange :sweat_smile:
Try restarting the runtime a few more times and check whether the initial accuracy is still high.

I tried it a couple of times;
it's the same every time.

Just to verify, are you sure you’re not using a pretrained LeNet?

No no no, the network was defined and instantiated by me; it definitely wasn't a pre-trained network.