`def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True, loss_fn="mse", display_loss=False):`

What is the difference between setting **initialise = True** and **initialise = False** when training the model, in terms of obtaining a consistent reduction in loss?

That is, if **initialise = True**, then **after each epoch the weights are re-drawn from a normal distribution, which means the weights updated in the second and subsequent epochs are not a continuation of the previous epoch's weights.** So **how does this achieve a consistent reduction in loss**, when re-initialising the weights to a random normal distribution in every epoch could cause **fluctuations in learning**?
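To make the two readings concrete, here is a minimal sketch of what such a training loop might look like. This is not the actual implementation behind the `fit` signature above; the class name `ToyModel`, the weight attributes `w`/`b`, and the linear model are all hypothetical, chosen only to show where the call to the weight-initialisation step sits relative to the epoch loop:

```python
import numpy as np

class ToyModel:
    """Hypothetical 1-D linear model used only to illustrate the question."""

    def __init__(self):
        self.w = None
        self.b = None

    def _initialise_weights(self):
        # Hypothetical: draw weights from a standard normal distribution
        self.w = float(np.random.randn())
        self.b = float(np.random.randn())

    def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True):
        # Reading 1 (the one described in the question) would place the
        # re-initialisation INSIDE the epoch loop, restarting learning
        # from random weights at every epoch:
        #
        #   for epoch in range(epochs):
        #       if initialise:
        #           self._initialise_weights()  # wipes previous progress
        #       ...gradient updates...
        #
        # Reading 2 places it ONCE, before the loop, so each epoch
        # continues from the weights left by the previous epoch:
        if initialise or self.w is None:
            self._initialise_weights()

        losses = []
        for epoch in range(epochs):
            y_pred = self.w * X + self.b
            error = y_pred - Y
            # Gradient steps for a mean-squared-error loss
            self.w -= learning_rate * np.mean(error * X)
            self.b -= learning_rate * np.mean(error)
            losses.append(float(np.mean(error ** 2)))
        return losses
```

Under reading 2, repeated calls to `fit(..., initialise=False)` resume training from the current weights, which is what makes a monotone decrease in loss across calls plausible; under reading 1 every epoch would start from scratch.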

**Wouldn’t initialise = False be the better choice for achieving a consistent reduction in loss?** Kindly help me with my understanding of this part.