In the SigmoidNeuron class's fit function, it seems that we first initialize the w and b values. Then, for all the input values, we compute dw and db and update w and b. Finally, at the end, we compute the loss over all the inputs.
In the theory lectures, however, it was explained that we first compute the loss for the current w and b values, and based on that loss we update w and b.
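For reference, here is a rough Python sketch of the fit function as I understand it from the screenshot. The helper names (grad_w, grad_b), the initial values, and the mean-squared-error loss are my assumptions for illustration, not necessarily the exact course code:

```python
import numpy as np

class SigmoidNeuron:
    """Minimal single-input sigmoid neuron sketch (names/values assumed)."""

    def __init__(self):
        self.w = None
        self.b = None

    def sigmoid(self, x):
        # y_pred = 1 / (1 + e^-(wx + b))
        return 1.0 / (1.0 + np.exp(-(self.w * x + self.b)))

    def grad_w(self, x, y):
        # Gradient of squared-error loss w.r.t. w for one example
        y_pred = self.sigmoid(x)
        return (y_pred - y) * y_pred * (1 - y_pred) * x

    def grad_b(self, x, y):
        # Gradient of squared-error loss w.r.t. b for one example
        y_pred = self.sigmoid(x)
        return (y_pred - y) * y_pred * (1 - y_pred)

    def fit(self, X, Y, epochs=1, lr=1.0):
        self.w, self.b = -2.0, -2.0          # step 1: initialize w and b (values assumed)
        losses = []
        for _ in range(epochs):
            dw, db = 0.0, 0.0
            for x, y in zip(X, Y):           # step 2: accumulate gradients over all inputs
                dw += self.grad_w(x, y)
                db += self.grad_b(x, y)
            self.w -= lr * dw                # step 3: update w and b
            self.b -= lr * db
            # step 4: loss is computed AFTER the update, using the new w and b
            y_pred = np.array([self.sigmoid(x) for x in X])
            losses.append(np.mean((np.array(Y) - y_pred) ** 2))
        return losses
```

This matches the order I described: gradients and the update come first in each epoch, and the loss is evaluated only afterwards.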
Two questions here:
- In the fit function, on what basis are we updating the w and b values if the loss is only computed at the end? As per the algorithm, we should first compute the loss and then update w and b based on it, right?
- Even if we calculate the loss at the end over all the inputs, how are the updated w and b values that we got at each iteration being used when computing that loss?
I've attached screenshots of both the fit function and the learning algorithm. Please check and explain in detail.