I am getting different accuracy for the same weights and threshold, which I have stored using checkpoints. I am not able to figure out why.

Accuracy (should be same)

Link for colab if needed

Yes, the perceptron is a fairly naive model.

Have you tried reducing the learning rate?

Are you talking about the part where weights and threshold are updated (w, b)?

I don’t know, actually. I was following the MP Neuron and Perceptron using Python: Checkpointing video. In the instructor’s notebook, the accuracies were exactly the same.

This is my fit function

astanwar, there are a couple more videos after this part, where the professor explains the learning rate, lr.

So just my suggestion to try:

```
if y == 1 and y_pred == 0:
    self.w = self.w + lr * x   # use the learning rate lr
    self.b = self.b - lr * 1
elif y == 0 and y_pred == 1:
    self.w = self.w - lr * x   # use the learning rate lr
    self.b = self.b + lr * 1
```
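For what it's worth, that update rule can be sketched as a standalone function. This is just a hedged sketch, not the course's exact code: the function name `perceptron_update` is hypothetical, and it assumes the threshold convention used in the course, i.e. predict 1 when `np.dot(w, x) >= b`.

```python
import numpy as np

def perceptron_update(w, b, x, y, y_pred, lr=0.1):
    # One perceptron update step, assuming: predict 1 when np.dot(w, x) >= b.
    if y == 1 and y_pred == 0:
        w = w + lr * x   # push w toward the misclassified positive example
        b = b - lr       # lower the threshold so w.x >= b becomes easier
    elif y == 0 and y_pred == 1:
        w = w - lr * x   # push w away from the misclassified negative example
        b = b + lr       # raise the threshold so w.x >= b becomes harder
    return w, b

# Example: a positive example (y=1) was predicted 0, so w and b both move.
w, b = perceptron_update(np.zeros(2), 0.0, np.array([1.0, 2.0]), 1, 0, lr=0.5)
# w is now [0.5, 1.0] and b is -0.5
```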

I was able to solve the issue. The code snippet where I store max_accuracy and the checkpoints was inside the nested for loop. It is supposed to be outside the inner loop (but still inside the epoch loop).

Updated Code:

```
for i in range(epoch):
    for x, y in zip(X, Y):
        y_pred = self.model(x)
        if y == 1 and y_pred == 0:
            self.w = self.w + x
            self.b = self.b + 1
        elif y == 0 and y_pred == 1:
            self.w = self.w - x
            self.b = self.b - 1
    # outside the inner loop: evaluate and checkpoint once per epoch
    accuracy[i] = accuracy_score(self.predict(X), Y)
    if accuracy[i] > max_accuracy:
        max_accuracy = accuracy[i]
        chkptw = self.w
        chkptb = self.b
```
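To sanity-check that the fix behaves as expected (same weights and threshold reproduce the same accuracy), here is a minimal self-contained sketch. The class name `SimplePerceptron` and the toy data are my own illustration, not the course notebook; accuracy is computed with plain NumPy instead of `accuracy_score` to keep it dependency-free, and the threshold convention `np.dot(w, x) >= b` is assumed.

```python
import numpy as np

class SimplePerceptron:
    # Hypothetical minimal perceptron mirroring the fit loop above.
    def model(self, x):
        return 1 if np.dot(self.w, x) >= self.b else 0

    def predict(self, X):
        return np.array([self.model(x) for x in X])

    def fit(self, X, Y, epochs=20):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        max_accuracy = 0.0
        chkptw, chkptb = self.w.copy(), self.b
        for _ in range(epochs):
            for x, y in zip(X, Y):
                y_pred = self.model(x)
                if y == 1 and y_pred == 0:
                    self.w = self.w + x
                    self.b = self.b - 1
                elif y == 0 and y_pred == 1:
                    self.w = self.w - x
                    self.b = self.b + 1
            # checkpoint outside the inner loop, once per epoch
            acc = np.mean(self.predict(X) == Y)
            if acc > max_accuracy:
                max_accuracy = acc
                chkptw, chkptb = self.w.copy(), self.b
        # restore the best checkpoint before returning
        self.w, self.b = chkptw, chkptb
        return max_accuracy

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = (X.sum(axis=1) > 0).astype(int)   # linearly separable toy labels

p = SimplePerceptron()
best = p.fit(X, Y)
# deterministic model: restored weights reproduce the checkpointed accuracy
same_again = np.mean(p.predict(X) == Y)
```

Since `model` is deterministic, `same_again` equals `best` exactly, which is the property the original question expected from the checkpoints.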

Thanks.