Why an acute angle between the weight vector and positive instances?

In the DL course: Perceptron Module -> Video: Perceptron Learning - Why it works?
In the above-mentioned video, sir explained why the learning algorithm works, stating that we need an acute angle between the weight vector and positive instances, and an obtuse angle between the weight vector and negative instances. Can you explain why this condition is required? Some proof of this would be appreciated.

Hi @Harshit_Singhal,

There can be several ways to explain this.
You can think of it like a triplet setup, where we want the weight vector to be as close as possible to the positive instances and as far as possible from the negative instances.

If the angle θ between the weight vector w and a positive instance x is acute, then cos(θ) is positive, and since w·x = ||w|| ||x|| cos(θ), the dot product is also positive, so the model concludes it is a positive instance. This is exactly the condition the learning algorithm checks for each instance: if the dot product is positive, the predicted output is "positive instance". If the true label is also positive, no modification is made; but if the true label is negative, the weights are updated as w = w - x.
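To see the relationship numerically, here is a small sketch (the vectors are made up purely for illustration) showing that the sign of w·x always matches the sign of cos(θ):

```python
import numpy as np

# Hypothetical vectors, chosen only for illustration
w = np.array([1.0, 2.0])       # weight vector
x_pos = np.array([2.0, 1.0])   # makes an acute angle with w
x_neg = np.array([1.0, -3.0])  # makes an obtuse angle with w

for name, x in [("positive", x_pos), ("negative", x_neg)]:
    dot = np.dot(w, x)
    cos_theta = dot / (np.linalg.norm(w) * np.linalg.norm(x))
    angle_deg = np.degrees(np.arccos(cos_theta))
    # The sign of the dot product matches the sign of cos(theta)
    print(f"{name}: w.x = {dot:+.1f}, angle = {angle_deg:.1f} deg")
```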

Similarly, for a negative instance the angle between the weight vector and the instance should be obtuse, so that cos(θ) is negative, the dot product is negative, and the model treats it as a negative instance. But if the opposite happens, i.e. the angle is acute, the dot product is positive and a negative instance gets classified as positive by the model, so we need to change the weights via w = w - x, which pushes w away from x. Conversely, if a positive instance sits at an obtuse angle (negative dot product) and is misclassified as negative, the update is w = w + x, pulling w towards x. A minimal code sketch combining both cases is given below.
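Putting both cases together, here is a minimal sketch of the perceptron update rule, assuming labels y ∈ {+1, -1} and a made-up toy dataset; the combined update w = w + y·x covers both the w = w - x and w = w + x cases described above:

```python
import numpy as np

def perceptron_train(X, y, epochs=10):
    """Learn w so that w.x is +ve for y = +1 (acute angle)
    and -ve for y = -1 (obtuse angle)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if np.dot(w, xi) >= 0 else -1
            if pred != yi:
                # true -1, predicted +1: w = w - x (push w away, angle turns obtuse)
                # true +1, predicted -1: w = w + x (pull w closer, angle turns acute)
                w = w + yi * xi
    return w

# Toy linearly separable data, made up for illustration
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print("learned w:", w)
print("predictions:", [1 if np.dot(w, xi) >= 0 else -1 for xi in X])
```

As for a proof: for linearly separable data, the perceptron convergence theorem guarantees that this loop stops making updates after a finite number of mistakes.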
Hope this clears it up.
Thanks.
