Doubts in Sigmoid Neuron lecture - Plotting, Loss computation and Hyper-parameters

  1. In the sigmoid neuron plot, what is the function of the angles in ‘ax.view_init()’? As we change the angles, the plot changes, but I can't figure out what is happening in the plot or how to use the angles efficiently to see the minimum loss in the plot.

  2. While computing the loss of a given dataset, I didn't understand the meaning of the following steps:
    ij = np.argmin(Loss)
    i = int(np.floor(ij / Loss.shape[1]))
    j = int(ij - i * Loss.shape[1])

  3. In the class for the sigmoid neuron, what is the function of epochs and the learning rate?
    While plotting the dataset, we have to constantly change these for better training, but I don't know what these attributes do precisely, so I am having difficulty with the plots. Kindly teach me how to use them and how to narrow the band (yellow line) separating the negative and positive points.

Sir has taught these models in a hurry and I have not understood them clearly. Kindly provide me some sources from which I can clear up my concepts of deep learning.

ax.view_init() takes two arguments.
To understand the ‘angles’ argument (azimuth), imagine you have plotted a car in 3 dimensions. When you change the value of the angle, it is equivalent to rotating the car so that you can view it from different sides. Depending on the angle, you can view it from the front, the rear, or the side. But remember, the car is still the same; the only thing changing is your point of view.

Another way of understanding the changing angle is to imagine that you are viewing the car and walking around it (instead of rotating the car). Again you see the car from different angles and get different views, and it is still the same car.

[image: 3dplot]

There is another argument: elevation. Extending the analogy above, the ‘elevation’ argument in view_init lets you adjust whether you view the car from the same level as the car or from a higher position (as if standing on a table whose height you can adjust using the ‘elevation’ argument).
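Here is a minimal sketch of how the two arguments are used (the loss surface here is a made-up bowl, not the course's data; all names are illustrative):

import numpy as np
import matplotlib.pyplot as plt

# Illustrative loss surface: a simple bowl over a (w, b) grid
w = np.linspace(-5, 5, 50)
b = np.linspace(-5, 5, 50)
WW, BB = np.meshgrid(w, b)
Loss = WW**2 + BB**2

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot_surface(WW, BB, Loss, cmap='viridis')

# elev: how high you stand above the surface (the 'table height')
# azim: where you stand as you walk around the plot
ax.view_init(elev=30, azim=45)
plt.show()

Try azim=0, 90, 180, 270 to walk all the way around the plot. Setting elev close to 90 gives a top-down view, which is often the easiest way to spot the region of minimum loss on the surface.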

In the video, one epoch represents one round of iterations over all the examples (rows) for minimizing the loss. It is possible that one round is not sufficient and you need more. In that case you can increase the value of ‘epochs’ to do multiple rounds of calculations.

learning_rate controls the ‘size’ of the improvement in each step. For example, if you are standing on top of a hill (the height of the hill represents the loss) and you want to go down (so that the height, and hence the loss, is minimal), you can decide to take large steps (a large learning_rate) or small steps (a small learning_rate).
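To make epochs and learning_rate concrete, here is a rough sketch of a gradient-descent loop for a single sigmoid neuron with (half) squared-error loss. This is only an assumption of how the class in the video might work; the actual course code and the names used here (fit, X, Y) may differ.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, Y, epochs=100, learning_rate=0.01):
    # Hypothetical sketch, not the course's SigmoidNeuron class
    w, b = 0.0, 0.0
    for epoch in range(epochs):        # one epoch = one pass over all rows
        dw, db = 0.0, 0.0
        for x, y in zip(X, Y):
            y_pred = sigmoid(w * x + b)
            # gradient of (half) squared error w.r.t. w and b
            dw += (y_pred - y) * y_pred * (1 - y_pred) * x
            db += (y_pred - y) * y_pred * (1 - y_pred)
        w -= learning_rate * dw        # step size scaled by learning_rate
        b -= learning_rate * db
    return w, b

If the loss is still falling when training ends, increase epochs; if it oscillates or blows up, decrease learning_rate. The yellow band narrows as the weights grow in magnitude (a larger |w| makes the sigmoid steeper), which usually happens when you train for more epochs on well-separated data.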


In the code above, we first find the location number ‘ij’ at which Loss has its minimum value.
np.argmin finds the location of the minimum value.
The next line calculates the row number ‘i’ of the location ‘ij’ by dividing the location by the total number of columns (see the example below).
Finally, we calculate the column number ‘j’ associated with the location ‘ij’.
Example:
Let Loss be a 2-dimensional matrix, say a 3x4 matrix. This means there are 12 elements in total in the matrix represented by Loss.
Let's say that, out of these 12 elements, the minimum value of Loss occurs at the 10th location (index 9, since we start counting from 0). ‘ij’ represents the value 9 in the code above.
Since Loss is a matrix, this 10th element has a row number and a column number: i represents that row number (2) and j the column number (1). A runnable check of this example follows below.
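You can verify this with a few lines of NumPy. The matrix values below are arbitrary, chosen so that the minimum falls at flat index 9:

import numpy as np

Loss = np.array([[9, 8, 7, 6],
                 [5, 4, 3, 2],
                 [1, 0, 9, 9]])          # minimum (0) at row 2, column 1

ij = np.argmin(Loss)                     # flat index: 9
i = int(np.floor(ij / Loss.shape[1]))    # row: floor(9 / 4) = 2
j = int(ij - i * Loss.shape[1])          # column: 9 - 2*4 = 1
print(ij, i, j)                          # prints: 9 2 1

NumPy also has a built-in for this conversion: np.unravel_index(ij, Loss.shape) returns the same (i, j) pair in one call.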


ij = np.argmin(Loss)

argmin flattens Loss from 2D to 1D and returns the index of the minimum value.
In this case, we have 102 * 101 = 10302 indices, and 6388 is the index that holds the minimal value of the loss.
But we need the individual parameters, the weight and bias, corresponding to the minimal loss value. The shape of Loss in 2D is (102, 101).

Here i and j are the 2D indices. We know that ij is 6388 (the index of the minimum loss value in 1D). Now we want the individual i and j so that we can find the weight and bias corresponding to the minimal loss value.

i = int(np.floor(ij/Loss.shape[1]))
# 6388/101 = 63.2475247525
# floor (63.2475247525) = 63
# int(63) = 63
# We get i as 63

j = int(ij - i * Loss.shape[1])
# 6388 - 63*101 = 6388 - 6363 = 25
# int(25) = 25
# We get j as 25

# Now we use this i and j to find the weight and bias corresponding to the
# minimum loss value at index 6388 in 1D, which we got from
# argmin, i.e., ij = np.argmin(Loss)
 
print(WW[63, 25], BB[63, 25])

Now we get the parameters w and b corresponding to the minimum loss value.
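Putting it all together, here is a self-contained sketch. It assumes WW and BB were built with np.meshgrid over candidate weights and biases; the grid sizes match the (102, 101) shape mentioned above, but the value ranges and the dummy loss surface are illustrative assumptions:

import numpy as np

bb = np.linspace(-2, 2, 102)     # 102 candidate biases (illustrative range)
ww = np.linspace(-2, 2, 101)     # 101 candidate weights (illustrative range)
WW, BB = np.meshgrid(ww, bb)     # both shaped (102, 101), same as Loss

# In the course, Loss[i, j] is the loss for weight WW[i, j] and bias BB[i, j];
# here a dummy surface stands in for it
Loss = (WW - 0.5)**2 + (BB + 0.3)**2

ij = np.argmin(Loss)                      # flat index of the minimum
i, j = np.unravel_index(ij, Loss.shape)   # recover row and column
print(WW[i, j], BB[i, j])                 # best weight and bias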
