TypeError and "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:30: RuntimeWarning: overflow encountered in exp"

Hi All,

I was solving a single-class classification problem with 6 features, using a neural network approach. The plot of mean squared error against epochs shows no image, and the sigmoid neuron part of the code throws a "missing positional argument: 'Y'" error after running the fit function.

I have tried multiple suggestions from the internet and Stack Overflow, but nothing has worked for me. If any of you have been through the same scenario, kindly share the resolution if possible.

Thanks in advance.

Hi @manasshrm3,
Can you please share a link to your notebook, so that we can look into the matter?

Hi Ishvinder,

Please find the link below.

https://colab.research.google.com/drive/1VNzW4Do21-uuHzp7i5gyrn8yl5o-DtMi#scrollTo=0iUeGlSz-Bo9&uniqifier=3

Regards,
Manas

Can you please give it public access permission?

Hi Ishvinder,

I have shared it with the PadhaiTeam mailbox, with the required permissions to edit and download. Please let me know if anything else is needed from my end.

Thank you.

Here is the code:

class SigmoidNeuron:

    def __int__(self):
        self.w = np.random.randn()
        self.b = 0

    def perceptron(self, x):
        return np.dot(self.w, x) + self.b

    def sigmoid(self, x):
        return 1.0/(1.0 + np.exp(-x))

    def predict(self, X):
        Y_pred = []
        for x in X:
            y_pred = self.sigmoid(self.perceptron(x))
            Y_pred.append(y_pred)
        return np.array(Y_pred)

    def grad_w_mse(self, x, y):
        y_pred = self.sgimoid(self.perceptron(x))
        return (y_pred - y) * y_pred * (1 - y_pred) * x

    def grad_b_mse(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        y_pred = (y_pred - y) * y_pred * (1 - y_pred)

    def grad_w_ce(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        if y_pred == 0:
            return y_pred * x
        elif y_pred == 1:
            return -1 * (1 - y_pred) * x
        else:
            raise ValueError("y should be 0 or 1")

    def grad_b_ce(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        if y_pred == 0:
            return y_pred
        elif y_pred == 1:
            return -1 * (1 - y_pred)
        else:
            raise ValueError("y should be 0 or 1")

    def fit(self, X, Y, epochs=1, eta=0.01, initialise=True, display_loss=False, loss_fn='mse'):
        if initialise:
            self.w = np.random.randn(1, X.shape[0])
            self.b = 0
        if display_loss:
            loss = {}
        for i in range(epochs):
            dw = 0
            db = 0
            for x, y in zip(X, Y):
                if loss_fn == 'mse':
                    dw += self.grad_w(x, y)
                    db += self.grad_b(x, y)
                elif loss_fn == 'ce':
                    dw += self.grad_w(x, y)
                    db += self.grad_b(x, y)
            m = X.shape[0]
            self.w -= eta.dw/m
            self.b -= eta.db/m
            if display_loss:
                y_pred = self.sigmoid(self.perceptron(x))
                if loss_fn == 'mse':
                    loss[i] = mean_squared_error(Y, Y_pred)
                elif loss_fn == 'ce':
                    loss[i] == log_loss(Y, Y_pred)
            if display_loss:
                plt.plot(np.array(list(loss.values())))
                plt.xlabel('epochs')
                if loss_fn == 'mse':
                    plt.ylabel(mean_squared_error)
                elif loss_fn == 'ce':
                    plt.ylabel('log_loss')
                plt.show()

Error: ‘numpy.ndarray’ object has no attribute ‘w’

Please help.

There are some problems in your code:

  1. You are supposed to call self.grad_w_mse(x, y) and self.grad_w_ce(x, y) (and the corresponding grad_b_* methods) for the two loss types.
  2. There's a typo when you call sigmoid (sgimoid in grad_w_mse).
  3. Why would you initialise the weights with shape (1, X.shape[0])?
  4. The method grad_b_mse() doesn't return anything.
  5. The weight update expression should be eta*dw/m, not eta.dw/m (and likewise eta*db/m).
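Putting those five fixes together, the corrected parts of the class would look roughly like this (a minimal sketch of the relevant methods, not the exact course code; only the MSE path is shown):

```python
import numpy as np

class SigmoidNeuron:

    def __init__(self):              # double underscores: __init__, not __int__
        self.w = None
        self.b = None

    def perceptron(self, x):
        return np.dot(x, self.w.T) + self.b

    def sigmoid(self, x):
        return 1.0 / (1.0 + np.exp(-x))

    def grad_w_mse(self, x, y):
        # fix 2: 'sigmoid', not 'sgimoid'
        y_pred = self.sigmoid(self.perceptron(x))
        return (y_pred - y) * y_pred * (1 - y_pred) * x

    def grad_b_mse(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        # fix 4: the gradient must be returned, not assigned to a local
        return (y_pred - y) * y_pred * (1 - y_pred)

    def fit(self, X, Y, epochs=1, eta=0.01, initialise=True):
        if initialise:
            # fix 3: one weight per feature, i.e. shape (X.shape[1],)
            self.w = np.random.randn(X.shape[1])
            self.b = 0
        for _ in range(epochs):
            dw, db = 0, 0
            for x, y in zip(X, Y):
                dw += self.grad_w_mse(x, y)   # fix 1: call the *_mse methods
                db += self.grad_b_mse(x, y)
            m = X.shape[0]
            self.w -= eta * dw / m            # fix 5: eta*dw/m, not eta.dw/m
            self.b -= eta * db / m
```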

Hi Shahbaz,

I figured out the mistakes and rectified them, but the error is still the same, even though I initialise my weights with np.random.randn() this time. Can you suggest what it could possibly be?

Here is the complete thing:

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

df = pd.read_csv('/content/train.csv')
d = pd.DataFrame(df)
d

sns.jointplot(x=d['Parch'], y=d['Survived'], data=d)

data = d.drop(['Ticket', 'Cabin', 'Name', 'Pclass', 'SibSp', 'Parch', 'Embarked', 'PassengerId', 'Sex'], axis=1)
for i, r in data.iterrows():
    if pd.isnull(r).any():
        data.drop(i, inplace=True)

data_new = data.groupby('Survived')['Survived'].count()
sns.barplot(x=data_new.index, y=data_new.values)

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.preprocessing import StandardScaler, MinMaxScaler

type(data.info())

X1 = data.drop(['Survived'], axis=1)
Y1 = data['Survived']
X1.shape, Y1.shape

X1 = X1.values
Y1 = Y1.values
type(X1), type(Y1)

scaler = StandardScaler()
X_train, X_val, Y_train, Y_val = train_test_split(X1, Y1, stratify=Y, random_state=0)
X_train.shape, Y_train.shape, X_val.shape, Y_val.shape

R = np.random.random([100, 1])
scaler = StandardScaler()
scaler.fit(R)
RT = scaler.transform(R)

X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)

minmax_scaler = MinMaxScaler()
Y_train = minmax_scaler.fit_transform(Y_train.reshape(-1, 1))
Y_val = minmax_scaler.transform(Y_val.reshape(-1, 1))

threshold = np.mean(Y_train)
scaled_threshold = list(minmax_scaler.transform(np.array([threshold]).reshape(-1, 1)))[0][0]
scaled_threshold

Y_train = (Y_train > scaled_threshold).astype("int").ravel()
Y_val = (Y_val > scaled_threshold).astype("int").ravel()

from sklearn.datasets import make_blobs
import matplotlib.colors
from tqdm import tqdm_notebook

my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red", "yellow", "green"])
data, labels = make_blobs(n_samples=800, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)
plt.scatter(data[:, 0], data[:, 1], c=labels, cmap=my_cmap)

label_org = labels
labels = np.mod(label_org, 2)
plt.scatter(data[:, 0], data[:, 1], c=labels, cmap=my_cmap)
plt.scatter(X_train[:, 0], X_train[:, 1], c=Y_train, cmap=my_cmap);

class SigmoidNeuron:

    def __int__(self):
        self.w = None
        self.b = None

    def perceptron(self, x):
        return np.dot(x, self.w.T) + self.b

    def sigmoid(self, x):
        return 1.0/(1.0 + np.exp(-x))

    def grad_w_mse(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        return (y_pred - y) * y_pred * (1 - y_pred) * x

    def grad_b_mse(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        return (y_pred - y) * y_pred * (1 - y_pred)

    def grad_w_ce(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        if y_pred == 0:
            return y_pred * x
        elif y_pred == 1:
            return -1 * (1 - y_pred) * x
        else:
            raise ValueError("y should be 0 or 1")

    def grad_b_ce(self, x, y):
        y_pred = self.sigmoid(self.perceptron(x))
        if y_pred == 0:
            return y_pred
        elif y_pred == 1:
            return -1 * (1 - y_pred)
        else:
            raise ValueError("y should be 0 or 1")

    def fit(self, X, Y, epochs=1, eta=0.01, initialise=True, display_loss=False, loss_fn='mse'):
        if initialise:
            self.w = np.random.randn(X.shape[0])
            self.b = 0
        if display_loss:
            loss = {}
        for i in tqdm_notebook(range(epochs), total=epochs, unit='epochs'):
            dw = 0
            db = 0
            for x, y in zip(X, Y):
                if loss_fn == 'mse':
                    dw += self.grad_w_mse(x, y)
                    db += self.grad_b_mse(x, y)
                elif loss_fn == 'ce':
                    dw += self.grad_w_ce(x, y)
                    db += self.grad_b_ce(x, y)
            m = X.shape[0]
            self.w -= eta*dw/m
            self.b -= eta*db/m
            if display_loss:
                y_pred = self.sigmoid(self.perceptron(x))
                if loss_fn == 'mse':
                    loss[i] = mean_squared_error(Y, Y_pred)
                elif loss_fn == 'ce':
                    loss[i] == log_loss(Y, Y_pred)

        if display_loss:
            plt.plot(loss.values())
            plt.xlabel('epochs')
            if loss_fn == 'mse':
                plt.ylabel(mean_squared_error)
            elif loss_fn == 'ce':
                plt.ylabel('log_loss')
            plt.show()

    def predict(self, X):
        Y_pred = []
        for x in X:
            y_pred = self.sigmoid(self.perceptron(x))
            Y_pred.append(y_pred)
        return np.array(Y_pred)

sn = SigmoidNeuron

sn.fit(X_train, Y_train, 10, 0.01, display_loss=True, initialise=True, loss_fn='mse')

Please help. I am just stuck.

@GokulNC @Ishvinder can either of you help, please?

Thank you.

@manasshrm3

When you report an error, mentioning only the error line is generally not sufficient.
Take a full screenshot of the error and share it here; the traceback shows which line of code triggered the error. You can check other posts to see what such a screenshot looks like, e.g. this post: example error screenshot.

For a quick resolution, the best option is to share the Colab file and temporarily enable link access for everyone (if it's not some super-confidential program), like this (and remove sharing once the problem is resolved):
[screenshot: Colab link-sharing settings]

Also, please change __int__ to __init__
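For context on why the `__int__` typo matters: Python calls `__init__` when constructing an object, so a method named `__int__` is simply never invoked at construction time, and `self.w`/`self.b` are never set. A quick illustration:

```python
class Wrong:
    def __int__(self):   # typo: not a constructor, never called on construction
        self.w = 0

class Right:
    def __init__(self):  # the real constructor hook
        self.w = 0

print(hasattr(Wrong(), 'w'))   # False: __int__ was never run
print(hasattr(Right(), 'w'))   # True
```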

Hi SanjayK,

Thank you sir. Below is the link.

https://colab.research.google.com/drive/1ACRqX57x57X63pvzzT73Z4krbLRQDFIK?usp=sharing

For creating an object of the SigmoidNeuron class, please change

sn = SigmoidNeuron

to

sn = SigmoidNeuron()

It will solve your current error.
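For anyone hitting the same pair of errors: with `sn = SigmoidNeuron` (no parentheses), `sn` is the class itself, so `sn.fit(X_train, Y_train, ...)` binds `X_train` to `self`. With too few arguments this produces the original "missing positional argument: 'Y'" error; with enough arguments, the first `self.w` lookup then happens on a NumPy array, giving the "'numpy.ndarray' object has no attribute 'w'" error. A minimal reproduction (simplified class, not the full course code):

```python
import numpy as np

class SigmoidNeuron:
    def fit(self, X, Y):
        return self.w  # assumes self is an instance with .w set

X_train = np.zeros((3, 2))
Y_train = np.zeros(3)

sn = SigmoidNeuron        # class object, no parentheses
try:
    # X_train is bound to `self`, Y_train to `X`, and Y is "missing"
    sn.fit(X_train, Y_train)
except TypeError as e:
    print(e)              # fit() missing 1 required positional argument: 'Y'
```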

Thanks so much, sir. 🙂
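An aside on the "RuntimeWarning: overflow encountered in exp" from the thread title: `np.exp(-x)` overflows in float64 for large negative `x` (roughly `x < -709`). The warning is usually harmless here, since the sigmoid just saturates, but a numerically stable formulation avoids it entirely by only ever exponentiating non-positive values. A sketch (not part of the course code):

```python
import numpy as np

def stable_sigmoid(x):
    # For x >= 0 use 1/(1+e^-x); for x < 0 use the equivalent e^x/(1+e^x).
    # Both branches only evaluate exp on non-positive inputs, so no overflow.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    expx = np.exp(x[~pos])
    out[~pos] = expx / (1.0 + expx)
    return out
```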