Same output from model for every training example

import math
import torch

def model(x):
    # first linear layer followed by a sigmoid activation
    a1 = torch.matmul(x, weights1) + bias1
    h1 = a1.sigmoid()
    # second linear layer followed by a softmax, computed manually
    a2 = torch.matmul(h1, weights2) + bias2
    h2 = a2.exp() / a2.exp().sum(-1).unsqueeze(-1)
    return h2

# parameter initialisation (weights scaled by 1/sqrt(fan_in))
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 2) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(2, requires_grad=True)
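
For illustration, calling the model on a small batch of two-feature inputs would look something like this (the input values here are hypothetical placeholders):

x = torch.tensor([[1.0, 2.0], [0.5, -1.0]])
out = model(x)
print(out)          # one row of class probabilities per example
print(out.sum(-1))  # each row sums to 1 because of the softmax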

When I apply my inputs to the above model, I get the same output for each and every training example.

What should I do? Please help.

Hi @jay_kadiyala,
Can you also share a link to your notebook (with access enabled for everyone), so that we can have a better look?

I have not used any seed.

Please explain the need for a seed.

torch.manual_seed(0)

Thanks.

Hi, the random numbers generated by a machine are not actually random; they are pseudorandom, produced deterministically from an internal generator state, and the seed initialises that state.
If we fix a seed and use it to generate random numbers, we'll get the same set of numbers every time we use that particular seed.
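
For example, a minimal demonstration in PyTorch:

import torch

torch.manual_seed(0)
a = torch.randn(2, 2)

torch.manual_seed(0)  # re-seeding resets the generator state
b = torch.randn(2, 2)

print(torch.equal(a, b))  # True: the same seed yields the same numbers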

I tried setting the seed, but I am still getting the same output for each and every input.

If you could look through the code, that would help me.

I am not able to access your CSV file; can you also share that?

https://drive.google.com/file/d/1OsAcUVoIAupap-QmKphya91GX86E5iEm/view?usp=sharing

You can access the CSV file from the above link.

Thank you.

For regression-type problems, is one neuron sufficient in the output layer?

And we have to change the loss function; everything else remains constant, right?

If I am wrong, please correct me.

Thanks

Yes, there should be one neuron in the output layer, and take a look at the activation function for the output neuron (we don't need softmax).
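
A minimal sketch of a regression variant of the model above (assuming the same two-feature input; the shapes, names, and loss below are illustrative, not the notebook's actual code):

import math
import torch

# single output neuron: weights2 maps the hidden layer to one value
weights1 = torch.randn(2, 2) / math.sqrt(2)
weights1.requires_grad_()
bias1 = torch.zeros(2, requires_grad=True)
weights2 = torch.randn(2, 1) / math.sqrt(2)
weights2.requires_grad_()
bias2 = torch.zeros(1, requires_grad=True)

def regression_model(x):
    a1 = torch.matmul(x, weights1) + bias1
    h1 = a1.sigmoid()
    # no softmax on the output: the raw linear value is the prediction
    return torch.matmul(h1, weights2) + bias2

# mean squared error replaces the classification loss
def mse_loss(pred, target):
    return ((pred - target) ** 2).mean()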

I have attached the Colab notebook and CSV file. I am getting the same output again.

Please take a look.

Thanks
