Regarding Loss value during regression

I was taking part in a hackathon where task 1 of two was a regression problem, and I want some clarification about the loss while doing regression. The problem I'm facing is this: when I trained the model (an FNN) and did regression, my RMSE was around 5000-6000 even after all that feature engineering, and now I'm wondering whether there is any baseline of acceptable loss for regression problems.

For example, if I'm doing a simple regression task (for values in the range 0-1000, with sufficient examples in this range to do regression using an ANN), then an RMSE of 0.7 seems good.

But what if I'm doing a more complex task (values ranging from 0 to 10-20 lakhs, i.e. up to about 1-2 million, with not many examples, say 400-500) and the examples are inconsistent?

What baseline should I look at while training in such cases? Or do tell me if I'm doing something super stupid to get this much RMSE.
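One common sanity check (not from the thread, just a sketch): compare your model's RMSE against a naive baseline that always predicts the training mean. The numbers below are made up to mimic the lakhs-scale targets and ~500 examples described above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical targets in the lakhs range (up to ~2,000,000), ~500 examples.
y_true = rng.uniform(0, 2_000_000, size=500)
# Stand-in for model predictions: true values plus some noise.
y_pred = y_true + rng.normal(0, 50_000, size=500)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Naive baseline: always predict the mean of the targets.
# Its RMSE equals the standard deviation of the targets.
baseline = rmse(y_true, np.full_like(y_true, y_true.mean()))
model = rmse(y_true, y_pred)

print(f"baseline RMSE: {baseline:,.0f}")
print(f"model RMSE:    {model:,.0f}")
```

If your model's RMSE is not clearly below the mean-predictor's RMSE, it has learned essentially nothing; a raw RMSE of 5000-6000 on targets in the lakhs could actually be very good by this yardstick.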

Hi @gauryajur87,
Just to check, have you normalized the inputs and weights?

Yes I have normalized them
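A detail worth double-checking when targets are this large (my addition, with made-up numbers): if you standardize the targets before training, remember to invert the scaling before computing RMSE, otherwise the reported error is in scaled units, not rupees.

```python
import numpy as np

# Hypothetical targets in the lakhs range.
y = np.array([100_000.0, 550_000.0, 1_200_000.0, 1_900_000.0])

mu, sigma = y.mean(), y.std()
y_scaled = (y - mu) / sigma      # what the network actually trains on

# ... train the model on y_scaled; here a stand-in for its output ...
pred_scaled = y_scaled + 0.05    # pretend the model is off by 0.05 in scaled space

# Invert the scaling BEFORE computing RMSE in original units.
pred = pred_scaled * sigma + mu
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(rmse)  # a constant 0.05 error in scaled space is 0.05 * sigma here
```

The point is that the same model can look like "RMSE 0.05" or "RMSE 30,000+" depending on which space you evaluate in, so compare like with like.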

My first suggestion would be to make sure you're tuning the hyperparameters correctly.
Second, make sure the model is complex enough to fit the data while still generalizing well.

OK, thanks, will give it another go…
In the meantime, I was thinking: when we do image classification, we measure accuracy as the number of images classified correctly versus incorrectly, and we assume that 95%+ accuracy means we're doing just fine if we can't do better than that.

How do we go about this in regression tasks? Should we always try to drive the RMSE between predicted and actual values to 0, or should we look for a baseline depending on the output range, i.e. if we get this much RMSE for values in this range, we're good to go? Or is this whole notion wrong?

Yes, you're right, we often use RMSE as a metric in itself, apart from others like the ones mentioned here.
Note that an acceptable loss often varies from task to task.
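To make RMSE comparable across target ranges, people often report scale-free variants alongside it. This sketch (illustrative values, not from the thread) shows two common ones: normalized RMSE and the coefficient of determination R².

```python
import numpy as np

# Illustrative predictions for targets in the 0-1000 range.
y_true = np.array([200.0, 400.0, 600.0, 800.0])
y_pred = np.array([210.0, 390.0, 630.0, 790.0])

rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
# Normalized RMSE: RMSE as a fraction of the target range.
nrmse = rmse / (y_true.max() - y_true.min())
# R^2: 1.0 is a perfect fit, 0.0 is no better than predicting the mean.
ss_res = float(np.sum((y_true - y_pred) ** 2))
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
r2 = 1 - ss_res / ss_tot

print(f"RMSE:  {rmse:.1f}")
print(f"NRMSE: {nrmse:.3f}")
print(f"R^2:   {r2:.4f}")
```

An RMSE of 17 looks large in isolation, but NRMSE of ~3% of the range and an R² close to 1 tell you the fit is actually good, which is the scale-free judgment the question above is asking for.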

Thanks a lot for that