PyTorch Constant Loss

I am trying to do multiclass classification using a simple ANN.
I don't know why my loss is not decreasing; it stays constant.

CODE-

import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class NET(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(6, 512),
            nn.ReLU(),
            nn.Linear(512, 1024),
            nn.ReLU(),
            nn.Linear(1024, 17),
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        return self.model(x)

net = NET().to(device)

opt = optim.Adam(net.parameters(), lr=0.01)
Loss_fn = nn.L1Loss()


%%time
epochs = 10
loss_arr = []

for epoch in range(epochs):
    opt.zero_grad()

    outputs = net(train_data)
    loss = Loss_fn(outputs, train_out)
    loss.backward()

    opt.step()
    loss_arr.append(loss.item())
    print("Epochs : %d/%d ,loss :%f" % (epoch, epochs, loss))

I want the mean absolute error, which is why I am using nn.L1Loss().
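
Just to be clear about what I mean by that, here is a minimal sketch (with made-up tensors, not my real data) of what nn.L1Loss computes:

import torch
import torch.nn as nn

# Toy tensors, just to illustrate the loss value; not my real outputs/targets.
pred = torch.tensor([[0.2, 0.8], [0.6, 0.4]])
target = torch.tensor([[0.0, 1.0], [1.0, 0.0]])

loss_fn = nn.L1Loss()                # default reduction='mean'
print(loss_fn(pred, target))         # tensor(0.3000)
print((pred - target).abs().mean())  # same thing: mean of |pred - target|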

OUTPUT -

Epochs : 0/10 ,loss :10.401250
Epochs : 1/10 ,loss :10.401250
Epochs : 2/10 ,loss :10.401250
Epochs : 3/10 ,loss :10.401250
Epochs : 4/10 ,loss :10.401250
Epochs : 5/10 ,loss :10.401250
Epochs : 6/10 ,loss :10.401250
Epochs : 7/10 ,loss :10.401250
Epochs : 8/10 ,loss :10.401250
Epochs : 9/10 ,loss :10.401250
Wall time: 1.62 s

Thanks!!

Can you say more about the train_data here?

It's chess data containing the positions of 6 chess pieces. The train data is a tensor of size [20000, 6].
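
Roughly like this (random placeholder values with the same shape, not the real positions):

import torch

# Stand-in for my data: random values, but the same shape [20000, 6]
# (one row per board position, 6 piece-position features).
train_data = torch.rand(20000, 6)
print(train_data.shape)    # torch.Size([20000, 6])

# The network above maps 6 inputs to 17 outputs, so
# net(train_data.to(device)) has shape [20000, 17].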