I am doing time series forecasting using encoder-decoder models.

I was wondering whether the decoder could be fed known inputs along with the input from the previous state?

And is there a Discord server for PadhAI? It would be great if someone created one.

What are `known inputs`?

What do you mean by `input from previous state`?

Nope, we wanted our forum to be public and self-hosted, hence this.

I have some date features like day of week, day of month, national holiday, etc., which I plan to concatenate with the inputs from the previous state.

Can you briefly tell me what the inputs are (features and sequences?) that can be provided, and what outputs (features and sequences?) need to be predicted by the sequence model?

Item sales for 1913 days for around 3900 items are given, and we have to predict the next 28 days' sales for each item.

E.g., the encoder takes in a 1913-length series and the decoder predicts/forecasts a 28-day series containing the sales.

While encoding, I am using the sale for that particular day and also date features (like day of week, holidays, etc.) as input.

Can I do a similar thing for the decoder by concatenating the previous state's output with the date features?

In most cases, the input to the decoder is just the previous step's output. I was wondering if we could add additional information to that output, and whether such an architecture would cause any problems.
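A minimal sketch of the idea, in NumPy: at each decoder step, the known date features for the day being predicted are concatenated with the previous step's prediction before being fed to the decoder cell. The sizes here and the random linear map standing in for a trained decoder cell are illustrative assumptions, not the competition's real dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not the competition's real dimensions):
n_date_feats = 8      # e.g. day-of-week one-hot + holiday flag
horizon = 28          # number of future days to forecast

# Date features are known in advance for each of the 28 future days:
future_dates = rng.random((horizon, n_date_feats))

# Stand-in for a trained decoder cell: a random linear map from
# [previous prediction ++ date features] to the next prediction.
W = rng.standard_normal((1 + n_date_feats, 1))

prev = np.zeros(1)    # in practice, seed with the last observed value
preds = []
for t in range(horizon):
    # The concatenation in question: previous output + known date features.
    step_input = np.concatenate([prev, future_dates[t]])
    prev = step_input @ W          # one decoder step (stand-in)
    preds.append(prev)

preds = np.stack(preds)
print(preds.shape)  # (28, 1)
```

Nothing about this concatenation is architecturally problematic; it is essentially what "known future covariates" means in forecasting models.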

Anyway, I am trying it out; let's see how it works.

Competition link https://www.kaggle.com/c/m5-forecasting-accuracy/data

So, the train set has 3900 item sales per day, for 1913 days. That is,

Number of features per time step (day) = 3900

Let me know if I’m wrong.

So basically, the 1913-step sequence that you have is the train set. You are not supposed to just pass it fully to a sequence model.

What we generally do is split the long train sequence into smaller sequences.

In this example, we could split the 1913 time steps into smaller sequences, each of length 28 (days).

You can first generate a dataset like this by sliding a 1D window of width 28 over the 1913 time-steps.

Hence, you will have a total of 1913 - 28 + 1 = 1886 train sequences.
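The sliding-window step above can be sketched like this. To keep the toy example light, the number of items is reduced (the real data has 3900); the window count 1913 - 28 + 1 = 1886 is unchanged because it depends only on the number of days.

```python
import numpy as np

# Toy stand-in for the real data: 1913 days, but only 5 items here
# (random values in place of real sales).
n_days, n_items = 1913, 5
sales = np.random.rand(n_days, n_items).astype(np.float32)

def sliding_windows(series, width):
    """Slice a (T, F) array into all overlapping windows of the given width."""
    return np.stack([series[i : i + width] for i in range(len(series) - width + 1)])

windows = sliding_windows(sales, width=28)
print(windows.shape)  # (1886, 28, 5)
```

With the full 3900 items the result would have shape (1886, 28, 3900), so in practice you would generate windows lazily (e.g. in a data loader) rather than materializing them all at once.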

You can even create more chunked sequences of random, variable lengths to make your sequence model robust.

And, finally, I do not think you need an encoder-decoder model for this.

You can just use an LSTM (even multiple layers) directly to predict the output.

The output to be predicted is the next day's sales (that is, 3900 features, am I right?).

So basically, after training, given 1 (or N) days' sales, you can predict the next 28 days' sales by passing the previous output as input to the RNN's next time step (day), repeated 28 times (days).
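The autoregressive loop described above, sketched in NumPy: the prediction for day t becomes the input for day t+1, repeated 28 times. The feature count is reduced from 3900 for readability, and a random `tanh` map stands in for a trained RNN step; both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes (3900 items in the real data; smaller here):
n_feats = 6       # one sales value per item
steps = 28        # forecast horizon

# Stand-in for a trained RNN step: next day's sales as a function of
# today's sales (a small random linear map, just to show the loop).
W = rng.standard_normal((n_feats, n_feats)) * 0.1

x = rng.random(n_feats)   # last observed day's sales
forecast = []
for _ in range(steps):
    x = np.tanh(x @ W)     # one RNN-ish step; the prediction becomes the next input
    forecast.append(x)

forecast = np.stack(forecast)
print(forecast.shape)  # (28, 6)
```

Note that errors compound in this loop: each prediction is conditioned on earlier predictions, which is one reason people also try decoders with known future covariates.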

I recommend you first study how Time Series Forecasting is generally solved using LSTMs on some toy datasets, to get an idea.

I have 3900 sequences of 1913 time steps each in the train set, a different sequence/time series for each item. That's why an encoder-decoder model: I am familiar with the LSTM method you suggested, but it would be impossible to create a separate model for each of the time series. Please check out the competition link; it will only take a minute to understand the data. I am still not sure if the encoder-decoder model is the right approach, but it was suggested in some notebooks.