I would like to develop an RNN that can handle missing inputs by inferring these as outputs and inserting them into the input vector at the next time step. The set-up is as follows:
I have an input vector x_t. At each time point, some of the elements in x_t might be missing/unobserved, while the rest are observed. Let I_t be a binary vector with 1's for the missing elements and 0's for the observed elements.
I have a predicted output vector yhat_t of the same dimension as x_t. The actual/true output is y_t = x_{t+1}.
The objective is to predict y_t[0] from y_0[0], y_1[0], ..., y_{t-1}[0], where y_t[0] denotes the first element of the vector y_t at time t. This entry is always observed, while the other elements may be unobserved at some time points.
The missing entries of the input x_{t+1}, i.e. those where I_{t+1} = 1, should be imputed with yhat_t from the previous time step (using readout input, I assume).
The loss is a function of only the first element of y_t, not the full output vector (e.g. MAE or MSE). The missing values in the inputs will therefore be imputed in whatever way minimises this loss, which may not necessarily yield the best predictions of the other features. This is fine.
I have seen the use of readout to feed the output from a previous time step into the input vector of the next time step. However, I am not sure how to include the vector I_t to determine which elements of the input to update, since I_t is itself not an input to the model.
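To make the idea concrete, here is a rough sketch of what I have in mind, written as a manually unrolled PyTorch GRU (the library, class names, and mask convention are all illustrative assumptions, not an existing API): the mask I_t is not a model input in the usual sense, it is just used outside the cell to decide, element-wise, whether to feed the observed value or the previous step's readout.

```python
import torch
import torch.nn as nn

class ImputingRNN(nn.Module):
    """Sketch: at each step, entries of x_t flagged missing by the mask
    are replaced with the previous prediction yhat_{t-1} before the
    cell sees the input (readout-style feedback)."""

    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.cell = nn.GRUCell(n_features, n_hidden)
        self.readout = nn.Linear(n_hidden, n_features)
        self.n_hidden = n_hidden

    def forward(self, x, mask):
        # x, mask: (batch, time, n_features); mask is 1 where x is missing.
        batch, T, n_features = x.shape
        h = x.new_zeros(batch, self.n_hidden)
        yhat = x.new_zeros(batch, n_features)  # "yhat_{-1}": no prediction yet
        outputs = []
        for t in range(T):
            # Impute: keep observed entries of x_t, fill missing ones
            # with yhat_{t-1}. This is where I_t enters the computation.
            x_t = torch.where(mask[:, t].bool(), yhat, x[:, t])
            h = self.cell(x_t, h)
            yhat = self.readout(h)  # prediction of x_{t+1}, i.e. y_t
            outputs.append(yhat)
        return torch.stack(outputs, dim=1)  # (batch, time, n_features)

def first_feature_loss(outputs, x):
    # Loss only on element 0: compare yhat_t[0] with y_t[0] = x_{t+1}[0].
    return nn.functional.mse_loss(outputs[:, :-1, 0], x[:, 1:, 0])
```

Because `torch.where` is differentiable with respect to both branches, gradients flow back through the imputed entries into earlier time steps, so the imputations are trained end-to-end against the first-element loss, as described above.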
Could you please advise whether it is possible to set up such a model, and how best to go about it?
Thanks!