Hi,
Any plans on adding state to the encoder/decoder? The idea is that realistically you want to predict P(answer_n | question_n, answer_n-1, question_n-1, ...), not one (question, answer) pair at a time as the original translation model does.
That's an interesting idea.
How do we make this model remember some facts from previous dialogue?
I guess a neural Turing machine might be a good candidate.
There are many ways to maintain some memory of the sequence of inputs, but the easiest
is just to keep the LSTM/GRU state between calls to model.step() and not reset it.
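A minimal sketch of that idea (not this repo's `model.step()` API, just an illustrative PyTorch GRU encoder): the hidden state is stored on the module and reused on the next call, so turn n is conditioned on turns 1..n-1, and only an explicit reset starts a fresh conversation.

```python
import torch
import torch.nn as nn

class StatefulEncoder(nn.Module):
    """GRU encoder that carries its hidden state across dialogue turns."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.hidden = None  # persists between forward() calls

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        # Passing the previous hidden state instead of None keeps the memory.
        out, self.hidden = self.gru(x, self.hidden)
        return out

    def reset(self):
        # Call only when a new conversation starts.
        self.hidden = None


encoder = StatefulEncoder(vocab_size=1000)
turn1 = torch.randint(0, 1000, (1, 5))  # first question
turn2 = torch.randint(0, 1000, (1, 7))  # second question
encoder(turn1)
encoder(turn2)   # conditioned on turn1 via the carried hidden state
encoder.reset()  # start a fresh dialogue
```

During training you would normally detach the carried state between turns to avoid backpropagating through the entire dialogue history; for generation at inference time, simply not resetting it is enough.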