
Maintaining state between predictions #14

Open
gidim opened this issue Mar 16, 2017 · 2 comments

Comments

gidim commented Mar 16, 2017

Hi,
Any plans on adding state to the encoder/decoder? The idea is that realistically you want to predict P(answer_n | question_n, answer_{n-1}, question_{n-1}, ...), not one question-answer pair at a time the way the original translation model does.

Marsan-Ma-zz (Owner) commented

That's an interesting idea. How do we make this model remember facts from the previous dialogue? I guess a neural Turing machine might be a good candidate.

gidim commented Mar 25, 2017

There are many ways to maintain some memory of the sequence of inputs, but the easiest is simply to keep the LSTM/GRU state between calls to model.step() rather than resetting it.
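For concreteness, here's a minimal sketch of that pattern (my own illustration, not this repo's actual model.step() signature; the StatefulRNN class and its methods are hypothetical): the hidden state lives on the model object and carries over between calls, and is only zeroed when a new conversation starts.

```python
# Minimal sketch (hypothetical names, not this repo's API): a plain-NumPy
# RNN cell whose hidden state persists across step() calls unless the
# caller explicitly resets it.
import numpy as np

class StatefulRNN:
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # Simple tanh-RNN weights; a GRU/LSTM adds gates, but the
        # state-carrying pattern is identical.
        self.W_x = rng.normal(0, 0.1, (hidden_size, input_size))
        self.W_h = rng.normal(0, 0.1, (hidden_size, hidden_size))
        self.b = np.zeros(hidden_size)
        self.h = np.zeros(hidden_size)  # persistent hidden state

    def step(self, x):
        # Key point: start from self.h (carried over from the previous
        # call) instead of a fresh zero vector.
        self.h = np.tanh(self.W_x @ x + self.W_h @ self.h + self.b)
        return self.h

    def reset(self):
        # Only reset when a new, unrelated conversation starts.
        self.h = np.zeros_like(self.h)

rnn = StatefulRNN(input_size=4, hidden_size=8)
turn1 = rnn.step(np.ones(4))   # encodes turn 1
turn2 = rnn.step(np.ones(4))   # still conditioned on turn 1's state
rnn.reset()                    # begin a fresh dialogue
```

The same pattern applies to an LSTM, where the carried state is the (h, c) pair; the only change from the usual per-prediction setup is skipping the zero-initialization between calls.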
