Provider is not consuming the whole input stream request causing the connection abort by Jetty #108
Comments
Hmmh. Jackson not reading the whole content, just what it needs, is a feature (as a general note). Jackson 2.9 finally introduced a feature to change that, but one challenge is whether enabling it would be safe to do in a 2.9.8 patch or not.
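To illustrate the lazy-read behavior described here, a minimal standalone sketch. The 2.9 feature is not named in the comment above; `DeserializationFeature.FAIL_ON_TRAILING_TOKENS` (added in Jackson 2.9) is shown here as the likely candidate, which is an assumption:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TrailingTokensDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Default behavior: Jackson stops after the first complete value and
        // leaves whatever follows in the input unread.
        Integer lenient = mapper.readValue("42 {\"junk\": true}", Integer.class);
        System.out.println(lenient); // prints 42, trailing content ignored

        // Assumption: the 2.9 feature meant above is FAIL_ON_TRAILING_TOKENS.
        // With it enabled, the same input is rejected because of the trailing content.
        ObjectMapper strict = mapper.copy()
                .enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);
        try {
            strict.readValue("42 {\"junk\": true}", Integer.class);
        } catch (Exception e) {
            System.out.println("Rejected trailing tokens: " + e.getMessage());
        }
    }
}
```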
We think we ran into this issue as well: with the default settings, you can (relatively rarely) observe this failure. It seems the fix is to enable the feature mentioned above. @cowtowncoder, should that be enabled by default?
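For context, a hedged sketch of how a stricter ObjectMapper could be wired into the Jackson JAX-RS provider, assuming the feature in question is `DeserializationFeature.FAIL_ON_TRAILING_TOKENS` and a `javax.ws.rs` environment; the actual change adopted by this project may differ:

```java
import java.util.Collections;
import java.util.Set;
import javax.ws.rs.core.Application;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider;

public class StrictJsonApplication extends Application {
    @Override
    public Set<Object> getSingletons() {
        // Assumption: FAIL_ON_TRAILING_TOKENS is the feature being discussed.
        ObjectMapper mapper = new ObjectMapper()
                .enable(DeserializationFeature.FAIL_ON_TRAILING_TOKENS);
        // Back the provider with the strict mapper so trailing non-whitespace
        // content in a request body fails deserialization instead of being
        // silently left unread.
        return Collections.singleton((Object) new JacksonJsonProvider(mapper));
    }
}
```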
Interesting. I'd be happy to help with a PR; otherwise I may or may not have time to tackle this soon -- but with some help it could make it into the 2.15 release. Also needs to go in
Thanks @cowtowncoder, please consider:
Fixed by #170 as per @stevenschlansker's awesome contribution.
Ok. I think this works for most (common) cases. Although I realized that, in theory, since this is "fail fast", it does not necessarily read all content -- it will just either decode the one token that follows, or throw an exception on encountering unexpected content. But I think in most cases there is at most whitespace after the closing end marker; or, perhaps, an empty chunk? If so, this will resolve the problem.
As I understand it, when you read JSON from a JAX-RS resource, it is supposed to be the entire content. So the trailing data should be empty, or maybe a newline if you have pretty-printing on. If there's anything else, an exception will be thrown, so the user will have to figure out whether to just turn off this feature or implement their own strategy for handling any trailing data.
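A minimal sketch of the fail-fast check being described: decode one value, then ask the parser for one more token so that trailing whitespace (or clean end-of-input) is consumed, and anything else is rejected. This is illustrative only and not necessarily the exact code in #170; the helper name `readFully` is hypothetical.

```java
import java.io.InputStream;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FailFastRead {
    // Hypothetical helper: decode one value, then verify nothing but
    // whitespace follows it in the request body.
    static <T> T readFully(ObjectMapper mapper, InputStream in, Class<T> type) throws Exception {
        try (JsonParser p = mapper.getFactory().createParser(in)) {
            T value = mapper.readValue(p, type);
            // nextToken() skips trailing whitespace; null means clean end-of-input.
            JsonToken trailing = p.nextToken();
            if (trailing != null) {
                throw new IllegalStateException("Unexpected trailing content: " + trailing);
            }
            return value;
        }
    }
}
```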
Agreed @stevenschlansker . Was only thinking of the case of low-level handling -- but then again, yes, such cases (where there is more content, non-whitespace) will become error cases so sub-optimal buffer handling is less of an issue. Since there is a problem to fix anyway. |
When chunked transfer encoding is used and the client has sent the whole JSON in the next-to-last chunk, the provider returns the parsed object immediately, without waiting for the rest of the message (the terminating chunk). As a result, the response may be generated before the last chunk reaches the server, which Jetty treats as an erroneous situation and aborts the connection.
In most cases the response is delivered to the client before the connection is closed, but sometimes the connection is closed first and the client reports an error.
The details of the issue can be found in the jetty tracker: jetty/jetty.project#3027
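As a hedged illustration of a possible application-level workaround (not the fix adopted in #170): a JAX-RS `ReaderInterceptor` that drains whatever the Jackson provider leaves unread, so the container sees the request body fully consumed. Whether this is safe depends on the provider not having closed the underlying stream; the class name and behavior below are assumptions.

```java
import java.io.IOException;
import java.io.InputStream;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.ReaderInterceptor;
import javax.ws.rs.ext.ReaderInterceptorContext;

@Provider
public class DrainRemainingInputInterceptor implements ReaderInterceptor {
    @Override
    public Object aroundReadFrom(ReaderInterceptorContext context) throws IOException {
        InputStream in = context.getInputStream();
        // Let the Jackson provider deserialize the entity first.
        Object entity = context.proceed();
        // Then consume any bytes it left behind (e.g. the terminating chunk),
        // so the container does not abort the connection over an unread body.
        byte[] skipBuffer = new byte[4096];
        while (in.read(skipBuffer) != -1) {
            // discard remaining input
        }
        return entity;
    }
}
```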