Increase default max allowed String value length from 20 megs to 1 gig #1082
The limits are configurable. I'm afraid you will need to configure StreamReadConstraints and use numbers that suit your use case. We can't just keep changing the limits in every release.
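For reference, a minimal sketch of the configuration being described here, using the StreamReadConstraints builder API introduced in jackson-core 2.15 (the 1_000_000_000 figure below is simply the value proposed in this issue, not a recommendation):

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.json.JsonMapper;

public class ConfigureLimits {
    public static void main(String[] args) throws Exception {
        // Raise the maximum allowed JSON String value length from the
        // 2.15 default (20 megs, per this issue) to the proposed 1 gig,
        // for parsers created by this factory only.
        StreamReadConstraints constraints = StreamReadConstraints.builder()
                .maxStringLength(1_000_000_000)
                .build();

        JsonFactory factory = JsonFactory.builder()
                .streamReadConstraints(constraints)
                .build();

        // An ObjectMapper built on this factory inherits the raised limit.
        ObjectMapper mapper = JsonMapper.builder(factory).build();

        // String values longer than the stock limit now parse without a
        // StreamConstraintsException being thrown.
        System.out.println(mapper.readTree("\"hello\"").asText());
    }
}
```

Users unaffected by the limit need no configuration at all; only code paths that parse unusually large String values need a factory built this way.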
I think your points are slightly contradictory; if users with specific use cases configure the limit, they would be unaffected by any change to the default limit in a subsequent release. Given that this limit is a breaking change, and there isn't an associated CVE, personally I don't think it should have been introduced in a minor release of the library in the first place. In a world where even web browsers can read over 100 MB of JSON (https://www.joshzeigler.com/technology/web-development/how-big-is-too-big-for-json), I remain of the opinion that having a backend library default to 20 MB is bizarre, and that you'll get spurious bug reports as users upgrade.
Will not change to 1 gig, closing. If someone truly needs to have JSON STRING VALUES -- not JSON Documents per se -- above 20 megs, they need to configure limits appropriately.
Sorry, I erroneously thought this was a limit on the length of the input JSON string. Does Jackson's performance degrade significantly more than would be expected with larger strings compared to many smaller strings?
Jackson does not read the whole input document into memory, assuming the input source is incremental (a stream or reader rather than a fully pre-loaded String). As to Big Strings: that's more a general JVM challenge than anything Jackson-specific. But why would large arrays be problematic (over a set of smaller ones that add up to the same memory amount)?
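The token-at-a-time reading described above can be illustrated with jackson-core's streaming JsonParser API; a minimal sketch (the sample input is made up for illustration):

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import java.io.StringReader;

public class StreamingRead {
    public static void main(String[] args) throws Exception {
        JsonFactory factory = new JsonFactory();
        // The parser advances one token at a time; the document as a whole
        // is never materialized in memory. Only the current token's value
        // is held, which is why the limit applies to individual String
        // values rather than to total document size.
        try (JsonParser p = factory.createParser(
                new StringReader("[\"a\",\"b\",\"c\"]"))) {
            while (p.nextToken() != null) {
                if (p.currentToken() == JsonToken.VALUE_STRING) {
                    System.out.println(p.getText());
                }
            }
        }
    }
}
```

Note that each individual String value still has to fit in memory when getText() is called, which is exactly where a per-String limit bites.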
The severity of this issue still seems a bit fuzzy to me, compared to the more concrete fact that this was a breaking change in a minor release (technically even going to a 1 gig limit would still be breaking, although only a small fraction of users -- those working with large JSON strings -- would be impacted). Personally I think this should have required a version bump to v3, given how fundamental JSON reading is to the library. Introducing new limits with a default value of unlimited avoids any backward compatibility issues. It may be worth documenting the default limits on https://github.com/FasterXML/jackson-core (or at least in the wiki) in the same way as https://github.com/ngs-doo/dsl-json does (which incidentally also has a string limit, albeit a higher one).
There are all kinds of changes that turn out to be breaking changes to someone, somewhere. While I empathize with breakages (and would not want to cause any), I think this issue is overblown: most users and developers are not affected in any way. But the ones who are tend to be the most vocal. I agree that perhaps this particular limit would have been safest to leave undefined -- in practice there is already a 2 gig limit, since Java Strings and arrays are indexed by int. Now that the limit has been imposed, however, I see no benefit from backpedaling. The issue exists in some 2.15 versions, and 2.15.2 will represent the new normal. +1 for documentation changes.
The expected out-of-the-box behaviour of Jackson is that it can read JSON, and the JSON specification does not specify a maximum length. Violating this expectation should only be done with extremely good reason. (EDIT: For balance, RFC 7159 does say "An implementation may set limits on the size of texts that it accepts".)
The previous security-related limits kick in when JSON is being used in odd ways, e.g. extreme nesting depth, where there would typically be a more idiomatic way to write the JSON. By contrast, having a large amount of data will inevitably result in a correspondingly large JSON file.
Having the ability to set a maximum string length is a good change, but it's not obvious to me why the default should differ from the JSON specification if there's not a specific security concern.
As a compromise I propose 1 gig, based rather arbitrarily on #1014 (comment) and the limit in MySQL (https://dev.mysql.com/blog-archive/how-large-can-json-documents-be/), but I think a default limit of unlimited is defensible. For more than a gigabyte a user would almost certainly want to stream the JSON, although as computers continue to gain memory, the amount of JSON that can reasonably be read into an in-memory tree continues to increase.
This would make the limit more analogous to the other security limits, in that it would only kick in for an unusual use case. Loading 100 MB of JSON on a machine with over 10 gigabytes of RAM doesn't seem so unusual to me...