Improve memory usage of the new backend #74740
Conversation
    InnerStorage::Indexed { map, .. } => map.get(&key.index()),
}

fn get_map(&self, ty: CachedDataItemType) -> Option<&CachedDataItemStorage> {
    self.map.iter().find(|m| m.ty() == ty)
}
Technically this is still constant time, but it might be beneficial to not have to always iterate through the list here?
We only have a maximum of 30 different types. According to https://github.com/yegor256/micromap#benchmark, using a Vec is faster. That's also the idea behind the AutoMap we have. It also uses less memory.
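The trade-off can be sketched as follows. This is a hypothetical example (the type and field names are illustrative, not the actual turbo-tasks code): with at most ~30 possible type tags, a linear scan over a small Vec of per-type storages is cheap and avoids the per-instance overhead of a hash map.

```rust
// Hypothetical sketch (not the real turbo-tasks types): looking up a
// per-type storage by scanning a small Vec instead of hashing.
#[derive(Clone, Copy, PartialEq, Debug)]
enum ItemType {
    Child,
    Output,
    Dependency,
}

struct TypedStorage {
    ty: ItemType,
    items: Vec<u32>, // stand-in for the strongly-typed per-type storage
}

// Linear scan: O(number of types in use), which is bounded by ~30 and
// in practice much smaller, so it can beat a HashMap's hashing overhead.
fn get_storage(storages: &[TypedStorage], ty: ItemType) -> Option<&TypedStorage> {
    storages.iter().find(|s| s.ty == ty)
}

fn main() {
    let storages = vec![
        TypedStorage { ty: ItemType::Child, items: vec![1, 2] },
        TypedStorage { ty: ItemType::Output, items: vec![3] },
    ];
    assert!(get_storage(&storages, ItemType::Child).is_some());
    assert!(get_storage(&storages, ItemType::Dependency).is_none());
    println!("ok");
}
```

A Vec also stores its entries contiguously with no bucket or hashing state, which is where the memory savings come from.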
@sokra As a micro-optimization, if we create the maps for the most common CachedDataItemTypes in InnerStorage::new, we can ensure they always end up at the start of the list. However, this may naturally tend to happen anyway (the most frequently accessed types are likely to get initialized first).
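The suggestion above could look roughly like this. A hypothetical sketch with made-up type names, not the actual implementation:

```rust
// Hypothetical sketch: pre-create storages for the most common item
// types in the constructor so linear lookups for them hit the front
// of the list.
#[derive(Clone, Copy, PartialEq)]
enum ItemType {
    Child,
    Output,
    Cell,
}

struct TypedStorage {
    ty: ItemType,
    items: Vec<u32>, // stand-in for the per-type storage
}

struct InnerStorage {
    map: Vec<TypedStorage>,
}

impl InnerStorage {
    fn new() -> Self {
        InnerStorage {
            // Common types are seeded first, so they sit at indices 0 and 1.
            map: vec![
                TypedStorage { ty: ItemType::Child, items: Vec::new() },
                TypedStorage { ty: ItemType::Output, items: Vec::new() },
            ],
        }
    }

    fn get_map(&self, ty: ItemType) -> Option<&TypedStorage> {
        self.map.iter().find(|s| s.ty == ty)
    }
}

fn main() {
    let storage = InnerStorage::new();
    assert!(storage.get_map(ItemType::Child).is_some());
    assert!(storage.get_map(ItemType::Cell).is_none()); // rare type: appended only when used
    println!("ok");
}
```

Note the downside: pre-created storages occupy some memory even for tasks that never use those types, which cuts against the goal of not paying for unused types.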
As a follow-up PR I want to pull "common" types out of this Vec and just give them a native field in InnerStorage. That would save about 240 bytes per task, which is about 11% of total memory.
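That follow-up idea could be sketched like this. The field names and types here are hypothetical placeholders:

```rust
use std::collections::HashMap;

type TaskId = u32;

// Hypothetical sketch of the follow-up: hoist "common" item types out
// of the Vec into dedicated fields. A direct field skips both the Vec
// lookup and the per-entry enum discriminant.
#[derive(Default)]
struct InnerStorage {
    children: HashMap<TaskId, ()>, // common type: native field
    output: Option<String>,        // common type: native field
    rare: Vec<(u8, Vec<u32>)>,     // everything else stays in the Vec
}

fn main() {
    let mut s = InnerStorage::default();
    s.children.insert(7, ());
    s.output = Some("done".to_string());
    assert_eq!(s.children.len(), 1);
    assert!(s.rare.is_empty()); // rare types allocate nothing when unused
    println!("ok");
}
```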
There are some clippy warnings.
What?
Improves memory usage by changing the way data is stored in the new backend. Instead of keeping a single map from enum CachedDataItemKey to enum CachedDataItemValue, it uses a Vec of enum CachedDataItemStorage, which keeps one storage per enum CachedDataItemType. Each storage is strongly typed and e.g. stores a map from TaskId to (), which makes it more memory efficient while still not wasting memory for unused types.

It also calls shrink_to_fit after task completion and shrink_amortized after removing items, to reduce unnecessary capacity.

Closes PACK-3703
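The layout described above can be sketched as follows. The variant names and value types are hypothetical stand-ins, not the actual turbo-tasks definitions:

```rust
use std::collections::HashMap;

type TaskId = u32;

// Hypothetical sketch: one strongly-typed storage per item type,
// instead of one map of key-enum -> value-enum. A `()` value occupies
// zero bytes, so HashMap<TaskId, ()> is as compact as a set, while a
// key/value-enum map would pay for the largest enum variant on every
// entry.
enum CachedDataItemStorage {
    Children(HashMap<TaskId, ()>),
    Output(HashMap<TaskId, String>),
}

struct InnerStorage {
    map: Vec<CachedDataItemStorage>, // only types actually in use
}

impl InnerStorage {
    // Called after task completion: drop any unused capacity.
    fn shrink_to_fit(&mut self) {
        for storage in &mut self.map {
            match storage {
                CachedDataItemStorage::Children(m) => m.shrink_to_fit(),
                CachedDataItemStorage::Output(m) => m.shrink_to_fit(),
            }
        }
        self.map.shrink_to_fit();
    }
}

fn main() {
    let mut inner = InnerStorage {
        map: vec![CachedDataItemStorage::Children(HashMap::from([(1, ()), (2, ())]))],
    };
    inner.shrink_to_fit();
    println!("ok");
}
```

The Vec itself only grows when a new item type is first used by a task, so a task that only ever stores children pays for exactly one storage.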