High memory allocation in prometheus.MakeLabelPairs #1702
Hey @jkroepke! I'm here to help you with bugs, questions, and becoming a contributor. Let's tackle this issue together!

The MakeLabelPairs function preallocates a slice for label pairs to minimize memory allocations during appends, but the creation of each dto.LabelPair struct still allocates on every call. Strategies that avoid rebuilding these pairs on every scrape can help mitigate the memory allocation issues you're experiencing.
Yup, that's a known problem for the NewConst* flow. At some point I tried to optimize the cadvisor code to essentially cache the metrics and update them in place, which is not trivial for a bigger codebase: google/cadvisor#2974

There's definitely room to either build a mutable caching layer for the efficiency of bigger exporters, or to make LabelPairs more efficient in the current NewConst* API. For the latter, we could also try a global (or local) set of cached LabelPairs that gets reused across scrapes.
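To make the second idea concrete, here is a minimal sketch of what such a label-pair cache could look like. The cache type, key scheme, and function names are assumptions for illustration, not an existing client_golang API:

```go
package labelpaircache

import (
	"sync"

	dto "github.com/prometheus/client_model/go"
	"google.golang.org/protobuf/proto"
)

// labelPairCache interns *dto.LabelPair values so that repeated scrapes
// reuse the same allocation for identical name/value combinations.
// Illustrative sketch only; not part of client_golang.
type labelPairCache struct {
	mu    sync.Mutex
	pairs map[[2]string]*dto.LabelPair
}

func newLabelPairCache() *labelPairCache {
	return &labelPairCache{pairs: make(map[[2]string]*dto.LabelPair)}
}

// get returns a cached pair for (name, value), allocating it only on first use.
func (c *labelPairCache) get(name, value string) *dto.LabelPair {
	key := [2]string{name, value}
	c.mu.Lock()
	defer c.mu.Unlock()
	if lp, ok := c.pairs[key]; ok {
		return lp
	}
	lp := &dto.LabelPair{Name: proto.String(name), Value: proto.String(value)}
	c.pairs[key] = lp
	return lp
}
```

Reusing a pair this way is only safe if callers treat the cached *dto.LabelPair as immutable after creation; whether such a cache should live inside MakeLabelPairs or in the exporters themselves is exactly the open design question here.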
That's true, the LabelPair struct itself could be smaller. However, we DO support and intend to keep supporting protobuf metric scraping, so that part has to work (marshalling/unmarshalling). Maybe we could get rid of unknown field support for this type.
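For context, the generated type in prometheus/client_model looks roughly like this (reconstructed from typical protoc-gen-go output, so treat it as an approximation rather than a verbatim copy):

```go
package sketch

import "google.golang.org/protobuf/runtime/protoimpl"

// Approximate shape of dto.LabelPair as generated by protoc-gen-go
// in github.com/prometheus/client_model/go.
type LabelPair struct {
	state         protoimpl.MessageState  // internal protobuf message state
	sizeCache     protoimpl.SizeCache     // cached marshalled size
	unknownFields protoimpl.UnknownFields // retained unknown fields for round-tripping

	Name  *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
	Value *string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
}
```

Only Name and Value carry metric data; the protoimpl bookkeeping fields (including the unknown-fields storage mentioned above) are the part that could potentially be trimmed.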
While investigating a memory leak in windows_exporter, I figured out that there is a considerable memory allocation rate in prometheus.MakeLabelPairs.
windows_exporter (like node_exporter) generates a high number of ConstMetrics on each scrape. The process collector exposes around 6200 metrics on each call, resulting in 6200 MustNewConstMetric calls per scrape.
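For illustration, the pattern looks roughly like this (the collector, metric, and helper names below are made up, not the actual windows_exporter code):

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// procInfo is a stand-in for per-process data gathered by the exporter.
type procInfo struct {
	name, pid  string
	cpuSeconds float64
}

// listProcesses is a placeholder for the real per-process data source.
func listProcesses() []procInfo { return nil }

// processCollector is a hypothetical collector showing the
// ConstMetric-per-scrape pattern used by windows_exporter/node_exporter.
type processCollector struct {
	cpuDesc *prometheus.Desc
}

func newProcessCollector() *processCollector {
	return &processCollector{
		cpuDesc: prometheus.NewDesc(
			"process_cpu_seconds_total", // hypothetical metric name
			"CPU time consumed by the process.",
			[]string{"process", "pid"}, nil,
		),
	}
}

func (c *processCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.cpuDesc
}

func (c *processCollector) Collect(ch chan<- prometheus.Metric) {
	// Every scrape rebuilds each metric from scratch. MustNewConstMetric
	// calls MakeLabelPairs internally, so with ~6200 metrics every scrape
	// allocates ~6200 fresh sets of *dto.LabelPair values.
	for _, p := range listProcesses() {
		ch <- prometheus.MustNewConstMetric(
			c.cpuDesc, prometheus.CounterValue, p.cpuSeconds, p.name, p.pid,
		)
	}
}

func main() {
	prometheus.MustRegister(newProcessCollector())
}
```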
I ask myself whether this can be optimized somehow. The heap dump was generated after 1 day of runtime with a 5 second scrape interval (to simulate a long-running scenario).
The dto.LabelPair struct contains other structs related to protobuf. I am not sure if they are needed; Prometheus 2.0 does not support protobuf metrics scraping, so it feels like an expensive leftover.

Flame Graph:
Source View:
Heap Dump via pprof: memory_profile_windows.pb.gz