maint: call out Cache changes and add missing changelog entry
VinozzZ committed Dec 3, 2024
1 parent d570038 commit da6f2c9
Showing 2 changed files with 3 additions and 0 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -24,6 +24,7 @@ See full details in [the Release Notes](./RELEASE_NOTES.md).
- feat: only redistribute traces when its ownership has changed (#1411) | [Yingrong Zhao](https://github.com/vinozzZ)
- feat: Add a way to specify the team key for config fetches (experimental) (#1410) | [Kent Quirk](https://github.com/kentquirk)
- feat: send drop decisions in batch (#1402) | [Yingrong Zhao](https://github.com/vinozzZ)
- feat: use priority queue to implement trace cache (#1399) | [Mike Goldsmith](https://github.com/MikeGoldsmith)
- feat: Update in-memory trace cache to use LRU instead of ring buffer (#1359) | [Mike Goldsmith](https://github.com/MikeGoldsmith)
- feat: Log response bodies when sending events to Honeycomb (#1386) | [Mike Goldsmith](https://github.com/MikeGoldsmith)
- feat: make collector health check timeout configurable (#1371) | [Yingrong Zhao](https://github.com/vinozzZ)
2 changes: 2 additions & 0 deletions RELEASE_NOTES.md
@@ -14,6 +14,8 @@ We’re hopeful this change to memory consumption will lead to a more reliable w

However, because Refinery concentrates all of the spans on a single trace to the same node, it’s possible for a very large trace to cause memory spikes on a single node in a way that is immune to horizontal scaling. If your service generates very large traces (10,000 spans or more), you should address that issue before attempting autoscaling.

These improvements deprecate the `CacheCapacity` configuration option and the associated metrics `collector_cache_capacity` and `collect_cache_buffer_overrun`. Refinery now manages memory dynamically based on the resources available to it: it uses `AvailableMemory` together with `MaxMemoryPercentage`, or an absolute `MaxAlloc` limit, to determine the maximum memory it can consume, ensuring more efficient and adaptive memory utilization without relying on a fixed capacity setting.
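
As a rough illustration, the memory-based settings might look like the sketch below. This assumes the keys live under the `Collection` section of the Refinery configuration file, as in recent 2.x configs; the values are placeholders, not recommendations.

```yaml
Collection:
  # CacheCapacity is deprecated; the trace cache is now sized dynamically.
  # Tell Refinery how much memory the process has available...
  AvailableMemory: 4Gb
  # ...and what percentage of that memory Refinery may use before it starts
  # ejecting traces from the cache.
  MaxMemoryPercentage: 75
  # Alternatively, set an absolute ceiling instead of the two fields above.
  # MaxAlloc: 3Gb
```

With either form, Refinery adapts its cache size to the memory it is actually allowed to consume rather than to a fixed `CacheCapacity` count.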

### Trace Locality Mode (Experimental)

Since its early days, Refinery has allocated individual traces to specific Refinery instances using the span’s trace ID, with an approximately equal share allocated to each instance. Refinery forwards spans belonging to other instances so that all spans for a given trace ID are stored and handled by a single instance. This method, known as “Trace Locality” and controlled by a configuration option that defaults to concentrated, has served Refinery well for a long time, but it has two major issues:
