🔀 Cache Levels: Primary and Secondary

⚡ TL;DR (quick version)
To ease cold starts and/or help with horizontal scalability (multiple nodes, each with its own local memory cache) it's possible to set up a 2nd level. At setup time, simply pass any implementation of IDistributedCache and a serializer: the existing code does not need to change, it all just works.

When our apps restart and we are using only the 1st level (memory), the cache will need to be repopulated from scratch, since the cached values are stored only in the memory space of the apps themselves.

This problem is known as cold start and it can generate a lot of requests to our database.


When our services need to handle more and more requests we can scale vertically, meaning we can make our servers bigger. This approach though can only go so far, and after a while what we need is to scale horizontally, meaning we'll add more nodes and split the traffic among them.


But, when scaling horizontally and using only the 1st level (memory), each memory cache in each node needs to be populated independently, by requesting the same data from the database, again generating more requests to the database.

As we can see, both of these issues generate more database pressure: this is something we need to handle accordingly.

Luckily, FusionCache can help us.

🔀 2nd Level

FusionCache allows us to have 2 caching levels, transparently handled by FusionCache for us:

  • 1️⃣ Primary (Memory): a memory cache, used for very fast access to data in memory, with high data locality. You can give FusionCache any implementation of IMemoryCache or let FusionCache create one for you
  • 2️⃣ Secondary (Distributed): an optional distributed cache that serves the purpose of easing cold starts and sharing data with other nodes

Everything required to have the 2 levels communicate between them is handled transparently for us.
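For instance, providing our own IMemoryCache instance for the 1st level may look like this (a minimal sketch; the exact constructor overload may vary slightly between versions):

```csharp
using Microsoft.Extensions.Caching.Memory;
using ZiggyCreatures.Caching.Fusion;

// create our own memory cache for the 1st level
var memoryCache = new MemoryCache(new MemoryCacheOptions());

// pass it to FusionCache instead of letting it create one
var cache = new FusionCache(new FusionCacheOptions(), memoryCache);
```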

Since the 2nd level is distributed, and we know from the fallacies of distributed computing that stuff can go bad, all the issues that may happen there can be automatically handled by FusionCache to not impact the overall application, all while (optionally) logging any detail of it for further investigation.

Any implementation of the standard IDistributedCache interface will work (see below for a list of the available ones), so we can pick Redis, Memcached or any technology we like.

Because a distributed cache talks in binary data (meaning byte[]) we also need to specify a serializer: since .NET does not have a generic interface representing a binary serializer, FusionCache defines one named IFusionCacheSerializer. We simply provide an implementation of that by picking one of the existing ones, which natively support formats like JSON, MessagePack and Protobuf (see below), or create our own.
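As a sketch of what a custom implementation might look like (assuming IFusionCacheSerializer exposes paired sync/async Serialize/Deserialize methods; check the exact signatures for the version in use), here is one based on System.Text.Json:

```csharp
using System.Text.Json;
using ZiggyCreatures.Caching.Fusion.Serialization;

// a minimal custom serializer sketch based on System.Text.Json
// NOTE: the exact members of IFusionCacheSerializer may differ between versions
public class MySystemTextJsonSerializer : IFusionCacheSerializer
{
    public byte[] Serialize<T>(T? obj)
        => JsonSerializer.SerializeToUtf8Bytes(obj);

    public T? Deserialize<T>(byte[] data)
        => JsonSerializer.Deserialize<T>(data);

    public ValueTask<byte[]> SerializeAsync<T>(T? obj)
        => new(Serialize(obj));

    public ValueTask<T?> DeserializeAsync<T>(byte[] data)
        => new(Deserialize<T>(data));
}
```

In practice, one of the prebuilt serializer packages listed below is usually the better choice.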

In the end it basically boils down to 2 possible setups:

  • MEMORY ONLY (L1): FusionCache will act as a normal memory cache
  • MEMORY + DISTRIBUTED (L1+L2): if we also set up a 2nd level, FusionCache will automatically coordinate the 2 levels, gracefully handling all edge cases for a smooth experience

Of course in both cases you will also have at your disposal the added ability to enable extra features, like fail-safe, advanced timeouts and so on.

Also, if needed, we can set a different Duration specific to the distributed cache via the DistributedCacheDuration option: this way updates in the distributed cache can be picked up more frequently, in case we don't want to use a backplane for some reason.
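For example (the cache key and durations here are hypothetical, shown for illustration):

```csharp
// keep entries for 1 minute in memory (L1), but 10 minutes in the
// distributed cache (L2): each node will re-read from L2 roughly every minute
var options = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(1),
    DistributedCacheDuration = TimeSpan.FromMinutes(10)
};

cache.Set("foo", 42, options);
```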

Finally we can even execute the distributed cache operations in the background, to make things even faster: we can read more on the related docs page.
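This is controlled by the AllowBackgroundDistributedCacheOperations entry option; a minimal sketch (hypothetical cache key and duration):

```csharp
// don't wait for the distributed cache write to complete:
// the call returns as soon as the memory level is updated
var options = new FusionCacheEntryOptions
{
    Duration = TimeSpan.FromMinutes(5),
    AllowBackgroundDistributedCacheOperations = true
};

cache.Set("foo", 42, options);
```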

📢 Backplane

When using a distributed 2nd level, each local memory cache may become out of sync with the other nodes after a change: to solve this, it is suggested to also use a backplane.

All the existing code will remain the same, it's just a 1 line change at setup time.

Read here for more.
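As a sketch, with the Redis backplane package (ZiggyCreatures.FusionCache.Backplane.StackExchangeRedis) that 1 line change may look like this:

```csharp
services.AddFusionCache()
    .WithSerializer(
        new FusionCacheNewtonsoftJsonSerializer()
    )
    .WithDistributedCache(
        new RedisCache(new RedisCacheOptions { Configuration = "CONNECTION STRING" })
    )
    // the 1 line change: add a backplane
    .WithBackplane(
        new RedisBackplane(new RedisBackplaneOptions { Configuration = "CONNECTION STRING" })
    )
;
```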

🗃 Wire Format Versioning

When working with the memory cache, everything is easier: at every run of our apps or services everything starts clean, from scratch, so even if there's a change in the structure of the cache entries used by FusionCache there's no problem.

The distributed cache, instead, is a different beast: when saving a cache entry in there, that data is shared between different instances of the same applications, between different applications altogether and maybe even with different applications that are using a different version of FusionCache.

So when the structure of the cache entries needs to change to evolve FusionCache, how can this be managed?

Easy, by using an additional cache key modifier for the distributed cache, so that if and when the version of the cache entry structure changes, there will be no issues serializing or deserializing different versions of the saved data.

In practice this means that, when saving something for the cache key "foo", in reality in the distributed cache it will be saved with the cache key "v0:foo".

This has been planned from the beginning, and is the way to manage changes in the wire format used in the distributed cache between updates: it has been designed in this way specifically to support FusionCache to be updated safely and transparently, without interruptions or problems.

So what happens when there are 2 versions of FusionCache running on the same distributed cache instance, for example when two different apps share the same distributed cache and one is updated and the other is not?

Since the old version will write to the distributed cache with a different cache key than the new version, this will not create conflicts during the update, and it means that we don't need to stop all the apps and services that work on it and wipe all the distributed cache data just to do the upgrade.

At the same time though, if we have different apps and services that share the same distributed cache, we need to understand that updating only one app or service and not the others means that the updated ones will read/write using the new distributed cache keys, while the non-updated ones will keep reading/writing using the old distributed cache keys.

Again, nothing catastrophic, but something to consider.

💾 Disk Cache

In certain situations we may want some of the benefits of a 2nd level, like better cold starts (when the memory cache is initially empty), but at the same time we don't want to have a separate actual distributed cache to handle, or we simply cannot have one: a good example may be a mobile app, where everything should be self-contained.

In those situations we may want a distributed cache that is "not really distributed", something like an implementation of IDistributedCache that reads and writes directly to one or more local files.

Is this possible?

Yes, totally, and there's a dedicated page to learn more.

↩️ Auto-Recovery

Since the distributed cache is a distributed component (just like the backplane), most of the transient errors that may occur on it are also covered by the Auto-Recovery feature.

We can read more on the related docs page.

📦 Packages

There is a variety of existing IDistributedCache implementations available; just pick one:

| Package Name | License | Version |
|:---|:---:|:---:|
| **Microsoft.Extensions.Caching.StackExchangeRedis** <br/> The official Microsoft implementation for Redis | MIT | NuGet |
| **Microsoft.Extensions.Caching.SqlServer** <br/> The official Microsoft implementation for SQL Server | MIT | NuGet |
| **Microsoft.Extensions.Caching.Cosmos** <br/> The official Microsoft implementation for Cosmos DB | MIT | NuGet |
| **MongoDbCache** <br/> An implementation for MongoDB | MIT | NuGet |
| **MarkCBB.Extensions.Caching.MongoDB** <br/> Another implementation for MongoDB | Apache v2 | NuGet |
| **EnyimMemcachedCore** <br/> An implementation for Memcached | Apache v2 | NuGet |
| **NeoSmart.Caching.Sqlite** <br/> An implementation for SQLite | MIT | NuGet |
| **AWS.AspNetCore.DistributedCacheProvider** <br/> An implementation for AWS DynamoDB | Apache v2 | NuGet |
| **Microsoft.Extensions.Caching.Memory** <br/> An in-memory implementation | MIT | NuGet |

As for an implementation of IFusionCacheSerializer, pick one of these:

| Package Name | License | Version |
|:---|:---:|:---:|
| **ZiggyCreatures.FusionCache.Serialization.NewtonsoftJson** <br/> A serializer, based on Newtonsoft Json.NET | MIT | NuGet |
| **ZiggyCreatures.FusionCache.Serialization.SystemTextJson** <br/> A serializer, based on the new System.Text.Json | MIT | NuGet |
| **ZiggyCreatures.FusionCache.Serialization.NeueccMessagePack** <br/> A MessagePack serializer, based on the most used MessagePack serializer on .NET | MIT | NuGet |
| **ZiggyCreatures.FusionCache.Serialization.ProtoBufNet** <br/> A Protobuf serializer, based on protobuf-net, one of the most used Protobuf serializers on .NET | MIT | NuGet |
| **ZiggyCreatures.FusionCache.Serialization.CysharpMemoryPack** <br/> A serializer based on MemoryPack, the uber fast serializer by Neuecc | MIT | NuGet |
| **ZiggyCreatures.FusionCache.Serialization.ServiceStackJson** <br/> A serializer based on the ServiceStack JSON serializer | MIT | NuGet |

👩‍💻 Example

As an example let's use FusionCache with Redis as a distributed cache and Newtonsoft Json.NET as the serializer:

```
PM> Install-Package ZiggyCreatures.FusionCache
PM> Install-Package ZiggyCreatures.FusionCache.Serialization.NewtonsoftJson
PM> Install-Package Microsoft.Extensions.Caching.StackExchangeRedis
```

Then, to create and setup the cache manually, we can do this:

```csharp
// INSTANTIATE A REDIS DISTRIBUTED CACHE
var redis = new RedisCache(new RedisCacheOptions() { Configuration = "CONNECTION STRING" });

// INSTANTIATE THE FUSION CACHE SERIALIZER
var serializer = new FusionCacheNewtonsoftJsonSerializer();

// INSTANTIATE FUSION CACHE
var cache = new FusionCache(new FusionCacheOptions());

// SETUP THE DISTRIBUTED 2ND LEVEL
cache.SetupDistributedCache(redis, serializer);
```

If instead we prefer a DI (Dependency Injection) approach, we should simply do this:

```csharp
services.AddFusionCache()
    .WithSerializer(
        new FusionCacheNewtonsoftJsonSerializer()
    )
    .WithDistributedCache(
        new RedisCache(new RedisCacheOptions { Configuration = "CONNECTION STRING" })
    )
;
```

Easy peasy.
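Either way, the usage code stays exactly the same with or without the 2nd level (the cache key, Product type and factory method here are hypothetical, and overload shapes may vary slightly between versions):

```csharp
// works identically in MEMORY ONLY (L1) and MEMORY + DISTRIBUTED (L1+L2) mode:
// FusionCache checks L1, then L2, then calls the factory on a miss
var product = await cache.GetOrSetAsync<Product>(
    "product:123",
    async _ => await GetProductFromDbAsync(123),
    TimeSpan.FromMinutes(5)
);
```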