This post continues the previous discussion of the modus operandi of Azure Cache for Redis.
Specifically, we called out the following:
The Basic tier is a single-node system with no data replication and no SLA, so use the Standard or Premium tier instead.
Some data loss is expected because the cache is an in-memory store and patching or failover can occur.
The default eviction policy is volatile-lru, which affects only keys that have a TTL value.
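As a minimal sketch of why this matters (assuming the redis-py client and an illustrative endpoint), keys written with a TTL are eviction candidates under volatile-lru, while keys written without one are never evicted:

```python
# Sketch assuming redis-py; host, port, and key names are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379)

# A key written with a TTL: under volatile-lru it is an eviction candidate.
r.set("session:42", "cached-profile", ex=3600)

# A key written without a TTL: volatile-lru will never evict it.
r.set("settings:site", "dark-mode")

print(r.ttl("session:42"))     # seconds remaining, e.g. 3600
print(r.ttl("settings:site"))  # -1 means the key has no timeout
```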
A performance tool called redis-benchmark.exe is available; it is recommended to run it from a VM in the Dv2 series.
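A hypothetical invocation might look like the following (the cache name, access key, and the use of the non-TLS port 6379 are placeholders and assumptions; the flags are standard redis-benchmark options for request count, parallel clients, and payload size):

```
redis-benchmark -h mycache.redis.cache.windows.net -p 6379 -a <access-key> -n 100000 -c 50 -d 1024
```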
There are statistics showing the total number of expired keys, the number of keys with timeouts, and the average timeout value.
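These figures can be read from the INFO command; a minimal sketch with redis-py (connection details assumed):

```python
# Sketch assuming redis-py; host and port are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379)

# The Stats section carries the running total of keys expired so far.
print("expired keys:", r.info("stats")["expired_keys"])

# The Keyspace section carries, per database, the number of keys that
# have timeouts ("expires") and their average TTL in milliseconds.
for db, ks in r.info("keyspace").items():
    print(db, "keys with timeouts:", ks["expires"], "avg ttl (ms):", ks["avg_ttl"])
```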
If all the keys appear to be lost, it is probably due to one of three reasons: the keys were purged manually, the client is pointed at a non-default database in the Azure Cache for Redis, or the Redis server is unavailable.
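The second cause is easy to reproduce: a client pointed at a different logical database sees an empty keyspace even though the data is intact. A minimal sketch with redis-py (connection details assumed):

```python
# Sketch assuming redis-py; host and port are illustrative.
import redis

r0 = redis.Redis(host="localhost", port=6379, db=0)  # default database
r1 = redis.Redis(host="localhost", port=6379, db=1)  # non-default database

r1.set("greeting", "hello")   # written into database 1
print(r0.exists("greeting"))  # 0: invisible from the default database
print(r1.exists("greeting"))  # 1: the data was never lost

# INFO keyspace lists every logical database that actually holds keys.
print(r1.info("keyspace"))
```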
Traffic is always routed to the designated primary, which is backed by a virtual machine hosting the Redis server. Container-based and arbitrary cluster-based scale-out of the Redis servers is not supported. Even when there are multiple servers, only one is the primary and the others are replicas. Clustered caches have multiple shards, each with its own distinct primary and replica nodes.
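A client can confirm which node it has reached by reading the replication section of INFO; a minimal sketch with redis-py (connection details assumed):

```python
# Sketch assuming redis-py; host and port are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379)
repl = r.info("replication")

print(repl["role"])                     # "master" on the primary node
print(repl.get("connected_slaves", 0))  # replica count, as seen from the primary
```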
Failover occurs when the primary goes offline and a replica is promoted to primary. Clients should handle the effects of a failover with retry and backoff, as sketched below.
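Here is a minimal sketch of such retry-and-backoff logic, hand-rolled with redis-py for clarity (connection details and timings are illustrative; redis-py also ships its own retry helpers):

```python
# Sketch assuming redis-py; host, port, and timings are illustrative.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def get_with_retry(key, attempts=5, base_delay=0.1):
    """GET a key, retrying transient connection errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return r.get(key)
        except (redis.ConnectionError, redis.TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; let the caller see the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

value = get_with_retry("session:42")
```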
We continue next with connection multiplexing. The RedisCache object that clients interact with to get and set cache entries requires a connection to the cache. If each instance of the object opened a new connection, server resources could be depleted quickly, to the point of a self-inflicted denial of service. Connections therefore need to be used economically, and one approach is to multiplex them.
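The RedisCache object suggests a .NET client such as StackExchange.Redis, whose ConnectionMultiplexer is the canonical form of this pattern. As a language-neutral sketch of the same discipline in Python with redis-py (which shares a connection pool rather than multiplexing one socket), the key point is to create one client once and reuse it everywhere:

```python
# Sketch assuming redis-py, which pools connections rather than multiplexing
# a single socket; the discipline is the same: create one client once and
# reuse it, instead of opening a connection per request.
import redis

_client = None

def get_cache():
    """Return the process-wide shared client, creating it on first use."""
    global _client
    if _client is None:
        _client = redis.Redis(
            host="localhost",    # illustrative endpoint
            port=6379,
            max_connections=50,  # cap the pool so the server is not flooded
        )
    return _client

# Every caller reuses the same pooled connections.
get_cache().set("k", "v")
print(get_cache().get("k"))
```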