We continue with the post from the day before yesterday, where we were discussing Azure Cache for Redis.
The Basic tier is a single-node system with no data replication and no SLA, so use the Standard or Premium tier instead.
Some data loss is to be expected, because Redis is an in-memory store and patching or failovers might occur.
The default eviction policy is volatile-lru, which affects only keys that have a TTL set.
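As a rough illustration, the following Python (redis-py) sketch writes one key with a TTL and one without; the cache host name and access key are placeholders. Under volatile-lru, only the first key is a candidate for eviction.

import redis

# Placeholder host name and access key for an Azure Cache for Redis instance.
r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

r.set("session:42", "payload", ex=3600)  # TTL of one hour, eligible for volatile-lru eviction
r.set("config:flag", "on")               # no TTL, never evicted under volatile-lru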
A performance tool called redis-benchmark.exe is available; it is recommended to run it from a Dv2-series VM.
There are statistics that show the total number of expired keys, the number of keys with timeouts, and the average timeout value.
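These statistics are exposed by the Redis INFO command. A small redis-py sketch, again with placeholder connection details: the 'stats' section carries expired_keys, and the 'keyspace' section carries, per database, the number of keys with timeouts and their average TTL.

import redis

r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

print("total expired keys:", r.info("stats").get("expired_keys"))
for db, details in r.info("keyspace").items():
    # details looks like {'keys': ..., 'expires': ..., 'avg_ttl': ...}
    print(db, "keys with timeouts:", details.get("expires"),
          "average TTL (ms):", details.get("avg_ttl"))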
If all the keys are lost, it is probably due to one of three reasons: the keys have been purged manually, the Azure Cache for Redis is set to use a non-default database, or the Redis server is unavailable.
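A minimal diagnostic sketch for those three causes, with placeholder connection details: a failed PING points at an unavailable server, while an empty default database (db 0) when the application is writing data elsewhere points at a non-default database being used.

import redis

r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True, db=0)

try:
    r.ping()                              # fails if the Redis server is unavailable
except redis.exceptions.ConnectionError as e:
    print("cache unreachable:", e)
else:
    print("keys in database 0:", r.dbsize())  # zero here while data exists in another
                                              # database suggests a non-default database is in use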
The local RedisCache wrapper connection uses connection multiplexing. The RedisCache object that clients interact with to get and set cache entries requires a connection to the cache. If each instance of that object opened a new connection, the server's resources could be depleted quickly, to the point of a denial of service. Connections must therefore be used economically, and one approach is to multiplex many operations over a shared connection.
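The multiplexed-connection pattern comes from the StackExchange.Redis client in .NET. A rough redis-py analogue, sketched below with placeholder names, is to keep a single module-level connection pool and hand every caller a client backed by it instead of opening a new connection per request.

import redis

# One shared pool for the whole process, capped so the cache cannot be flooded with sockets.
_pool = redis.ConnectionPool.from_url(
    "rediss://:<access-key>@mycache.redis.cache.windows.net:6380/0",
    max_connections=50,
)

def get_cache() -> redis.Redis:
    # Every caller reuses connections from the shared pool rather than opening its own.
    return redis.Redis(connection_pool=_pool)

Callers then simply do get_cache().get("key") or get_cache().set("key", "value") without having to think about how many connections the application holds open.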
Cache sizes range from 250 MB to 120 GB.
Replication across regions transfers roughly 63 GB in 5 to 10 minutes.
A planned failover, which swaps the primary and secondary, takes about one second; an unplanned failover performing the same swap takes about ten seconds.
The persistence option can be AOF, which logs the latest updates so they can be replayed, or RDB, which takes snapshots.
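Whether either option is active can be checked from the client; on Azure the persistence settings themselves are configured through the portal or a Resource Manager template rather than CONFIG SET. A small redis-py sketch with placeholder connection details:

import redis

r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

info = r.info("persistence")
print("AOF enabled:", bool(info.get("aof_enabled")))                    # append-only log for replay
print("last RDB snapshot status:", info.get("rdb_last_bgsave_status"))  # result of the last background save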
The cache can also be hosted on a cluster, with shards placed on different nodes.
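With clustering enabled, a cluster-aware client discovers the shards and routes each key to the node that owns its hash slot. A hedged sketch, assuming the cache accepts the OSS Redis cluster protocol and using placeholder connection details:

from redis.cluster import RedisCluster

rc = RedisCluster(host="mycache.redis.cache.windows.net", port=6380,
                  password="<access-key>", ssl=True)

rc.set("user:1001", "cached-profile")  # routed to the shard that owns this key's hash slot
print(rc.get("user:1001"))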
There are also options for private networking, firewall rules, and update schedules.