Tuesday, June 1, 2021

Using Azure Cache for Redis

Introduction: This article demonstrates a sample program that uses Azure Cache for Redis with a linked server for geo-replication, which helps with high availability. 

Sample code: 

using System; 
using System.Collections.Generic; 
using System.Text; 
using Microsoft.Azure.Management.Redis; 
using Microsoft.Azure.Management.Redis.Models; 
using Microsoft.Azure.Services.AppAuthentication; 
using Microsoft.Extensions.Configuration; 
using Microsoft.Rest; 

namespace Redis 
{ 
    class Program 
    { 
        public static String subscriptionKey = ""; 
        public static String subscriptionId = ""; 
        public static String tenantId = ""; 
        public static String cachePrimary = ""; 
        public static String cacheLink = ""; 
        public static String resourceGroup = ""; 
        public static String region = ""; 
        public static RedisManagementClient client; 

        static async System.Threading.Tasks.Task Main(string[] args) 
        { 
            IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json"); 
            IConfigurationRoot config = builder.Build(); 
            tenantId = config["tenantId"]; 
            subscriptionId = config["subscriptionId"]; 
            subscriptionKey = config["subscriptionKey"]; 
            resourceGroup = config["resourceGroupName"]; 
            cachePrimary = config["cachePrimary"]; 
            cacheLink = config["cacheLink"]; 
            region = config["region"]; 

            var azureServiceTokenProvider = new AzureServiceTokenProvider(); 
            var token = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com", tenantId); 
            TokenCredentials tokenCredentials = new TokenCredentials(token); 

            client = new RedisManagementClient(tokenCredentials); 
            client.SubscriptionId = subscriptionId; 
            RedisResource primaryResource = client.Redis.Get(resourceGroup, cachePrimary); 
            if (primaryResource == null) { throw new Exception("Invalid parameter: " + cachePrimary); } 

            LinkResources(cachePrimary, cacheLink); 
            // UnlinkResources(resourceGroup, cachePrimary, cacheLink); 

            RedisResource linkedResource = client.Redis.Get(resourceGroup, cacheLink); 
            if (linkedResource == null) { throw new Exception("Invalid parameter: " + cacheLink); } 
            printResources(primaryResource, linkedResource); 
        } 

        protected static void LinkResources(String cachePrimary, String cacheLink) 
        { 
            RedisResource primaryResource = client.Redis.Get(resourceGroup, cachePrimary); 
            RedisLinkedServerCreateParameters redisLinkedServerCreateParameters = new RedisLinkedServerCreateParameters() 
            { 
                LinkedRedisCacheId = primaryResource.Id.Replace(cachePrimary, cacheLink), 
                LinkedRedisCacheLocation = primaryResource.Location, 
                ServerRole = ReplicationRole.Secondary 
            }; 
            RedisLinkedServerWithProperties linked = client.LinkedServer.Create(resourceGroup, cachePrimary, cacheLink, redisLinkedServerCreateParameters); 
            if (linked == null) { throw new Exception("Invalid parameter: " + cacheLink); } 
            Console.WriteLine($"Id={linked.Id}"); 
            while (linked.ProvisioningState != "Succeeded") 
            { 
                Console.WriteLine("Not provisioned yet, sleeping 5 seconds ..."); 
                System.Threading.Thread.Sleep(5000); 
                // Refresh the state; without this the loop would never observe progress. 
                linked = client.LinkedServer.Get(resourceGroup, cachePrimary, cacheLink); 
            } 
            Console.WriteLine("{" + linked.LinkedRedisCacheId + "} : provisioned at " + linked.LinkedRedisCacheLocation + " by name:" + linked.Name); 
        } 

        protected static void UnlinkResources(String resourceGroup, String cachePrimary, String cacheLink) 

        { 

            client.LinkedServer.Delete(resourceGroup, cachePrimary, cacheLink); 

        } 

        protected static void printResources(RedisResource primaryResource, RedisResource linkedResource) 

        { 

            List<RedisResource> resources = new List<RedisResource>(); 

            resources.Add(primaryResource); 

            resources.Add(linkedResource); 

            for (int i = 0; i < resources.Count; i++) 

            { 

                printResource(resources[i]); 

            } 

        } 

        protected static void printResource(RedisResource resource) 

        { 

            StringBuilder sb = new StringBuilder(); 

            sb.Append("------- BEGIN Resource Description --------\r\n"); 

            sb.Append("Name:     " + resource.Name + "\r\n"); 

            sb.Append("Id:       " + resource.Id + "\r\n"); 

            sb.Append("Count:    " + resource.LinkedServers.Count + "\r\n"); 

            sb.Append("RO:       " + resource.LinkedServers.IsReadOnly + "\r\n"); 

            sb.Append("Location: " + resource.Location + "\r\n"); 

            sb.Append("HostName: " + resource.HostName + "\r\n"); 

            sb.Append("State:    " + resource.ProvisioningState.ToString() + "\r\n"); 

            sb.Append("Shard#:   " + resource.ShardCount + "\r\n"); 

            if (resource.Zones != null) 

            { 

                sb.Append("Zones:    " + resource.Zones.ToString() + "\r\n"); 

                sb.Append("Zones#:   " + resource.Zones.Count + "\r\n"); 

            } 

            sb.Append("------- END Resource Description --------\r\n\n\n"); 

            Console.Write(sb.ToString()); 

        } 

        protected static void ManualFailover(RedisManagementClient client, String cachePrimary, String cacheLink) 
        { 
            RedisResource primaryResource = client.Redis.Get(resourceGroup, cachePrimary); 
            RedisResource linkedResource = client.Redis.Get(resourceGroup, cacheLink); 
            client.LinkedServer.Delete(resourceGroup, cachePrimary, cacheLink); 
            // Supply a blob container SAS URL and a format such as "RDB" before running this export. 
            client.Redis.ExportData(resourceGroup, cacheLink, new ExportRDBParameters() { Container = "", Format = "", Prefix = "Failover: " }); 
            client.Redis.ForceReboot(resourceGroup, cacheLink, new RedisRebootParameters() { RebootType = RebootType.AllNodes, ShardId = 0 }); 
        } 

    } 

} 
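The sample reads its settings from appsettings.json. A minimal file might look like this (every value below is a placeholder to fill in for your own subscription and caches):

```json
{
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "subscriptionKey": "",
  "resourceGroupName": "my-resource-group",
  "cachePrimary": "my-primary-cache",
  "cacheLink": "my-secondary-cache",
  "region": "East US"
}
```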

 

Monday, May 31, 2021

We continue with the post from the day before yesterday, where we were discussing Azure Cache for Redis.

 The Basic tier is a single node system with no data replication and no SLA, so use standard or premium tier.

Data loss is expected because it is an in-memory store and patching or failovers might occur.

The eviction policy of volatile-lru affects only keys with a TTL value. This is the default.

There is a performance tool available called Redis-benchmark.exe. This is recommended to be run on the Dv2 VM series.

There are statistics to show the total number of expired keys, the number of keys with timeouts, and an average timeout value.

If all the keys are lost, it is probably due to one of three reasons: the keys have been purged manually, the Azure Cache for Redis is set to use a non-default database, or the Redis server is unavailable.

The local RedisCache wrapper connection uses connection multiplexing. The RedisCache object that clients interact with to get and set cache entries requires a connection to the cache. If each instance of the object opened a new connection, server resources could be depleted very quickly, to the point of denial of service. Some economical use of connections is therefore needed, and one approach is to multiplex connections. 

The size of a cache can vary from 250 MB to 120 GB.

Replication across regions proceeds at roughly 63 GB in 5 to 10 minutes.

A planned failover that swaps primary and secondary takes about 1 second.

An unplanned failover performing the same swap takes about 10 seconds.

The persistence option can be AOF (append-only file), which logs the latest updates for replay, or RDB, which takes snapshots.
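In open-source Redis terms, the two persistence options map to configuration directives like the ones below (Azure exposes the equivalent settings through the Premium-tier portal rather than a redis.conf):

```
# RDB: snapshot the dataset if at least 1 key changed in the last 900 seconds
save 900 1
# AOF: append every write to a log and replay it on restart
appendonly yes
appendfsync everysec
```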

A cache can also be hosted on a cluster, with shards on different nodes.

There are options for private networks, firewalls, and update schedules. 

Saturday, May 29, 2021

 

Given a wire grid of N x N nodes, with N*(N-1) horizontal edges and N*(N-1) vertical edges along the X and Y axes respectively, and a wire burning out at every instant T in the order given by three arrays A, B, C, such that the wire that burns at instant T connects

(A[T], B[T]) to (A[T], B[T] + 1), if C[T] = 0, or
(A[T], B[T]) to (A[T] + 1, B[T]), if C[T] = 1,

determine the instant after which the circuit between the corners (0, 0) and (N-1, N-1) is broken.

    public static boolean checkConnections(int[] h, int[] v, int N) {
        boolean[][] visited = new boolean[N][N];
        dfs(h, v, visited, 0, 0);
        return visited[N-1][N-1];
    }

    // v[i * N + j] == 1 means the wire from (i, j) to (i, j+1) is intact;
    // h[i * N + j] == 1 means the wire from (i, j) to (i+1, j) is intact.
    public static void dfs(int[] h, int[] v, boolean[][] visited, int i, int j) {
        int N = visited.length;
        if (i >= 0 && i < N && j >= 0 && j < N && !visited[i][j]) {
            visited[i][j] = true;
            if (j + 1 < N && v[i * N + j] == 1) {
                dfs(h, v, visited, i, j + 1);
            }
            if (i + 1 < N && h[i * N + j] == 1) {
                dfs(h, v, visited, i + 1, j);
            }
            if (i > 0 && h[(i - 1) * N + j] == 1) {
                dfs(h, v, visited, i - 1, j);
            }
            if (j > 0 && v[i * N + (j - 1)] == 1) {
                dfs(h, v, visited, i, j - 1);
            }
        }
    }

    public static int burnout(int N, int[] A, int[] B, int[] C) {
        int[] h = new int[N * N];
        int[] v = new int[N * N];
        for (int i = 0; i < N * N; i++) { h[i] = 1; v[i] = 1; }
        for (int i = 0; i < A.length; i++) {
            if (C[i] == 0) {
                v[A[i] * N + B[i]] = 0;   // burn the wire (A[i], B[i]) - (A[i], B[i]+1)
            } else {
                h[A[i] * N + B[i]] = 0;   // burn the wire (A[i], B[i]) - (A[i]+1, B[i])
            }
            if (!checkConnections(h, v, N)) {
                return i + 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] A = new int[9];
        int[] B = new int[9];
        int[] C = new int[9];
        A[0] = 0;    B[0] = 0;    C[0] = 0;
        A[1] = 1;    B[1] = 1;    C[1] = 1;
        A[2] = 1;    B[2] = 1;    C[2] = 0;
        A[3] = 2;    B[3] = 1;    C[3] = 0;
        A[4] = 3;    B[4] = 2;    C[4] = 0;
        A[5] = 2;    B[5] = 2;    C[5] = 1;
        A[6] = 1;    B[6] = 3;    C[6] = 1;
        A[7] = 0;    B[7] = 1;    C[7] = 0;
        A[8] = 0;    B[8] = 0;    C[8] = 1;
        System.out.println(burnout(9, A, B, C));
    }

The corner (0, 0) has only two incident wires, burned at instants 1 and 9, and no earlier burn cuts the grid, so the circuit breaks after the ninth burn and the program prints 9.

Alternatively, since the grid only ever loses wires, connectivity is monotonic in time, and the breaking instant can be found with a binary search over the number of burns:

    public static boolean burnWiresAtT(int N, int[] A, int[] B, int[] C, int t) {
        int[] h = new int[N * N];
        int[] v = new int[N * N];
        for (int i = 0; i < N * N; i++) { h[i] = 1; v[i] = 1; }
        for (int i = 0; i < t; i++) {
            if (C[i] == 0) {
                v[A[i] * N + B[i]] = 0;
            } else {
                h[A[i] * N + B[i]] = 0;
            }
        }
        return checkConnections(h, v, N);
    }

    public static int binarySearch(int N, int[] A, int[] B, int[] C, int start, int end) {

        if (start == end) {

            if (!burnWiresAtT(N, A, B, C, end)){

                return end;

            }

            return  -1;

        } else {

            int mid = (start + end)/2;

            if (burnWiresAtT(N, A, B, C, mid)) {

                return binarySearch(N, A, B, C, mid + 1, end);

            } else {

                return binarySearch(N, A, B, C, start, mid);

            }

        }

    }


Friday, May 28, 2021

This continues the previous post on the modus operandi of Azure Cache for Redis. Specifically, we called out the following:

 The Basic tier is a single node system with no data replication and no SLA, so use standard or premium tier.

Data loss is expected because it is an in-memory store and patching or failovers might occur.

The eviction policy of volatile-lru affects only keys with a TTL value. This is the default.

There is a performance tool available called Redis-benchmark.exe. This is recommended to be run on the Dv2 VM series.

There are statistics to show the total number of expired keys, the number of keys with timeouts, and an average timeout value.

If all the keys are lost, it is probably due to one of three reasons: the keys have been purged manually, the Azure Cache for Redis is set to use a non-default database, or the Redis server is unavailable.

Traffic is always routed to the designated primary, backed by a virtual machine that hosts the Redis server. Container-based and cluster-based scale-out of Redis servers is not supported. Even if there are multiple servers, only one is primary and the others are replicas. Clustered caches have many shards, each with distinct primary and replica nodes.

Failover occurs when the primary goes offline and another becomes primary. Clients handle failover effects with retry and backoff. 
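A minimal sketch of the retry-with-backoff idea in Java (the delays, attempt limit, and the simulated operation below are illustrative, not a prescribed client configuration):

```java
import java.util.function.Supplier;

// Sketch of client-side retry with exponential backoff. Illustrative only:
// a production client would also add random jitter and distinguish
// transient errors from fatal ones before retrying.
class RetryWithBackoff {
    static <T> T retry(Supplier<T> op, int maxAttempts, long baseDelayMs) {
        long delay = baseDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.get();                      // success: return immediately
            } catch (RuntimeException e) {
                if (attempt >= maxAttempts) throw e;  // retries exhausted: rethrow
                try {
                    Thread.sleep(delay);              // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
                delay = Math.min(delay * 2, 30_000);  // double the wait, cap at 30 s
            }
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated cache call that fails twice with a transient error, then succeeds.
        String value = retry(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("transient failure");
            return "cached-value";
        }, 5, 10);
        System.out.println(value + " after " + calls[0] + " attempts");
    }
}
```

Capping the doubled delay keeps a long outage from producing excessively long sleeps; jitter would keep many clients from retrying in lockstep.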

We continue next with connection multiplexing. The RedisCache object that clients interact with to get and set cache entries requires a connection to the cache. If each instance of the object opened a new connection, server resources could be depleted very quickly, to the point of denial of service. Some economical use of connections is therefore needed, and one approach is to multiplex connections.  

Thursday, May 27, 2021

The modus operandi for using Azure Cache for Redis.

Performant and cost-effective use of an Azure Cache for Redis instance results from following best practices. These are:

 The Basic tier is a single node system with no data replication and no SLA, so use standard or premium tier.

Data loss is expected because it is an in-memory store and patching or failovers might occur.

Use a connect timeout of at least 15 seconds.
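With StackExchange.Redis, for example, the connect timeout can be raised in the connection string (the host name and key below are placeholders):

```
contoso.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False,connectTimeout=15000
```

connectTimeout is in milliseconds, so 15000 corresponds to the 15 seconds recommended above.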

The default eviction policy is volatile-lru, which means that only keys that have a TTL value set will be eligible for eviction. If no keys have a TTL value, the system won't evict any keys. To extend eviction to all keys, use the allkeys-lru policy. Keys can also have an expiration value set.
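The distinction can be illustrated with a toy model of victim selection (only a sketch of which keys are eligible; real Redis approximates LRU by sampling keys rather than tracking exact order):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of victim selection under volatile-lru vs. allkeys-lru.
// A LinkedHashMap in access order iterates least-recently-used first.
class EvictionPolicyDemo {
    // ttlSeconds holds only the keys that have an expiration set.
    static String pickVictim(LinkedHashMap<String, String> cache,
                             Map<String, Long> ttlSeconds,
                             boolean volatileOnly) {
        for (String key : cache.keySet()) {
            if (!volatileOnly || ttlSeconds.containsKey(key)) {
                return key;   // first eligible key in LRU order
            }
        }
        return null;          // volatile-lru with no TTL'd keys: nothing is evicted
    }

    public static void main(String[] args) {
        LinkedHashMap<String, String> cache = new LinkedHashMap<>(16, 0.75f, true);
        Map<String, Long> ttl = new HashMap<>();
        cache.put("a", "1");                     // no TTL: invisible to volatile-lru
        cache.put("b", "2"); ttl.put("b", 60L);  // TTL set
        cache.put("c", "3"); ttl.put("c", 60L);  // TTL set
        cache.get("b");                          // touch b: it is now most recently used
        System.out.println("volatile-lru evicts: " + pickVictim(cache, ttl, true));   // c
        System.out.println("allkeys-lru evicts:  " + pickVictim(cache, ttl, false));  // a
    }
}
```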

There is a performance tool available called redis-benchmark.exe. This is recommended to be run on Dv2 VM series.

The stats section shows the total number of expired keys. The keyspace section provides more information about the number of keys with timeouts and an average time-out value. The number of evicted keys can be monitored using the info command.
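Running the INFO command through redis-cli surfaces these counters; the field names below are the actual ones, while the values are illustrative and the output is abridged:

```
> INFO stats
expired_keys:1024
evicted_keys:37
> INFO keyspace
db0:keys=5210,expires=4800,avg_ttl=360000
```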

If all the keys are lost, it is probably due to one of three reasons: the keys have been purged manually, the Azure Cache for Redis is set to use a non-default database, or the Redis server is unavailable.

Redis is an in-memory data store. In the Basic tier it is hosted on a single VM; if that VM is down, all data in the cache is lost. Caches in the Standard or Premium tier offer much higher resiliency against data loss by using two VMs in a replicated configuration. These VMs are placed in separate fault and update domains to minimize the chance of both becoming unavailable simultaneously. If a major datacenter outage happens, however, the VMs might still go down together. Data persistence and geo-replication are used to protect data against such failures.

A cache is constructed of multiple virtual machines with separate, private IP addresses. Each virtual machine, also known as a node, is connected to a shared load balancer with a single virtual IP address. Each node runs the Redis server process and is accessible by means of the hostname and the Redis ports. Each node is considered either a primary or a replica node. When a client application connects to a cache, its traffic goes through this load balancer and is automatically routed to the primary node.

A basic cache has a single node which is always primary. Standard or premium cache has two nodes – one primary and the other replica. Clustered caches have many shards each with distinct primary and replica nodes.

Failover occurs when the primary goes offline and a replica promotes itself to primary. Both nodes notice the change: the old primary sees the new primary and becomes a replica, then connects to it to synchronize data.

A planned failover takes place during system updates. The nodes receive advance notice and can swap roles and update the load-balancer. It finishes in less than 1 second.

An unplanned failover might happen because of hardware failure or an unexpected outage. The replica node promotes itself to primary, but the process takes longer because it must first detect that the primary is offline and confirm that the failover is necessary. This lasts 10 to 15 seconds.

Patching involves failover which can be synchronized by the management service.

Clients handle failover effects with retry and backoff. If errors persist for longer than a preconfigured amount of time, the connection object should be recreated. Recreating the connection without restarting the application can be accomplished by using a Lazy<T> pattern.
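The C# idiom is a Lazy&lt;ConnectionMultiplexer&gt; that is replaced wholesale when errors persist. A sketch of the same idea in Java terms, with a Supplier standing in for whatever factory builds the real client connection:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Lazily create one shared connection and allow it to be swapped out after
// persistent errors, without restarting the application. Illustrative only:
// the Supplier stands in for a factory that builds a real client connection.
class LazyConnection<T> {
    private final Supplier<T> factory;
    private final AtomicReference<T> current = new AtomicReference<>();

    LazyConnection(Supplier<T> factory) { this.factory = factory; }

    // Create the shared connection on first use; reuse it afterwards.
    T get() {
        T conn = current.get();
        if (conn == null) {
            current.compareAndSet(null, factory.get()); // one winner under races
            conn = current.get();
        }
        return conn;
    }

    // Once errors persist past a threshold, drop the connection so the
    // next get() builds a fresh one.
    void forceReconnect() { current.set(null); }

    public static void main(String[] args) {
        int[] created = {0};
        LazyConnection<String> conn = new LazyConnection<>(() -> "connection-" + (++created[0]));
        System.out.println(conn.get());   // connection-1 (created lazily)
        System.out.println(conn.get());   // connection-1 (reused)
        conn.forceReconnect();
        System.out.println(conn.get());   // connection-2 (recreated)
    }
}
```

The holder object stays stable, so callers never hold a stale reference to a dead connection for longer than one call.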

Reboots and scheduled updates can be used to test a client's resiliency and the mitigations provided by the retry-and-backoff technique.