Tuesday, June 16, 2015

Today's blog post is about answers to coding problems:
Print concentric squares of a given square matrix.

void PrintConcentricSquare(int[,] m, int n) // params: matrix and dimension
{
    if (n % 2 == 1)
    {
        int center = n / 2;
        int numSquares = center + 1;
        for (int i = 0; i < numSquares; i++)
        {
            for (int x = center - i; x <= center + i; x++)
                for (int y = center - i; y <= center + i; y++)
                    Console.Write("{0} ", m[x, y]);
            Console.WriteLine();
        }
    }
    else
    {
        int center = n / 2;
        int numSquares = center;
        for (int i = 0; i < numSquares; i++)
        {
            for (int x = center - i - 1; x < center + i + 1; x++)
                for (int y = center - i - 1; y < center + i + 1; y++)
                    Console.Write("{0} ", m[x, y]);
            Console.WriteLine();
        }
    }
}
The above algorithm can be modified to print just the boundary of each concentric square: when x is strictly between its lower and upper limits and y is strictly between its lower and upper limits, skip printing the cell value.
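As a minimal sketch of that boundary-only variant for the odd-dimension branch (the even branch changes its bounds the same way as in the code above):

for (int x = center - i; x <= center + i; x++)
    for (int y = center - i; y <= center + i; y++)
    {
        // a cell is on the boundary when x or y sits at a limit of the square
        bool onBoundary = x == center - i || x == center + i ||
                          y == center - i || y == center + i;
        if (onBoundary)
            Console.Write("{0} ", m[x, y]);
    }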


#codingexercise 
double GetNthRootProductEachTermRaisedPMinusQ(double[] A, double p, double q)
{
    if (A == null) return 0;
    return A.NthRootProductEachTermRaisedPMinusQ(p, q);
}

Monday, June 15, 2015

Some coding questions and answers: 

Permutations of a string: 
void Permute(string a, StringBuilder b, bool[] used)
{
    if (b.Length == a.Length) { Console.WriteLine(b.ToString()); return; }
    for (int i = 0; i < a.Length; i++)
    {
        if (used[i]) continue;
        used[i] = true;
        b.Append(a[i]);
        Permute(a, b, used);
        b.Length--;      // backtrack: drop the character just appended
        used[i] = false;
    }
}
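As a quick usage note, calling the above on "abc" with an empty builder and an all-false used array prints all six permutations:

var s = "abc";
Permute(s, new StringBuilder(), new bool[s.Length]); // abc, acb, bac, bca, cab, cba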

Combinations of a string 
void Combine(string a, StringBuilder b, int start, int level)
{
    for (int i = start; i < a.Length; i++)
    {
        b.Append(a[i]);                  // choose a[i]; level tracks the depth (== b.Length here)
        Console.WriteLine(b.ToString()); // every chosen prefix is a combination
        if (i < a.Length - 1)
            Combine(a, b, i + 1, level + 1); // recurse past the chosen index, not start + 1
        b.Length--;                      // backtrack
    }
}
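Invoked as Combine("abc", new StringBuilder(), 0, 0), this prints every non-empty subsequence that preserves character order: a, ab, abc, ac, b, bc, c.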

Node ReverseAlternateChunksInSinglyLinkedList(Node root, int chunkSize)
{
    Node ret = root;
    Node prev = null;
    Node cur = root;
    bool reverse = false;
    int chunkCount = 0;
    while (cur != null)
    {
        if (reverse)
        {
            cur = Reverse(cur, chunkSize); // reverse this chunk; returns its new head
            if (prev != null)
                prev.next = cur;
            else
                ret = cur;
            reverse = false;
        }
        for (int i = 0; i < chunkSize; i++)
        {
            prev = cur;
            cur = cur.next;
            if (cur == null) break;
        }
        chunkCount++;
        if (chunkCount % 2 == 1) { reverse = true; } // reverse every other chunk, starting with the second
    }
    return ret;
}
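The Reverse helper used above is not shown; here is a minimal sketch under the assumption that it reverses up to chunkSize nodes in place, reattaches the remainder of the list, and returns the chunk's new head:

Node Reverse(Node head, int chunkSize)
{
    Node prev = null;
    Node cur = head;
    int count = 0;
    while (cur != null && count < chunkSize)
    {
        Node next = cur.next; // detach cur and point it backwards
        cur.next = prev;
        prev = cur;
        cur = next;
        count++;
    }
    if (head != null) head.next = cur; // old head becomes the chunk's tail
    return prev; // new head of the reversed chunk
}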

void GetZigZagBinaryTree(Node root) // ZigZag is the same level-wise printing of nodes except alternate levels of Node(s) are reversed
{
    if (root == null) return;
    var Q = new Queue<Node>();
    Q.Enqueue(root);
    Q.Enqueue(null); // a null marker separates levels
    int level = 0;
    var S = new Stack<Node>();
    while (Q.Count > 0)
    {
        var cur = Q.Dequeue();
        if (cur != null)
        {
            if (level % 2 == 0) { S.Push(cur); } // defer printing; flushed in reverse later
            else
            {
                while (S.Count > 0) Console.Write("{0} ", S.Pop().data);
                Console.Write("{0} ", cur.data);
            }
            if (cur.left != null) Q.Enqueue(cur.left);
            if (cur.right != null) Q.Enqueue(cur.right);
        }
        else
        {
            if (Q.Count > 0) Q.Enqueue(null); // re-arm the marker; the guard avoids looping forever
            level++;
        }
    }
    while (S.Count > 0) Console.Write("{0} ", S.Pop().data);
}

#codingexercise 
double GetNthRootProductEachTermRaisedPTimesQ(double[] A, double p, double q)
{
    if (A == null) return 0;
    return A.NthRootProductEachTermRaisedPTimesQ(p, q);
}

Sunday, June 14, 2015

Hash tables are very popular data structures. They store values that can be looked up by keys in constant time. When the number of keys actually stored is small relative to the total number of possible keys, this becomes very effective for lookup, insertion and deletion. If we are able to allocate an array where there is one position for every possible key, we have what we call direct addressing. When the array index is computed from the key, we may have collisions, which we can resolve by chaining: the values that hash to the same slot are kept in a linked list. Open addressing is another way to deal with collisions. When the search for a value takes constant time even in the worst case, we have perfect hashing; this requires that the set of keys doesn't change once stored.

Fixed hash functions are vulnerable to malicious attacks where an adversary picks keys that all hash to the same slot, degrading the performance to O(n) for lookups. This is resolved with universal hashing, which picks a hash function at random from a family of hash functions at the beginning of execution. One such family is given by h(k) = ((ak + b) mod p) mod m, where p is a prime larger than any key and much larger than m, the number of slots.

Next, with open addressing the elements are not chained; they are stored in the empty slots of the hash table itself. The sequence of positions probed depends on the key, because each key is associated with a permutation of the slots. If each key is equally likely to have any of the permutations as its probe sequence, we have uniform hashing. In practice the distribution of keys is not uniform and keys may not even be independent of each other. Three techniques are commonly used to compute the probe sequence required by open addressing: linear probing, quadratic probing and double hashing. Double hashing gives the maximum number of probe sequences and hence better results. It takes the form h(k, i) = (h1(k) + i*h2(k)) mod m. In order to bring hashing down to constant time even in the worst case, as with perfect hashing, we could use two levels of hash tables.
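As a minimal sketch of the double hashing probe sequence just described (the particular h1 and h2 are illustrative choices with m assumed prime, not anything prescribed above):

// The i-th probe for key k is (h1(k) + i * h2(k)) mod m.
int Probe(int k, int i, int m)
{
    int h1 = k % m;
    int h2 = 1 + (k % (m - 1)); // never zero, so successive probes always move
    return (h1 + i * h2) % m;
}

With m prime, the probes for i = 0 to m-1 visit m distinct slots, which is what gives double hashing its large number of probe sequences.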

#codingexercise 
double GetNthRootProductEachTermRaisedPDividedQ(double[] A, double p, double q)
{
    if (A == null) return 0;
    return A.NthRootProductEachTermRaisedPDividedQ(p, q);
}

Saturday, June 13, 2015

Huffman codes enable compression. The technique can yield savings of 20 to 90% depending on the data being compressed. It scans the text to populate a table of frequencies of character occurrences and then builds an optimal way of representing them. Say the entire text is formed from three characters and the first character appears the majority of the time; then there can be significant savings if fewer bits represent the first character. Thus this technique uses a greedy strategy.
The characters can be represented by fixed-length codes or variable-length codes. The variable-length codes yield more savings. We know that much so far, but how do we decode such compressed data? We make the assumption that the codes are such that no codeword is also a prefix of some other codeword. Such codes are called prefix codes. When reading a prefix code, we can decipher the first character unambiguously and then repeat the decoding on the rest.
The prefix code can be represented with a binary tree where the characters are the leaves and the path from the root to a leaf spells out the codeword: append 0 to go to the left child and 1 to go to the right child. These are not binary search trees because the leaves need not be sorted and the internal nodes don't have character keys. Trees correspond to coding schemes, and an optimal code is always represented by a full binary tree.
Now that we know prefix codes can be obtained by constructing the tree, we build it up in a bottom-up manner. We take the set of |C| leaves and perform a sequence of |C| - 1 merging operations to create the final tree. A min-priority queue Q, keyed on frequency f, is used to identify the two least-frequent objects to merge; the result of a merger is a new object whose frequency is the sum of those merged.
HUFFMAN(C)
n = |C|
Q = C
for i = 1 to n - 1
      allocate a new node z
      left[z] = x = EXTRACT-MIN(Q) // removes the least frequent element from Q
      right[z] = y = EXTRACT-MIN(Q)
      f[z] = f[x] + f[y]
      INSERT(Q, z)
return EXTRACT-MIN(Q) // root of the tree
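A minimal C# sketch of the same greedy merge, assuming character frequencies arrive in a Dictionary; the repeatedly sorted list is a simple stand-in for the min-priority queue Q:

class HuffmanNode
{
    public char? Symbol; // null for internal (merged) nodes
    public int Freq;
    public HuffmanNode Left, Right;
}

HuffmanNode BuildHuffmanTree(Dictionary<char, int> frequencies)
{
    var q = new List<HuffmanNode>();
    foreach (var kv in frequencies)
        q.Add(new HuffmanNode { Symbol = kv.Key, Freq = kv.Value });
    while (q.Count > 1)
    {
        q.Sort((a, b) => a.Freq - b.Freq); // two cheapest to the front (EXTRACT-MIN twice)
        var x = q[0];
        var y = q[1];
        q.RemoveRange(0, 2);
        q.Add(new HuffmanNode { Freq = x.Freq + y.Freq, Left = x, Right = y });
    }
    return q[0]; // root; reading edges left = 0, right = 1 yields the codewords
}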

Friday, June 12, 2015

In today's post, we briefly look at the longest common subsequence problem:
A longest common subsequence (LCS) is defined as a problem where two sequences are given X = (x1, x2, ... xm) and Y = (y1, y2, ... yn) and we wish to find a maximum length common subsequence of X and Y. A common subsequence is one that is a subsequence of both X and Y.
This is a standard textbook problem for dynamic programming. We solve this with the optimal substructure of an LCS and recursion.
We define the optimal substructure this way:
1. if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1
2. if xm != yn, then zk != xm implies that Z is an LCS of Xm-1 and Y
3. if xm != yn, then zk != yn implies that Z is an LCS of X and Yn-1
In other words, we are defining the solution in terms of the subproblems.
This yields a recursive formulation for the length of the longest common subsequence c[i, j]:
1. 0 if i = 0 or j = 0
2. c[i-1, j-1] + 1 if i, j > 0 and xi = yj
3. max(c[i, j-1], c[i-1, j]) if i, j > 0 and xi != yj
Condition 2 limits which subproblems we should consider.
Also, overlapping subproblems are encountered, for which we can use bottom-up solving or memoization.
LCS-LENGTH(X, Y)
m = X.length
n = Y.length
initialize b[m,n] and c[m,n] as new tables
for i = 1 to m
     c[i, 0] = 0
for j = 0 to n
     c[0,j] = 0
for i = 1 to m
      for j = 1 to n
            if xi == yj
               c[i,j] = c[i -1, j-1] + 1
               b[i,j] = up-left
            elseif c[i-1,j]  >= c[i, j-1]
               c[i,j] = c[i-1, j]
               b[i,j] = up
            else c[i,j] = c[i,j-1]
               b[i,j] = left
return c and b

and we print the LCS as
PRINT-LCS(b, X, i, j)
      if i == 0 or j == 0
        return
      if b[i,j] == up-left
         PRINT-LCS(b, X, i-1, j-1)
         print xi
      elseif b[i,j] == up
        PRINT-LCS(b, X, i-1, j)
      else PRINT-LCS(b,X, i,j-1)
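A minimal C# sketch of the same computation, folding PRINT-LCS into a backward walk over the c table so the b table isn't needed (names here are illustrative):

string LongestCommonSubsequence(string X, string Y)
{
    int m = X.Length, n = Y.Length;
    var c = new int[m + 1, n + 1]; // c[i, j] = LCS length of X[0..i) and Y[0..j); row/column 0 stay 0
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (X[i - 1] == Y[j - 1])
                c[i, j] = c[i - 1, j - 1] + 1;
            else
                c[i, j] = Math.Max(c[i - 1, j], c[i, j - 1]);

    var sb = new StringBuilder(); // walk back from c[m, n], emitting matches in reverse
    for (int i = m, j = n; i > 0 && j > 0; )
    {
        if (X[i - 1] == Y[j - 1]) { sb.Insert(0, X[i - 1]); i--; j--; }
        else if (c[i - 1, j] >= c[i, j - 1]) i--;
        else j--;
    }
    return sb.ToString();
}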

#codingexercise
double GetNthRootProductOddRaisedPDividedQAndEvenRaisedPTimesQ(double[] A, double p, double q)
{
    if (A == null) return 0;
    return A.NthRootProductOddRaisedPDividedQAndEvenRaisedPTimesQ(p, q);
}

Thursday, June 11, 2015

Today we continue reading about the Riverbed optimization system. RiOS supports standby between primary and backup devices. An Active/Active configuration is also supported, where each appliance serves as a primary for some traffic and as a backup for some other appliance, with full data store synchronization. In addition, RiOS permits a variety of in-path, virtual in-path or out-of-path configurations, both parallel and clustered.
Next we look at end-to-end security. Security and performance are often at loggerheads in WAN traffic optimization: one must usually either accept sub-par performance or lower the security bar. Data has to be secured both in motion and at rest. Here RiOS's SSL acceleration module allows customers to securely accelerate SSL-encrypted traffic without scattering certificates and keys. It works alongside SSL offload or load-balancing devices. Server IP auto-discovery simplifies this set of features. RiOS also supports optional encryption of inner channel traffic.
To protect data at rest, RiOS encrypts its data store with AES, which also satisfies security and compliance regulations. Since the data in the data store consists of segments and proprietary references, it is difficult for a hacker to put together a file from the shredded data. The optional encryption of the data store provides an additional level of security and compliance on top of an already secure data store.
Since SSL connections are certificate-based, RiOS devices have strong authentication securing the endpoints of the encrypted traffic. Moreover, management functions for security are streamlined through the console and controller, as we discussed in the previous posts.
The design of the WAN optimization techniques in RiOS stems from the three issues that plague WAN traffic:

  • WAN bandwidth limitations
  • transport protocol inefficiencies
  • and application protocol chattiness

As a summary,
RiOS provides the following optimizations:
Data Streamlining:

  • Memory only basic compression
  • Disk-based data reduction
  • QoS marking
  • QoS enforcement
  • Hierarchical QoS 

Transport Streamlining:

  • Basic TCP optimization
  • Advanced TCP optimization
  • HS-TCP/MX-TCP
  • 3 WAN visibility modes

Application-level optimizations:

  • CIFS protocol
  • NFS protocol
  • Local file storage
  • MAPI
  • HTTP
  • HTTP Enhancements
  • MS-SQL
  • SSL traffic optimizations
  • Oracle 11i
  • Disaster Recovery Acceleration

Wednesday, June 10, 2015

Today we continue reading about the Riverbed optimization system. We started discussing the clustering options. We saw that even series-connected appliances can have a benefit: they make use of the feature of RiOS that passes traffic through unoptimized when an appliance is over its limit, so the next appliance gets the chance to optimize it.
Moreover a single RiOS device can be plugged in-path on multiple network links. It can support simultaneous in-path deployment on 6 copper links or 6 fiber links, or even a mixture. In addition, the separation of server side and client side connection forwarding enables RiOS devices to co-operate across multiple redundant links when there are too many links for a single device or multiple links that are too far apart for a single device.
The deployment can also be out of path for redundancy and scale. In this mode, a pool of devices works together to handle incoming requests; when a device fails, its requests fail over to the next device. This kind of deployment can be done using a Layer 4 switch, WCCP or PBR.
We now look at the notion of an interceptor. This is an optional role for Riverbed devices: it functions as a specialized connection distribution device for a bank of RiOS devices. In addition to functioning as an L4 switch, it can handle asymmetric routing. Besides, it eliminates the need for WCCP or PBR, which are difficult to configure. The interceptor is typically used for large datacenter deployments, where it can be deployed without requiring any static route configuration. The interceptor can handle up to 1 million concurrent connections. Moreover, it can scale up to 12 Gbps and maintain uniform performance over extended durations by monitoring its peers.
RiOS also supports warm standby between designated primary and backup devices. Using automated data store synchronization, both the data segments and the references created via Data Streamlining are copied from the primary to the backup appliance. On failover, the backup functions with a hot data store and can begin serving immediately.