Monday, February 27, 2017

We continue with a detailed study of the Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage. We were discussing the replication flow in the Windows Azure Storage service. We now look at the sealing operation among extent nodes.
Sealing an extent means that its commit length will never change again. To seal, all three ENs are asked for their current commit length. If one or more of the replicas report a different length, the minimum commit length is chosen. This causes no data loss, because the primary EN does not return success unless all the replicas have been written to disk, so the minimum still contains all acknowledged data. The replica at the minimum may even contain extra blocks that were never acknowledged to the client. This is fine because the client deals with timeouts and duplicate records: all commit records carry sequence numbers, and the sequence numbers are what allow duplicates to be detected and discarded.
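The seal decision itself is tiny. Here is a minimal sketch in Python (illustrative only, not Azure code; the function name is invented) of choosing the sealed commit length from the replicas' reported lengths:

```python
def seal_extent(commit_lengths):
    """Return the sealed commit length: the minimum across all replicas.

    Every acknowledged append reached all replicas, so the minimum still
    contains all acknowledged data. Blocks beyond the minimum were never
    confirmed to the client, which copes via timeouts and the sequence
    numbers on commit records.
    """
    return min(commit_lengths)

# Replica 3 has an extra unacknowledged block; sealing discards it safely.
print(seal_extent([120, 120, 128]))  # -> 120
```

Sealing at the minimum rather than the maximum is the key design choice: it trades away unacknowledged bytes for the guarantee that every sealed replica can serve every acknowledged read.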
If the SM could not talk to an EN but the client can, the client may observe a discrepancy. The partition layer has two different read patterns to handle this. If the records are at known locations, the partition layer reads at a specific offset for a specific length. This is true for the data streams, which are of only two types: row data and blob data. The location information comes from a previous successful append at the stream layer, and that append was committed only once it made it to all three replicas.
The other read pattern is when the stream is read sequentially from start to finish. This occurs for metadata and commit logs; these are the only streams that the partition layer reads sequentially from a starting point to the very last record. During partition load, the partition layer prevents any appends to these two streams. At the start of the load, a check for commit length is requested from the primary EN of the last extent of each stream. This verifies that all the replicas are available and that they all have the same length. If not, the extent is sealed, and reads during partition load are performed only against a replica sealed by the SM, so repeated loads see consistent data.
It may be interesting to note that the two types of access are determined by the nature of the stream being read and not merely by their frequencies. The access pattern for metadata and commit logs differs from that for data; the latter is read more frequently and at arbitrary locations.


#codingexercise
Print all nodes in a binary tree with k leaves.

int GetNodeWithKLeaves(Node root, int k, ref List<Node> result)
{
    if (root == null) return 0;
    if (root.left == null && root.right == null) return 1;
    int left = GetNodeWithKLeaves(root.left, k, ref result);
    int right = GetNodeWithKLeaves(root.right, k, ref result);
    if (left + right == k)
    {
        result.Add(root);
    }
    return left + right; // number of leaves under root
}

Sunday, February 26, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication. The intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. The inter-stamp replication is asynchronous and focused on replicating data across stamps. We were looking at Blocks, Extents and Streams, the Stream Manager and the Extent Nodes.

We were discussing the replication flow; let us summarize it. First, a stream is created with three replicas: one primary and two secondary, distributed across fault and upgrade domains. The SM decides which replica is primary, and the client learns which extent nodes hold the replicas. This state is maintained in the SM cache. Writes go to the primary, and from the primary to the secondaries. The locations of the replicas do not change until the extent is sealed, so no leases are needed to designate the primary EN for an extent. When one extent gets sealed, the process repeats for another extent. The client can read the information from any replica, including the last block. The primary decides the offset of each append in the extent, orders the appends for concurrent requests, relays the chosen offset to the secondaries, and returns success to the client only after all three appends complete. Successes are returned in the order of append offsets. Since appends commit in order on a replica, the last append position is the current commit length of the replica. This helps determine the integrity of the replicas, because the primary EN does not change: it sequences the appends and seals the extent.
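A hedged sketch of that append path (class and method names invented for illustration; this is not Azure source): the primary chooses the offset, relays it to the secondaries, and acknowledges only after all replicas commit, so the commit length advances identically everywhere:

```python
class Replica:
    def __init__(self):
        self.blocks = []          # committed blocks, in offset order
        self.commit_length = 0    # position after the last committed append

    def append(self, offset, data):
        # Appends commit strictly in order on each replica.
        assert offset == self.commit_length
        self.blocks.append(data)
        self.commit_length = offset + len(data)

class PrimaryEN:
    def __init__(self, secondaries):
        self.replica = Replica()
        self.secondaries = secondaries

    def client_append(self, data):
        offset = self.replica.commit_length   # primary picks the offset
        self.replica.append(offset, data)
        for s in self.secondaries:            # relay the chosen offset
            s.append(offset, data)
        return offset  # success only after all three replicas committed

secondaries = [Replica(), Replica()]
primary = PrimaryEN(secondaries)
primary.client_append(b"abc")
primary.client_append(b"de")
# All replicas agree on the commit length.
print({primary.replica.commit_length} | {s.commit_length for s in secondaries})  # -> {5}
```

Because every acknowledgment implies all three replicas committed at the same offset, the minimum commit length across replicas (used when sealing) can never cut off acknowledged data.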
When this process fails, such as on a write failure, the failure is returned to the client. The client then contacts the SM, which seals the extent at the current commit length. A new extent is allocated on a different set of ENs and the information is relayed to the client.

#codingexercise
Determine if two arrays represent same BST without manipulating the arrays.
If the arrays A and B could be manipulated, we might be tempted to check
A.sort() == B.sort()
but that only verifies the two arrays contain the same elements; the shape of a BST depends on insertion order, not just membership. What we can check directly is
MakeBST(A) == MakeBST(B)
where MakeBST(A) =
{
    Node bstA = null;
    for (int i = 0; i < A.Length; i++)
        Insert(bstA, A[i]);
    return bstA;
}
If the two BSTs are identical, each node occupies the same level and has the same subtrees for both arrays. We could therefore check pairwise matches of elements in both trees, but that costs up to n-1 comparisons for each of the n elements and is less efficient than building the BSTs by insertion. Moreover, such a check requires partitioning the items into separate ranges, those lower than the root and those higher than the root, while preserving their relative order within each range. Note that simply sorting the two arrays and comparing them is not sufficient: it verifies only that the arrays contain the same elements, not that their insertion orders yield the same tree.
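The order-preserving check just described can be sketched as follows (an illustrative Python version): two arrays yield the same BST iff their first elements (the roots) match and the subsequences of smaller and of larger elements, taken in their original relative order, recursively yield the same subtrees. The original arrays are never reordered; only filtered views are taken.

```python
def same_bst(a, b):
    """True if inserting a and b (in the given order) builds the same BST,
    without sorting or building any trees."""
    if len(a) != len(b):
        return False
    if not a:
        return True
    if a[0] != b[0]:          # roots must match
        return False
    root = a[0]
    # Elements smaller/larger than the root keep their relative order and
    # independently determine the left/right subtrees.
    return (same_bst([x for x in a[1:] if x < root],
                     [x for x in b[1:] if x < root]) and
            same_bst([x for x in a[1:] if x > root],
                     [x for x in b[1:] if x > root]))

print(same_bst([3, 5, 4, 6, 1, 0, 2], [3, 1, 5, 6, 0, 2, 4]))  # True
print(same_bst([3, 5, 4], [3, 4, 5]))                          # False
```

The list comprehensions make this O(n^2) in the worst case, which is fine for a sketch; an index-threading variant avoids the copies.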

Saturday, February 25, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication. The intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. The inter-stamp replication is asynchronous and focused on replicating data across stamps. We were looking at Blocks, Extents and Streams, the Stream Manager and the Extent Nodes.
The Stream Layer is protected from malicious threats by the data center, the Fabric Controller and the Windows Azure Storage (WAS) service. Faults ranging from disk and node errors to power failures, network issues, bit flips, random hardware failures and software bugs may all result in data corruption. Such corruption is caught by the same block checksums mentioned earlier.
The replication flow proceeds like this. The Stream Manager assigns three replicas for the first extent (one primary and two secondary) to three extent nodes, chosen at random to spread the replicas across different fault and upgrade domains while considering extent node usage for load balancing. The primary is the main replica: all writes go to the primary, and the primary coordinates the writes to the secondaries. The primary EN and the locations of the three replicas do not change while the extent is being appended to, until it is sealed. When the SM allocates the extent, the extent locations are sent back to the clients. When the last extent becomes sealed, a newly allocated extent becomes the last extent of the stream, and all new appends go to it.
If there are multiple outstanding appends to the same extent, the primary EN responds success to the clients in the order of their offsets. The last append position is considered to be the current commit length of the replica.
#codingexercise
Convert a binary tree to a binary search tree
Node ToBSTFromBT(Node root)
{
    if (root == null) return root;
    var items = new List<int>();
    ToInOrderList(root, ref items); // collect the values of the tree
    items.Sort();
    return SortedListToBST(items, 0, items.Count - 1, null);
}
void ToInOrderList(Node root, ref List<int> items)
{
    if (root == null) return;
    ToInOrderList(root.left, ref items);
    items.Add(root.data);
    ToInOrderList(root.right, ref items);
}

Node SortedListToBST(List<int> num, int start, int end, Node root)
{
    if (start > end) return null; // base case, previously missing

    int mid = start + (end - start) / 2;

    var node = new Node();
    node.data = num[mid];

    if (root != null)
    {
        if (num[mid] < root.data)
            root.left = node;
        else
            root.right = node;
    }

    node.left = SortedListToBST(num, start, mid - 1, node);
    node.right = SortedListToBST(num, mid + 1, end, node);

    return node;
}

Friday, February 24, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication. The intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. The inter-stamp replication is asynchronous and focused on replicating data across stamps. We were looking at Blocks, Extents and Streams.
The Stream Manager keeps track of the extents in each stream and the extent allocation across the Extent Nodes. The Extent Nodes maintain the storage for the set of extent replicas assigned to them by the SM. Each Extent Node maintains a view of the extents it owns and where the peer replicas are for each extent. Streams can only be appended to; existing data cannot be modified. Appends happen at the block level, and multiple blocks may be appended at once. Block append operations are atomic. The partition layer and the stream layer maintain that if a request fails, it is retried or the block is sealed; the partition layer deals with the resulting duplicate records through its two read patterns. An extent has a target size specified by the partition layer. Once an extent is sealed, it can no longer be appended to; until then, it may grow to an arbitrary size.
Both layers are built upon a set of guarantees from the stream layer: once data is appended and acknowledged, it cannot be modified, so repeated reads see the same value; and once an extent is sealed, it is immutable, so repeated reads from any replica see the same value.
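As a toy model (all names invented; the real stream layer is far more involved), the append-only contract, target extent size, and sealing behavior described above can be sketched as:

```python
class Extent:
    TARGET_SIZE = 1024  # target size hinted by the partition layer (assumed value)

    def __init__(self):
        self.blocks = []
        self.size = 0
        self.sealed = False

    def append_blocks(self, blocks):
        """Atomically append one or more blocks; sealed extents are immutable."""
        if self.sealed:
            raise RuntimeError("sealed extents are immutable")
        self.blocks.extend(blocks)
        self.size += sum(len(b) for b in blocks)
        if self.size >= Extent.TARGET_SIZE:
            self.sealed = True  # reached target size: seal this extent

class Stream:
    """An ordered list of extents; only the last extent accepts appends."""
    def __init__(self):
        self.extents = [Extent()]

    def append(self, *blocks):
        if self.extents[-1].sealed:        # open a fresh last extent
            self.extents.append(Extent())
        self.extents[-1].append_blocks(list(blocks))

s = Stream()
s.append(b"x" * 600)
s.append(b"y" * 600)   # crosses the target size, so the extent seals
s.append(b"z" * 10)    # lands in a freshly opened last extent
print(len(s.extents), s.extents[0].sealed)  # -> 2 True
```

The sketch mirrors the stated guarantee: a sealed extent can never change, so any replica of it can serve repeated reads with the same result.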

It is interesting that user requests are not given labels that could propagate to their streams. This seems to be because the granularity is one to one between a request and a stream. If groups were allowed, the labels could be put to more use, such as in: https://1drv.ms/w/s!Ashlm-Nw-wnWrw3incI2gCY3Jds3
#codingexercise
Given n appointments, find all conflicting appointments. 
This problem can be solved like the merge-interval exercise earlier. Instead, we use a BST, with the method GetCount(Node root, int min, int max):
int getConflicts(List<Range> app)
{
    Node root = null;
    int conflicts = 0;
    for (int i = 0; i < app.Count; i++)
    {
        root = Insert(app[i], ref root); // ordered by the min of the range
        conflicts += GetCount(root, app[i].min, app[i].max); // with a comparison on the range instead of int
    }
    return conflicts;
}
Find the successor in a BST
Node TreeSuccessor(Node x)
{
    if (x == null) return null;
    if (x.right != null) return TreeMinimum(x.right);
    Node y = x.parent;
    while (y != null && y.right == x)
    {
        x = y;
        y = y.parent;
    }
    return y;
}

Thursday, February 23, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication. The intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. The inter-stamp replication is asynchronous and focused on replicating data across stamps. We were looking at the differences between inter-stamp and intra-stamp replication.
The Stream layer operates in terms of Blocks, Extents and Streams, which are the units of reading, replication and naming respectively. The Stream Manager and the Extent Nodes play the dominant roles in the stream layer. The Stream Manager keeps track of the stream namespace, what extents are in each stream, and the extent allocation across the Extent Nodes. It is responsible for maintaining the stream namespace and the state of all active streams, monitoring the health of the Extent Nodes, creating and assigning extents to Extent Nodes, performing the lazy re-replication of lost extent replicas, garbage collecting extents that are no longer pointed to by any stream, and scheduling the erasure coding of extent data according to stream policy. In other words, the Stream Manager does all those activities that do not affect the critical path of the client requests. The SM queries the extent nodes to see which extents they store. Any extent that does not have a quorum of replicas is lazily re-replicated, with the target ENs selected at random. The SM is agnostic about blocks, which helps it keep all the state about the extents in memory. The stream layer is used only by the partition layer, and the two are co-designed so that they will not use more than fifty million extents and no more than a hundred thousand streams for a single storage stamp at the current stamp size.
The Extent Nodes maintain the storage for the set of extent replicas assigned to them by the SM. An EN does not look at the layer above and knows nothing about streams. Every extent on disk is a file, which holds data blocks and their checksums, plus an index that maps extent offsets to blocks. The EN maintains a view of its own extents and those of the peer replicas in its cache. It is the SM that garbage collects the extents and notifies the ENs to reclaim the space.
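A hypothetical sketch of that extent-file layout (names and structure invented for illustration): blocks are stored back to back, and an index maps the starting extent offset of each block to its position, so a read at a given offset can locate the covering block:

```python
import bisect

class ExtentFile:
    """Toy model of an extent file: data blocks plus an index mapping
    extent offsets to blocks."""
    def __init__(self):
        self.blocks = []
        self.offsets = []   # starting extent offset of each block
        self.size = 0

    def append_block(self, data):
        self.offsets.append(self.size)
        self.blocks.append(data)
        self.size += len(data)

    def block_at(self, offset):
        """Return the block covering the given extent offset."""
        i = bisect.bisect_right(self.offsets, offset) - 1
        if i < 0 or offset >= self.size:
            raise IndexError("offset outside extent")
        return self.blocks[i]

e = ExtentFile()
e.append_block(b"aaaa")    # occupies extent offsets 0..3
e.append_block(b"bbbbbb")  # occupies extent offsets 4..9
print(e.block_at(5))       # -> b'bbbbbb'
```

Because blocks need not be the same size, the index is what makes offset-based reads efficient: a binary search over block start offsets rather than a scan.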
#codingexercise
Get count of nodes not sub-trees in a BST that lie within a given range with optimizations.
int GetCount(Node root, int min, int max)
{
    if (root == null) return 0;
    if (root.data < min && root.right == null) return 0; // left subtree is also below min
    if (root.data > max && root.left == null) return 0;  // right subtree is also above max
    int left = GetCount(root.left, min, max);
    int right = GetCount(root.right, min, max);
    if (root.data >= min && root.data <= max)
        return left + right + 1;
    return left + right;
}
An alternative would be to list the nodes of the BST in order and keep only those that fall within the range:
ToInorder(root).Where(x => min <= x && x <= max).ToList();

Wednesday, February 22, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication. The intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. The inter-stamp replication is asynchronous and focused on replicating data across stamps; it is done in the background so that it does not conflict with the critical path of the customer's request. The intra-stamp replication provides durability against hardware failure, which is more frequent in large-scale systems, while the inter-stamp replication provides geo-redundancy against geo-disasters, which are relatively rare. The intra-stamp replication must occur with low latency because it is on the critical path; the inter-stamp replication, on the other hand, is concerned with the optimal use of network bandwidth between stamps within an acceptable level of replication delay. The Stream layer stores the bits on disk and handles distribution and replication of the data across many servers to keep the data durable within a storage stamp. It is called a stream layer because it handles ordered lists of large storage chunks called extents, where an extent is a sequence of appended blocks and only the last extent in a stream can be appended to. A block is the minimum unit of data for reading and writing, and can be up to N bytes (say, 4 MB). Data is written as one or more concatenated blocks to an extent, and the blocks do not have to be the same size. A client read specifies an offset into a stream or extent, and the stream layer reads as many blocks as needed at that offset to fulfill the length of the read.
The checksums are stored at the block level, one checksum per block, and all the checksums are validated once every few days. Extents are the units of replication, and each extent is stored in an NTFS file. The Stream Manager keeps track of streams and extents, not blocks or block appends, and therefore stays off the critical path. Since streams and extents are tracked within a single stamp, the Stream Manager can keep their state in memory. The Stream Manager may not use more than fifty million extents and no more than a hundred thousand streams for a single storage stamp, and their state can fit into 32GB of memory for the storage stamp.
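A minimal sketch of per-block checksum validation (illustrative only; CRC32 stands in here for whatever checksum the real system uses): each block stores a checksum computed at write time, and reads or periodic scrubbing recompute and compare it to detect corruption such as bit flips.

```python
import zlib

def write_block(data):
    """Store a block along with its checksum, computed at write time."""
    return {"data": bytearray(data), "checksum": zlib.crc32(data)}

def verify_block(block):
    """Recompute the checksum and compare; False means corruption."""
    return zlib.crc32(bytes(block["data"])) == block["checksum"]

block = write_block(b"hello extent")
print(verify_block(block))   # True
block["data"][0] ^= 0xFF     # simulate a bit flip on disk
print(verify_block(block))   # False
```

Note that a checksum detects corruption but does not repair it; recovery comes from reading another replica, which is why checksums and replication work together.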
#codingexercise
Get count of nodes not sub-trees in a BST that lie within a given range
int GetCount(Node root, int min, int max)
{
    if (root == null) return 0;
    int left = GetCount(root.left, min, max);
    int right = GetCount(root.right, min, max);
    if (root.data >= min && root.data <= max)
        return left + right + 1;
    return left + right;
}
We could further optimize this by skipping subtrees that lie entirely outside the range, for example when a node is below min and has no right child, or above max and has no left child. The same applies to nodes holding the sentinel values of the range.

Tuesday, February 21, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure networking and started discussing Azure storage.
The IaaS offerings from Azure storage services include disks and files, whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and automatic load balancing. The blob storage is tiered into two tiers: a hot tier for commonly used data and a cold tier for rarely used data. The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of Front-Ends, a partition layer and a stream layer that provides intra-stamp replication.
There are three layers within a storage stamp: the stream layer as the foundation, the partition layer to handle objects, and the front-end layer to handle user requests. The Stream layer stores the bits on disk and handles distribution and replication of the data across many servers to keep the data durable within a storage stamp. It is called a stream layer because it handles ordered lists of large storage chunks called extents; these streams behave the same way as files in terms of storage and replication, but without the burden of a file-object management interface, which the partition layer, collocated with the stream layer, provides instead. The front-end layer comprises stateless servers that handle incoming requests: authenticating, authorizing and routing each request to a partition server in the partition layer using a partition map.
There are two replication engines - Intra stamp replication and Inter stamp replication. The intra stamp provides synchronous replication in the stream layer and ensures that the data is made durable within the storage stamp. The interstamp replication provides asynchronous replication and is focused on replicating data across stamps. The Inter stamp replication is done in the background so that it does not conflict with the critical path of the customer's request. This is not the only difference. A variety of other reasons suggested this design.  The intra stamp replication provides durability against hardware failure which is more frequent in large scale systems. The inter-stamp replication provides geo-redundancy against geo-disasters which are relatively rare. The intra stamp replication must occur with low latency because it is on the critical path. The inter-stamp replication on the other hand is concerned with the optimal use of network bandwidth between stamps within an acceptable level of replication delay.  The namespace is also different. The metadata stored for intra stamp is scoped down by the size of a single storage stamp which allows it to be cached in memory. This also enables the Windows Azure Storage service to provide fast replication with strong consistency by quickly committing transactions within a single stamp. The partition layer combined with the location service manages the global object namespace across stamps which efficiently replicates object state across data centers.
As an aside, private cloud and pseudo-private cloud deployments often struggle with these kinds of considerations. For example, they may provide storage shares out of integrated and dedicated storage clusters, such as OneFS-based clusters with geo-distribution and even backup, but they don't seem to place emphasis on replication outside the storage clusters or on enabling more than one copy. SyncIQ does replicate across one or more clusters asynchronously, but OneFS is an on-premise cluster file system, not a cloud storage service. In other words, while OneFS may work very well as a complete integrated solution for a single storage cluster, it is not a substitute for storage stamps, because far more layers and inefficiencies are introduced by linking different storage clusters. When we see storage not as files but as streams, we take a step forward. While streams do have a file-system-like namespace and API, all writes are append only. While the public cloud is busy streamlining storage services for cloud scale, the private cloud mindset could easily toss an object storage vendor and appliance into the mix, translate file storage as object storage, and go home early! This does provide staging for migration of user data in a private datacenter, which lowers the TCO and does away with in-house file storage hardware, but we would not get far by going with just another private storage vendor.
#codingexercise
Count BST subtrees that lie in a given range
For example, the following BST has only one subtree (rooted at 40) that lies entirely within the range [5, 45]:
        10
       /  \
      5    50
     /    /  \
    1   40    100
One way to do this would be to print the inorder list for each subtree in the BST and check whether all the elements of that subtree fall within the range, or we could cherry-pick the nodes during the traversal itself:
bool GetCountBSTInRangeHelper(Node root, int min, int max, ref List<Node> result)
{
    if (root == null) return true;
    bool left = GetCountBSTInRangeHelper(root.left, min, max, ref result);
    bool right = GetCountBSTInRangeHelper(root.right, min, max, ref result);
    if (root.data >= min && root.data <= max && left && right)
    {
        result.Add(root);
        return true;
    }
    return false;
}
The other approach is as follows:
void GetCountBSTInRangeHelper(Node root, int min, int max, ref List<Node> result)
{
    if (root == null) return;
    var nodes = new List<Node>(); // "in" is a reserved word, so use "nodes"
    ToInOrder(root, ref nodes);
    if (nodes.All(x => min <= x.data && x.data <= max))
        result.AddRange(nodes.Except(result).ToList());
    GetCountBSTInRangeHelper(root.left, min, max, ref result);
    GetCountBSTInRangeHelper(root.right, min, max, ref result);
}
result.Count is the total number of subtrees requested in either case.
Also note that we should not eliminate a subtree based on its root value alone, because subtrees further down may still conform to the range.