Friday, February 24, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication. Intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. Inter-stamp replication is asynchronous and is focused on replicating data across stamps. We were looking at blocks, extents and streams.
The Stream Manager keeps track of the extents in each stream and the extent allocation across the Extent Nodes. The Extent Nodes maintain the storage for a set of extent replicas assigned to them by the SM. Each Extent Node maintains a view about the extents it owns and where the peer replicas are for a given extent. Streams can only be appended to; existing data cannot be modified. Appends can only be at the block level, multiple blocks may be appended at once, and block append operations are atomic. The partition layer and the stream layer maintain the contract that if an append fails, the request is retried or the extent is sealed. The partition layer deals with the resulting duplicate records in two ways: transactions in the metadata and commit-log streams carry sequence numbers so duplicates can be detected, and for the row-data and blob-data streams only the last successful write is pointed to. An extent has a target size specified by the partition layer. Once an extent is sealed, it can no longer be appended to; until then it may grow to an arbitrary size.
Both layers rely on a set of guarantees from the stream layer: once data is appended and acknowledged, it cannot be modified, so repeated reads see the same value; and once an extent is sealed, it too is immutable, so repeated reads from any replica see the same value.
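As a mental model of these append-only semantics, here is a minimal C# sketch; the type and member names are my own, not actual Azure Storage types:
// Illustrative sketch only; names and shapes are assumptions, not the real WAS implementation.
class Block { public byte[] Data; public uint Checksum; }   // unit of reading and writing
class Extent                                                // unit of replication
{
    public List<Block> Blocks = new List<Block>();
    public bool Sealed { get; private set; }
    public bool Append(List<Block> blocks)                  // multi-block appends are atomic
    {
        if (Sealed) return false;                           // sealed extents are immutable
        Blocks.AddRange(blocks);
        return true;
    }
    public void Seal() { Sealed = true; }                   // once sealed, never modified again
}
class StreamOfExtents                                       // a stream is an ordered list of extents
{
    public List<Extent> Extents = new List<Extent>();
    public Extent Last => Extents[Extents.Count - 1];       // only the last extent accepts appends
}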

It is interesting that user requests are not given labels that could propagate to their streams. This seems to be because the granularity is one-to-one between a request and a stream. If groups of requests were allowed, the labels could be put to more use, such as in: https://1drv.ms/w/s!Ashlm-Nw-wnWrw3incI2gCY3Jds3
#codingexercise
Given n appointments, find all conflicting appointments. 
This problem can be solved like the merge-intervals exercise discussed earlier.
Instead, we use a BST here together with a helper GetCount(Node root, int min, int max):
int getConflicts(List<Range> app)
{
Node root = null;
int conflicts = 0;
for (int i = 0; i < app.Count; i++)
{
 conflicts += GetCount(root, app[i].min, app[i].max); // count overlaps with appointments inserted so far, so an appointment never conflicts with itself
 root = Insert(app[i], root); // keyed on the min of the range
}
return conflicts;
}
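The range-based GetCount that the comment alludes to could be sketched as below, assuming each Node stores the Range it was inserted with; two ranges conflict when each starts no later than the other ends:
int GetCount(Node root, int min, int max) // counts nodes whose range overlaps [min, max]
{
    if (root == null) return 0;
    int count = (root.range.min <= max && min <= root.range.max) ? 1 : 0; // interval overlap test
    return count + GetCount(root.left, min, max) + GetCount(root.right, min, max);
}
Since the BST is keyed on the range min, the right subtree could additionally be pruned whenever root.range.min > max.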
Find the successor in a BST
Node TreeSuccessor(Node x)
{
if (x == null) return null;
if (x.right != null) return TreeMinimum(x.right);
Node y = x.parent;
while (y != null && y.right == x)
{
    x = y;
    y = y.parent;
}
return y;
}
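TreeSuccessor relies on TreeMinimum, which simply walks to the leftmost node:
Node TreeMinimum(Node x)
{
    if (x == null) return null;
    while (x.left != null)
        x = x.left;   // the minimum is the leftmost descendant
    return x;
}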

Thursday, February 23, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication. Intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. Inter-stamp replication is asynchronous and is focused on replicating data across stamps. We were looking at the differences between inter-stamp and intra-stamp replication.
The Stream layer operates in terms of blocks, extents and streams. These are the units of reading and writing, replication, and naming respectively. The Stream Manager and the Extent Nodes play the dominant roles in the stream layer. The Stream Manager keeps track of the stream namespace, what extents are in each stream, and the extent allocation across the Extent Nodes. The Stream Manager is responsible for maintaining the stream namespace and the state of all active streams, monitoring the health of Extent Nodes, creating and assigning extents to Extent Nodes, performing the lazy re-replication of lost extent replicas, garbage collecting extents that are no longer pointed to by any stream, and scheduling the erasure coding of extent data according to stream policy. In other words, the Stream Manager does all those activities that do not affect the critical path of the client requests. The SM queries the Extent Nodes to see what extents they store. Any extent that falls below its replica quorum is lazily re-replicated, with the target ENs selected at random. The SM knows nothing about blocks, which helps it keep all the state about the extents in memory. The stream layer is used only by the partition layer, and the two layers are co-designed so that they will not use more than fifty million extents and no more than a hundred thousand streams for a single storage stamp at current stamp sizes.
The Extent Nodes maintain the storage for a set of extent replicas assigned to them by the SM. An EN does not look at the layer above and knows nothing about streams. Every extent on the disk is a file, which holds data blocks and their checksums, plus an index that maps extent offsets to blocks. Each EN maintains a view of its own extents and of where the peer replicas for those extents are. It is the SM that garbage collects an extent and notifies the ENs to reclaim the space.
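A rough sketch of that offset-to-block index (the field names are my own guesses; the actual on-disk format is not described at this level of detail):
// Illustrative sketch of the per-extent index that maps an extent offset to its block.
class ExtentFile
{
    public List<Block> Blocks;        // data blocks, each carrying its checksum
    public List<long> BlockOffsets;   // sorted start offset of each block within the extent
    public Block Find(long offset)
    {
        int i = BlockOffsets.BinarySearch(offset);
        if (i < 0) i = ~i - 1;        // on a miss, take the block that starts just before the offset
        return (i >= 0 && i < Blocks.Count) ? Blocks[i] : null;
    }
}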
#codingexercise
Get the count of nodes (not subtrees) in a BST that lie within a given range, with optimizations.
int getCount(Node root, int min, int max)
{
if (root == null) return 0;
if (root.data < min) return getCount(root.right, min, max); // the left subtree is entirely below the range
if (root.data > max) return getCount(root.left, min, max);  // the right subtree is entirely above the range
return getCount(root.left, min, max) + getCount(root.right, min, max) + 1;
}
An alternative would be to list the nodes of the BST in order and count only those that fall within the range:
ToInorder(root).Where(x => min <= x && x <= max).Count();

Wednesday, February 22, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure storage.
The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing.
The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication. Intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. Inter-stamp replication is asynchronous and is focused on replicating data across stamps. Inter-stamp replication is done in the background so that it does not conflict with the critical path of the customer's request. Intra-stamp replication provides durability against hardware failure, which is frequent in large scale systems, while inter-stamp replication provides geo-redundancy against geo-disasters, which are relatively rare. Intra-stamp replication must occur with low latency because it is on the critical path; inter-stamp replication is instead concerned with the optimal use of network bandwidth between stamps within an acceptable level of replication delay.
The Stream layer stores the bits on disk and handles distribution and replication of the data across many servers to keep the data durable within a storage stamp. It is called a stream layer because it handles ordered lists of large storage chunks called extents, and an extent is a sequence of append blocks. Only the last extent in a stream can be appended to. A block is the minimum unit of data for reading and writing. A block can be up to N bytes, say 4MB. Data is written as one or more concatenated blocks to an extent, where the blocks do not have to be the same size. A client read gives an offset into a stream or extent, and the stream layer reads as many blocks as needed at that offset to fulfill the length of the read. Checksums are stored at the block level, one checksum per block, and all the checksums are validated once every few days. Extents are the unit of replication, and each extent is stored in an NTFS file. The Stream Manager keeps track of streams and extents, not blocks or block appends, and therefore stays off the critical path. Since streams and extents are tracked within a single stamp, the Stream Manager can keep its state in memory. The Stream Manager may not use more than fifty million extents and no more than a hundred thousand streams for a single storage stamp, and that state can fit into 32GB of memory.
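As a back-of-the-envelope example of that read path, with a fixed 4MB block size (a simplifying assumption of mine, since blocks can actually vary in size) the number of blocks touched by a read is:
const long BlockSize = 4L * 1024 * 1024;           // 4MB, the example block size from the text
int BlocksToRead(long offset, long length)
{
    long first = offset / BlockSize;               // first block touched by the read
    long last = (offset + length - 1) / BlockSize; // last block touched
    return (int)(last - first + 1);
}
// e.g. reading 10MB starting at offset 3MB touches blocks 0 through 3, i.e. four blocks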
#codingexercise
Get the count of nodes (not subtrees) in a BST that lie within a given range.
int getCount(Node root, int min, int max)
{
if (root == null) return 0;
int left = getCount(root.left, min, max);
int right = getCount(root.right, min, max);
if (root.data >= min && root.data <= max)
   return left + right +1;
return left + right;
}
We could further optimize this by pruning subtrees that are exclusively smaller than min or exclusively larger than max; for instance, when a node below min has no right child (or a node above max has no left child), nothing in its subtree can fall within the range. The same pruning applies when the range bounds are sentinel values.

Tuesday, February 21, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure networking and started discussing Azure storage.
The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing. The blob storage is tiered. There are two tiers - hot tier and cold tier. Hot is used for commonly used data and cold for rarely used data. The redundancy features are handled by storage stamps and a location service. A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication.
There are three layers within a storage stamp - the stream layer as the foundation, the partition layer to handle objects, and the front-end layer to handle user requests. The Stream layer stores the bits on disk and handles distribution and replication of the data across many servers to keep the data durable within a storage stamp. It is called a stream layer because it handles ordered lists of large storage chunks called extents; these streams behave the same way as files in terms of storage and replication but without the onus of the management interface of file objects - the partition layer, collocated with the stream layer, provides that management. The front-end layer consists of stateless servers handling incoming requests - authenticating, authorizing and routing each request to a partition server in the partition layer using a partition map.
There are two replication engines - intra-stamp replication and inter-stamp replication. Intra-stamp replication is synchronous, happens in the stream layer, and ensures that the data is made durable within the storage stamp. Inter-stamp replication is asynchronous and is focused on replicating data across stamps. Inter-stamp replication is done in the background so that it does not conflict with the critical path of the customer's request. This is not the only difference; a variety of other reasons suggested this design. Intra-stamp replication provides durability against hardware failure, which is frequent in large scale systems, while inter-stamp replication provides geo-redundancy against geo-disasters, which are relatively rare. Intra-stamp replication must occur with low latency because it is on the critical path, whereas inter-stamp replication is concerned with the optimal use of network bandwidth between stamps within an acceptable level of replication delay. The namespace is also different: the metadata tracked for intra-stamp replication is scoped down to the size of a single storage stamp, which allows it to be cached in memory. This also enables the Windows Azure Storage service to provide fast replication with strong consistency by quickly committing transactions within a single stamp. The partition layer combined with the location service manages the global object namespace across stamps, which efficiently replicates object state across data centers.
As an aside, private cloud and pseudo-private cloud deployments often have to struggle with these kinds of considerations. For example, they may provide storage shares out of integrated and dedicated storage clusters, such as OneFS-based clusters, with geo-distribution and even backup, but they don't seem to lay emphasis on replication outside the storage clusters or on keeping more than one copy. SyncIQ does replicate across one or more clusters asynchronously, but OneFS is an on-premise cluster file system, not a cloud storage service. In other words, while OneFS may work very well as a complete integrated solution for a single storage cluster, it is not a substitute for storage stamps, because linking different storage clusters introduces far more layers and inefficiencies. When we see storage not as files but as streams, we take a step forward: while streams do have a file-system-like namespace and API, all writes are append only. While the public cloud is busy streamlining storage services for cloud scale, the private cloud mindset could easily toss an object storage vendor and appliance into the mix, translate file storage as object storage, and go home early! This does provide staging for migration of user data in a private datacenter, which lowers the TCO and does away with in-house file storage hardware, but we would not get far by simply going with another private storage vendor.
#codingexercise
Count BST subtrees that lie in a given range
For example,
        10
     5      50
  1       40  100
has only one subtree, the one rooted at 40, within the range 5,45.
One way to do this would be to produce the inorder list for each subtree and check whether all of its elements fall within the range; alternatively, we can cherry-pick qualifying subtrees during the traversal itself:
bool GetCountBSTInRangeHelper(Node root, int min, int max, ref List<Node> result)
{
if (root == null) return true;
bool left = GetCountBSTInRangeHelper(root.left, min, max, ref result);
bool right = GetCountBSTInRangeHelper(root.right, min, max, ref result);
if (root.data >= min && root.data <= max && left && right)
{
    result.Add(root);
    return true;
}
return false;
}
The other approach is as follows:
void GetCountBSTInRangeHelper(Node root, int min, int max, ref List<Node> result)
{
if (root == null) return;
var nodes = new List<Node>(); // 'in' is a reserved word in C#, so a different name is needed
ToInOrder(root, ref nodes);
if (nodes.All(x => min <= x.data && x.data <= max)) result.AddRange(nodes.Except(result).ToList());
GetCountBSTInRangeHelper(root.left, min, max, ref result);
GetCountBSTInRangeHelper(root.right, min, max, ref result);
}
result.Count is the total number of subtrees requested in either case.
Also note that we should not prune the traversal based on a root node's value alone, because subtrees further down may still conform.
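A quick usage check against the example tree from earlier (assuming a Node constructor that takes the value):
var root = new Node(10);
root.left = new Node(5);   root.left.left = new Node(1);
root.right = new Node(50); root.right.left = new Node(40); root.right.right = new Node(100);
var result = new List<Node>();
GetCountBSTInRangeHelper(root, 5, 45, ref result);
// result.Count == 1; only the subtree rooted at 40 lies entirely within the range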

Monday, February 20, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure networking and started discussing Azure storage.
The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing. Disks are offered with server-side encryption at rest and Azure disk encryption with BitLocker/DMCrypt. In addition, disks come with blob cache technology, enterprise-grade durability with three replicas, snapshots for backup, the ability to expand disks, and a REST interface for developers.
Azure files is a fully managed cloud file storage for use with IaaS and on-premise instances. Azure blobs are of three types - Block blobs, Append blobs, Page blobs.
The blob storage is tiered.  There are two tiers - hot tier and cold tier.  Hot is used for commonly used data and cold for rarely used data. The redundancy features are handled by storage stamps and a location service.
A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication.
The stream layer stores bits on the disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. It behaves like a distributed file system except files are called streams but the functionality is essentially just for storage and replication. The partition layer is built for managing and understanding the higher level data abstractions (Blob, table and queue), providing a scalable object namespace,  providing transaction ordering and strong consistency of objects and storing object data on top of the stream layer and caching object data to reduce disk I/O.
The front-end layer is a collection of stateless servers that handle incoming requests.
There are two replication engines within the system and they have separate responsibilities.  These are intra-stamp replication which is handled by the stream layer and the inter-stamp replication which is handled by the partition layer. Interstamp replicates objects and the transactions applied to those objects, whereas intra-stamp replication replicates blocks of disk-storage that are used to make up the objects.
#codingexercise
Find the number of inversions in an unsorted array. For example: 8,4,2,1 has an inversion count of 6 and inversions as (8,4), (8,2), (8,1),(4,2),(4,1),(2,1)
// partial example
int GetInversionCount(List<int> A)
{
Node root = null;
int result = 0;
for (int i = 0; i < A.Count; i++)
     root = insert(A[i], root, ref result);
return result;
}
Node insert(int key, Node root, ref int result)
{
if (root == null) return new Node(key);
if (key < root.key)
{
     root.left = insert(key, root.left, ref result);
     result = result + treesize(root.right) + 1; // everything in the right subtree, plus the root itself, is larger and was inserted earlier
}else{
     root.right = insert(key, root.right, ref result);
}
root.height = max(treeheight(root.left), treeheight(root.right)) + 1;
root.size = treesize(root.left) + treesize(root.right) + 1;
// AVL balance the tree at root as shown earlier:
// left-left case (left heavy, key inserted in the left child's left subtree): right rotate the root
// right-right case (right heavy, key inserted in the right child's right subtree): left rotate the root
// left-right case: left rotate the left child, then right rotate the root
// right-left case: right rotate the right child, then left rotate the root
// Essentially we maintain two invariants as we progress through the sequence: the BST keeps the inorder of the elements sorted, and the AVL balancing keeps the tree height logarithmic so each insert costs O(log n).
return root;
}
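The rotations referenced in the comments above could be sketched as follows, keeping the height and size fields consistent; this is the standard AVL rotation pair, shown here for completeness:
Node RightRotate(Node y)   // used for the left-left case
{
    Node x = y.left;
    y.left = x.right;      // x's right subtree moves under y
    x.right = y;
    y.height = max(treeheight(y.left), treeheight(y.right)) + 1;
    y.size = treesize(y.left) + treesize(y.right) + 1;
    x.height = max(treeheight(x.left), treeheight(x.right)) + 1;
    x.size = treesize(x.left) + treesize(x.right) + 1;
    return x;              // x becomes the new subtree root
}
Node LeftRotate(Node x)    // used for the right-right case; the mirror image of the above
{
    Node y = x.right;
    x.right = y.left;
    y.left = x;
    x.height = max(treeheight(x.left), treeheight(x.right)) + 1;
    x.size = treesize(x.left) + treesize(x.right) + 1;
    y.height = max(treeheight(y.left), treeheight(y.right)) + 1;
    y.size = treesize(y.left) + treesize(y.right) + 1;
    return y;
}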

Sunday, February 19, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure networking and started discussing Azure storage. The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing. Disks are of two types in Azure - Premium disks (SSD) and Standard disks (HDD) - and are backed by page blobs in Azure storage. Disks are offered with server-side encryption at rest and Azure disk encryption with BitLocker/DMCrypt. In addition, disks come with blob cache technology, enterprise-grade durability with three replicas, snapshots for backup, the ability to expand disks, and a REST interface for developers. Azure files is fully managed cloud file storage for use with IaaS and on-premise instances. The scenarios covered include lift-and-shift, hosting high-availability workload data, and backup and disaster recovery.
Azure blobs are of three types - Block blobs, Append blobs, Page blobs. The blob storage is tiered.  There are two tiers - hot tier and cold tier.  Hot is used for commonly used data and cold for rarely used data. The redundancy options include Locally Redundant Storage aka LRS, Geo Redundant Storage aka GRS and Read Access - Geo Redundant Storage aka RA-GRS.
The redundancy features are handled by storage stamps and a location service.
A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication.
The stream layer stores bits on the disk and is in charge of distributing and replicating the data across many servers to keep data durable within a storage stamp. It behaves like a distributed file system except files are called streams but the functionality is essentially just for storage and replication. 
The data is stored in the stream layer but is accessible from the partition layer. The partition servers and stream servers are usually together in the same storage stamp. 
Since the stream layer does not understand the higher level constructs, the partition layer is built for (a) managing and understanding the higher level data abstractions (blob, table and queue), (b) providing a scalable object namespace, (c) providing transaction ordering and strong consistency of objects, (d) storing object data on top of the stream layer, and (e) caching object data to reduce disk I/O.
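An illustrative interface capturing those five responsibilities (the naming is mine; the actual partition layer interface is internal to the service):
// Sketch only, to summarize the responsibilities (a) through (e) above.
interface IPartitionLayer
{
    object Read(string objectName);                    // (a) blob/table/queue abstractions over streams
    IEnumerable<string> List(string namespacePrefix);  // (b) scalable object namespace
    void Commit(IList<object> transactionOps);         // (c) transaction ordering and strong consistency
    void Write(string objectName, byte[] data);        // (d) object data stored on top of the stream layer
    byte[] ReadCached(string objectName);              // (e) cached object data to reduce disk I/O
}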
The front-end layer is a collection of stateless servers that handle incoming requests.

#design_idea : https://1drv.ms/w/s!Ashlm-Nw-wnWrwbnnkOk10IVata4 
#codingexercise
Print common elements of two binary trees:
List<int> GetCommonElements(Node root1, Node root2)
{
var list1 = new List<Node>();
var list2 = new List<Node>();
ToInOrder(root1, ref list1);
ToInOrder(root2, ref list2);
return list1.Select(x => x.data).Intersect(list2.Select(x => x.data)).ToList(); // intersect by value, not by node reference
}

We can also count inversions in an array using a BST. An inversion is a pair of elements that are out of sorted order. The idea is that by inserting into an AVL-balanced BST we keep track of the larger elements, so at each insert we know how many already-inserted elements are larger than the node being inserted.

Saturday, February 18, 2017

We continue with a detailed study of Microsoft Azure stack as inferred from an introduction of Azure by Microsoft. We reviewed some more features of Azure networking and started discussing Azure storage. The IaaS offerings from Azure storage services include disks and files whereas the PaaS offerings include objects, tables and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance and auto load balancing. Disks are of two types in Azure - Premium disks (SSD) and Standard disks (HDD) - and are backed by page blobs in Azure storage. Disks are offered out of about 26 Azure regions with server-side encryption at rest and Azure disk encryption with BitLocker/DMCrypt. In addition, disks come with blob cache technology, enterprise-grade durability with three replicas, snapshots for backup, the ability to expand disks, and a REST interface for developers.
Azure files is fully managed cloud file storage for use with IaaS and on-premise instances. The scenarios covered include lift-and-shift, hosting high-availability workload data, and backup and disaster recovery. 
Azure blobs are of three types - block blobs, append blobs and page blobs. Block blobs are used for documents, images, video and so on. Append blobs are used for multi-writer append-only scenarios such as logging and big data analytics output. Page blobs are used for page-aligned random reads and writes, as with IaaS disks, Event Hub and block-level backup. The blob storage is tiered. There are two tiers - hot tier and cold tier. Hot is used for commonly used data and cold for rarely used data. The former is roughly 2.4 cents per GB per month and the cold is one cent per GB per month. There is no charge for switching from hot to cool. Generally the rule of thumb is to pick the hot tier when we have frequent use and the cold tier when we have high volume. Both tiers are manageable with an identical API and offer similar throughput and latency. The redundancy options include Locally Redundant Storage aka LRS, Geo Redundant Storage aka GRS and Read Access - Geo Redundant Storage aka RA-GRS.
In LRS, all data in the storage account is made durable by replicating transactions synchronously to three different storage nodes within the same region.
GRS is the default option for redundancy when a storage account is created. In addition to what LRS does, GRS also queues up asynchronous replication to another secondary region where another three more storage nodes are made available.
RA-GRS gives read-only access to a storage account's data in the secondary region on top of the GRS redundancy. Since replication to the secondary region is asynchronous, the secondary eventually has a consistent version of the data. LRS costs less than GRS and has higher throughput than GRS, and is especially suited for applications that have their own geo-replication strategy. 
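A quick worked comparison using the per-GB tier prices quoted above (transaction charges, which also differ between the tiers, are left out of this sketch):
// Capacity cost only, at the rates quoted in the text: 2.4 cents/GB/month hot, 1 cent/GB/month cool.
double MonthlyCapacityCostUSD(double gigabytes, bool hotTier)
{
    double perGB = hotTier ? 0.024 : 0.01;
    return gigabytes * perGB;
}
// e.g. 10 TB (10240 GB) of rarely read data: hot = $245.76/month, cool = $102.40/month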
The paper on Windows Azure Storage says that the system consists of storage stamps and a location service.
A storage stamp is a cluster of N racks of storage nodes, where each rack is built out as a separate fault domain with redundant networking and power. Clusters typically range from 10 to 20 racks with 18 disk-heavy storage nodes per rack. Each storage stamp is located by a VIP and served by layers of front-ends, a partition layer and a stream layer that provides intra-stamp replication. Generally a storage stamp is kept at 70% utilization in terms of capacity, transactions and bandwidth. When a storage stamp reaches 70% utilization, the location service migrates accounts to different stamps using inter-stamp replication. 


#codingexercise 
Convert a BST into a Min Heap 
Node ToMinHeap(Node root)
{
if (root == null) return null;
var sorted = new List<Node>();
ToInOrderList(root, ref sorted);
BuildMinHeap(sorted); // re-links the sorted nodes as a complete binary tree
return sorted[0];     // the smallest element becomes the heap root
}
void ToInOrderList(Node root, ref List<Node> all) 
{ 
if (root == null) return; 
ToInOrderList(root.left, ref all); 
all.Add(root); 
ToInOrderList(root.right, ref all); 
} 
void BuildMinHeap(List<Node> sorted)
{
sorted.ForEach(x => { x.left = null; x.right = null; });
for (int i = 1; i <= sorted.Count / 2; i++)  // 1-based parent indices; only internal nodes get children
    MinHeapify(sorted, i);
}
void MinHeapify(List<Node> sorted, int i)
{
sorted[i - 1].left = (2 * i <= sorted.Count) ? sorted[2 * i - 1] : null;
sorted[i - 1].right = (2 * i + 1 <= sorted.Count) ? sorted[2 * i] : null;
}
e.g. for the sorted inorder list: 1 2 3 4 7 8 9 10 14 16
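Linking that sorted list as a complete binary tree yields a valid min heap, because a parent at 1-based index i always precedes its children at 2i and 2i+1 in sorted order:
              1
           2     3
         4   7  8  9
       10 14 16
// 10 and 14 become the children of 4; 16 becomes the left child of 7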
To heapify an unsorted list:
void Min_Heapify(List<Node> A, int i) // 1-based index into a 0-based list
{
int l = 2 * i;
int r = 2 * i + 1;
int smallest = i;
if (l <= A.Count && A[l - 1].data < A[smallest - 1].data)
    smallest = l;
if (r <= A.Count && A[r - 1].data < A[smallest - 1].data)
    smallest = r;
if (smallest != i)
{
    var temp = A[i - 1]; A[i - 1] = A[smallest - 1]; A[smallest - 1] = temp; // swap in place
    Min_Heapify(A, smallest); // recurse only when a swap occurred, otherwise the heap property already holds
}
}
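To heapify the whole unsorted list, Min_Heapify is applied bottom-up over the internal nodes, i.e. the standard build-heap loop:
void Build_Min_Heap(List<Node> A)
{
    for (int i = A.Count / 2; i >= 1; i--)  // internal nodes only, deepest parents first
        Min_Heapify(A, i);
}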