Saturday, March 11, 2017

We continue with a detailed study of the Microsoft Azure stack as inferred from an introduction to Azure by Microsoft. We reviewed some more features of Azure storage. We saw how erasure coding works in Windows Azure Storage, aka WAS, and how it differs from Pelican. We saw what steps WAS takes to improve durability. We also saw the benefits of journaling. We reviewed the partition layer, which translates objects such as blobs, tables, or queues to storage. We saw how the partition manager maintains a massive Object Table that is sharded across partition servers, which maintain their consistency across concurrent transactions using a lock service. We read about the partition layer data model, the supported data types and operations, the partition layer architecture, and the RangePartition data structures. We reviewed the load balancing, split, and merge operations. Then we reviewed intrastamp and interstamp replication. We also took stock of TCO, performance, application profiles, and their comparison study.
We reviewed some of their design choices. They separated nodes dedicated to computation from nodes dedicated to storage. They decided to use range-based partitioning/indexing instead of hash-based indexing for the partition layer's Object Table, which is what makes routing and load balancing cheap, as sketched below. They used throttling and isolation, which proved very useful when accounts are not well behaved. They built automatic load balancing on top of the range-based partitioning approach and the account-based throttling. They used a separate log stream for each RangePartition to isolate the load time of each RangePartition. They used journaling to overcome read/write contention on the same drive. This also helps optimize small writes, which are the norm when the log files are separated per RangePartition.
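To make the range-based indexing concrete, here is a minimal sketch (all names are hypothetical, not the WAS implementation): a sorted map from the low key of each RangePartition to the server currently serving it. Because the keys stay sorted, routing a lookup is a boundary search, and a partition can be split or merged just by editing the boundaries.

#include <iostream>
#include <iterator>
#include <map>
#include <string>

// Hypothetical partition map: each entry maps the inclusive low key of a
// RangePartition to the server currently serving it.
class PartitionMap {
    std::map<std::string, std::string> lowKeyToServer;
public:
    void Assign(const std::string& lowKey, const std::string& server) {
        lowKeyToServer[lowKey] = server;
    }
    // Route a key to the server whose range contains it: the entry with
    // the greatest low key <= key. A hash-based index could not support
    // this kind of contiguous range ownership.
    std::string Lookup(const std::string& key) const {
        auto it = lowKeyToServer.upper_bound(key);
        if (it == lowKeyToServer.begin()) return ""; // no covering range
        return std::prev(it)->second;
    }
};

int main() {
    PartitionMap pm;
    pm.Assign("", "server-1");  // keys below "m" go to server-1
    pm.Assign("m", "server-2"); // keys from "m" upward go to server-2
    std::cout << pm.Lookup("account1/blob/x") << std::endl;  // server-1
    std::cout << pm.Lookup("zaccount/table/y") << std::endl; // server-2
    return 0;
}

Splitting a hot RangePartition amounts to inserting a new boundary key, and merging two cold neighbors amounts to removing one, which is why range-based indexing pairs so naturally with automatic load balancing.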
Arguably the single most beneficial design choice was the append-only system. The data is never overwritten once committed to a replica. If there are failures, the extent is immediately sealed. Since the data doesn't change, its commit length can be used to enforce consistency across all the replicas; the sketch below makes this concrete. Another application of the append-only system is its ability to provide snapshot or versioning features at little or no cost. With versions, erasure coding can also be done. Moreover, the append-only system has been a tremendous benefit for diagnosing issues as well as repairing/recovering the system on failures. With versioning, snapshots, and a history of the changes, we can now have tools to repair and recover corrupted state to a prior consistent state. The way this works is that the original is untouched; a copy is made for every write and appended after the previous such record. Since all changes are consistent, we get enough performance for scaling while providing diagnosis and troubleshooting.
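A minimal sketch of the commit-length idea (hypothetical names, not the WAS code): an extent only appends records, so its state is fully described by how many records are committed; on failure it is sealed, and replicas reconcile by truncating to an agreed commit length.

#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical append-only extent: records are appended, never overwritten,
// so the commit length alone captures the replica's state.
class Extent {
    std::vector<std::string> records;
    bool sealed = false;
public:
    // Returns the new commit length after the append.
    size_t Append(const std::string& record) {
        if (sealed) throw std::runtime_error("extent is sealed");
        records.push_back(record);
        return records.size();
    }
    // On failure the extent is sealed at its current commit length and
    // accepts no further writes.
    size_t Seal() { sealed = true; return records.size(); }
    // A replica that ran ahead of the agreed commit length drops its
    // uncommitted tail so that all replicas expose the same prefix.
    void TruncateTo(size_t commitLength) {
        if (commitLength < records.size()) records.resize(commitLength);
    }
    size_t CommitLength() const { return records.size(); }
};

Versioning and snapshots fall out of the same property: since an existing record is never modified, an older state is simply a shorter prefix of the extent.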
An append-only system comes with certain costs. Since each copy takes more space, efficient garbage collection is required to keep the space overhead low. Similarly, an end-to-end checksum is a hallmark of this kind of service, so that we keep the critical information safe, as in the sketch below.
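As an illustration of the end-to-end check (a simple FNV-1a hash stands in here for whatever checksum the real service uses): the checksum is computed when a record is written and verified when it is read back, so corruption introduced anywhere along the path is detected.

#include <cstdint>
#include <string>

// FNV-1a hash, used here as a stand-in checksum for illustration.
uint64_t Fnv1a(const std::string& data) {
    uint64_t h = 14695981039346656037ULL;
    for (unsigned char c : data) {
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

struct StoredRecord {
    std::string payload;
    uint64_t checksum; // written alongside the payload
};

StoredRecord Write(const std::string& payload) {
    return { payload, Fnv1a(payload) };
}

bool ReadVerified(const StoredRecord& r, std::string& out) {
    if (Fnv1a(r.payload) != r.checksum) return false; // corruption detected
    out = r.payload;
    return true;
}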
#codingexercise
To count the total number of N-digit numbers for which the sum of the digits at even positions equals the sum of the digits at odd positions, we modify it as follows:
// Recursively places digits one position at a time, accumulating the sums
// of the digits at even and odd positions; a completed number counts when
// the two sums are equal. Call as GetCountEqual(0, 0, 0, n, true).
long GetCountEqual(int digits, int even, int odd, int n, bool isOdd)
{
    if (digits == n) return even == odd;
    long count = 0;
    int start = (digits == 0) ? 1 : 0; // an N-digit number has no leading zero
    if (isOdd) {
        for (int i = start; i <= 9; i++)
            count += GetCountEqual(digits + 1, even, odd + i, n, false);
    } else {
        for (int i = start; i <= 9; i++)
            count += GetCountEqual(digits + 1, even + i, odd, n, true);
    }
    return count;
}
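A quick check of the recursion, assuming the first digit occupies an odd position: for n = 2 the qualifying numbers are 11, 22, ..., 99, so the call below prints 9.

#include <iostream>

int main()
{
    std::cout << GetCountEqual(0, 0, 0, 2, true) << std::endl; // prints 9
    return 0;
}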
