We continue our detailed study of the Microsoft Azure stack, drawing on Microsoft's introduction to Azure. We reviewed some more features of Azure storage and were discussing the replication flow in the Windows Azure Storage (WAS) service, where we looked at the sealing operation among extent nodes (ENs).
Sealing an extent means its commit length never changes again. To seal, all three ENs are asked for their current commit length; if one or more replicas report a different length, the minimum commit length is chosen, since that is the only prefix every replica is guaranteed to hold. The partition layer has two read patterns that must handle failures: reads at a known offset and length for data, and sequential reads for metadata and logs. At the start of a partition load, a commit-length check is requested from the primary EN of the last extent of these two streams. This verifies that all the replicas are available and that they all have the same length. If not, the extent is sealed, and during partition load reads are performed only against a replica sealed by the SM.
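As a rough sketch, the reconciliation at seal time could look like the following, assuming a hypothetical ExtentNode abstraction with a CommitLength property and a Seal method; this is not the actual WAS interface:
long SealExtent(IList<ExtentNode> replicas)
{
    // The smallest commit length is the only prefix every replica is
    // guaranteed to hold, so it becomes the sealed length.
    long sealedLength = replicas.Min(en => en.CommitLength); // requires System.Linq
    foreach (var en in replicas)
        en.Seal(sealedLength); // after this, the commit length never changes
    return sealedLength;
}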
Extents that are sealed are erasure coded. The Windows Azure Storage (WAS) service uses the Reed-Solomon erasure code. Note that this is not the same as Pelican, which used a different erasure coding scheme for its exascale cold storage. Here an extent is divided into n equal-sized data chunks, usually at block-level boundaries, and m error-correcting chunks are computed from them; as long as no more than m chunks are lost, the extent can be recreated. Erasure coding does not conflict with replication; in fact, it increases the durability of the data.
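Reed-Solomon itself is beyond a short snippet, but the m = 1 special case, plain XOR parity, illustrates the idea. The helper below is only a sketch and assumes all chunks are the same length:
byte[] ComputeParity(IList<byte[]> chunks)
{
    // XOR the n data chunks together; any single lost chunk equals
    // the XOR of the surviving chunks and this parity chunk.
    var parity = new byte[chunks[0].Length];
    foreach (var chunk in chunks)
        for (int i = 0; i < parity.Length; i++)
            parity[i] ^= chunk[i];
    return parity;
}
Reed-Solomon generalizes this to m error-correcting chunks so that any m losses can be tolerated.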
In order to be efficient on reads, a deadline is added to each read request. If the deadline cannot be met, the read request is failed, and the client selects a different EN to read the data from. Since there are three replicas, this also gives some load balancing. The same method is used with erasure-coded data.
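A deadline-driven read with failover might look like the sketch below, assuming a hypothetical Read(offset, length, deadline) on each replica that throws TimeoutException when the deadline cannot be met:
byte[] ReadWithFailover(IList<ExtentNode> replicas, long offset, int length, TimeSpan deadline)
{
    foreach (var replica in replicas) // three replicas spread the load
    {
        try
        {
            return replica.Read(offset, length, deadline);
        }
        catch (TimeoutException)
        {
            // deadline missed: fail this request and try the next EN
        }
    }
    throw new IOException("no replica served the read within the deadline");
}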
#codingexercise
Get the max sum level in a binary tree
// Node is assumed to expose an int data field and left/right children.
static int GetMaxLevel(Node root, Node delimiter)
{
    if (root == null) return -1;
    var q = new Queue<Node>();
    var visited = new List<Node>(); // BFS order, with delimiters marking level ends
    q.Enqueue(root);
    q.Enqueue(delimiter);
    while (q.Count > 0)
    {
        var node = q.Dequeue();
        visited.Add(node);
        if (node == delimiter)
        {
            // re-enqueue the delimiter only while nodes remain,
            // otherwise the loop would never terminate
            if (q.Count > 0) q.Enqueue(delimiter);
        }
        else
        {
            if (node.left != null) q.Enqueue(node.left);
            if (node.right != null) q.Enqueue(node.right);
        }
    }
    // split the visited list, not the queue, which is empty by now
    var levels = visited.Split(delimiter);
    int max = int.MinValue;
    int result = -1;
    for (int level = 0; level < levels.Count; level++)
    {
        int sum = levels[level].Sum(n => n.data); // requires System.Linq
        if (sum > max) { max = sum; result = level; }
    }
    return result;
}
// Extension method; must be declared inside a static class.
static List<List<Node>> Split(this List<Node> nodes, Node delimiter)
{
    var result = new List<List<Node>>();
    var current = new List<Node>();
    foreach (var node in nodes)
    {
        if (node == delimiter)
        {
            if (current.Count > 0) result.Add(current);
            current = new List<Node>();
        }
        else
        {
            current.Add(node);
        }
    }
    if (current.Count > 0) result.Add(current); // last level, if any
    return result;
}