We continue discussing ZooKeeper. It is a co-ordination service with elements from group messaging, shared registers, and distributed lock services. It provides an interface that guarantees the wait-free property and FIFO execution of requests from each client. Write requests across all clients are also linearized.
We were discussing the components of ZooKeeper - the request processor, the atomic broadcast, and the replicated database. The request processor prepares a request for processing, which, for write requests, also includes co-ordination activities among the servers. If the request processing involves co-ordination, it is handled by an agreement protocol, which is an implementation of atomic broadcast. The changes are committed to the ZooKeeper database, which is replicated across all servers. This database is periodically snapshotted because replaying all the messages in order would take too long to recover state. During a snapshot, the ZooKeeper state is not locked, so the snapshot does not necessarily reflect the state of ZooKeeper at any single point in time. However, since the transactions are idempotent, such fuzzy snapshots still allow the state to be restored: changes can be applied more than once, as long as they are applied in the same order as in a replay.
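To see why idempotence makes such fuzzy snapshots safe, consider a minimal sketch, assuming a hypothetical key-value state where each transaction carries the complete new value and its transaction id rather than a delta (the types and names here are illustrative, not ZooKeeper's actual code):
using System;
using System.Collections.Generic;

class Txn
{
    public string Path;   // znode path
    public string Value;  // complete new value, not an increment
    public long Zxid;     // transaction id; fixes the replay order
}

class ReplicatedDb
{
    // path -> (value, zxid of the last applied change)
    private Dictionary<string, Tuple<string, long>> state =
        new Dictionary<string, Tuple<string, long>>();

    // Applying the same transaction a second time overwrites the entry
    // with identical contents, so replay over a fuzzy snapshot is safe.
    public void Apply(Txn t)
    {
        state[t.Path] = Tuple.Create(t.Value, t.Zxid);
    }
}
Recovery loads the fuzzy snapshot and then replays, in zxid order, every transaction delivered since the snapshot began; a transaction already reflected in the snapshot is simply applied again with the same result.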
Writes and reads for clients are handled easily with the above design. Since the server processes writes in order and there is no concurrency for writes, even the notifications are strictly ordered. On every write, the notifications are sent out and cleared. Only the server that the client connects to sends notifications to that client. Read requests are handled locally: they are served directly from the in-memory database and do not involve the request processor or the atomic broadcast. Such fast reads, however, do not guarantee a precedence order, which means a read may return a stale value even though a more recent change to the same data has been committed. To alleviate this, a 'sync' primitive is provided that ensures consistency by performing the equivalent of flushing all pending writes. No broadcast is required for sync; instead, it is placed at the end of the queue of requests between the leader and the follower executing the call to sync. For this to work, the follower must be sure that the leader has not changed, which it can ascertain by looking at the pending queue: if the queue is not empty, the leader is unchanged. If the queue is empty, the leader needs to issue a null transaction and order the sync after that transaction. If the null transaction is not issued, the scheme relies on timeouts being set so that the leader realizes it is not the leader before the followers abandon it.
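A minimal sketch of that decision logic, assuming a hypothetical pending queue of in-flight transactions between the leader and one follower (illustrative names, not ZooKeeper's actual code):
using System.Collections.Generic;

class SyncOrdering
{
    // Transactions the leader has proposed to this follower
    // but not yet committed.
    private Queue<string> pendingQueue = new Queue<string>();

    public void OrderSync(string syncRequest)
    {
        if (pendingQueue.Count == 0)
        {
            // Empty queue: nothing in flight can prove the leader is
            // still the leader, so it commits a null transaction and
            // orders the sync after it.
            pendingQueue.Enqueue("null-txn");
        }
        // Non-empty queue: a pending transaction that later commits
        // already proves leadership; just append the sync at the end.
        pendingQueue.Enqueue(syncRequest);
    }
}
Because sync simply rides behind already-queued writes, it adds no extra broadcast traffic when the leader is under load.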
#codingexercise
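The exercise below counts the occurrences of a character in a sorted string: a binary search locates one occurrence, and a linear expansion finds the boundaries of the run.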
int binary_search(String input, int start, int end, char val)
{
    if (start > end) return -1;  // guards against an empty input
    int mid = (start + end) / 2;
    if (input[mid] == val) return mid;
    if (start == end) return -1; // range exhausted without a match
    if (input[mid] < val)
        return binary_search(input, mid + 1, end, val);
    else
        return binary_search(input, start, mid, val);
}
int GetCount(String input, char val)
{
    // Find any one occurrence of val in the sorted string.
    int current = binary_search(input, 0, input.Length - 1, val);
    if (current == -1) return 0;
    int found = current;
    // Walk left to just before the first occurrence.
    while (current >= 0 && input[current] == val) current--;
    int start = current + 1;
    // Walk right to just past the last occurrence.
    current = found;
    while (current <= input.Length - 1 && input[current] == val) current++;
    int end = current - 1;
    return end - start + 1;
}
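For example, with input "aabbbbc" and val 'b', binary_search returns index 3 and GetCount returns 4.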