Tuesday, November 27, 2018




Today we continue discussing best practices in storage engineering:

106) The CAP theorem states that a distributed system cannot simultaneously guarantee availability, consistency, and partition tolerance. However, it is possible to work around this by layering and designing the system around a specific fault model. The append-only layer provides high availability, while the partitioning layer provides strong consistency guarantees. Together they can handle a specific set of faults with both high availability and strong consistency.
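The layering idea above can be sketched as follows: an append-only layer that never rejects writes sits beneath a partitioning layer that routes each key to exactly one owner, giving a single serialization point per key. The class and method names here are hypothetical, purely to illustrate the shape of the design.

```python
# Minimal sketch of the layered design: availability from the append-only
# layer, consistency from partition ownership. All names are illustrative.

class AppendOnlyLog:
    """Accepts every write by appending; never blocks or overwrites."""
    def __init__(self):
        self.entries = []

    def append(self, record):
        self.entries.append(record)
        return len(self.entries) - 1  # offset acts as a stable position

class PartitionedStore:
    """Routes each key to a single partition, so reads of a key always
    consult the one log that owns it -- a single serialization point."""
    def __init__(self, num_partitions=4):
        self.partitions = [AppendOnlyLog() for _ in range(num_partitions)]

    def _owner(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        return self._owner(key).append((key, value))

    def get(self, key):
        # The last appended entry for the key wins within its partition.
        for k, v in reversed(self._owner(key).entries):
            if k == key:
                return v
        return None

store = PartitionedStore()
store.put("a", 1)
store.put("a", 2)
print(store.get("a"))  # 2
```

Under this fault model, a failed partition affects only its own keys; the append-only log beneath it keeps accepting writes for the rest.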
107) Workload profiles: Every storage engineering product handles data I/O, and one of the ways to try out the product is to run a set of workload profiles with varying patterns of data access.
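A workload profile can be as simple as a read/write ratio paired with an access pattern. The sketch below generates such profiles; the profile names, ratios, and cost-free operation tuples are all made up for illustration, not drawn from any particular benchmarking tool.

```python
# Sketch of workload profiles for exercising a storage product: each
# profile fixes a read/write mix and an access pattern.
import random

PROFILES = {
    "read-heavy-sequential": {"read_ratio": 0.95, "pattern": "sequential"},
    "write-heavy-random":    {"read_ratio": 0.10, "pattern": "random"},
    "mixed-hotspot":         {"read_ratio": 0.50, "pattern": "hotspot"},
}

def next_offset(pattern, last, num_blocks, rng):
    if pattern == "sequential":
        return (last + 1) % num_blocks
    if pattern == "hotspot" and rng.random() < 0.9:
        # 90% of accesses land in the first 10% of blocks.
        return rng.randrange(max(1, num_blocks // 10))
    return rng.randrange(num_blocks)

def generate(profile_name, ops=10, num_blocks=100, seed=0):
    profile = PROFILES[profile_name]
    rng = random.Random(seed)
    last = -1
    workload = []
    for _ in range(ops):
        op = "read" if rng.random() < profile["read_ratio"] else "write"
        last = next_offset(profile["pattern"], last, num_blocks, rng)
        workload.append((op, last))
    return workload

print(generate("read-heavy-sequential", ops=3))
```

Running the same profiles against two candidate products gives a like-for-like comparison of their data paths.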
108) Intra/Inter: Since data I/O crosses multiple layers, a lower layer may perform operations that apply equally to artifacts at a higher scope in another layer. For example, replication may occur between copies of objects within a single PUT operation and may equally apply to objects spanning sites designated for content replication. This not only encourages reusability but also provides a way to check the architecture for consistency.
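The reuse described above can be shown with one replication routine applied at two scopes. In this sketch the replica targets are plain callables standing in for local disks and remote sites; the function name and quorum semantics are assumptions for illustration.

```python
# Sketch of the intra/inter idea: one replication routine reused at two
# scopes -- across local disks within a PUT, and across remote sites.

def replicate(data, targets, quorum):
    """Write data to all targets; succeed once `quorum` acks arrive."""
    acks = 0
    for write in targets:
        if write(data):
            acks += 1
    return acks >= quorum

local_disks = [lambda d: True, lambda d: True, lambda d: False]  # one bad disk
remote_sites = [lambda d: True, lambda d: True]

# Intra: replicate within a single PUT across local disks (quorum 2 of 3).
print(replicate(b"object-bytes", local_disks, quorum=2))   # True
# Inter: the same routine replicates across geo sites (quorum 2 of 2).
print(replicate(b"object-bytes", remote_sites, quorum=2))  # True
```

Because both scopes share one routine, a correctness fix or a consistency check made at one scope automatically holds at the other.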
109) Local/Remote: While many components within the storage server treat disk operations as local, certain components gather information across disks and write to them directly. In that case, even if the disk is local, it is more consistent to access it via a loopback and simplify the logic by treating every such operation as remote.
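One way to picture this is a single disk-client interface addressed by host, where a local disk is simply the client pointed at loopback. The `DiskClient` class below is hypothetical; a real implementation would speak an actual transport rather than an in-memory dict.

```python
# Sketch of treating every disk as remote: local access goes through the
# same client interface, just pointed at loopback, so higher layers keep
# one code path instead of two.

class DiskClient:
    """Uniform access interface keyed by (host, disk_id)."""
    def __init__(self, host, disk_id):
        self.host, self.disk_id = host, disk_id
        self.blocks = {}

    def write(self, offset, data):
        self.blocks[offset] = data

    def read(self, offset):
        return self.blocks.get(offset)

# A local disk is addressed via loopback, exactly like a remote one.
local = DiskClient("127.0.0.1", disk_id=0)
remote = DiskClient("10.0.0.7", disk_id=3)

for disk in (local, remote):          # single code path for both
    disk.write(0, b"superblock")

print(local.read(0) == remote.read(0))  # True: same logic either way
```

The cost is a loopback hop for local disks; the gain is that components gathering information across disks never need a local special case.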
110) Resource consumption: We referred to performance engineering for improving data operations. However, the number of resources used per request was not called out, because it may be perfectly acceptable as long as the elapsed time stays within bounds. Still, resource conservation has a lot to do with reducing interactions, which in turn leads to efficiency.
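The effect of reducing interactions can be made concrete with a toy cost model: batching several small writes into one request amortizes the fixed per-request overhead. The overhead and per-byte costs below are arbitrary units chosen only to illustrate the accounting, not measurements.

```python
# Sketch of resource conservation by reducing interactions: batching N
# small writes into one request cuts per-request overhead.

PER_REQUEST_OVERHEAD = 5   # e.g. connection + header cost (arbitrary units)
PER_BYTE_COST = 1

def cost_unbatched(payloads):
    # Each payload pays the fixed overhead separately.
    return sum(PER_REQUEST_OVERHEAD + PER_BYTE_COST * len(p) for p in payloads)

def cost_batched(payloads):
    # One request carries all payloads; overhead is paid once.
    return PER_REQUEST_OVERHEAD + PER_BYTE_COST * sum(len(p) for p in payloads)

writes = [b"x" * 10] * 8  # eight 10-byte writes
print(cost_unbatched(writes), cost_batched(writes))  # 120 85
```

Even when each unbatched request finishes within its latency bound, the batched path consumes fewer total resources, which is the distinction the item above draws.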
