Today we continue discussing best practices from storage engineering:
275) Workloads that are not well-behaved may be throttled until they are well-behaved. A workload with a high request rate is more likely to be throttled; a workload with a low request rate is less likely to be. A token-bucket throttling sketch follows this list.
276) Serialization of objects enables their reconstruction at the remote destination. It is more than a protocol for packing and unpacking data on the wire: it can include constraints that enable data validation and help prevent failures down the line. If the serialized payload is also authenticated, say with a signature or MAC alongside any encryption, it becomes tamper-evident. A validating, tamper-evident serialization sketch follows this list.
277) Serializability, by contrast, is the notion of correctness when simultaneous updates happen to a resource. When multiple transactions commit their actions, their combined result must correspond to the one produced by some serial execution of those transactions. This is very helpful for eliminating inconsistencies across transactions. Serializability differs from isolation only in that isolation expresses the same guarantee from the point of view of a single transaction. A worked lost-update example follows this list.
278) Databases have long been the veritable storage systems that guarantee transactions. Two-phase locking was introduced along with transactions: a shared lock is acquired before a read and an exclusive lock before a write. The two phases are lock acquisition (growing) and lock release (shrinking); once a transaction releases any lock, it may not acquire another. With conflicting transactions blocking on a wait queue, this enforces serializability. A minimal two-phase locking sketch follows this list.
279) Transaction locking and logging proved onerous and complicated. Multi-version concurrency control (MVCC) was brought in so that readers do not have to acquire locks. With a consistent view of the data as of some point of time in the past, a reader no longer needs to keep track of every change made since that point. An MVCC snapshot-read sketch follows this list.
280) Optimistic concurrency control (OCC) was introduced to let each transaction proceed without locks while maintaining histories of its reads and writes, so that transactions found to cause isolation conflicts at commit time can be rolled back. An OCC validation sketch closes this list.
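Here is a minimal sketch of the throttling in point 275, assuming a token-bucket limiter; the class name, rates, and burst size are illustrative, not taken from any particular system.

```python
import time

class TokenBucket:
    """Throttle a workload: each request consumes a token; tokens refill
    at a fixed rate. A workload whose request rate exceeds the refill rate
    drains the bucket and is throttled until it slows down (behaves)."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec      # sustained requests/second allowed
        self.capacity = burst         # short bursts above the rate are tolerated
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                  # caller should back off or queue

# A well-behaved workload passes; a high-rate one starts seeing rejections.
bucket = TokenBucket(rate_per_sec=100, burst=20)
accepted = sum(bucket.allow() for _ in range(1000))
print(f"accepted {accepted} of 1000 back-to-back requests")
```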
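A sketch of point 276, assuming JSON as the wire format and an HMAC tag for tamper evidence; the Record fields and the pre-shared key are hypothetical.

```python
import json, hmac, hashlib
from dataclasses import dataclass, asdict

KEY = b"shared-secret"  # hypothetical pre-shared key between sender and receiver

@dataclass
class Record:
    name: str
    size_bytes: int

    def validate(self):
        # Constraints travel with the type, catching bad payloads early.
        if not self.name:
            raise ValueError("name must be non-empty")
        if self.size_bytes < 0:
            raise ValueError("size_bytes must be non-negative")

def serialize(rec: Record) -> bytes:
    rec.validate()
    body = json.dumps(asdict(rec), sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).hexdigest().encode()
    return tag + b"." + body          # authentication tag, then payload

def deserialize(wire: bytes) -> Record:
    tag, body = wire.split(b".", 1)
    expect = hmac.new(KEY, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("payload was tampered with in transit")
    rec = Record(**json.loads(body))
    rec.validate()                    # re-check constraints on the receiving side
    return rec

print(deserialize(serialize(Record("chunk-0001", 4096))))
```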
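A worked example for point 277, with two hypothetical transactions that each add 1 to a counter. The interleaving below loses an update, so its result matches no serial order; a serializable system must reject or reorder such a schedule.

```python
# Serial execution (T1 then T2, or T2 then T1) always yields counter == 2.
counter = 0
t1_read = counter          # T1 reads 0
t2_read = counter          # T2 reads 0, interleaved before T1 writes
counter = t1_read + 1      # T1 writes 1
counter = t2_read + 1      # T2 writes 1, clobbering T1's update
assert counter == 1        # lost update: equivalent to NO serial execution
```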
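A minimal sketch of the two phases in point 278. The lock manager here is a single-process toy, and for brevity every lock is exclusive; a real manager also grants shared (read) locks that exclusive (write) locks conflict with.

```python
import threading

class TwoPhaseTxn:
    """Growing phase: acquire locks as needed.
    Shrinking phase: after the first release, no further acquisition."""

    def __init__(self, locks: dict):
        self.locks = locks               # resource name -> threading.Lock
        self.held = []
        self.shrinking = False

    def acquire(self, resource: str):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquire after first release")
        self.locks[resource].acquire()   # blocks (waits in queue) on conflict
        self.held.append(resource)

    def commit(self):
        self.shrinking = True            # enter the shrinking phase
        for r in reversed(self.held):
            self.locks[r].release()
        self.held.clear()

locks = {"row:42": threading.Lock()}
txn = TwoPhaseTxn(locks)
txn.acquire("row:42")   # growing phase
txn.commit()            # shrinking phase: all locks released together
```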
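A sketch of point 279: a toy multi-version store where each write appends a new version stamped with a commit timestamp, and a reader sees a consistent snapshot as of its start time without taking any locks.

```python
import itertools

class MVCCStore:
    def __init__(self):
        self.versions = {}                 # key -> list of (commit_ts, value)
        self.clock = itertools.count(1)    # monotonically increasing timestamps

    def write(self, key, value):
        ts = next(self.clock)              # commit timestamp of this version
        self.versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot(self):
        return next(self.clock)            # reader's consistent point in time

    def read(self, key, snap_ts):
        # Latest version committed at or before the snapshot; no locks needed.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= snap_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("k", "v1")
snap = store.snapshot()
store.write("k", "v2")                     # committed after the snapshot
assert store.read("k", snap) == "v1"       # reader still sees its snapshot
```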
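Finally, a sketch of point 280: each transaction buffers writes and records its read set; at commit, a hypothetical validator aborts any transaction whose reads overlap the writes of transactions that committed after it began.

```python
class OCCTxn:
    def __init__(self, db, committed_log):
        self.db = db                   # shared dict: key -> value
        self.log = committed_log       # write-sets of committed transactions
        self.start = len(committed_log)
        self.reads, self.writes = set(), {}

    def read(self, key):
        self.reads.add(key)
        return self.writes.get(key, self.db.get(key))

    def write(self, key, value):
        self.writes[key] = value       # buffered; nothing touches db yet

    def commit(self) -> bool:
        # Validation: did any transaction committing after we started write
        # a key we read? If so, our reads are stale and we roll back.
        for wset in self.log[self.start:]:
            if wset & self.reads:
                return False           # conflict: discard buffered writes
        self.db.update(self.writes)    # install writes (atomic in this toy)
        self.log.append(set(self.writes))
        return True

db, log = {"x": 0}, []
t1, t2 = OCCTxn(db, log), OCCTxn(db, log)
t1.write("x", t1.read("x") + 1)
t2.write("x", t2.read("x") + 1)
assert t1.commit() is True
assert t2.commit() is False            # t2's read of "x" is stale; rolled back
```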