This is a continuation of the earlier posts starting with this one: http://ravinote.blogspot.com/2020/09/best-practice-from-networking.html
Disks compete not only with other disks but also with other forms of storage such as solid-state drives. Consequently, disks tend to become cheaper, more capable, and smarter in their operations, adding value in both emerging and traditional usages. Cloud storage costs have been said to follow a trend that asymptotically approaches zero, with the current price at about 1c per gigabyte per month for cold storage. The average cost per gigabyte per drive came down by half from 4c per gigabyte between 2013 and 2018.
Solid-state drives are sometimes considered replacements for memory and for L1 and L2 caches with added benefits. This is not necessarily true. An SSD is storage, even if it wears out. Consequently, programs should be more mindful of their reads and writes to data; if the accesses are random, those data structures are better stored on SSDs.
The use of sequential data structures is very common in storage engineering. While some components go to great lengths to make their read and write access sequential, other components may simplify their design by storing their data on an SSD.
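As an illustration of making writes sequential, here is a minimal sketch of an append-only log in Python. The class name, record framing, and file handling are assumptions for demonstration only, not a specific component from the earlier posts.

```python
import os
import struct

class AppendOnlyLog:
    """Minimal append-only log: every record is written at the tail of the
    file, so the write pattern stays sequential."""

    def __init__(self, path):
        # 'ab' positions the stream at the end of the file.
        self.f = open(path, "ab")

    def append(self, payload: bytes) -> int:
        # Prefix each record with its length so the log can be scanned later.
        offset = self.f.tell()
        self.f.write(struct.pack("<I", len(payload)))
        self.f.write(payload)
        return offset  # caller can index this offset for later lookups

    def close(self):
        self.f.flush()
        os.fsync(self.f.fileno())  # persist the tail before closing
        self.f.close()
```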
On solid-state drives, reads and writes are aligned to the page size, while erasures happen at the block level. Consequently, data structures can be organized to leverage these boundaries and read some or all of their data at once. If we frequently write less than a page, we are not making good use of the SSD; buffering can be used to aggregate writes.
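As a sketch of aggregating small writes, the buffer below accumulates records in memory and only writes them out in whole pages. The 4 KiB page size and the zero-padding policy on flush are assumptions for illustration; the real page size depends on the device.

```python
PAGE_SIZE = 4096  # assumed SSD page size

class PageWriteBuffer:
    """Accumulates small writes and flushes them to the underlying file
    only in whole pages, so the device sees page-sized writes."""

    def __init__(self, f):
        self.f = f              # file opened in binary write mode
        self.buf = bytearray()

    def write(self, data: bytes):
        self.buf.extend(data)
        # Flush only the full pages; keep the partial tail buffered.
        full = (len(self.buf) // PAGE_SIZE) * PAGE_SIZE
        if full:
            self.f.write(self.buf[:full])
            del self.buf[:full]

    def flush(self):
        # Pad the tail to a page boundary so even the final write is
        # page-aligned (padding is an assumed policy, not a requirement).
        if self.buf:
            pad = (-len(self.buf)) % PAGE_SIZE
            self.f.write(bytes(self.buf) + b"\x00" * pad)
            self.buf.clear()
        self.f.flush()
```

Used in front of a file opened in binary mode, this turns a stream of sub-page writes into page-sized writes at the device.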
The internal caching and readahead mechanisms in the SSD controller prefer long, continuous reads and writes over many simultaneous small ones, and perform them in one large chunk. This means we should open up iterations and aggregate reads and writes so that they are issued together.
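As a sketch of aggregating reads, the helper below coalesces nearby read requests into one long, continuous read and then slices the result per request. The gap threshold below which two requests are merged is an assumption chosen for illustration.

```python
MAX_GAP = 4096  # assumed gap below which two requests are merged

def coalesced_read(f, requests):
    """requests: iterable of (offset, length) pairs.
    Returns {(offset, length): bytes}, serving nearby requests from one
    long sequential read instead of many small random reads."""
    ordered = sorted(requests)
    results = {}
    i = 0
    while i < len(ordered):
        # Start a run at the current request and extend it while the next
        # request begins within MAX_GAP of the run's current end.
        start, length = ordered[i]
        end = start + length
        j = i + 1
        while j < len(ordered) and ordered[j][0] <= end + MAX_GAP:
            end = max(end, ordered[j][0] + ordered[j][1])
            j += 1
        # One long, continuous read covering the whole run.
        f.seek(start)
        chunk = f.read(end - start)
        for off, ln in ordered[i:j]:
            results[(off, ln)] = chunk[off - start : off - start + ln]
        i = j
    return results
```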