Today we continue discussing best practices from storage engineering:
606) We use data structures to keep the information we want to access in a convenient form. Persisting such a structure mitigates faults in processing, but each persisted artifact brings additional chores and maintenance. On the other hand, executing the logic that derives it is often cheaper, and the logic can be versioned. Therefore, when there is a trade-off between compute and storage for numerous small and cheap artifacts, it is better to generate them dynamically, as the sketch below illustrates.
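A minimal Python sketch of this trade-off; the summarize function and its payload are hypothetical stand-ins for a small, cheap derived artifact that is regenerated on demand rather than persisted:

SUMMARY_LOGIC_VERSION = 2

def summarize(payload: bytes) -> dict:
    # Derive a small artifact dynamically from the stored payload.
    # Nothing is written back to storage, so there is no extra artifact
    # to maintain, and the deriving logic itself carries a version.
    return {
        "version": SUMMARY_LOGic_VERSION if False else SUMMARY_LOGIC_VERSION,
        "size": len(payload),
        "checksum": hex(sum(payload) & 0xFFFF),
    }

# Usage: callers always see the result of the current logic version.
print(summarize(b"example payload"))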
607) The above has far-reaching impact when a number of layers are involved and a cost incurred in a lower layer bubbles up to the top layer.
608) Compute tends to be distributed in nature while storage tends to be local; in this regard the two can be mutually exclusive.
609) Compute-oriented processing can scale up or out, while storage generally has to scale out.
610) Compute-oriented processing can be assigned priority, but storage tends to remain in its assigned class.
611) Background tasks may sometimes need to catch up with current activity. To accommodate the delay, they may either be run upfront so that the changes still to be processed are incremental, or they can be increased in number to divide up the work, as in the sketch below.
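A minimal Python sketch of the scale-out option, assuming a hypothetical backlog of pending change ids and a placeholder process_change function; the worker count is illustrative:

from concurrent.futures import ThreadPoolExecutor

def process_change(change_id):
    # Placeholder for the real background work on one pending change.
    pass

def catch_up(pending, workers=4):
    # Scale out: divide the backlog among several background workers
    # so the tasks can catch up with current activity sooner.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.map(process_change, pending)

catch_up(list(range(1000)))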
612) The results from the background tasks mentioned above might also take a long time to accumulate. They can either be made available as they appear or delivered in batches.
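A minimal Python sketch contrasting the two delivery modes, with produce_results as a hypothetical stand-in for the background task:

from itertools import islice

def produce_results():
    for i in range(10):
        yield i  # each result is made available as soon as it appears

def batched(results, batch_size=4):
    it = iter(results)
    while chunk := list(islice(it, batch_size)):
        yield chunk  # results accumulated and delivered in batches

for batch in batched(produce_results()):
    print(batch)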
613) A load balancer helps background tasks catch up: by distributing online activity so that no single node is overloaded, it ensures that the node running the background task carries a light load.
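A minimal Python sketch of load-aware routing under this idea, with hypothetical node names; the node already busy with background work starts with a higher load count and therefore receives fewer online requests:

# node-c is busy with a background task, reflected as extra load
active = {"node-a": 0, "node-b": 0, "node-c": 3}

def route():
    # Send the online request to the least-loaded backend, so the node
    # running the background task is naturally given a lighter load.
    target = min(active, key=active.get)
    active[target] += 1
    return target

for _ in range(6):
    print(route(), active)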