Monday, August 22, 2022

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. The previous articles introduced virtual SAN; this one delves into its operational efficiencies.

vSAN brings additional efficiencies in the form of deduplication, compression, and erasure coding. Modern processors and SSDs provide the performance needed to run these data reduction technologies. These features improve the storage utilization rate, which means less physical storage is required to store the same amount of data.
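To make the arithmetic concrete, here is a minimal sketch, in Python, of how a combined deduplication/compression ratio and the protection scheme's space overhead determine the raw capacity needed for a given logical dataset. The 2:1 reduction ratio and the 100 TB workload are illustrative assumptions; the overhead multipliers reflect RAID-1 mirroring (2x) and RAID-5 (3+1) erasure coding (about 1.33x).

# A minimal sketch (not vSAN's actual accounting) of how data reduction
# and erasure coding change the raw capacity needed for a logical dataset.
def raw_capacity_needed(logical_tb, reduction_ratio, protection_overhead):
    """logical_tb: logical data to store, in TB.
    reduction_ratio: combined dedup + compression ratio; 2.0 means 2:1.
    protection_overhead: space multiplier of the protection scheme,
    e.g. 2.0 for RAID-1 mirroring, 4/3 for RAID-5 (3+1) erasure coding."""
    return logical_tb / reduction_ratio * protection_overhead

logical = 100  # TB of logical data (assumed workload)
mirrored = raw_capacity_needed(logical, reduction_ratio=1.0, protection_overhead=2.0)
reduced_ec = raw_capacity_needed(logical, reduction_ratio=2.0, protection_overhead=4 / 3)

print(f"RAID-1, no reduction:   {mirrored:.1f} TB raw")    # 200.0 TB
print(f"RAID-5 + 2:1 reduction: {reduced_ec:.1f} TB raw")  # ~66.7 TB

The same 100 TB of logical data drops from 200 TB of raw capacity to roughly a third of that, which is the utilization-rate improvement the paragraph above refers to.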

Maximum value and flexibility are delivered only when the data center is completely software-defined and not tied to a specific hardware platform. This can be done in two ways:

First, allow flexibility via choices in the commodity hardware used for the hypervisor and vSAN, in terms of the x86 servers and their vendor.

Second, allow a fast track to HCI through turnkey appliances.

In addition to flexibility, a vSAN must be designed for nondisruptive and seamless scaling. This is usually not a problem when server additions do not affect the initial lot, but it becomes one when the hypervisor and vSAN must be reconfigured over the adjusted base. Recent improvements in cluster technologies provide easier scalability via the addition of nodes without impacting the memberships of existing nodes in the ensemble. In such a case, the cluster provides the infrastructure while the data center becomes more like a cloud service provider. It must be stressed that a vSAN must be allowed to both scale up and scale out; otherwise, infrastructure management platforms like Kubernetes are poised to take on more of the infrastructure management routines from the storage technologies. Most businesses investing in vSAN are keen to pursue a "grow-as-you-go" model.
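As an analogy for such nondisruptive scale-out, a sketch of consistent hashing (a general placement technique, not vSAN's actual rebalancing algorithm) shows how adding a node relocates only a fraction of the data while the placement on existing nodes is otherwise untouched:

import hashlib
from bisect import bisect_right

# Minimal consistent-hash ring: adding a node moves only some of the keys,
# without reconfiguring the membership of the existing nodes.
def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((_hash(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        hashes = [h for h, _ in self.points]
        i = bisect_right(hashes, _hash(key)) % len(self.points)
        return self.points[i][1]

keys = [f"object-{i}" for i in range(10000)]
before = Ring(["node1", "node2", "node3"])
after = Ring(["node1", "node2", "node3", "node4"])  # scale out by one node

moved = sum(before.owner(k) != after.owner(k) for k in keys)
print(f"{moved / len(keys):.0%} of objects moved")  # roughly 1/n with n nodes

Production rings use many virtual points per node to even out the distribution; the single point per node here keeps the sketch short.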

vSAN's focus is on storage, not infrastructure. It is oriented more toward the disks than toward the servers on which the storage is virtualized. It must be configurable as all-flash or hybrid storage. In hybrid mode, it pools HDDs and SSDs to create a distributed shared datastore, even though it might internally prefer to use flash as a read cache/write buffer to boost performance and HDDs for data persistence. While flash prices are declining rapidly, allowing more possibilities, organizations cannot simply retire their existing inventory. A hybrid approach is often a necessity even when workloads can be segregated to take advantage of new versus existing hardware.
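A minimal sketch of that hybrid pattern, as a toy Python model rather than vSAN's implementation: writes are buffered in flash and destaged to HDD on eviction, while reads are served from the flash cache when possible.

from collections import OrderedDict

class HybridStore:
    """Toy model of a hybrid tier: flash as read cache / write buffer,
    HDD as the persistence tier. Illustrative only."""
    def __init__(self, flash_capacity: int):
        self.flash = OrderedDict()   # LRU cache; values are (data, dirty)
        self.hdd = {}                # capacity tier
        self.flash_capacity = flash_capacity

    def write(self, key, value):
        self.flash[key] = (value, True)   # buffer the write in flash, mark dirty
        self.flash.move_to_end(key)
        self._evict_if_full()

    def read(self, key):
        if key in self.flash:             # cache hit: serve from flash
            self.flash.move_to_end(key)
            return self.flash[key][0]
        value = self.hdd[key]             # cache miss: read from HDD...
        self.flash[key] = (value, False)  # ...and populate the read cache
        self._evict_if_full()
        return value

    def _evict_if_full(self):
        while len(self.flash) > self.flash_capacity:
            key, (value, dirty) = self.flash.popitem(last=False)
            if dirty:
                self.hdd[key] = value     # destage dirty data before eviction

store = HybridStore(flash_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)  # "a" destaged to HDD
print(store.read("a"))  # flash miss, served from HDD, then cached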

No operational model is complete without unified management. IT operations have long been dogged by swivel-chair operation, where the operator must jump from one interface to another to complete all aspects of a workflow. An HCI has the potential to provide single-pane-of-glass management instead of managing compute, storage, and networking individually. Not all HCIs provide this, and some even come up with a mashup of underlying technology management, but a seasoned HCI will generally have unified management. When it comes to HCI, system monitoring is critical to aid management. Although custom dashboards and historical trends might be available from metrics providers, a built-in monitoring system for an HCI goes hand in hand with its management portal. End-to-end monitoring and a whole picture of the software and its environment not only help with proactive monitoring but also with troubleshooting and reducing costs.
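As a sketch of what such a single pane of glass might poll behind the scenes, the loop below aggregates compute, storage, and network health into one view. The metric names, random values, and thresholds are all hypothetical, not any vendor's API or defaults:

import random, time

# Hypothetical collector; a real HCI would pull these from its management APIs.
def collect_metrics():
    return {
        "compute.cpu_percent": random.uniform(10, 95),
        "storage.latency_ms": random.uniform(0.5, 30),
        "network.drop_rate": random.uniform(0, 0.02),
    }

THRESHOLDS = {  # assumed alerting thresholds, not vendor defaults
    "compute.cpu_percent": 90,
    "storage.latency_ms": 20,
    "network.drop_rate": 0.01,
}

def single_pane_of_glass(samples: int = 3):
    for _ in range(samples):
        metrics = collect_metrics()
        for name, value in metrics.items():
            status = "ALERT" if value > THRESHOLDS[name] else "ok"
            print(f"{name:<24} {value:8.2f}  {status}")
        time.sleep(1)   # poll interval; the history would feed trend dashboards

single_pane_of_glass()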

