Both persistent volumes and network-accessible storage refer to disk storage for Kubernetes, a portable, extensible, open-source platform for managing containerized workloads and services. Adopting it is often a strategic decision for a company because it improves the business value of the offering while relieving the business logic of host-specific chores, so that the same logic can run elsewhere with minimal disruption.
Kubernetes provides the familiar notion of a shared storage system with the help of VolumeMounts accessible from each container. A volume mount is a shared file system that may be considered local to the container and reused across containers. File system protocols have always facilitated local and remote file storage through their support for distributed file systems. This allows databases, configurations, and secrets to be available on disk across containers and provides a single point of maintenance. Most storage, regardless of the access protocol (file system, HTTP(S), block, or stream), essentially moves data to a store, so a transfer and its latency are always involved.
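To make the idea concrete, here is a minimal sketch using the official Kubernetes Python client: one emptyDir volume is declared once and mounted into two containers of the same pod, so both see the same files under /data. The pod, image, and volume names are hypothetical, and a PersistentVolumeClaim source would take the place of emptyDir when the data must outlive the pod.

```python
# A minimal sketch, assuming a cluster reachable via the default kubeconfig
# and the official "kubernetes" Python client. Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()

shared = client.V1Volume(
    name="shared-data",
    empty_dir=client.V1EmptyDirVolumeSource(),  # swap for a PVC source for durability
)
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shared-volume-demo"),
    spec=client.V1PodSpec(
        containers=[
            # Both containers mount the same volume and see the same files.
            client.V1Container(
                name="writer", image="busybox",
                command=["sh", "-c", "date > /data/now; sleep 3600"],
                volume_mounts=[mount]),
            client.V1Container(
                name="reader", image="busybox",
                command=["sh", "-c", "sleep 3600"],
                volume_mounts=[mount]),
        ],
        volumes=[shared],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```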
The only question has been what latency and I/O throughput are acceptable to the application, and this has guided decisions about storage systems, appliances, and their integrations. When storage is tightly coupled with compute, as between a database server and its database file, the reads and writes exercised by performance benchmarks require careful arrangement of bytes: their packing, organization, indexing, checksums, and error codes. But most applications hosted on Kubernetes do not have the same requirements as a database server.
This design, and the relaxation of performance requirements for applications hosted on Kubernetes, facilitates connectors of different kinds, not just volume mounts. The notion that the same data can be sent to a destination regardless of what that destination is has been successfully demonstrated by log appenders, which publish logs to a variety of destinations. Connectors, too, can persist data written by the application to a variety of storage providers, using consolidators, queues, caches, and mechanisms that know how and when to write the data.
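As a sketch of that pattern, the snippet below writes the same record through two hypothetical connectors, one backed by a file on a mounted volume and one by an HTTP(S) endpoint; the class names, file path, and endpoint URL are illustrative assumptions, not a real library API.

```python
# A minimal sketch of the appender/connector idea: the application writes a
# record once, and each registered connector decides how and where to
# persist it. All names here are hypothetical.
import json
import urllib.request
from abc import ABC, abstractmethod


class Connector(ABC):
    @abstractmethod
    def write(self, record: dict) -> None: ...


class FileConnector(Connector):
    """Persists records to a local or volume-mounted file."""
    def __init__(self, path: str):
        self.path = path

    def write(self, record: dict) -> None:
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")


class HttpConnector(Connector):
    """Publishes the same record to an HTTP(S) endpoint."""
    def __init__(self, url: str):
        self.url = url

    def write(self, record: dict) -> None:
        req = urllib.request.Request(
            self.url, data=json.dumps(record).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)


# The application stays unaware of the destinations behind the connectors.
connectors = [FileConnector("/data/events.log"),
              HttpConnector("https://example.com/ingest")]
for c in connectors:
    c.write({"event": "order-created", "id": 42})
```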
The native Kubernetes API supports no storage connector other than the VolumeMount, but it does allow services to be written as Kubernetes applications that accept data published over HTTP(S), just as a time-series database server accepts all kinds of events over the web protocol. The configuration of the endpoint, the binding of the service, and the contract associated with the service may vary from destination to destination within the same data-publishing application. This may call for the application to become a consolidator that can provide different storage classes and support different data workload profiles. Appenders and connectors are popular design patterns that get reused often and justify their business value.
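A minimal sketch of such a service, using only the Python standard library, might look like the following; the /ingest path, the buffer threshold, and the flush target are assumptions for illustration, not a contract defined by Kubernetes.

```python
# A sketch of an HTTP(S) ingestion service acting as a consolidator: it
# buffers posted events in memory and flushes them to a backing store once a
# threshold is reached. The path and flush policy are hypothetical choices.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

BUFFER = []
FLUSH_AT = 100  # consolidate writes instead of hitting storage per event


def flush(events):
    # Placeholder: write the batch to the configured storage provider.
    print(f"flushing {len(events)} events to storage")


class IngestHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/ingest":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        BUFFER.append(json.loads(self.rfile.read(length)))
        if len(BUFFER) >= FLUSH_AT:
            flush(BUFFER)
            BUFFER.clear()
        self.send_response(202)  # accepted; the durable write may be deferred
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), IngestHandler).serve_forever()
```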
The shared data volume can be made read-only and accessible only to designated pods, which facilitates access restrictions. While authentication, authorization, and auditing can be enabled for storage connectors, the connectors still require RBAC access, so service accounts become necessary. A side benefit of this security is that accesses can now be monitored and alerted on.
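Continuing the earlier pod sketch, the snippet below marks the mount read-only and runs the pod under a dedicated service account so that RBAC can govern, and audit logs can attribute, its access; the service account name is hypothetical, and its Role and RoleBinding are assumed to be defined elsewhere.

```python
# A minimal sketch: a read-only mount plus a dedicated service account.
from kubernetes import client

read_only_mount = client.V1VolumeMount(
    name="shared-data", mount_path="/data", read_only=True)

spec = client.V1PodSpec(
    service_account_name="storage-connector",  # bound to an RBAC Role elsewhere
    containers=[client.V1Container(
        name="reader", image="busybox",
        command=["sh", "-c", "cat /data/now; sleep 3600"],
        volume_mounts=[read_only_mount])],
    volumes=[client.V1Volume(
        name="shared-data",
        empty_dir=client.V1EmptyDirVolumeSource())],
)
```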