We continue our discussion on backing up Kubernetes resources. There is also a difference between the results of the Velero tool and the custom configuration generated with the scripts above. For example, no knowledge of the product, or of the logic pertaining to the reconciliation of operator states, is built into the output of the tool. The custom configuration, on the other hand, leverages product-specific knowledge to make the export and import of user resources more efficient, streamlined, and conformant with the product.
The above is particularly true for custom resources and their definitions. Custom resources have a twofold utility:
1) they have a broader scope than native Kubernetes resources and translate export and import into simpler instructions, and
2) they provide the opportunity to offload all maintenance to the reconciliation logic built into their corresponding operators, which may even handle their own assembly and disassembly in terms of native Kubernetes resources.
The overall suggestion is that scope and actions can become more granular, which helps with export-import usability.
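Point 2 above can be sketched in a few lines. This is a hypothetical illustration, not the product's actual export logic: given a flat list of exported resources, only the top-level (owner-less) ones need to be exported, because the operators' reconciliation rebuilds the children on import. The resource names and the `ownerRef` field are made-up stand-ins for Kubernetes `metadata.ownerReferences`.

```python
# Hypothetical export list: one custom resource and the native
# resources its operator assembles from it. Names are illustrative.
exported = [
    {"kind": "MyApp", "name": "shop", "ownerRef": None},           # custom resource
    {"kind": "Deployment", "name": "shop-web", "ownerRef": "shop"},
    {"kind": "Pod", "name": "shop-web-1", "ownerRef": "shop-web"},
]

def top_level(resources):
    # Keep only resources without an owner; the operator's
    # reconciliation logic recreates everything they own on import.
    return [r for r in resources if r["ownerRef"] is None]

print([r["kind"] for r in top_level(exported)])  # ['MyApp']
```

Exporting only the custom resource turns a multi-step import into a single instruction, which is the "simpler instructions" benefit noted in point 1.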
One of the challenges in registering resources is passing IP addresses for the pod, the host, and the cluster, regardless of the technique used to export and import. These are dynamic values that are obtained only as the import proceeds and are not available beforehand. Although it is easy to write a query to retrieve an IP address, even those queries have parameters, such as pod names, that do not necessarily follow a pattern. This chains yet another query to retrieve the parameter. If this were limited to a few levels, it would be easy to repeat. However, not all resources behave like pods, so the parameters for each type of resource have their own lookup logic.
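The chaining can be sketched with plain data structures. In a real import, each function below would be a `kubectl get ... -o jsonpath` call or an API request; the pod names, labels, and IPs here are hypothetical sample data used only to show the two-step dependency.

```python
# Sample cluster state standing in for API-server queries.
# Pod names carry generated suffixes, so they have no fixed pattern.
pods = [
    {"name": "web-7f9c4d", "labels": {"app": "web"}, "podIP": "10.1.2.3"},
    {"name": "db-0", "labels": {"app": "db"}, "podIP": "10.1.2.4"},
]

def find_pod_name(selector):
    # First query: resolve the unpredictable pod name from a label selector.
    for pod in pods:
        if all(pod["labels"].get(k) == v for k, v in selector.items()):
            return pod["name"]
    return None

def pod_ip(name):
    # Second, chained query: only with the name in hand can the
    # dynamic IP be retrieved.
    for pod in pods:
        if pod["name"] == name:
            return pod["podIP"]
    return None

print(pod_ip(find_pod_name({"app": "web"})))  # 10.1.2.3
```

For pods the chain is two levels deep; other resource types need their own chains, which is why a single generic lookup routine does not suffice.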
Similarly, another criterion is the determination of the UID of the resource itself, of its parent, or of the cluster. Again, the determination of this parameter varies by the resource whose UID is needed, and determining the owner may require a lookup table.
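A minimal sketch of that lookup, assuming an in-memory table in place of API-server calls: real code would read `metadata.uid` and `metadata.ownerReferences` from each object, and the records and UID strings below are invented for illustration.

```python
# Hypothetical lookup table keyed by (kind, name); in practice this
# would come from metadata.uid and metadata.ownerReferences.
resources = {
    ("Pod", "db-0"): {"uid": "aaa-111", "owner": ("StatefulSet", "db")},
    ("StatefulSet", "db"): {"uid": "bbb-222", "owner": None},
}
CLUSTER_UID = "ccc-333"  # assumed stand-in for a cluster-wide identifier

def resolve_uid(kind, name, level="self"):
    rec = resources.get((kind, name))
    if rec is None:
        return None                      # the value can legitimately be None
    if level == "self":
        return rec["uid"]
    if level == "owner":
        owner = rec["owner"]
        return resolve_uid(*owner) if owner else None
    if level == "cluster":
        return CLUSTER_UID
    return None

print(resolve_uid("Pod", "db-0", "owner"))  # bbb-222
```

Note that a missing resource or an owner-less resource yields `None`, which ties into the point below about `None` values.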
Certain values for the IP and UID can be "None", but that is not always the case.
Also, the charts deploy hard-coded definitions and resources, and they invoke scripts only during certain events. Each definition and its corresponding resource can be provisioned with the given values beforehand as long as we are dealing with flat, native K8s resources, but the same resources in user namespaces may become sophisticated and hierarchical in scope, which requires those dependencies to be followed for each registration and deregistration.
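Following those dependencies amounts to a topological ordering: register parents first, deregister in reverse. The sketch below assumes a hypothetical dependency map from each resource to the resources it depends on; the resource names are illustrative, not taken from any chart.

```python
# Hypothetical dependency map: each resource lists what must exist
# before it can be registered.
deps = {
    "namespace/user1": [],
    "serviceaccount/app": ["namespace/user1"],
    "deployment/app": ["namespace/user1", "serviceaccount/app"],
}

def registration_order(deps):
    # Depth-first traversal emitting dependencies before dependents.
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for parent in deps.get(node, []):
            visit(parent)        # register what this node depends on first
        order.append(node)
    for node in deps:
        visit(node)
    return order

order = registration_order(deps)
print(order)                   # parents before children
print(list(reversed(order)))   # deregistration runs in reverse
```

Flat native resources make this map trivial; hierarchical user-namespace resources are exactly where the ordering starts to matter.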
Sample resource and definition files have been generated with the shell script shown above; they have been repeatedly modified, and their import automated, in order to arrive at the enumerations above. It is very easy to tweak the scripts for a given user namespace after a few trials and to use the script as a template for creating and populating, say, namespaces. The use of a schema or auxiliary data structures to store each and every resource type, along with its import logic and order, seems like overkill as a general-purpose solution.