Solutions like Heptio Ark (now Velero) handle this nicely for some applications, but a fully functional backup and restore requires collecting all of the user's resources and objects. Customizations on top of such a tool should be minimal: just enough to pack user resources into a bundle exported from the source cluster and unpacked at the destination.
These include:
- Registrations in the catalog
- Custom resources, for example FlinkApplications, FlinkSavepoints, and FlinkClusters
- External logic stored as Maven artifacts or other files on disk, aside from custom resources
- Metrics data
- Events for all the user resources
- Logs for all the containers
- User accounts and roles
- Store connection information and containers
- Proprietary data migration for each container: etcd, databases, blobs, files, and stream stores
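As a baseline for the bulk of these items, a namespace-scoped bundle can be produced with the Velero CLI itself. The namespace and backup names below are illustrative; the commands are only echoed here so the sketch runs without a cluster.

```shell
#!/bin/sh
# Sketch (assumes the Velero CLI is installed and configured).
NS="user-namespace"   # hypothetical namespace holding the user's resources
BACKUP="user-bundle"  # hypothetical backup name

# Export everything in the namespace from the source cluster.
backup_cmd="velero backup create $BACKUP --include-namespaces $NS"
# Import the bundle on the destination cluster.
restore_cmd="velero restore create --from-backup $BACKUP"

echo "$backup_cmd"
echo "$restore_cmd"
```

Items outside the Kubernetes API, such as Maven artifacts or proprietary data in etcd, databases, and blob stores, still need their own export and import steps.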
The backup and restore of project artifacts created by the user must be based on retry loops. The collection logic would look something like this:
while (( count_of_resources_to_be_collected > 0 ))
do
  kubectl get $custom_resource_definitions -n user-namespace -o yaml > crds.yaml
  count=$(grep -ci "kind:" crds.yaml)
  if (( count > 0 )); then
    count_of_resources_to_be_collected=$(( count_of_resources_to_be_collected - count ))
    # adjust $custom_resource_definitions to exclude those already read
  fi
done
Similarly, restoring the resources on another cluster must be re-entrant and repeatable:
while (( count_of_resources_to_be_collected > 0 ))
do
  kubectl create -f crds.yaml -n user-namespace
  count=$(kubectl get $custom_resource_definitions -n user-namespace -o yaml | grep -ci "kind:")
  if (( count > 0 )); then
    count_of_resources_to_be_collected=$(( count_of_resources_to_be_collected - count ))
    # adjust crds.yaml and count_of_resources_to_be_collected to exclude those already created
  fi
done
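The retry pattern shared by both loops can be sketched self-contained, with a mock collect step standing in for kubectl so it runs anywhere; the resource counts and the simulated partial reads are illustrative only.

```shell
#!/bin/sh
# Self-contained sketch of the retry-until-done pattern above.
count_of_resources_to_be_collected=3
attempt=0
count=0

collect() {
  attempt=$((attempt + 1))
  # Mock a partial read: the first pass returns 1 resource, later passes 2.
  if [ "$attempt" -eq 1 ]; then count=1; else count=2; fi
}

while [ "$count_of_resources_to_be_collected" -gt 0 ]; do
  collect
  if [ "$count" -gt 0 ]; then
    # Subtract only what was actually read this pass, then loop again.
    count_of_resources_to_be_collected=$((count_of_resources_to_be_collected - count))
  fi
done

echo "passes=$attempt remaining=$count_of_resources_to_be_collected"
```

Because each pass subtracts only what was actually read or created, the loop tolerates partial failures and can be re-run safely until nothing remains.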