Monday, April 10, 2023

This is a continuation of the previous posts on Azure Data Platform and discusses the considerations for a specific scenario of moving data from an on-premises IBM object storage to Azure storage.

Organization of storage assets to support governance, operational management and accounting requirements is necessary for data migration to the cloud. Well-defined naming and metadata tagging conventions help to quickly locate and manage resources. These conventions also help to associate cloud usage costs with business teams via chargeback and showback accounting mechanisms.

Naming and tagging serve different purposes. A name is fixed at creation time and is often referenced by security rules, so it should encode the parts needed to identify a resource: the business unit or project owner, and optionally the workload, application, environment, criticality, and other such information. Tags can repeat these values, but their real value is carrying metadata that does not need to be reflected in the name itself.
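As a sketch of how tags complement names, the following shows an illustrative tag set for the resource group used as an example later in this post. The key names follow a common convention rather than an Azure-mandated schema, and the cost-center value is hypothetical:

```python
# Illustrative tag set; key names follow a common convention,
# not an Azure-mandated schema, and the values are hypothetical.
tags = {
    "BusinessUnit": "projectx",
    "Environment": "nonprod",
    "Criticality": "medium",
    "CostCenter": "CC-1234",  # hypothetical cost center for chargeback
}

# Because tags are plain key/value strings, usage reports can be
# grouped by them without parsing resource names.
resources = [{"name": "rg-projectx-nonprod-centralus-001", "tags": tags}]
nonprod = [r["name"] for r in resources
           if r["tags"].get("Environment") == "nonprod"]
```

Grouping by a tag such as CostCenter is what makes showback and chargeback reports possible without relying on name parsing.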

An effective naming convention assembles resource names from well-defined components, for example:

<resourceType>-<workload/application>-<environment>-<region>-<instance>.

For example,

rg-projectx-nonprod-centralus-001

is an example of a good naming convention for a resource group.

The projectx part here refers to the name of the project or business capability, but it could itself comprise multiple hierarchical parts, such as contoso-fin-navigator for <organization>-<department>-<service>.

Similarly, the <instance> part could expand to <role>-<instanceSuffix>, for example vm-transactional-nonprod-centralus-db-001.
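The convention above can be sketched as a small helper that assembles a name from its parts (the function name and its signature are illustrative, not part of any Azure API):

```python
def build_name(resource_type: str, workload: str, environment: str,
               region: str, instance: str) -> str:
    """Assemble <resourceType>-<workload>-<environment>-<region>-<instance>."""
    return "-".join([resource_type, workload, environment, region, instance])

# The resource-group example from this post:
name = build_name("rg", "projectx", "nonprod", "centralus", "001")

# The <instance> part can itself carry a role, e.g. "db-001":
vm_name = build_name("vm", "transactional", "nonprod", "centralus", "db-001")
```

Keeping the assembly in one place makes it easy to enforce the convention consistently across deployment scripts.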

Storage account names are more restrictive: they allow only lowercase letters and numbers, so any spaces in a project name must be removed, and there is no option to substitute the space character with a hyphen. Container names do permit hyphens; a container name must be between 3 and 63 characters long and all lowercase.
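These restrictions can be enforced with a small normalization and validation sketch. The function names are illustrative, and the rules encoded here (3-24 alphanumeric characters for account names; 3-63 lowercase characters with no leading, trailing, or consecutive hyphens for container names) should be re-verified against the Azure documentation:

```python
import re

def to_account_name(raw: str) -> str:
    """Normalize a project name into a storage-account-style name:
    lowercase letters and digits only, 3-24 characters."""
    name = re.sub(r"[^a-z0-9]", "", raw.lower())  # drop spaces, hyphens, etc.
    if not 3 <= len(name) <= 24:
        raise ValueError(f"invalid storage account name: {name!r}")
    return name

def is_valid_container_name(name: str) -> bool:
    """Container names allow hyphens: 3-63 characters, lowercase letters,
    digits, and hyphens; no leading, trailing, or consecutive hyphens."""
    return (3 <= len(name) <= 63
            and "--" not in name
            and re.fullmatch(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?", name) is not None)
```

For example, a project named "Project X" would normalize to the account name projectx, while a container could be named projectx-raw-data.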

Existing naming conventions can be retained if they have served adequately and are compatible with public cloud naming restrictions.

When mapping source to destination, it is preferable to map buckets to containers and namespaces to storage accounts on a one-to-one basis.
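A one-to-one mapping plan can be sketched as follows. The namespace and bucket names are hypothetical, and the account-name transformation simply drops hyphens to satisfy the storage account restrictions discussed above:

```python
# Hypothetical inventory of on-premises IBM COS namespaces and buckets.
source = {
    "analytics-ns": ["raw-events", "curated-events"],
    "archive-ns": ["cold-2022"],
}

def plan_migration(inventory: dict[str, list[str]]) -> dict[str, list[str]]:
    """One-to-one mapping: each namespace becomes a storage account
    (dropping hyphens, which account names do not allow), and each
    bucket becomes a container under that account."""
    return {ns.replace("-", ""): list(buckets)
            for ns, buckets in inventory.items()}

plan = plan_migration(source)
```

A one-to-one mapping like this keeps the migration auditable: every destination container traces back to exactly one source bucket.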

The current limits for an on-premises IBM object storage include the following:

100 buckets per object storage instance.

10 TB maximum size per object.

Unlimited number of objects per instance.

1,024-character maximum key length.

Storage classes are set at the bucket level; changing the storage class requires manually copying data from one bucket to another.

Archiving can be applied independently of storage class.

IBM COS is accessible via the S3 protocol.

These are well within the limits of an Azure storage account.
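As a rough sanity check, the source limits listed above can be compared against the corresponding Azure Blob Storage limits. The Azure figures below are assumptions based on published documentation (a block blob can reach roughly 190.7 TiB, and a blob name can be up to 1,024 characters) and should be re-verified before planning a migration:

```python
# Source (IBM COS) limits from this post vs. assumed Azure Blob limits.
# Azure figures should be re-verified against current documentation.
TIB = 1024 ** 4

ibm_cos = {"max_object_size_bytes": 10 * TIB, "max_key_length": 1024}
azure_blob = {"max_object_size_bytes": int(190.7 * TIB),  # max block blob size
              "max_key_length": 1024}                     # max blob name length

# Every source object size and key length must fit the destination limits.
fits = all(ibm_cos[k] <= azure_blob[k] for k in ibm_cos)
```

If any source limit exceeded its destination counterpart, objects or keys would need to be split or renamed before migration.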
