Saturday, August 13, 2022

 A string S containing only the letters "A", "B" and "C" is given. The string can be transformed by removing one occurrence of "AA", "BB" or "CC".

Transformation of the string is the process of removing letters from it, based on the rules described above. As long as at least one rule can be applied, the process should be repeated. If more than one rule can be used, any one of them could be chosen. 

Write a function: 

class Solution { public String solution(String S); } 

that, given a string S consisting of N characters, returns any string that can result from a sequence of transformations as described above. 


For example, given string S = "ACCAABBC", the function may return "AC", because one possible sequence of transformations is: "ACCAABBC" → "AAABBC" → "ABBC" → "AC". 

Also, given string S = "ABCBBCBA", the function may return "", because one possible sequence of transformations is: "ABCBBCBA" → "ABCCBA" → "ABBA" → "AA" → "". 

Finally, for string S = "BABABA" the function must return "BABABA", because no rules can be applied to string S. 

Write an efficient algorithm for the following assumptions: 

the length of string S is within the range [0..50,000]; 

string S consists only of the following characters: "A", "B" and/or "C". 
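The removal rules amount to cancelling adjacent equal letters, which can be done in a single left-to-right pass with a stack; the fully reduced string comes out the same whichever order the rules are applied in. A minimal sketch in Python:

```python
def solution(S):
    # Scan left to right, keeping the reduced characters on a stack.
    # A letter equal to the top of the stack forms a removable pair
    # ("AA", "BB" or "CC") with it, so the two cancel out.
    stack = []
    for ch in S:
        if stack and stack[-1] == ch:
            stack.pop()
        else:
            stack.append(ch)
    return "".join(stack)
```

For the examples above, solution("ACCAABBC") returns "AC", solution("ABCBBCBA") returns "", and solution("BABABA") is returned unchanged.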

string GetReduced(string prefix, string suffix)
{
    bool fix = true;
    while (fix)
    {
        fix = false;
        if (string.IsNullOrEmpty(suffix)) break;
        // Rule 1: the suffix starts with a removable pair ("AA", "BB" or "CC").
        if (suffix.Length > 1 && suffix[0] == suffix[1])
        {
            suffix = suffix.Substring(2);
            fix = true;
            continue;
        }
        // Rule 2: the last prefix character pairs with the first suffix character.
        if (prefix.Length > 0 && prefix[prefix.Length - 1] == suffix[0])
        {
            prefix = prefix.Substring(0, prefix.Length - 1);
            suffix = suffix.Substring(1);
            fix = true;
            continue;
        }
        // No pair at the boundary: shift one character onto the prefix.
        prefix = prefix + suffix[0];
        suffix = suffix.Substring(1);
        fix = true;
    }
    return prefix + suffix;
}

Friday, August 12, 2022

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. This article continues the discussion of troubleshooting the Azure Arc instance, focusing on data collection and reporting.

Data transmitted from the Azure Arc data services can be tremendously helpful for managing resources. The tools used by Azure Arc enabled services may include: SQL MI – Azure Arc, PostgreSQL Hyperscale – Azure Arc, Azure Data Studio, Azure CLI (az), and Azure Data CLI (azdata). When a cluster is configured to be directly connected to Azure, some data is automatically transmitted to Microsoft. Operational data from metrics and logs is automatically uploaded. Billing and inventory data, such as the number of instances and usage such as vCores consumed, is automatically sent to Microsoft and is required from instances. Diagnostic information for troubleshooting purposes is not sent automatically; it must be sent on demand. The Customer Experience Improvement Program (CEIP) summary is also sent automatically, but only if it has been opted into.

When a cluster is not configured to be directly connected to Azure, it does not automatically transmit operational or billing and inventory data to Microsoft. Data can be transmitted to Microsoft when it is configured to be exported. The data and mechanisms are similar to those in the directly connected mode. The CEIP summary, if allowed, can be transmitted automatically.

Metrics include performance- and capacity-related metrics, which are collected in an InfluxDB instance provided as part of Azure Arc enabled data services; these can be viewed on a Grafana dashboard. This is customary for many Kubernetes products.

Logs emitted by all components are collected to an ElasticSearch database also provided as part of Azure Arc enabled data services. These logs can be viewed on the Kibana dashboard.

If the data is sent to Azure Monitor or Log Analytics, the destination region/zone can be specified, and access to view the data can be granted to other regions.

Billing data is used for all the resources, which fall into the following types: Azure Arc enabled SQL managed instances, PostgreSQL Hyperscale server groups, SQL Server on Azure Arc enabled servers, and the data controller. Every database instance, and the data controller itself, is reflected in Azure as a resource in the Azure Resource Manager.

The JSON data pertaining to a resource has attributes such as customObjectName, uid, instanceName, instanceNamespace, instanceType, location, resourceGroupName, subscriptionId, isDeleted, externalEndpoint, vCores, createTimestamp, and updateTimestamp.
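For illustration only, a record with these attributes might look like the following (all values here are made up, and the exact shape may vary by release):

```json
{
  "customObjectName": "sqlmi-1",
  "uid": "00000000-0000-0000-0000-000000000000",
  "instanceName": "sqlmi-1",
  "instanceNamespace": "arc",
  "instanceType": "SqlManagedInstance",
  "location": "eastus",
  "resourceGroupName": "myresourcegroup",
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "isDeleted": false,
  "externalEndpoint": "10.0.0.4:31433",
  "vCores": "2",
  "createTimestamp": "2022-08-12T00:00:00Z",
  "updateTimestamp": "2022-08-12T00:00:00Z"
}
```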

Diagnostic data has several attributes. Error logs are log files capturing errors; they are restricted and shared by the user. DMVs can contain queries and query plans, but are restricted and shared by users. Views can contain customer data, but are also restricted and shared only by users. Crash dumps involving customer data are retained for a maximum of 30 days. Statistics objects and crash dumps involving personal data can include machine names, login names, emails, locations, and other identifiable information.


Thursday, August 11, 2022

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. This article continues the discussion of troubleshooting the Azure Arc instance with a few cases for data services. 

The troubleshooting of Azure Arc data services is similar to that for the resource bridge. 

Logs can be collected for further investigation, and this is probably the foremost resolution technique. 

Errors pertaining to Logs upload can stem from missing Log Analytics workspace credentials. This is generally the case for Azure Arc data controllers that are deployed in the direct connectivity mode using kubectl, and the logs upload error message reads “spec.settings.azure.autoUploadLogs is true, but failed to get log-workspace-secret secret.” Creating a secret with the Log Analytics workspace credentials containing the WorkspaceID and SharedAccessKey resolves this error. 
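As a sketch only, such a secret might be created with a manifest like the one below; the exact secret and key names expected by a given release should be checked against the error message and the current documentation, and the values must be base64-encoded:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: log-workspace-secret
  namespace: arc            # the data controller's namespace (assumed here)
type: Opaque
data:
  workspaceId: <base64-encoded WorkspaceID>
  primaryKey: <base64-encoded SharedAccessKey>
```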

Similarly, metrics upload might also cause errors in the direct connected mode. The permissions needed for the MSI must be properly granted; otherwise the error message will indicate “AuthorizationFailed”. This can be resolved by retrieving the MSI for the Azure Arc data controller extension and granting it the required roles, such as Monitoring Metrics Publisher. Automatic upload of metrics can then be set up with a command such as: az arcdata dc update --name arcdc --resource-group <myresourcegroup> --auto-upload-metrics true. 

Usage information is different from logs and metrics, but the technique for resolving errors is the same as for metrics. The typical error is “Authorization failed”, even though, when the Azure Arc data controller is set up in the direct connected mode, the permissions for uploading usage information are granted automatically. Resolving the permissions issue requires retrieving the MSI and granting the required roles. 

Errors pertaining to upgrades usually come from incorrect image tags. The error message encountered reads: “Failed to await bootstrap job complete after retrying for two minutes”. The bootstrap job status reflects “ErrImagePull”, and the pod description reads “Failed to pull image”. The version log will have the correct image tag; running the upgrade command with the correct image tag resolves this error. 

If there are no errors but the upgrade job runs for longer than fifteen minutes, it is likely that there was a connection failure to the repository or registry. When we view the pod's description, it will usually say “failed to resolve reference” in this case. If the image was deployed from a private registry, the yaml file used for the upgrade might be referring to mcr.microsoft.com instead of the private registry. Correctly specifying the registry and the repository in the yaml file resolves this error. 
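As a sketch only (the attribute names and values below are assumptions and should be checked against the deployed custom resource definition), the relevant fragment of the upgrade yaml would point at the private registry rather than mcr.microsoft.com:

```yaml
spec:
  docker:
    registry: myprivateregistry.example.com   # private registry, not mcr.microsoft.com
    repository: arcdata
    imageTag: v1.9.0_2022-07-12               # the tag reported in the version log
    imagePullPolicy: Always
```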

Upgrade jobs can also run long if there are not enough resources. A pod that shows only some of its containers as ready is a good symptom of this trouble. Viewing the events or the logs can point to the root cause as insufficient CPU or memory. More nodes can be added to the Kubernetes cluster, or more resources can be assigned to the existing nodes, to overcome this error. 


#codingexercise

void ToInOrderList(Node root, ref List<Node> all)
{
    if (root == null) return;
    ToInOrderList(root.left, ref all);   // left subtree first
    all.Add(root);                       // then the current node
    ToInOrderList(root.right, ref all);  // then the right subtree
}

Wednesday, August 10, 2022

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. This article continues the discussion of troubleshooting the Azure Arc resource bridge with a few more cases.

The resource bridge is designed to host other Azure Arc services. It supports VM self-servicing and management from Azure for virtualized Windows and Linux virtual machines hosted in an on-premises environment. It comes with a management Kubernetes cluster that requires no user management. In this sense, it is a virtual appliance.

Logs can be collected for further investigation, and this is probably the foremost resolution technique. The collection is done with the az arcappliance logs command, which must be run from the client machine from which the Azure Arc resource bridge was deployed. The path to the kubeconfig must be provided.

Networking issues manifest when the resource bridge is unreachable. The resource bridge runs a Kubernetes cluster, and its control plane requires a static IP address, which is specified in the infra.yaml file. Rebooting the Azure Arc resource bridge or VM can trigger an IP address change, resulting in failing services, but rebooting the Azure Arc resource bridge VM again should help it recover its IP address.

Updating the Azure Arc resource bridge requires deleting it and redeploying it, because all the parameters are specified at the time of creation. If the wrong location or subscription is specified during resource creation, the creation fails. Recreating the resource without redeploying leaves it in a suspended state. Deleting the resource, updating the appliance yaml file, and then redeploying the resource bridge is the recommended approach.

During the createConfig or run command execution, there is an interactive experience that shows the list of VMware entities where the user can choose to deploy the virtual appliance. This list shows the user-created resource pools along with the cluster resource pools, but the default host resource pools are not listed. When the appliance is deployed to a host resource pool, there is no high availability if the hardware fails. It is therefore recommended that the appliance not be deployed to a host resource pool.

Azure Arc must be configured to use a proxy to connect with the Azure services. That configuration is handled automatically, but the proxy configuration of the client machine is not handled by the resource bridge. Two certificates are required to deploy the Azure Arc resource bridge behind an SSL proxy: first, the SSL certificate for the proxy, so that the host and guest can handshake with it, and second, the SSL certificate of the Microsoft download servers. Adding these certificates to the trust stores is required for the clients to connect.

Authentication handshake failures do not always stem from certificates. A certificate-signed-by-an-unknown-authority error can be displayed when the arcappliance commands are run from a remote PowerShell session. Since remote PowerShell is not supported by the Azure Arc resource bridge, those commands must be run locally on a node in the cluster; an RDP or console session will help with this.


Tuesday, August 9, 2022

 

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. This article continues the discussion of Azure Arc enabled servers, their sizing guidance, and operational considerations when increasing their numbers, and then discusses troubleshooting the resource bridge.

The resource bridge is designed to host other Azure Arc services. It supports VM self-servicing and management from Azure for virtualized Windows and Linux virtual machines hosted in an on-premises environment. It comes with a management Kubernetes cluster that requires no user management. In this sense, it is a virtual appliance.

Issues encountered with the Azure Arc resource bridge can be diverse, but the techniques to mitigate them typically involve the following:

Logs can be collected for further investigation, and this is probably the foremost resolution technique. The collection is done with the az arcappliance logs command, which must be run from the client machine from which the Azure Arc resource bridge was deployed. The path to the kubeconfig must be provided.

These CLI commands for the Azure Arc resource bridge are best not run via remote PowerShell, because that can lead to extraneous issues. For example, there might be an EOF error when using the logs command. When such an error occurs, it is most likely that the logs command is running in interactive mode and prompting the user for parameters. It can be avoided by using the remote desktop protocol or a console session to sign in directly to the node and run the command locally. Avoiding the prompt by pre-populating the values is also possible. 

If an Arc resource bridge deployment fails, subsequent deployments may fail due to residual cached folders remaining on the machine. Previous deployment failures can be prevented from interfering by running the az arcappliance delete command after a failed deployment. If the failed deployment is not successfully removed, the folders can be deleted manually, but it is best to follow that up with the delete command again.

Another common error is the token refresh error. It manifests with an error message that the refresh token has expired or is invalid due to sign-in frequency checks by conditional access. These errors occur because, when we sign in to Azure, the token has a maximum lifetime; after exceeding that period, it must be refreshed. The az login command can help with this.

Networking issues manifest when the resource bridge is unreachable. The resource bridge runs a Kubernetes cluster, and its control plane requires a static IP address, which is specified in the infra.yaml file. Rebooting the Azure Arc resource bridge or VM can trigger an IP address change, resulting in failing services, but rebooting the Azure Arc resource bridge VM again should help it recover its IP address.

 

Monday, August 8, 2022

This is a continuation of a series of articles on hosting solutions and services on the Azure public cloud, with the most recent discussion on Multitenancy here. This article continues the discussion of Azure Arc enabled servers, their sizing guidance, operational considerations when increasing the numbers, and the overall planning required for an Azure Arc enabled data services deployment. It specifically discusses the migration of an on-premises or other cloud-based Azure Arc enabled server to Azure.

The transitioning must be managed both for the Azure Arc enabled servers, based on the supported VM extensions, and for the Azure services, based on the Arc server's resource identity. Azure Migrate can help with understanding the requirements for on-premises machines to be migrated to Azure.

The tasks to be undertaken include the following: 1. Inventory and remove VM extensions, 2. Review access rights, 3. Uninstall the Azure Connected Machine agent, 4. Install the Azure Guest Agent, 5. Migrate server machine to Azure, 6. Deploy Azure VM extensions.

Step 1 can be done with a listing from the Azure CLI or with Azure PowerShell. The VM extensions can be identified using the az connectedmachine extension list command. After identifying these extensions, they can be removed using the Azure Portal. If the extensions were installed with Azure Policy and the VM Insights initiative, it is necessary to create an exclusion to prevent the re-evaluation and redeployment of the extensions on the Azure Arc enabled server before the migration is complete.

Step 2 can be done by reviewing the access rights. The role assignments for the Azure Arc enabled servers can be queried with Azure PowerShell, and the results can be exported to CSV or another format.

Step 3 can be done by following the guidance to uninstall the agent from the server. Check that the extensions have been removed before disconnecting the agent.

Step 4 can be done by installing the Azure Guest Agent. Migrated VMs do not have guest agents installed, so installing them manually is required.

Step 5 migrates the servers and machines to Azure. The migration options based on the environment must be carefully reviewed prior to the migration.

Step 6 completes the migration and all post-migration configuration steps by deploying the Azure VM extensions that were originally installed on the Azure Arc enabled server.

Finally, the audit settings inside a machine must be resumed with guest configuration policy definitions.

 
Reference: Multitenancy: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=fBoKcN