Monday, December 12, 2022

# old dog, old tricks

Write a function to find the longest common prefix string amongst an array of strings.

If there is no common prefix, return an empty string "".

 

Example 1:

Input: strs = ["flower","flow","flight"]

Output: "fl"

Example 2:

Input: strs = ["dog","racecar","car"]

Output: ""

Explanation: There is no common prefix among the input strings.

 

Constraints:

1 <= strs.length <= 200

0 <= strs[i].length <= 200

strs[i] consists of only lowercase English letters.

class Solution {

    public String longestCommonPrefix(String[] strs) {
        if (strs == null || strs.length == 0) {
            return "";
        }
        if (strs.length == 1) {
            return strs[0];
        }
        return getPrefixByTrie(strs);
    }

    private static final int MAX_CHARACTERS = 26;

    // A trie node: one child slot per lowercase letter, plus a flag marking
    // the end of an inserted word.
    private static class TrieNode {
        TrieNode[] children = new TrieNode[MAX_CHARACTERS];
        boolean isLeaf;
    }

    private static TrieNode root;
    private static int lastChildIndex; // index of the most recent child seen by countChildren

    // Insert a word into the trie, creating nodes as needed.
    private static void insert(String key) {
        TrieNode current = root;
        for (int level = 0; level < key.length(); level++) {
            int index = key.charAt(level) - 'a';
            if (current.children[index] == null) {
                current.children[index] = new TrieNode();
            }
            current = current.children[index];
        }
        current.isLeaf = true;
    }

    // Count the children of a node and record the index of the last child found,
    // so the caller can follow it when there is exactly one.
    private static int countChildren(TrieNode current) {
        int count = 0;
        for (int i = 0; i < MAX_CHARACTERS; i++) {
            if (current.children[i] != null) {
                count++;
                lastChildIndex = i;
            }
        }
        return count;
    }

    // Walk down the trie while there is a single branch and no word ends,
    // accumulating the characters of the common prefix.
    private static String walkTrie() {
        TrieNode current = root;
        lastChildIndex = 0;
        StringBuilder prefix = new StringBuilder();
        while (countChildren(current) == 1 && !current.isLeaf) {
            current = current.children[lastChildIndex];
            prefix.append((char) ('a' + lastChildIndex));
        }
        return prefix.toString();
    }

    // Build the trie from all input strings.
    private static void constructTrie(String[] strs) {
        for (String s : strs) {
            insert(s);
        }
    }

    private static String getPrefixByTrie(String[] strs) {
        root = new TrieNode();
        constructTrie(strs);
        return walkTrie();
    }
}


Sunday, December 11, 2022

 

This is a survey of modernizing applications with the Azure public cloud. Previous articles focused on trade-offs and tools, but this one dives into one of the major public cloud use cases: application modernization.

The pandemic has increased demand for application modernization efforts across the board. That said, approaches have varied among clients. Many companies have withdrawn from using a single end-to-end platform and focused instead on specific business cases and dedicated technologies. This shift comes amid the rise of the public cloud service portfolio. Application modernization is a critical component of digital transformation, and companies are expanding its traditional meaning to include: rehosting on-premises applications to the cloud with almost no changes; “replatforming” them so they can leverage basic cloud platform services such as autoscaling; refactoring their architecture to gain many more cloud-related benefits; the use of cloud development tools, from code editors to full DevOps toolchains; and finally a complete cloud-native rewrite of an application, both to provide new functionality and to help retire legacy assets.

Enterprises have many choices when it comes to application modernization tools, services, and platforms. Azure can be credited with making some of these specialties the best in the industry. One thing Azure has not helped clients overcome is the often-significant challenge of IT culture change, upskilling, and costs. With more literature, education, and evangelism on how staff can embrace modernization, along with case studies, best practices, and modernization models, this would come to be on par with any of its existing service portfolio.

The application and data modernization journey can be a long one, with a predetermined start but not necessarily a well-known finish state. The inertia and complexity involved in legacy applications have dragged on the otherwise well-perceived benefits of a thriving culture. Meanwhile, security and reliability, and cost optimization, have become the two top drivers, together with improved customer experience and time to market.

There are several approaches to application modernization, but each comes with trade-offs.

1. Rehosting, also called “lift-and-shift”, is fast and aimed at lowering reliance on private datacenters.

2. Replatforming an application so it can take advantage of cloud platform capabilities such as autoscaling.

3. Refactoring applications written with aging and rigid architectural patterns, such as three-tier, to take advantage of new approaches including microservices and serverless.

4. A full application rewrite. This gives an enterprise the most flexibility in terms of application functionality, but like refactoring, it is costly and complex.

5. Replacing an application completely, such as with a new SaaS application from an ISV.

All these approaches also involve deployment considerations. For example, should a project leverage virtual machines, given the technology's broad familiarity, stability, and security, or use containers for greater agility?

In terms of public cloud services used for this purpose, analytics, data integration, databases, and PaaS show healthy adoption, followed by networking, storage, AI, mobile, security, DevOps, hybrid, and identity.

.NET Core 3.1 is increasingly being deprecated in favor of .NET 5.0. Developers are also increasing their use of low-code platform services to accelerate development of new applications. Similarly, MySQL and Oracle are databases that were left behind in favor of cloud databases. One of the motivating factors for application modernization is that the decision is usually made by C-suite executives, who welcome it.

The urgency to meet customer demands and competitive landscapes propels application modernization efforts. Clients also repeatedly stressed the importance of skills and internal cultural change.

Saturday, December 10, 2022

 

Developing an authorizer for a serverless application

Problem statement: Many applications struggle with integrating third-party OIDC and OAuth2 providers. It is relatively easy for development teams to come up with a solution that serves the business functionality, but when it comes to writing the authentication system they feel like a fish out of water. This is primarily because an authentication system that reads or writes passwords is difficult to build. Most cloud providers have their own well-established IAM systems that work well with identity providers. This document describes adding a user pool authorizer to an API gateway that sits in front of a serverless application and fetches both the JWT token and temporary IAM credentials for the serverless application to admit the request.

Solution: This solution assumes that the AWS public cloud was used to create a user pool with users and groups by completing the form on the management console. The user pool identifier and the client are then specified to the web application as follows:

In the webapp.ts, add the following line:

import * as cognito from '@aws-cdk/aws-cognito';

In the interface properties, add the following lines:

interface WebAppProps {

:

  userPool: cognito.IUserPool;

  userPoolClient: cognito.IUserPoolClient;

}

In the web app config, specify the following:

export class WebApp extends cdk.Construct {

:

 

    new cwt.WebAppConfig(this, 'WebAppConfig', {

      bucket: props.hostingBucket,

      key: 'config.js',

      configData: {

        apiEndpoint: props.httpApi.apiEndpoint,

        userPoolId: props.userPool.userPoolId,

        userPoolWebClientId: props.userPoolClient.userPoolClientId,

      },

      globalVariableName: 'appConfig'

    }).node.addDependency(deployment);

}

The config data is exactly the same as what Amplify would expect, which enables it to integrate with the backend. The userPool and the userPool client are instantiated using the corresponding Cognito classes in an auth.ts TypeScript file and passed as parameters to the web app at startup.
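As a rough sketch of what that auth.ts might contain (the construct name, sign-in settings, and client options are illustrative assumptions rather than the original code), the user pool and its client could be created like this:

import * as cdk from '@aws-cdk/core';
import * as cognito from '@aws-cdk/aws-cognito';

export class Auth extends cdk.Construct {
  public readonly userPool: cognito.IUserPool;
  public readonly userPoolClient: cognito.IUserPoolClient;

  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);

    // User pool holding the application's users and groups.
    const userPool = new cognito.UserPool(this, 'UserPool', {
      selfSignUpEnabled: false,
      signInAliases: { email: true },
    });

    // App client used by the web application (no client secret for a browser app).
    const userPoolClient = userPool.addClient('WebClient', {
      generateSecret: false,
    });

    this.userPool = userPool;
    this.userPoolClient = userPoolClient;
  }
}

An instance of this construct would then supply auth.userPool and auth.userPoolClient for the WebAppProps shown above, and the frontend can pass the generated appConfig straight to Amplify.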

Friday, December 9, 2022

Sample Deployment Template for AWS Lambda with S3 access:

 Problem Statement: Deploy a serverless application for uploading files to S3 storage. Additionally, create a template to help with CI/CD of such an application.

Solution: One way this problem can be solved is by leveraging AWS cloud resources such as AWS Lambda, AWS API Gateway, and S3 storage. The CloudFormation template for this would appear as shown below. The bucket name is a parameter. The Lambda is billed in increments of 100 ms of usage, and its size is determined by its memory setting. The Lambda integrations are indicated by the proxy keyword. The code is uploaded to a bucket indicated by the CodeUri, and the invocation handler is also indicated by the properties; a sketch of what such a handler might look like follows the template.

AWSTemplateFormatVersion: '2010-09-09'

Transform: AWS::Serverless-2016-10-31

Description: Serverless web application for uploading files to S3

Globals:

  Api:

    BinaryMediaTypes:

    - '*~1*'

Resources:

  uploader:

    Type: AWS::Serverless::Function

    Properties:

      Description: Serverless web application for uploading files to S3

      Handler: src/index.handler

      Runtime: nodejs12.x

      CodeUri:

        Bucket: awsserverlessrepo-changesets-1f9ifp952i9h0

        Key: 536706842180/arn:aws:serverlessrepo:us-east-1:233054207705:applications-uploader-versions-1.1.0/5176d06e-2d79-4e66-8f0c-a3bccf9084e5

      MemorySize: 1536

      Policies:

      - S3CrudPolicy:

          BucketName:

            Ref: destBucket

      Timeout: 60

      Events:

        root:

          Type: Api

          Properties:

            Path: /

            Method: get

        getProxy:

          Type: Api

          Properties:

            Path: /{proxy+}

            Method: get

        postProxy:

          Type: Api

          Properties:

            Path: /{proxy+}

            Method: post

      Environment:

        Variables:

          DEST_BUCKET:

            Ref: destBucket

Parameters:

  destBucket:

    Type: String

    Description: Name of the S3 Bucket to put uploaded files into (must exist prior to deployment)
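The handler itself is not part of the template. The following is a minimal sketch, assuming the aws-sdk v2 S3 client and TypeScript compiled to src/index.js; the object-key convention and response shapes are illustrative assumptions, not taken from the deployed application:

// src/index.ts - compiled to src/index.js before packaging (illustrative sketch).
import { S3 } from 'aws-sdk';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const s3 = new S3();

export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  // DEST_BUCKET is injected by the template's Environment section.
  const bucket = process.env.DEST_BUCKET as string;

  if (event.httpMethod === 'POST' && event.body) {
    // The object key here is an assumed convention, not part of the original template.
    const key = event.pathParameters?.proxy ?? `upload-${Date.now()}`;
    const body = event.isBase64Encoded
      ? Buffer.from(event.body, 'base64')
      : Buffer.from(event.body);

    await s3.putObject({ Bucket: bucket, Key: key, Body: body }).promise();
    return { statusCode: 200, body: JSON.stringify({ uploaded: key }) };
  }

  // GET requests could serve the upload page or list objects; omitted here.
  return { statusCode: 200, body: 'uploader is running' };
};

A POST through the /{proxy+} route would land in the upload branch, while the root GET could return a simple landing page.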

 

 

Thursday, December 8, 2022

Problem Statement: An application intends to use S3 for storing and retrieving documents uploaded by users who are not yet onboarded to the application. An external identity provider can confirm the validity of a user, but the serverless function must authenticate and authorize their requests prior to upload and download.

Solution:  

The solution revolves around the creation of a user pool that integrates with a third-party identity provider. This allows a high degree of flexibility in choosing appropriate access management for an API Gateway that can be used to onboard existing users, allow robust operational support (troubleshooting), and improve agility in the development of the serverless capability. There are two options for this authentication module pilot:

  1. An AWS Cognito user pools authorizer for Lambda running behind API Gateway, as an IdP-agnostic option, using tools the team is already familiar with. Here, the benefit is consolidating all serverless access via API Gateway, allowing the team to focus on building serverless capabilities via Lambda functions with little operational overhead. (A short sketch of this option appears after the comparison below.)

  2. An AWS custom Lambda authorizer, set up against the user pool, for accessing serverless functions via API Gateway. Here, the benefit is greater control over the issuing of identity and access tokens, but it increases the maintenance burden on the team.

Given these choices, the Cognito user pool authorizer is preferred for the following reasons.  

  • S3 access using AWS technologies such as Lambda and API Gateway integrates well with Cognito, which supports external identity providers and works with both identity and access tokens. The less overhead and maintenance the development team has, the more it can focus on the serverless and S3 access paths.

  • Cognito user pool authorizers provide smoother onboarding of existing and new user pools. 

  • The proposed user pool authorizer will work with third-party OIDC and OAuth2 providers seamlessly, with little overhead.

  • Overall, the Cognito user pool authorizer is an out-of-the-box technology, with extensive documentation, examples, and community support.

However, both authorizers are strong contenders, offer many of the same benefits, and are superior to any ad hoc implementation of an authorization module by virtue of being AWS core technologies. Specifically, both:

  • Offer ways to authorize identity and access tokens

  • Can be used to enable access to S3

  • Come with extensive documentation and community support

  • Provide strong integrations with applications using REST APIs
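As a rough illustration of option 1 (not the team's actual code), a Cognito user pool authorizer can be attached to a REST API with the CDK roughly as follows; the helper function, resource path, and Lambda integration are assumptions made for this sketch:

import * as cdk from '@aws-cdk/core';
import * as apigateway from '@aws-cdk/aws-apigateway';
import * as cognito from '@aws-cdk/aws-cognito';
import * as lambda from '@aws-cdk/aws-lambda';

// Hypothetical helper: wires a Cognito user pool authorizer in front of a
// document-handling Lambda function.
export function addProtectedApi(
  scope: cdk.Construct,
  userPool: cognito.IUserPool,
  docsFunction: lambda.IFunction,
): apigateway.RestApi {
  const api = new apigateway.RestApi(scope, 'DocsApi');

  // API Gateway validates the caller's Cognito-issued JWT before invoking Lambda.
  const authorizer = new apigateway.CognitoUserPoolsAuthorizer(scope, 'DocsAuthorizer', {
    cognitoUserPools: [userPool],
  });

  // Every call to /documents must carry a valid token issued by the user pool.
  api.root.addResource('documents').addMethod(
    'POST',
    new apigateway.LambdaIntegration(docsFunction),
    { authorizer, authorizationType: apigateway.AuthorizationType.COGNITO },
  );

  return api;
}

The custom authorizer option would swap CognitoUserPoolsAuthorizer for a TokenAuthorizer backed by the team's own validation Lambda, which is where the extra maintenance burden comes from.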

 

Wednesday, December 7, 2022

 CloudFormation versus Terraform – a choice 

 

Infrastructure-as-code is a declarative paradigm: a language for describing infrastructure and the state it must achieve. A service that understands this language supports tags, RBAC, declarative syntax, locks, policies, and logs for resources and their create, update, and delete operations, which can be exposed via the command-line interface, scripts, web requests, and the user interface. The declarative style also helps boost agility, productivity, and quality of work within organizations.

Such a service for the AWS public cloud is called AWS CloudFormation. Terraform is the open-source equivalent, which helps users set up and provision datacenter infrastructure independently of any one cloud. These cloud configuration files can be shared among team members, treated as code, edited, reviewed, and versioned.

AWS CloudFormation has a certain appeal for being AWS-native, with a common language to model and provision AWS and third-party resources. It abstracts the nuances of managing AWS resources and their dependencies, making it easier to create and delete resources in a predictable manner. It makes versioning and iterating on the infrastructure more accessible, and it supports iterative testing as well as rollback.

Terraform's appeal is that it can be used for multi-cloud deployment. For example, it can deploy serverless functions with AWS Lambda, manage Microsoft Azure Active Directory resources, and provision a load balancer in Google Cloud.

Both facilitate state management. With CloudFormation, users can perform drift detection on all of their assets and get notifications when something changes; it also determines dependencies and performs certain validations before a delete command is honored. Terraform stores the state of the infrastructure on the provisioning computer or at a remote site, in a proprietary JSON format that describes and configures the resources. State management is handled automatically by CloudFormation with no user involvement, whereas Terraform requires you to specify a remote store or fall back to local disk to save state.

Both have their own ways of addressing flexibility for changing requirements. Terraform has modules, which are containers for multiple resources that are used together, while CloudFormation uses a system called “nested stacks”, where templates can be called from within templates. A benefit of Terraform is increased flexibility over CloudFormation regarding modularity.

They also differ in how they handle configuration and parameters. Terraform uses provider-specific data sources; the implementation is modular, allowing data to be fetched and reused. CloudFormation supports up to 60 parameters per template, each of which must be of a type that CloudFormation understands. They must be declared, or retrieved from the Systems Manager Parameter Store, and used within the template.

Both are powerful cloud infrastructure management tools, but Terraform is the more favorable choice for cloud-agnostic support. It also ties in very well with DevOps automation such as GitLab. Finally, having an abstraction over cloud lock-in might also benefit the organization in the long run.

 

 

Tuesday, December 6, 2022

 

The Architecture-Driven Modernization process comprises two main steps: Knowledge Discovery Metamodel (KDM) model extraction and metric report generation. The process can be walked through in this manner. The source code is converted into an Abstract Syntax Tree model using a code-to-model transformation. The Abstract Syntax Tree model is converted into a KDM model using a model-to-model transformation. The KDM model is converted into a metrics model using another model-to-model transformation. Finally, a metrics report is generated from the metrics model using a model-to-text transformation.
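To make this chain of transformations concrete, here is a small, purely illustrative sketch that uses the TypeScript compiler API as the parser; the element and metric shapes are simplifications invented for the example, not the actual KDM or SMM metamodels:

import * as ts from 'typescript';

// Illustrative model shapes; real ADM uses the KDM and SMM metamodels.
interface CodeElement { kind: string; name: string; }
interface MetricsModel { functions: number; classes: number; }

// Code-to-model: parse source text into an abstract syntax tree.
function parse(source: string): ts.SourceFile {
  return ts.createSourceFile('input.ts', source, ts.ScriptTarget.ES2019, true);
}

// Model-to-model: project the syntax tree onto a simple, KDM-like element list.
function toModel(ast: ts.SourceFile): CodeElement[] {
  const elements: CodeElement[] = [];
  const visit = (node: ts.Node): void => {
    if (ts.isFunctionDeclaration(node) && node.name) {
      elements.push({ kind: 'Function', name: node.name.text });
    } else if (ts.isClassDeclaration(node) && node.name) {
      elements.push({ kind: 'Class', name: node.name.text });
    }
    ts.forEachChild(node, visit);
  };
  visit(ast);
  return elements;
}

// Model-to-model: derive a metrics model from the element model.
function toMetrics(elements: CodeElement[]): MetricsModel {
  return {
    functions: elements.filter(e => e.kind === 'Function').length,
    classes: elements.filter(e => e.kind === 'Class').length,
  };
}

// Model-to-text: render the metrics model as a report.
function toReport(metrics: MetricsModel): string {
  return `functions: ${metrics.functions}\nclasses: ${metrics.classes}`;
}

const source = 'class Account {}\nfunction transfer() {}';
console.log(toReport(toMetrics(toModel(parse(source)))));

Each function corresponds to one arrow in the chain: code-to-model, model-to-model, model-to-model, and model-to-text.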


If we take the example of a set of SQL statements converted to the PL/SQL Abstract Syntax Tree Metamodel, it will consist of definitions such as RDBTableDefinition and RDBColumnDefinition; primitive types such as RDBTableType, RDBColumnType, and RDBDatabaseType; statements such as RDBSelectStatement, RDBModifyStatement, and RDBInsertStatement; and binary expressions such as RDBSelectExpression and RDBHostVariableExpression.

When the models are extracted from GPL code, the main task is collecting the scattered information needed to create the model elements from the source code. The scattering occurs because of the references between elements. While such references are explicit in the models, they are only implicitly established in the source code through identifiers, such as the reference between a variable and its declaration. Transforming an identifier-based reference into an explicit reference involves looking up the identified element in the source code. Dedicated parsers result from this challenge: the scattering problem requires complex processing to locate the correspondences between the source code and the model elements. A powerful XPath-like language specially built for resolving references can help here.
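As a toy illustration of turning identifier-based references into explicit ones (a simple name table stands in here for the XPath-like resolution language; none of this comes from the original tooling):

import * as ts from 'typescript';

// Map declared variable names to their declaration nodes (a tiny symbol table).
function collectDeclarations(ast: ts.SourceFile): Map<string, ts.VariableDeclaration> {
  const declarations = new Map<string, ts.VariableDeclaration>();
  const visit = (node: ts.Node): void => {
    if (ts.isVariableDeclaration(node) && ts.isIdentifier(node.name)) {
      declarations.set(node.name.text, node);
    }
    ts.forEachChild(node, visit);
  };
  visit(ast);
  return declarations;
}

// Resolve every identifier use back to its declaration, making the implicit
// name-based reference explicit in the extracted model.
function resolveReferences(ast: ts.SourceFile): Array<{ use: ts.Identifier; decl: ts.VariableDeclaration }> {
  const declarations = collectDeclarations(ast);
  const references: Array<{ use: ts.Identifier; decl: ts.VariableDeclaration }> = [];
  const visit = (node: ts.Node): void => {
    if (ts.isIdentifier(node) && declarations.has(node.text) && node.parent && !ts.isVariableDeclaration(node.parent)) {
      references.push({ use: node, decl: declarations.get(node.text)! });
    }
    ts.forEachChild(node, visit);
  };
  visit(ast);
  return references;
}

const ast = ts.createSourceFile('input.ts', 'const rate = 2;\nconst total = rate * 10;', ts.ScriptTarget.ES2019, true);
console.log(resolveReferences(ast).map(r => r.use.text)); // ["rate"]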


With the popularity of machine learning techniques and softmax classification, extracting domain classes according to the syntax tree metamodel and semantic graph information has become more meaningful. The two-step process of parsing to yield the Abstract Syntax Tree metamodel and restructuring to express the Knowledge Discovery Metamodel is enhanced with collocation and dependency information. This results in classifications at code organization units that were previously omitted. For example, code organization and call graphs can be used for such learning, as shown in reference 1. The discovery of KDM and SMM can also be broken down into independent learning mechanisms, with Dependency Complexity being one of them.