Saturday, January 28, 2023

 

Application Modernization Questions:

One of the shifts in thinking about modernizing versus migrating an application is to reason in terms of workloads. A workload here is a collection of software systems, called components, that together deliver a business value. In a monolithic system the components are tightly integrated; a well-architected framework allows them to evolve independently. Evolution proceeds as incremental releases, each subject to development and test. If the architecture is composed of independent microservices, they can be tested independently and at the different levels of a multi-tier microservice. When the changes are continuously incremental and delivered via a pipeline that follows continuous integration/continuous deployment (CI/CD), each release is validated to have few or no regressions, allowing components of the overall workload to change without affecting the others. This facilitates removing the pain points in the original monolithic software and eases the transition towards hosting it in the cloud.

Describing a well-architected framework almost always involves five pillars, conceptually, regardless of the cloud for which the application is destined. These five pillars are: Reliability (REL), Security (SEC), Cost Optimization (COST), Operational Excellence (OPS), and Performance Efficiency (PERF). The elements that support these pillars are a review, a cost and optimization advisor, documentation, patterns-support-and-service offers, reference architectures, and design principles.

Each pillar contains questions whose answers relate to technical and organizational decisions that are not directly about the features of the software to be deployed. For example, software that allows people to post comments must honor use cases where some people can write and others can read, but the system must also be safe and sound enough to handle all the traffic and should incur reasonable cost.

Since the most crucial pillars are OPS and SEC, they should never be traded off to get more out of the other pillars.

The security pillar consists of identity and access management, detective controls, infrastructure protection, data protection, and incident response. Three questions are routinely asked for this pillar:

1.       How is access controlled for the serverless API?

2.       How are the security boundaries managed for the serverless application? 

3.       How is the application security implemented for the workload? 

The operational excellence pillar is made up of four parts: organization, preparation, operation, and evolution. The questions that drive the decisions for this pillar include: 

1.       How is the health of the serverless application known? 

2.       How is the application lifecycle management approached? 

The reliability pillar is made of three parts: foundations, change management, and failure management. The questions asked for this pillar include: 

1.       How are the inbound request rates regulated? 

2.       How is resiliency built into the serverless application?

The cost optimization pillar consists of five parts: cloud financial management practice, expenditure and usage awareness, cost-effective resources, demand management and resources supply, and optimizations over time. The questions asked for cost optimization include: 

1.       How are the costs optimized? 

The performance efficiency pillar is composed of four parts: selection, review, monitoring and tradeoffs. The questions asked for this pillar include: 

1.       How is the performance optimized for the serverless application? 

In addition to these questions, there are quite a lot of opinionated and even authoritative perspectives on the appropriateness of a framework, and they are often referred to as lenses. With these forms of guidance, a well-architected framework moves closer to reality.

Friday, January 27, 2023

 

A proposal to group microservice candidates from existing applications:

Introduction: Human interpretation of existing applications for the purposes of refactoring relies on knowledge of abstract software models. There is no substitute for reverse engineering an application this way, because the levels of abstraction can be decided by those who benefit from them. The forward-engineering part of application modernization becomes straightforward when the abstractions detail just what needs to be refactored. On the other hand, most web application software follows well-known patterns such as model-view-controller and its variations, and benefits from well-defined front-end, middle-tier, and backend levels.

In these cases, refactoring an application into microservices is possible when groups are drawn from those levels and components. Even if the middle tier is a large object-relational model and the database is a large instance, it is possible to determine many independent groups from the schema and classes by merely establishing a graph with classes as units and edges as relationships in terms of inheritance and composition. The independent groups will be those islands of sub-graphs that are not connected. Establishing a graph of the class dependencies is also helpful because the frequency of usage can be overlaid on the graph as weights on the edges, which then helps to ease hot spots by splitting the usages. It will not help with forming new entities or objects, but that is a detail that becomes apparent once the groupings are well-formed. It is also possible to define new entities by grouping the properties and methods within a set of units, but that is a level of detail below the grouping of the classes that encapsulate them and beyond the scope discussed here.

Grouping of units is based on the criterion of connected components in this case, but this does not have to be the only criterion. Many other criteria can be included to form groups, and the groups can even be ranked by the collective use of criteria: a group that occurs frequently across criteria is ranked higher. Criteria can be based on the co-occurrence of certain classes; model, view, and controller are classic examples of classes that co-occur. Even when the mappings are not straightforward, as with thick layers or monoliths, combining criteria such as independent sets and co-occurrence can help improve the cohesion and separation of the clusters. If there are multiple criteria, then each criterion can assign the groups a score, and the largest total can be thresholded to determine the selection.
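The connected-components criterion above can be sketched briefly. A minimal example in Python follows, where the class names and edges are hypothetical stand-ins for a real object-relational model:

```python
from collections import defaultdict, deque

def connected_components(classes, edges):
    """Group classes into microservice candidates: each connected
    component of the dependency graph (inheritance/composition edges,
    treated as undirected) forms one candidate group."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for cls in classes:
        if cls in seen:
            continue
        # A breadth-first traversal collects one island of the graph.
        group, queue = [], deque([cls])
        seen.add(cls)
        while queue:
            node = queue.popleft()
            group.append(node)
            for neighbor in graph[node] - seen:
                seen.add(neighbor)
                queue.append(neighbor)
        groups.append(sorted(group))
    return groups

# Hypothetical model classes and their inheritance/composition edges.
classes = ["Order", "OrderItem", "Invoice", "User", "Profile", "AuditLog"]
edges = [("Order", "OrderItem"), ("Order", "Invoice"), ("User", "Profile")]
print(connected_components(classes, edges))
# three islands: the order aggregate, the user aggregate, and the audit log
```

Weights for usage frequency or scores from other criteria could be attached to the edges in the same structure and combined before thresholding.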

Sample grouping: https://ideone.com/BUxjZT

 

  

 

 

Thursday, January 26, 2023

 

Sample authorization with AWS recognized tokens and users:

The steps for authorization in AWS are as follows:

1.       A user pool is set up with an app client.

2.       An HTTP API is set up with this user pool authorizer.

3.       The authorizer is validated using the identity token for a user

a.       This is available from the user pool using the following steps:

import { Auth } from 'aws-amplify';

 

async function signIn() {

    try {

        const user = await Auth.signIn(username, password);

    } catch (error) {

        console.log('error signing in', error);

    }

}

 

To repeat the sign-in, we can sign out globally from all devices with:

import { Auth } from 'aws-amplify';

 

async function signOut() {

    try {

        await Auth.signOut();

    } catch (error) {

        console.log('error signing out: ', error);

    }

}

b.       Only the identity token, in the well-known JSON Web Token (JWT) format, is supplied; the access token is discarded.

4.       When the authorizer is validated successfully, a sample API call can be made across the wire using a Postman sample as follows:

a.       Make an OAuth token using Cognito's OAuth endpoint.

b.       Pass the OAuth token in the Authorization header field.
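Since only the identity token, in JSON Web Token format, reaches the authorizer, it can be useful to inspect what a token carries before passing it along. The sketch below decodes a JWT payload without verifying the signature (the authorizer performs the real validation against the user pool); the token here is fabricated for illustration, not a real Cognito token:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) segment of a JWT without verifying
    the signature. Useful only for inspection; the HTTP API authorizer
    performs the real validation against the user pool."""
    payload_b64 = token.split(".")[1]
    # JWTs use unpadded base64url; restore the padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def make_fake_jwt(claims):
    """Build an unsigned JWT-shaped string for illustration only."""
    enc = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).rstrip(b"=").decode()
    return f'{enc({"alg": "none"})}.{enc(claims)}.'

# An identity token carries user claims and token_use = "id";
# an access token instead carries token_use = "access" and scopes.
token = make_fake_jwt({"token_use": "id", "cognito:username": "alice"})
print(decode_jwt_payload(token)["token_use"])  # prints "id"
```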

 

Wednesday, January 25, 2023

 

This is a continuation of the errors encountered and the resolutions for the deployment of a function handler.

The credentials used for executing CLI commands need to be set beforehand only once. This option works very well for almost everyone. The only caveat is for federated identity users, who might not have a key and secret issued. The recommended approach in this case is to request the root user to take this specific action.

 

AWS has provisions to generate temporary programmatic credentials via its secure token service that can be used to perform command-line actions. The use of these credentials requires account-level privileges for a one-time setup that many federated users might not have; hence the request to the root user to enable the above-mentioned command to be executed.

 

The following are some of the ways to generate credentials for command-line usage:

1.       Using AWS IAM Identity Center (formerly AWS SSO):

 

a. aws configure sso

SSO session name (Recommended): my-sso

SSO start URL [None]: https://my-sso-portal.awsapps.com/start

SSO region [None]: us-east-1

SSO registration scopes [None]: sso:account:access

CLI default client Region [None]: us-west-2<ENTER>

CLI default output format [None]: json<ENTER>

CLI profile name [123456789011_ReadOnly]: my-dev-profile<ENTER>

 

b. aws configure sso-session

 

Signing in and getting credentials:

aws sso login --profile my-dev-profile

aws sso login --sso-session my-dev-session

aws sts get-caller-identity --profile my-dev-profile

aws s3 ls --profile my-sso-profile

aws sso logout

 

 

2. One can configure the AWS Command Line Interface (AWS CLI) to use an IAM role by defining a profile for the role in the ~/.aws/config file.

[profile marketingadmin]

role_arn = arn:aws:iam::123456789012:role/marketingadminrole

source_profile = default

 

3. Clearing cached credentials:

del /s /q %UserProfile%\.aws\cli\cache

 

4.       Using a credential process with:

credential_process = "C:\Path\To\credentials.cmd" parameterWithoutSpaces "parameter with spaces"
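For the credential_process option, the referenced program must print a JSON document on stdout in the shape the AWS CLI expects. A minimal sketch of that shape, with placeholder values rather than real credentials:

```python
import json

def credential_process_output(access_key, secret_key,
                              session_token=None, expiration=None):
    """Shape the JSON that a credential_process program must print to
    stdout for the AWS CLI to consume. The key values used below are
    placeholders, not real credentials."""
    doc = {
        "Version": 1,  # required; currently always 1
        "AccessKeyId": access_key,
        "SecretAccessKey": secret_key,
    }
    if session_token:
        doc["SessionToken"] = session_token  # present for temporary STS creds
    if expiration:
        doc["Expiration"] = expiration       # ISO 8601; prompts a refresh
    return json.dumps(doc)

print(credential_process_output("AKIDEXAMPLE", "placeholder-secret",
                                "placeholder-token",
                                "2023-01-25T00:00:00Z"))
```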

 

Tuesday, January 24, 2023

 Handler Errors and resolutions continued. 

This document is in continuation of the errors encountered and their resolutions for deployment of a function handler in the AWS cloud. The first part of the article is linked here. This is the second part. 

One of the troublesome errors encountered is ensuring that the handler can put objects in an S3 bucket. The error encountered is usually “403: Forbidden”, and it defies even the bucket administrator and sound bucket policies.

It might seem surprising that even an S3 bucket owner might not be able to effectively use bucket policies, but it is inherent to buckets that they are created as private, with deny access by default. Clearing this default before authoring new bucket policies is sometimes the only resolution, even though the bucket owner might be an admin on the AWS account. If there is an error with read-write access to a bucket, the following things might need to be checked to resolve the dreaded “403: Forbidden” error:

  1. Permissions are missing for s3:PutObject to add an object or s3:PutObjectAcl to modify the object’s ACL.

  2. The requestor might not have permission to use an AWS Key Management Service (AWS KMS) key.

  3. There is an explicit deny statement in the bucket policy.

  4. Amazon S3 Block Public Access is enabled.

  5. The bucket access control lists don’t allow the AWS account root user to write objects.

  6. The AWS Organizations service control policy doesn’t allow access to Amazon S3.
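The explicit-deny check above can be scripted against the policy document returned by aws s3api get-bucket-policy. A minimal sketch, using a hypothetical policy for the bucket:

```python
import json

def explicit_denies(policy_json, action="s3:PutObject"):
    """Return the Sids of statements that explicitly deny the action.
    An explicit Deny overrides any Allow, which is a frequent cause of
    the 403 even for the bucket owner."""
    policy = json.loads(policy_json)
    hits = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        covers = action in actions or "s3:*" in actions or "*" in actions
        if stmt.get("Effect") == "Deny" and covers:
            hits.append(stmt.get("Sid", "<no Sid>"))
    return hits

# Hypothetical policy resembling what get-bucket-policy returns.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPut", "Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::bkt-upload-docs/*", "Principal": "*"},
        {"Sid": "DenyAll", "Effect": "Deny", "Action": "s3:*",
         "Resource": "arn:aws:s3:::bkt-upload-docs/*", "Principal": "*"},
    ],
})
print(explicit_denies(policy))  # ['DenyAll']
```

This only inspects the bucket policy; Block Public Access, KMS key grants, and service control policies must still be checked separately.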

 

One of the ways to resolve this error has been to clear the initial bucket policy. There are two ways to do this: 

 

First, sign in to the AWS Management Console as the root user, which might be different from an administrator who has AmazonS3FullAccess privileges. Only the root user can take effective steps to delete the initial bucket policy from the user interface; that is why this step might not be an effective resolution for everyone.

 

Second, use the command-line interface to specify the following command: 

aws s3api delete-bucket-policy --bucket <bucketName> --debug

This will also succeed in clearing the initial bucket policy.

Monday, January 23, 2023

Handler Authorizers:

These are essential AWS mechanisms to ensure that only certain roles have permissions to access cloud resources. They are helpful to both APIs and apps because they separate the Lambda’s concerns from the admission that those two entities request. This fits in with the general pattern of authorizing access to AWS services and resources using identity pools, while allowing the APIs and apps to focus on the user pools for authenticating the user.

A similar separation of authentication and authorization concerns is maintained between the cloud (AWS) and the identity provider (IdP), where the part of the authorizer is played by the Cognito offering. While services within the cloud leverage the identity pool, the IdPs are required to make the individual users visible to the cloud via the user pool.


For example, the following S3 create, read, update, and delete (CRUD) operations can be authorized with a Cognito identity pool:

using Amazon;

using Amazon.S3;

using Amazon.S3.Model;

using System;

using System.Threading.Tasks;

using Microsoft.AspNetCore.Authorization;

using Microsoft.AspNetCore.Mvc;


namespace AuthNAuthZ.Demo.Controllers

{

    public class S3Controller: Controller

    {

        private const string bucketName = "bkt-upload-docs";

        private const string keyName = "key1";

        private const string filePath = @".\test.txt";

        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;

        private static AmazonS3Client client = new AmazonS3Client(bucketRegion);


        // PUT: /<controller>/upload

        [Authorize]

        [HttpPut]

        public async Task&lt;IActionResult&gt; Upload()

        {

            return await WritingAnObjectAsync();

        }


        async Task<IActionResult> WritingAnObjectAsync()

        {

            try

            {

                var putRequest = new PutObjectRequest

                {

                    BucketName = bucketName,

                    Key = keyName,

                    FilePath = filePath,

                    ContentType = "text/plain"

                };

                

                putRequest.Metadata.Add("x-amz-meta-title", "sample-title");

                PutObjectResponse response = await client.PutObjectAsync(putRequest);

                return Ok(response);

            }

            catch (AmazonS3Exception e)

            {

                Console.WriteLine(

                        "Error encountered: Message:'{0}' when writing an object"

                        , e.Message);

            }

            catch (Exception e)

            {

                Console.WriteLine(

                    "Unknown encountered on server. Message:'{0}' when writing an object"

                    , e.Message);

            }

            return BadRequest();

        }

    }

}


    public class Startup

    {

        :

        :

        public void ConfigureServices(IServiceCollection services)

        {

            services.AddControllersWithViews();

            services.AddAuthentication(options =>

            {

                options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;

                options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;

                options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;

            })

            .AddCookie()

            .AddOpenIdConnect(options =>

            {

                options.ResponseType = Configuration["Authentication:Cognito:ResponseType"];

                options.MetadataAddress = Configuration["Authentication:Cognito:MetadataAddress"];

                options.ClientId = Configuration["Authentication:Cognito:ClientId"];

                options.Events = new OpenIdConnectEvents()

                {

                    OnRedirectToIdentityProviderForSignOut = OnRedirectToIdentityProviderForSignOut

                };


                //this code block must be leveraged to enable Role Based Authorization

                //options.TokenValidationParameters = new TokenValidationParameters

                //{

                //    ValidateIssuer = options.TokenValidationParameters.ValidateIssuer,

                //    RoleClaimType = "cognito:groups"

                //};

            });


            //this code block must be enabled to leverage Policy Based Authorization

            //Amazon Cognito users attributes are used to support claim-based authorization.

            /* One can use [Authorize] to ensure that only logged-in users can access the Page/Controller/Route.

             * or for more fine-grained control than authenticated users, they can be added to Cognito Groups. 

             * Those groups are sent as part of the user Claims. Then authorization policies can be created during Startup.ConfigureServices as follows:

             */

            services.AddAuthorization(options =>

            {

                options.AddPolicy("AdminOnly", policy =>

                    policy.RequireAssertion(context =>

                        context.User.HasClaim(c => c.Type == "cognito:groups" && c.Value == "Admin")));

            });

        }


Sunday, January 22, 2023

This is a sample program that, when given the head of a linked list, reverses the nodes of the list k at a time and returns the modified list.

 

k is a positive integer and is less than or equal to the length of the linked list. If the number of nodes is not a multiple of k, then the left-out nodes at the end remain as they are.

Node reverseK(Node head, int k)

{

    if (head == null || k <= 1) return head;

    // Check that a full group of k nodes exists starting at head.

    Node probe = head;

    int count = 0;

    while (probe != null && count < k) { probe = probe.next; count++; }

    if (count < k) return head; // fewer than k nodes left: leave them as-is

    // Process the rest of the list first, then reverse this group and

    // point its tail (the old head) at the processed remainder.

    Node prev = reverseK(probe, k);

    Node cur = head;

    for (int i = 0; i < k; i++)

    {

        Node next = cur.next;

        cur.next = prev;

        prev = cur;

        cur = next;

    }

    return prev; // new head of this reversed group

}

 

Sample test cases

List, K                          Result

null, 0                          null

[1], 0                           [1]

[1], 1                           [1]

[1], 2                           [1]

[1,2], 1                         [1,2]

[1,2,3], 1                       [1,2,3]

[1,2,3], 2                       [2,1,3]

[1,2,3], 3                       [3,2,1]

[1,2,3,4,5,6], 3                 [3,2,1,6,5,4]

[1,2,3,4,5,6,7], 3               [3,2,1,6,5,4,7]

[1,2,3,4,5,6,7,8], 3             [3,2,1,6,5,4,7,8]
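The expected outputs in the table can be spot-checked with a short sketch over a plain array (shown here in Python for brevity) that mirrors the linked-list contract:

```python
def reverse_k(values, k):
    """Reverse successive groups of k elements; a trailing group with
    fewer than k elements is left as-is, matching the linked-list
    version's contract."""
    if k <= 1:
        return list(values)
    out = []
    for i in range(0, len(values), k):
        chunk = values[i:i + k]
        out.extend(reversed(chunk) if len(chunk) == k else chunk)
    return out

# Spot-check against the sample test cases above.
print(reverse_k([1, 2, 3], 2))              # [2, 1, 3]
print(reverse_k([1, 2, 3, 4, 5, 6, 7], 3))  # [3, 2, 1, 6, 5, 4, 7]
```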