Tuesday, January 24, 2023

Handler Errors and resolutions, continued.

This document continues the discussion of errors encountered, and their resolutions, when deploying a function handler in the AWS cloud. The first part of the article is linked here; this is the second part.

One of the more troublesome errors is encountered when ensuring that the handler can put objects in an S3 bucket. The error is usually “403: Forbidden”, and it defies even the bucket administrator and seemingly sound bucket policies.

It might seem surprising that even an S3 bucket owner might not be able to use bucket policies effectively, but it is inherent to buckets that they are created as private, with access denied by default. Clearing this default before authoring new bucket policies is sometimes the only resolution, even when the bucket owner is an admin on the AWS account. If there is an error with read-write access to the bucket, the following things might need to be checked to resolve the dreaded “403: Forbidden” error (a minimal reproduction sketch follows the list):

  1. Permissions are missing for s3:PutObject to add an object or s3:PutObjectAcl to modify the object’s ACL.

  2. The requester might not have permission to use an AWS Key Management Service (AWS KMS) key.

  3. There is an explicit deny statement in the bucket policy.

  4. Amazon S3 Block Public Access is enabled.

  5. The bucket access control lists (ACLs) don’t allow the AWS account root user to write objects.

  6. An AWS Organizations service control policy doesn’t allow access to Amazon S3.
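A minimal Node.js sketch to reproduce the failure; the bucket and key names are the same placeholders used in the upload example later in this article, and any of the causes above will surface as an AccessDenied error:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// A failing permission shows up here as statusCode 403 with code 'AccessDenied'.
s3.putObject({ Bucket: 'bkt-upload-docs', Key: 'key1', Body: 'test' })
  .promise()
  .then(() => console.log('PutObject succeeded'))
  .catch(err => console.error(err.statusCode, err.code));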

 

One of the ways to resolve this error has been to clear the initial bucket policy. There are two ways to do this: 

 

First, sign in to the AWS Management Console as the root user, which might be different from an administrator who has AmazonS3FullAccess privileges. Only the root user can delete the initial bucket policy from the user interface, which is why this step might not be an effective resolution for everyone.

 

Second, use the command-line interface and specify the following command:

aws s3api delete-bucket-policy --bucket <bucketName> --debug

This will also clear the initial bucket policy.
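If a fresh bucket policy must then be authored, the following Node.js sketch shows one way to apply it programmatically. The role ARN is a hypothetical placeholder for the handler’s execution role, and the bucket name reuses the placeholder from the upload example below:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// A replacement policy granting the handler's execution role write access.
const policy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: { AWS: 'arn:aws:iam::123456789012:role/lambda-handler-role' }, // placeholder ARN
    Action: ['s3:PutObject', 's3:PutObjectAcl'],
    Resource: 'arn:aws:s3:::bkt-upload-docs/*'
  }]
};

s3.putBucketPolicy({ Bucket: 'bkt-upload-docs', Policy: JSON.stringify(policy) })
  .promise()
  .then(() => console.log('Policy applied'))
  .catch(console.error);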

Monday, January 23, 2023

Handler Authorizers:

These are essential AWS mechanisms that enforce that only certain roles have permission to access cloud resources. They are helpful to both APIs and apps because they separate the Lambda’s authorization concerns from the access that those two entities request. This fits the general pattern of authorizing access to AWS services and resources using identity pools, while allowing the APIs and apps to focus on user pools for authenticating the user.

Separation of authentication and authorization concerns is similarly maintained between the cloud (AWS) and the identity provider (IdP), where the part of the authorizer is played by the Amazon Cognito offering. While services within the cloud leverage the identity pool, the IdPs make the individual users visible to the cloud via the user pool.


For example, the following S3 create-read-update-delete (CRUD) style operations can be authorized with a Cognito identity pool:

using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Threading.Tasks;

namespace AuthNAuthZ.Demo.Controllers
{
    public class S3Controller : Controller
    {
        private const string bucketName = "bkt-upload-docs";
        private const string keyName = "key1";
        private const string filePath = @".\test.txt";
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static AmazonS3Client client = new AmazonS3Client(bucketRegion);

        // PUT: /<controller>/upload
        [Authorize]
        [HttpPut]
        public async Task<IActionResult> Upload()
        {
            return await WritingAnObjectAsync();
        }

        async Task<IActionResult> WritingAnObjectAsync()
        {
            try
            {
                var putRequest = new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = keyName,
                    FilePath = filePath,
                    ContentType = "text/plain"
                };

                putRequest.Metadata.Add("x-amz-meta-title", "sample-title");
                PutObjectResponse response = await client.PutObjectAsync(putRequest);
                return Ok(response);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine(
                    "Error encountered: Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine(
                    "Unknown error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            return BadRequest();
        }
    }
}


    public class Startup
    {
        :
        :
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllersWithViews();
            services.AddAuthentication(options =>
            {
                options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
                options.DefaultSignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
                options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
            })
            .AddCookie()
            .AddOpenIdConnect(options =>
            {
                options.ResponseType = Configuration["Authentication:Cognito:ResponseType"];
                options.MetadataAddress = Configuration["Authentication:Cognito:MetadataAddress"];
                options.ClientId = Configuration["Authentication:Cognito:ClientId"];
                options.Events = new OpenIdConnectEvents()
                {
                    OnRedirectToIdentityProviderForSignOut = OnRedirectToIdentityProviderForSignOut
                };

                //this code block must be leveraged to enable Role-Based Authorization
                //options.TokenValidationParameters = new TokenValidationParameters
                //{
                //    ValidateIssuer = options.TokenValidationParameters.ValidateIssuer,
                //    RoleClaimType = "cognito:groups"
                //};
            });

            //this code block must be enabled to leverage Policy-Based Authorization
            //Amazon Cognito user attributes are used to support claim-based authorization.
            /* One can use [Authorize] to ensure that only logged-in users can access the Page/Controller/Route,
             * or, for more fine-grained control than authenticated users, users can be added to Cognito Groups.
             * Those groups are sent as part of the user Claims. Then authorization policies can be created
             * in the Startup.ConfigureServices method as follows:
             */
            services.AddAuthorization(options =>
            {
                options.AddPolicy("AdminOnly", policy =>
                    policy.RequireAssertion(context =>
                        context.User.HasClaim(c => c.Type == "cognito:groups" && c.Value == "Admin")));
            });
        }
    }
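With that policy registered, a controller or action can then be restricted with [Authorize(Policy = "AdminOnly")] instead of the plain [Authorize] attribute shown on the Upload action above.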


Sunday, January 22, 2023

This is a sample program that, when given the head of a linked list, reverses the nodes of the list k at a time and returns the modified list.

 

k is a positive integer and is less than or equal to the length of the linked list. If the number of nodes is not a multiple of k, the left-out nodes at the end remain as they are.

Node reverseK(Node head, int k)
{
    if (head == null || k <= 1) return head;

    // Count the nodes so that a trailing partial group can be left as-is.
    int n = 0;
    for (Node c = head; c != null; c = c.next) { n++; }

    Node root = null;      // head of the resulting list
    Node prevTail = null;  // tail of the previously reversed group
    Node cur = head;

    for (int i = 0; i < n / k; i++)
    {
        // Find the last node of this group of k.
        Node groupEnd = cur;
        for (int j = 0; j < k - 1; j++) { groupEnd = groupEnd.next; }

        // Detach the group, reverse it, and reattach it.
        Node rest = groupEnd.next;
        groupEnd.next = null;
        Node start = cur;
        Node end = groupEnd;
        reverse(ref start, ref end);   // start is now the group head, end its tail

        if (root == null) { root = start; }
        if (prevTail != null) { prevTail.next = start; }
        end.next = rest;               // connect to the (not yet reversed) remainder
        prevTail = end;
        cur = rest;
    }

    return root == null ? head : root;
}

void reverse(ref Node start, ref Node end)
{
    if (start == null || start.next == null) return;

    Node prev = null;
    Node cur = start;
    Node tail = start;     // the original head becomes the tail

    while (cur != null)
    {
        Node next = cur.next;
        cur.next = prev;
        prev = cur;
        cur = next;
    }

    start = prev;          // new head (was the last node)
    end = tail;            // new tail (was the first node)
}

 

Sample test cases

List, k                      Result
null, 0                      null
[1], 0                       [1]
[1], 1                       [1]
[1], 2                       [1]
[1,2], 1                     [1,2]
[1,2,3], 1                   [1,2,3]
[1,2,3], 2                   [2,1,3]
[1,2,3], 3                   [3,2,1]
[1,2,3,4,5,6], 3             [3,2,1,6,5,4]
[1,2,3,4,5,6,7], 3           [3,2,1,6,5,4,7]
[1,2,3,4,5,6,7,8], 3         [3,2,1,6,5,4,7,8]

 

 

Saturday, January 21, 2023

Some Lambda errors and resolutions:

1.       Lambda experiences a timeout and provides no response:

Possibly this will help:

version: '3.9'
services:
  api:
    image: public.ecr.aws/sam/build-nodejs14.x:1.26.0
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./dist:/var/task:ro
    ports:
      - 3000:3000
    command: sam local start-api --template stack.yaml --host 0.0.0.0 --docker-network application --container-host host.docker.internal --warm-containers EAGER

networks:
  default:
    name: application

Essentially, the idea is to increase the timeout so that all the large dependencies can load, or to use a warm start.
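For the timeout itself, a sketch of the relevant property in the SAM template (the logical name and values here are illustrative, not taken from the stack.yaml above):

Resources:
  ApiFunction:                     # illustrative logical name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs14.x
      Timeout: 30                  # seconds; the default is 3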

2.       Runtime.HandlerNotFound: index.handler is undefined or not exported

One of the following remediation steps could help, assuming that a function handler exists as the entry point for the Lambda (a sketch of both export forms follows the list):

1.       module.exports = { handler };

2.       export default handler;

3.       Ensure that the file with the handler is at the root level.

4.       The handler reference in the template has path qualification.
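A minimal sketch of the CommonJS form, assuming the handler lives in index.js at the root of the archive:

// index.js: the runtime resolves the "index.handler" reference to this export
exports.handler = async (event) => {
    return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};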

3.       The handler does not accept import statements in the Node.js code:

         i.      Use require, as preferred on the JavaScript console (the CommonJS sketch above).

         ii.     Use ES modules, as preferred by the newer Node.js runtimes (a sketch follows).
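A corresponding sketch of the ES module form; it assumes an .mjs file extension or a "type": "module" attribute in package.json, as described in a later entry:

// index.mjs: the newer Node.js runtimes resolve "index.handler" to this named export
export const handler = async (event) => {
    return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};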

4.       The size of the code exceeds 50MB.

         i.      If the archive exceeds 50MB, upload it to S3.

         ii.     Separate the dependencies into layers.

         iii.    Use a container image.

Friday, January 20, 2023

 

A previous article described the test cases for validating a lambda function handler. This article covers some of the issues encountered and their resolutions.

First, the version of the Lambda runtime might be different from the development environment. This cascades to version incompatibilities with the package dependencies of the code invoked. The package-lock.json used with a Node.js based lambda function handler articulates the versions expected for each dependency. Removing the dependencies folder, node_modules, and refreshing it using the commands “npm install” and “npm audit fix” will adjust the versions to suit the runtime. Usually, a higher version of the runtime has backward compatibility with lower versions, so if the lambda code works with a lower-version runtime, it should work with the latest.
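For example, from the function’s source folder on a Unix-like shell:

rm -rf node_modules
npm install
npm audit fix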

A simple lambda code such as the following:

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

exports.handler = async function(event) {
  return s3.listBuckets().promise()
}

will work on older versions of the runtime.

If we use the JavaScript SDK v3, we might have syntax as follows:

// Import required AWS SDK clients and commands for Node.js.
import { ListBucketsCommand } from "@aws-sdk/client-s3";
import { s3Client } from "./libs/s3Client.js";

export const run = async () => {
  try {
    const data = await s3Client.send(new ListBucketsCommand({}));
    console.log("Success", data.Buckets);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};

Some of the errors encountered might be like “cannot use import statement outside a module” in the AWS Lambda console. This could be quite a pesky issue, even driving the code to change to the require syntax. If the Lambda console allowed imports directly, it could have alleviated much of the hassle, but there is an easy resolution. The package.json could include the attribute “type”: “module” to denote that this is an ECMAScript module. There are some differences between ECMAScript 5 and ECMAScript 6, and specifying the attribute informs the Lambda runtime to use ES modules rather than the traditional syntax, as sketched below.
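A minimal sketch of the relevant package.json attribute (all other fields elided):

{
  "type": "module"
}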

It is also better to use configuration layers for Node.js modules. These Lambda layers are a convenient way to package dependencies so that the size of the uploaded deployment archives is reduced. A layer can contain libraries, a custom runtime, data, or configuration files. Layers promote reusability and separation of responsibilities. Layer contents are archived into a zip file and uploaded to S3. They are extracted under the /opt directory at execution. If the expected folder structure is specified in the layer zip archive, the function code can access the content without the need to specify the path.
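For example, a sketch assuming a layer whose zip archive uses the nodejs/node_modules/uuid folder structure; the function code can then require the module without any path qualification:

// The layer is extracted under /opt; because the archive uses the
// nodejs/node_modules/... layout, the runtime resolves the module normally.
const { v1: uuidv1 } = require('uuid');

exports.handler = async () => ({ id: uuidv1() });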

 

Thursday, January 19, 2023

 

Testing Node.js applications and serverless functions:

A previous post described running the application locally as a way of testing the code without requiring it to be deployed anywhere beyond the development server. This write-up follows up on it with a test case.

One way to write unit tests and integration tests is to use Jest. It’s a testing framework with a name that goes with its approach: delightfully simple. It works with all forms of JavaScript and TypeScript, such as Node, React, Angular, and others.

A sample Jest unit script might look like this:

__tests__\unit\handlers\simple.test.js:

// Mock uuid
const uuidvalue = 'f8216640-91a2-11eb-8ab9-57aa454facef'
jest.mock('uuid', () => ({ v1: () => uuidvalue }));

// This includes all tests for the documents handler
describe('Test simple handler', () => {
    let sendSpy;

    // Test one-time setup and teardown, see more in https://jestjs.io/docs/en/setup-teardown
    beforeAll(() => {
        // Mock S3 methods (here, Array.prototype.push stands in for the S3 client call)
        // https://jestjs.io/docs/en/jest-object.html#jestspyonobject-methodname
        sendSpy = jest.spyOn(Array.prototype, 'push');
    });

    // Clean up mocks
    afterAll(() => {
        sendSpy.mockRestore();
    });

    it('should simply return', async () => {
        const items = {
            "Items": [],
            "Count": 0,
            "ScannedCount": 0
        };

        // Return the specified value whenever the spied function is called
        sendSpy.mockReturnValue(items);

        // A sample event of the kind a real handler would receive
        const event = {
            "httpMethod": "GET",
            "rawPath": "/documents",
            "requestContext": {
                "requestId": "e0GDshQXoAMEJug="
            }
        }

        // Invoke the spied function (a real test would invoke the Lambda handler with the event)
        var foo = new Array("foo", "bar");
        const result = foo.push("echo");

        // The expected response a real handler test would compare against
        const expectedResult = {
            statusCode: 200,
            body: JSON.stringify(items),
            headers: {
                "Content-Type": "application/json"
            }
        };

        // Compare the result with the mocked return value; a real handler test
        // would compare against expectedResult instead
        expect(result).toEqual(items);
    });
});

node_modules\.bin\jest

PASS  ./__tests__/unit/handlers/simple.test.js

  Test simple handler

    √ should simply return (1 ms)

 

Test Suites: 1 passed, 1 total

Tests:       1 passed, 1 total

Snapshots:   0 total

Time:        0.279 s, estimated 1 s

Ran all test suites.

 

Wednesday, January 18, 2023

 

Applications wishing to test a serverless function locally have one of the following options:

1.       Invoke the Serverless Application Model (SAM) command-line interface, and the steps are:

a.       sam build -t template.yml

b.       sam local invoke

The command to build the libraries for a default index.js lambda function handler comes with the option to create a container image. This can be very useful for portability and the image can be executed on any container framework.

While this is optional, the local invocation might require that all the source code and its dependencies be transpiled into a format that is easy to interpret and run.

Enter Babel, a well-known transpiler for this purpose, which makes it easy to handle the different formats of JavaScript and TypeScript notations.

2.       Another option to test the lambda function handlers has been to write unit tests and integration tests. These tests can be run with Jest, a test framework that makes it easy to execute tests by discovering them from the build folder. All test files are usually named .test.js or .spec.js, and the test cases follow the describe/it readable specifications that have made unit tests a pleasure to read and execute.

Jest is local to the node_modules folder where all the dependencies of the lambda function handler are installed. It can be installed with the --save-dev option, which eliminates the need to install it for production builds.
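For example:

npm install --save-dev jest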

Executing Jest requires a mock library for unit tests, and by virtue of JavaScript’s support for an object’s prototype, features can be inherited and substituted.

3.       A final option is to use a middleware that exercises just the handler methods of the lambda function handler, so that they can be invoked across the wire with curl commands. Enter Koa, a lightweight middleware framework that is not bundled with anything else, and writing an HTTP server becomes as simple as:

const Koa = require('koa');
const app = new Koa();

// response
app.use(ctx => {
  ctx.body = 'Hello Koa';
});

app.listen(3000);

 

and curl http://localhost:3000/ will return ‘Hello Koa’

This comes with the nice benefit that each of the handler methods can now be tried individually, and the results will be similar to how the lambda functions are invoked.

 

Another way to exercise the methods would be to include the routes; note that the following sketch uses the older, generator-based Koa v1 and koa-router style:

var koa = require('koa');
var http = require('http');
var router = require('koa-router')();
var bodyParser = require('koa-body')();

router.post('/resource', bodyParser, function *(next){
  console.log(this.request.body);
  this.status = 200;
  this.body = 'some output for post requests';
  yield(next);
});

startServerOne();

function startServerOne() {
  var app = koa();
  app.use(router.routes());
  http.createServer(app.callback()).listen(8081);
  console.log('Server at Port 8081');
}
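A hypothetical invocation of the POST route, mirroring the earlier curl example:

curl -X POST -H 'Content-Type: application/json' -d '{"input":"test"}' http://localhost:8081/resource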

These are some of the ways to test lambda function handlers.