Friday, March 5, 2021

Preparation for deploying API services to the cloud: 

Introduction: APIs are desirable features to deploy because they enable automation, programmability, and connectivity from remote devices. Deploying an API to the cloud makes it even more accessible, since clients can reach it from anywhere with IP connectivity. The public clouds offer immense capabilities for writing and deploying API services, but the preparation is largely left to the service author. This article lists some considerations that have proven noteworthy in numerous field experiences. 

1) Choose the right technology: There is a variety of stacks to choose from, depending on the language and platform. Some are highly performant, others are more secure, and many in between perform just well enough. The choice of technology stack depends on how frequently the APIs will change, the number of releases made in a year, the compute and storage resources the APIs need, and the maturity of the framework. Side-by-side comparisons are available to guide the choice, and the investment is usually a one-time cost even if technical debt accrues over time. 

2) Anticipate the load: Some APIs, like those behind WhatsApp messages, generate millions of calls every minute. Earlier, we had web farms that would scale to the load behind the same virtual IP address, but with newer frameworks such as Kubernetes, services are deployed behind ingresses and external load balancers and can scale out dynamically. WhatsApp was written in Erlang to squeeze as much performance out of the APIs as possible, and although it has been redesigned considerably, the deployment strategy retains the same requirements. A back-of-the-envelope calculation of the number of servers, based on the total load and the load each server can handle, helps figure out the required capacity, but service-level agreements and performance indicators will articulate those numbers better. A minimal sketch of such a calculation appears below. 
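As an illustration only, the following sketch estimates a server count from a target request rate and an assumed per-server throughput; the figures are hypothetical placeholders, not measured values.

// Hedged back-of-the-envelope capacity estimate: all figures are assumptions.
public class CapacityEstimate {
    public static void main(String[] args) {
        double peakRequestsPerSecond = 1_000_000.0 / 60.0; // assume a million calls per minute at peak
        double requestsPerServerPerSecond = 2_000.0;       // assumed sustainable throughput of one server
        double headroom = 1.3;                              // 30% headroom for spikes and failover

        // Round up so the fleet always covers the peak plus headroom.
        int servers = (int) Math.ceil(peakRequestsPerSecond * headroom / requestsPerServerPerSecond);
        System.out.println("Estimated servers needed: " + servers);
    }
}

The point of the exercise is not the exact number but making the assumptions explicit so they can be revisited against the service-level agreement.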

3) Determine the storage: Many services fan out as microservices that rely on communication with one another or with a central storage service, but the costs of these calls are rarely worked out, even by the developers who write them. Consequently, the timeouts and latencies become hard to determine. The storage service tends to virtualize the storage so that all services can connect to it, but disk access also contributes to the cost of an API call, and the right kind of storage alleviates it. Standard solutions such as relational database servers and online transaction processing systems can help, but the deployer has the option to choose between stacks and vendors. There is significant scope for changes here. A hedged sketch of making those timeouts explicit follows. 
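To make latency budgets explicit rather than implicit, a caller can set connection and request timeouts on every downstream call. This is a minimal sketch using the JDK's standard java.net.http client; the endpoint URL and the timeout values are hypothetical assumptions.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class StorageCall {
    public static void main(String[] args) throws Exception {
        // Connection timeout bounds how long we wait to reach the storage service.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        // Request timeout bounds the whole call, including disk access on the server side.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://storage.example.internal/items/42"))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}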

4) Determine the topology: If you are not deploying to a container orchestration framework or going native to the host with your service deployments, then you must determine how the servers are laid out. The firewalls, load balancers, proxies, and server distribution are only part of the topology. The data and control paths will vary with the topology, and the right choices make them more efficient. 

5) Tooling: With all the preparation, there will still be some cost incurred in troubleshooting. Public clouds like Microsoft Azure have developer tools for all platform services targeting web and mobile, Internet of Things, microservices, data and analytics, identity management, media streaming, high-performance compute, and cognitive services. These platform services all utilize the core infrastructure of computing, networking, storage, and security. The Azure Resource Manager offers multiple resources, role-based access control, custom tagging, and self-service templates. Azure is an open cloud because it supports open-source infrastructure tools such as Linux, Ubuntu, Docker, etc., layered with databases and middleware such as Hadoop, Redis, MySQL, etc., app frameworks and tools such as Node.js, Java, Python, etc., applications such as Joomla, Drupal, etc., management applications such as Chef, Puppet, etc., and finally DevOps tools such as Jenkins, Gradle, Xamarin, etc. With the help of these tools, it is easier to troubleshoot. 

6) Create pipelines and dashboards for operations: Continuous integration, continuous deployment, and continuous monitoring are core aspects of API service deployments. Investment in tools such as Splunk can automate and enable proper alerts and notifications for tending to the services. A hedged sketch of a simple alert check appears below. 
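As an illustration of the kind of check a monitoring dashboard or alert rule encodes, the sketch below flags an elevated error rate over a rolling window. The thresholds and metric values are hypothetical, and a real deployment would read them from a monitoring system such as Splunk rather than from hard-coded counters.

// Minimal error-rate alert check; thresholds and counts are illustrative assumptions.
public class ErrorRateAlert {
    public static void main(String[] args) {
        long totalRequestsInWindow = 120_000;  // requests observed in the last 5 minutes (assumed)
        long failedRequestsInWindow = 1_800;   // 5xx responses in the same window (assumed)
        double alertThreshold = 0.01;          // alert when more than 1% of calls fail

        double errorRate = (double) failedRequestsInWindow / totalRequestsInWindow;
        if (errorRate > alertThreshold) {
            // In practice this would page on-call or post to a notification channel.
            System.out.printf("ALERT: error rate %.2f%% exceeds %.2f%%%n",
                    errorRate * 100, alertThreshold * 100);
        } else {
            System.out.println("Error rate within budget.");
        }
    }
}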

Conclusion: These are only some of the preparations for the API service deployments. The public clouds offer sufficient documentation to cover many other aspects. Please visit the following link to my blog post for more information 

Thursday, March 4, 2021

 Cloud Infrastructure and Cloud-Native Development: 

Introduction: Applications and infrastructure shape the cloud. Once an investment is made in the choice of technologies for a cloud, it becomes difficult to move to a different private or public cloud. The choice of private versus public cloud is usually made beforehand, and often for reasons other than considerations of application or infrastructure. So, at this stage, it is merely the organization and utilization of resources, stacks, and technologies for the anticipated workload. This article delves into some of those considerations. 

Description: Any cloud certification curriculum will describe the organization of a cloud in terms of SaaS, PaaS, and IaaS, the major cloud delivery layers, where SaaS stands for Software as a Service, PaaS stands for Platform as a Service, and IaaS stands for Infrastructure as a Service. Institutions invest in them because they are self-service, paid on demand, elastic, scalable, programmatically accessible, and available over the internet. The layers formed by these cloud services are stacked, so they tend to hide complexities from the applications.  

A private cloud can improve its value by taking some of the following measures: 

  1. Provide container resources in addition to virtual machines to greatly increase the number of available compute resources.  

  2. Provide services that are customized to the frequent usages of private cloud customers. This includes not only making some services easier to use but also provisioning those that many customers often use. 

  3. Anticipate customer requests and suggest compute resources based on history and measurements. 

  4. Provide additional services so that customers are drawn to the services and not just to the cloud. Additionally, customers will not mind when the weight of the services is shifted between public and private cloud infrastructure as costs dictate. 

  5. Provide additional services that will not be offered elsewhere. For example, data tiering, aging, archival, deduplication, file services, backup and restore, naming and labeling, accelerated networking, etc. offer major differentiation that does not necessarily have to lean on machine learning to make the private cloud smart. 

  6. Offer major periodic maintenance and activities on behalf of the customer, such as monitoring disk space and adding storage, checking for usage, and making in-place suggestions on the portal. 

  7. Reduce the volume of service desk tickets aggressively with preemptive actions, minimizing them to genuine failures. This is paying off debt, so it may not translate to new services.  

  8. Improve regional experiences not only with capable resources but also with improved networks for major regions. 

  9. Provide transparency, accounting, and auditing so that users can always choose to get more information for self-help and troubleshooting. FAQs and documentation could be improved, preferably with a search field. 

  10. Enable subscriptions to any or all alerts that can be set up by the customer on various activities. This gives the user informational emails with subjects that can be used to filter and treat at appropriate levels (a small filtering sketch follows this list). 
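As a small illustration of the last measure, the sketch below routes alert emails by a subject prefix so they can be treated at the appropriate level; the prefixes and routing targets are hypothetical conventions, not part of any particular cloud's notification format.

import java.util.List;
import java.util.Map;

// Illustrative routing of alert notifications by subject prefix; prefixes are assumed conventions.
public class AlertSubjectFilter {
    private static final Map<String, String> ROUTE_BY_PREFIX = Map.of(
            "[CRITICAL]", "page-oncall",
            "[WARNING]", "ticket",
            "[INFO]", "digest");

    static String route(String subject) {
        return ROUTE_BY_PREFIX.entrySet().stream()
                .filter(e -> subject.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse("digest"); // default: fold unrecognized alerts into a daily digest
    }

    public static void main(String[] args) {
        List<String> subjects = List.of(
                "[CRITICAL] Disk space below 5% on volume vol-01",
                "[INFO] Scheduled maintenance completed");
        subjects.forEach(s -> System.out.println(s + " -> " + route(s)));
    }
}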

Public clouds provide IaaS storage services with disks and files, whereas they offer PaaS storage with services such as objects, tables, and queues. The storage offerings are built on a unified distributed storage system with guarantees for durability, encryption at rest, strongly consistent replication, fault tolerance, and automatic load balancing. The IaaS is made up of storage arrays, virtual machines, and networking. The PaaS is made up of existing frameworks, web and mobile, microservices, and serverless computing. 

Many open-source cloud technologies such as OpenStack provided out-of-the-box capabilities that could add core features to the cloud with easy programmability, but their maintenance cost was significant even when they used clusters. 

Kubernetes addressed both the infrastructure and the resources by providing control and data planes that are much cleaner and better knit in design. PaaS may be called out as being restrictive to applications: dictating the choice of application frameworks, restricting supported language runtimes, and distinguishing apps from services. Kubernetes aims to support an extremely diverse variety of workloads; if an application has been compiled to run in a container, it will work with Kubernetes. PaaS provides databases, message buses, and cluster storage systems, but those can run on Kubernetes as well. There is also no click-to-deploy service marketplace. Kubernetes does not build user code or deploy it; however, it facilitates CI workflows running on it. 

We also have integration with OpenStack for Linux containers. The plugin allows the hosts to serve as compute nodes while the containers take on the workloads from the user. 

PaaS provides an Application Gateway that can manage backends with rich diagnostics, including access and performance logs, VM scale set support, and custom health probes. A minimal sketch of the kind of endpoint such a probe polls follows. 
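As a small illustration of what a custom health probe polls, the sketch below exposes a /healthz endpoint using only the JDK's built-in HTTP server; the port and path are arbitrary assumptions, and the check itself is a placeholder.

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal health endpoint a gateway's custom health probe could poll; path and port are assumptions.
public class HealthEndpoint {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthz", exchange -> {
            // A real check would verify downstream dependencies; here we always report healthy.
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        System.out.println("Health endpoint listening on :8080/healthz");
    }
}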

The Web Application Firewall protects applications from web-based intrusions and is built using ModSecurity and the Core Rule Set. It is highly available and fully managed. 

Native containers are small and fast. They have two characteristics. First, containers are isolated from each other and from the host in that they even have their own file systems, which makes them portable across clouds and OS distributions. Second, immutable container images can be created at build/release time rather than at the deployment time of the application, since each application does not need to be composed with the rest of the application stack nor tied to the production infrastructure environment. Kubernetes extends this idea of app-plus-container all the way, where the hosts can be nodes of a cluster. Kubernetes evolved as an industry effort from the operating system's native Linux container support. It can be considered a step towards a truly container-centric development environment. Containers decouple applications from infrastructure, which separates dev from ops.  

Conclusion: The choice of investments for a company in private cloud layers, therefore, includes a mix of datacenter-based technologies, OpenStack- or Mesos-based technologies, Kubernetes, and container orchestration technologies. 

#codingexercise

boolean intersects(Circle circle, Rectangle rectangle) {

    // Clamp the circle's center to the rectangle's extent to find the closest point on the rectangle.
    double closestXToCircle = Math.min(Math.max(circle.center.x, rectangle.bottomleft.x), rectangle.bottomleft.x + rectangle.width);
    double closestYToCircle = Math.min(Math.max(circle.center.y, rectangle.bottomleft.y), rectangle.bottomleft.y + rectangle.height);

    // The shapes intersect when that closest point lies within the circle's radius.
    if (distance(pair(closestXToCircle, closestYToCircle), circle.center.coordinates) <= circle.radius) {
        return true;
    }
    return false;
}