Friday, March 5, 2021

Preparation for deploying API services to the cloud: 

Introduction: APIs are desirable features to deploy because they enable automation, programmability, and connectivity from remote devices. Deploying an API to the cloud makes it even more accessible, since clients can reach it from anywhere with IP connectivity. The public clouds offer immense capabilities for writing and deploying API services, but the preparation is largely left to the developer. This article lists some of the considerations that have proven noteworthy across numerous field experiences.

1) Choose the right technology: There is a variety of stacks to choose from, depending on the language and platform. Some are highly performant, others are more secure, and many in between perform just well enough. The choice of technology stack depends on how frequently the APIs will change, the number of releases made in a year, the compute and storage resources the APIs need, and the maturity of the framework. Side-by-side comparisons are available to guide the choice, and the investment is usually a one-time cost even if technical debt accrues over time.
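
Whatever stack wins the comparison, the shape of the service is the same: an endpoint that accepts a request and returns a structured response. The sketch below uses only Python's standard library purely for illustration; it is not a recommendation of any particular framework, and the /health route and port 8080 are placeholders.

    # Minimal API endpoint sketch using only Python's standard library.
    # Illustrative only; a real deployment would pick a framework based on
    # the criteria above (release cadence, resources, framework maturity).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/health":
                body = json.dumps({"status": "ok"}).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Port 8080 is arbitrary; any free port works for local testing.
        HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()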

2) Anticipate the load: Some APIs, like those behind WhatsApp messages, generate millions of calls every minute. Earlier, we had web farms that would scale to the load behind the same virtual IP address, but with newer frameworks such as Kubernetes, services are deployed behind ingresses and external load balancers and can scale out dynamically. WhatsApp was written in Erlang to squeeze as much performance out of the APIs as possible, and although it has been redesigned considerably, the deployment strategy keeps the same requirements. A back-of-the-envelope calculation of the number of servers, based on the total load and the load each server can handle, helps figure out the required capacity, but service-level agreements and performance indicators will articulate those numbers better.
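
Such a back-of-the-envelope sizing can be as simple as the sketch below. Every number here is a made-up placeholder; the real inputs should come from your own load tests and SLAs.

    import math

    # Back-of-the-envelope capacity estimate. All figures are assumptions;
    # substitute measured values from load tests and the SLA.
    peak_requests_per_second = 50_000   # anticipated peak load (placeholder)
    requests_per_server = 2_000         # sustainable throughput per server (placeholder)
    headroom = 0.30                     # spare capacity for spikes and server failures

    effective_per_server = requests_per_server * (1 - headroom)
    servers_needed = math.ceil(peak_requests_per_second / effective_per_server)
    print(f"Provision roughly {servers_needed} servers for peak load")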

3) Determine the storage: Many services fan out as microservices that rely on calls between one another or to a central storage service, but the cost of these calls is rarely worked out, even by the developer who writes them. Consequently, timeouts and latencies become hard to determine. The storage service tends to virtualize the storage so that all services can connect to it, but the cost of an API call includes the underlying disk access, and the right kind of storage alleviates it. Standard solutions such as relational database servers and online transaction processing systems help, but the deployer has the option to choose between stacks and vendors. There is significant scope for changes here.
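
One practical habit is to give every storage call an explicit slice of the overall latency budget, so a slow backend fails fast instead of dragging the whole API call with it. The sketch below assumes a hypothetical fetch_from_storage() helper and a 200 ms budget; both are placeholders for whatever your storage layer and SLA dictate.

    import concurrent.futures
    import time

    STORAGE_TIMEOUT_SECONDS = 0.2   # assumed share of the API's overall latency budget
    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

    def fetch_from_storage(key):
        # Placeholder for a real database or blob-store read.
        time.sleep(0.05)
        return {"key": key, "value": "example"}

    def get_with_budget(key):
        # Run the storage call with a hard timeout so the caller fails fast.
        future = _pool.submit(fetch_from_storage, key)
        try:
            return future.result(timeout=STORAGE_TIMEOUT_SECONDS)
        except concurrent.futures.TimeoutError:
            return {"key": key, "error": "storage timeout"}

    print(get_with_budget("order-42"))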

4) Determine the topology: If you are not deploying to a container orchestration framework or going native to the host with your service deployments, then you must determine how the servers are deployed. The firewall, load balancers, proxies, and server distribution are only part of the topology. The data and control paths will vary based on the topology, and the right choices make them more efficient.
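
Even a crude model of the data path helps compare topologies. The hops and per-hop latencies below are assumptions purely for illustration; the exercise is simply to list the hops a request traverses and see where the time goes.

    # Rough data-path model for one candidate topology (all numbers assumed).
    data_path = [
        ("firewall", 0.5),                 # per-hop latency in milliseconds
        ("external load balancer", 1.0),
        ("reverse proxy", 1.0),
        ("api server", 10.0),
        ("storage", 5.0),
    ]

    total_ms = sum(latency for _, latency in data_path)
    print(f"Estimated request latency: {total_ms:.1f} ms across {len(data_path)} hops")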

5) Tooling: With all the preparation, there will still be some cost incurred in troubleshooting. Public clouds like Microsoft Azure have developer tools for all platform services targeting web and mobile, Internet of Things, microservices, data and analytics, identity management, media streaming, high-performance compute, and cognitive services. These platform services all utilize the core infrastructure of compute, networking, storage, and security. The Azure Resource Manager provides multiple resources, role-based access control, custom tagging, and self-service templates. Azure is an open cloud because it supports open-source infrastructure tools such as Linux, Ubuntu, and Docker, layered with databases and middleware such as Hadoop, Redis, and MySQL; app frameworks and tools such as Node.js, Java, and Python; applications such as Joomla and Drupal; management applications such as Chef and Puppet; and DevOps tools such as Jenkins, Gradle, and Xamarin. With the help of these tools, it is easier to troubleshoot.

6) Create pipelines and dashboards for operations: Continuous integration, continuous deployment, and continuous monitoring are core aspects of API service deployments. Investment in tools such as Splunk can automate alerts and notifications so that operators can tend to the services.
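
As a minimal illustration of the monitoring side, the sketch below checks a single error-rate metric against a threshold. The fetch_error_rate() helper and the 1% threshold are assumptions; in practice the metric would come from a tool such as Splunk and the alert would go to a paging or chat channel.

    ERROR_RATE_THRESHOLD = 0.01   # assumed service-level objective: at most 1% failed calls

    def fetch_error_rate():
        # Placeholder: a real check would query the monitoring or logging system.
        return 0.015

    def check_and_alert():
        rate = fetch_error_rate()
        if rate > ERROR_RATE_THRESHOLD:
            # Placeholder notification; wire this to email, chat, or a pager.
            print(f"ALERT: error rate {rate:.2%} exceeds threshold {ERROR_RATE_THRESHOLD:.2%}")
        else:
            print(f"OK: error rate {rate:.2%}")

    check_and_alert()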

Conclusion: These are only some of the preparations for API service deployments. The public clouds offer sufficient documentation to cover many other aspects. Please visit the following link to my blog post for more information.
