Friday, December 10, 2021

 

Azure Blueprint usages 

As a public cloud, Azure provides uniform templates to manage resource provisioning across several services. Azure offers a control plane for all resources that can be deployed to the cloud, and services take advantage of it both for themselves and for their customers. While Azure Functions allow extensions via new resources, Azure resource providers and the ARM APIs provide extensions via existing resources. This eliminates the need to introduce new processes around new resources and is a significant win for reusability and user convenience. New and existing resources are not the only ways to write extensions; other options include publishing through the Azure Store or going through other control planes such as container orchestration frameworks and third-party platforms. This article focuses on Azure Blueprints. 

Azure Blueprints let an engineer or architect sketch a project’s design parameters and define a repeatable set of resources that implements and adheres to an organization’s standards, patterns, and requirements. It is a declarative way to orchestrate the deployment of resource templates and other artifacts such as role assignments, policy assignments, ARM templates, and resource groups. Blueprint objects are stored in Cosmos DB and replicated to multiple Azure regions. Since a blueprint is designed to set up the environment, it is different from plain resource provisioning. The package fits nicely into a CI/CD pipeline and handles both what should be deployed and the assignment of what was deployed. 

Azure Blueprints differ from ARM templates in that the former help with environment setup while the latter help with resource provisioning. A blueprint is a package comprising artifacts that declare resource groups, policies, role assignments, and ARM template deployments. It can be composed, versioned, and included in continuous integration and continuous delivery pipelines. The components of the package can be assigned to a subscription in a single operation, audited, and tracked. Although the components can be registered individually, the blueprint preserves the relationship to the templates it deploys and keeps an active connection to what was assigned. 
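To make the package concrete, the sketch below mirrors the rough shape of a blueprint definition and two of its artifacts as Python dictionaries. The property names follow the Microsoft.Blueprint REST/ARM JSON as I recall it, and the resource-group key, role GUID, and template body are placeholders, so treat this as an illustration rather than a canonical payload.

```python
import json

# Rough shape of a blueprint definition: where it applies and which
# resource groups it will create (names and locations are placeholders).
blueprint_definition = {
    "properties": {
        "targetScope": "subscription",
        "description": "Baseline environment for a line-of-business app",
        "resourceGroups": {
            "appRg": {"location": "eastus"}
        },
        "parameters": {
            "owner": {"type": "string", "metadata": {"displayName": "Resource owner"}}
        },
    }
}

# One artifact per item to deploy: an ARM template scoped to a resource
# group declared above, plus a role assignment (IDs are placeholders).
template_artifact = {
    "kind": "template",
    "properties": {
        "resourceGroup": "appRg",
        "template": {"$schema": "...", "resources": []},  # the reusable ARM template body
        "parameters": {},
    },
}

role_artifact = {
    "kind": "roleAssignment",
    "properties": {
        "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/<guid>",
        "principalIds": ["[parameters('owner')]"],
    },
}

print(json.dumps(blueprint_definition, indent=2))
```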

There are two categories within a blueprint: definitions for deployment, which describe what should be deployed, and definitions for assignments, which describe what was deployed. Previous efforts to author ARM templates become reusable inside a blueprint. In this way, a blueprint becomes bigger than just the templates and allows an existing process to be reused to manage new resources. 

A blueprint focuses on standards, patterns, and requirements. The design can be reused to maintain consistency and compliance. It differs from an Azure policy in that it supports parameters with policies and initiatives. A policy is a self-contained manifest that governs resource properties during deployment and for already existing resources, helping ensure that resources within a subscription adhere to the requirements and standards. When a blueprint combines resource templates and Azure policies along with parameters, it becomes holistic in cloud governance.
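For comparison, a parameterized Azure policy is a small manifest of its own. The sketch below shows the familiar allowed-locations rule as a Python dictionary; the structure reflects the standard policy-definition JSON, but the display name and parameter are illustrative.

```python
import json

# A policy definition that takes a parameter, so the same rule can be
# reused with different allowed locations per blueprint or initiative.
allowed_locations_policy = {
    "properties": {
        "displayName": "Allowed locations",
        "mode": "Indexed",
        "parameters": {
            "allowedLocations": {
                "type": "Array",
                "metadata": {"description": "Locations resources may be created in"},
            }
        },
        "policyRule": {
            "if": {"not": {"field": "location", "in": "[parameters('allowedLocations')]"}},
            "then": {"effect": "deny"},
        },
    }
}

print(json.dumps(allowed_locations_policy, indent=2))
```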

Thursday, December 9, 2021

Designing a microservices architecture for a service on the public cloud

Microservices are great for letting the domain drive the development of a cloud service. The style fits right into the approach of doing “one thing” for the company and comes with a well-defined boundary for that service. Since a microservice fulfils a business capability, it focuses less on horizontal layers and more on end-to-end vertical integration. It is cohesive and loosely coupled with other services. Domain-Driven Design (DDD) provides a framework to build the services and comes with two stages, strategic and tactical. The steps to designing with this framework include 1. analyzing the domain, 2. defining bounded contexts, 3. defining entities, aggregates, and services, and 4. identifying microservices, as sketched below.
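As a minimal sketch of steps 2 and 3, the hypothetical “Ordering” bounded context below models an Order aggregate with its entities and a small domain service in plain Python; the names are illustrative, not taken from any particular system.

```python
from dataclasses import dataclass, field

# Bounded context: Ordering. Everything below is expressed in that
# context's own language and would map to one candidate microservice.

@dataclass
class OrderLine:              # entity inside the aggregate
    sku: str
    quantity: int
    unit_price: float

@dataclass
class Order:                  # aggregate root: the only entry point for changes
    order_id: str
    customer_id: str
    lines: list[OrderLine] = field(default_factory=list)

    def add_line(self, line: OrderLine) -> None:
        self.lines.append(line)

    def total(self) -> float:
        return sum(l.quantity * l.unit_price for l in self.lines)

class PricingService:         # domain service: logic that spans aggregates
    def discounted_total(self, order: Order, rate: float) -> float:
        return order.total() * (1 - rate)

order = Order("o-1", "c-42")
order.add_line(OrderLine("widget", 2, 9.99))
print(PricingService().discounted_total(order, 0.10))
```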

The benefits of this style include the following: it is a simple architecture that focuses on end-to-end addition of business capabilities. The services are easy to deploy and manage. There is a clear separation of concerns. The front end is decoupled from the worker using asynchronous messaging, and the front end and the worker can be scaled independently.

Challenges faced with this style include: care must be taken to ensure that the front end and the worker do not become large, monolithic components that are difficult to maintain and update, and dependencies can stay hidden when the front end and worker share data schemas or code modules.

Some examples of microservices include: expanding a backend service portfolio, such as for eCommerce; transactional processing with a deep separation of data access; and services fronted by an application gateway, load balancer, or ingress.

A few things to consider when deploying these services include the following:

1. Availability – Event sourcing components allow system components to be loosely coupled and deployed independently of one another. Many of the Azure resources are built for availability.

2. Scalability – Cosmos DB and Service Bus provide fast, predictable performance and scale seamlessly as the application grows. An event-sourcing, microservices-based architecture can also make use of Azure Functions and Azure Container Instances to scale horizontally (see the sketch after this list).

3. Security – Security features are available from all Azure resources, and it is also possible to include Azure Monitor and Azure Sentinel.

4. Resiliency – Fault zones and update zones are already handled by the Azure resources, so resiliency comes with the use of these resources, and the architecture can improve the overall order-processing system.

5. Cost – Azure Advisor provides effective cost estimates and improvement recommendations.

These are only a few of the considerations. Some others follow from the choice of technologies and their support in Azure.
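To ground the event-sourcing mention in the list above, here is a minimal in-memory sketch of an append-only event store with state rebuilt by replay. In the design being discussed the store would typically be Cosmos DB and the handlers Azure Functions; this stand-in only shows the mechanics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    stream_id: str   # e.g. the order the event belongs to
    kind: str        # "OrderPlaced", "ItemAdded", ...
    payload: dict

class EventStore:
    """Append-only log; current state is derived by replay, never stored."""
    def __init__(self) -> None:
        self._events: list[Event] = []

    def append(self, event: Event) -> None:
        self._events.append(event)

    def replay(self, stream_id: str) -> dict:
        state: dict = {"items": []}
        for e in self._events:
            if e.stream_id != stream_id:
                continue
            if e.kind == "OrderPlaced":
                state["customer"] = e.payload["customer"]
            elif e.kind == "ItemAdded":
                state["items"].append(e.payload["sku"])
        return state

store = EventStore()
store.append(Event("o-1", "OrderPlaced", {"customer": "c-42"}))
store.append(Event("o-1", "ItemAdded", {"sku": "widget"}))
print(store.replay("o-1"))   # {'customer': 'c-42', 'items': ['widget']}
```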

Wednesday, December 8, 2021

Event driven vs big data

 

Let’s compare the description in yesterday's post with the Big Data architectural style of building services. This style can involve a vectorized execution environment and typically handles a size of data not seen with traditional database systems. Both the storage and the message queue handle large volumes of data, and the execution can be staged as processing and analysis. The processing can be either batch oriented or stream oriented. The analysis and reporting can be offloaded to a variety of technology stacks with impressive dashboards. While the processing handles the requirements for batch and real-time processing of the big data, the analytics supports exploration and rendering of output from the big data. The style utilizes components such as data sources, data storage, batch processors, stream processors, a real-time message queue, an analytics data store, analytics and reporting stacks, and orchestration.

Some of the benefits of this style include the following: the ability to mix technology choices, performance through parallelism, elastic scale, and interoperability with existing solutions.

Some of the challenges faced with this architectural style include: the complexity of the numerous components required to handle the multiple data sources, and the difficulty of building, deploying, and testing big data processes. Different products require as many skillsets and as much maintenance, along with a requirement for data and query virtualization. For example, U-SQL, which is a combination of SQL and C#, is used with Azure Data Lake Analytics, while SQL-like APIs are used with Hive, HBase, Flink, and Spark. With this kind of landscape, the emphasis on data security gets diluted and spread over a very large number of components.

Some of the best practices with this architectural style are to leverage parallelism, partition data, apply schema-on-read semantics, process data in place, balance utilization and time costs, separate cluster resources, orchestrate data ingestion, and scrub sensitive data. The sketch below illustrates two of these.
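As a sketch of schema-on-read and data partitioning with one of the stacks mentioned above (Spark), the PySpark job below applies a schema at read time to raw JSON and writes partitioned output. The storage paths are placeholders for an Azure Data Lake Storage account and would need to match a real environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("telemetry-batch").getOrCreate()

# Schema-on-read: the raw JSON stays untyped at rest; the schema is
# applied only when the data is read for this particular job.
schema = StructType([
    StructField("deviceId", StringType()),
    StructField("temperature", DoubleType()),
])

raw = spark.read.schema(schema).json(
    "abfss://raw@examplelake.dfs.core.windows.net/telemetry/"  # placeholder path
)

device_avg = raw.groupBy("deviceId").avg("temperature")

# Partition the output so downstream queries can prune by device.
device_avg.write.mode("overwrite").partitionBy("deviceId").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/device-averages/"  # placeholder path
)
```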

Some examples include applications that leverage IoT architecture and edge computing.

Conclusion: Both these styles serve their purpose of a cloud service very well.

 

Tuesday, December 7, 2021

Event driven architectural style for cloud computing

 

The choice of architecture for a web service contributes significantly to how well the service meets its purpose. We review the choices between the Event-Driven and Big Data architectural styles.

Event-Driven architecture consists of event producers and event consumers. Event producers generate a stream of events, and event consumers listen for those events.

The scale-out can be adjusted to suit the demands of the workload, and the events can be responded to in real time. Producers and consumers are isolated from one another. In some extreme cases, such as IoT, the events must be ingested at very high volumes. There is scope for a high degree of parallelism since the consumers run independently and in parallel; they are coupled only to the event stream, not to each other. Network latency for message exchanges between producers and consumers is kept to a minimum. Consumers can be added as necessary without impacting existing ones.
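The toy broker below illustrates the shape of the style in plain Python: one producer, several consumers that each get their own view of the stream, and no point-to-point wiring between them. In Azure the broker role would typically be played by Event Hubs, Event Grid, or Service Bus topics; the in-process queues here are only a stand-in.

```python
import asyncio

class Broker:
    """Fans each published event out to every subscriber's own queue."""
    def __init__(self) -> None:
        self._subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        queue: asyncio.Queue = asyncio.Queue()
        self._subscribers.append(queue)
        return queue

    async def publish(self, event: dict) -> None:
        for queue in self._subscribers:
            await queue.put(event)

async def producer(broker: Broker) -> None:
    for i in range(3):
        await broker.publish({"id": i, "kind": "telemetry"})

async def consumer(name: str, queue: asyncio.Queue) -> None:
    for _ in range(3):
        event = await queue.get()
        print(name, "handled", event)   # each consumer sees every event

async def main() -> None:
    broker = Broker()
    billing = consumer("billing", broker.subscribe())
    audit = consumer("audit", broker.subscribe())
    await asyncio.gather(producer(broker), billing, audit)

asyncio.run(main())
```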

Some of the benefits of this architecture include the following: The publishers and subscribers are decoupled. There are no point-to-point integrations. It's easy to add new consumers to the system. Consumers can respond to events immediately as they arrive. They are highly scalable and distributed. There are subsystems that have independent views of the event stream.

Some of the challenges faced with this architecture include the following: event loss is tolerated, so guaranteed delivery poses a challenge, and some IoT traffic mandates guaranteed delivery. Some scenarios also require events to be processed in exactly the order they arrive. Each consumer type typically runs in multiple instances for resiliency and scalability, which can pose a challenge if the processing logic is not idempotent or the events must be processed in order.

Some of the best practices demonstrated by this style: events should be lean and not bloated, services should share only IDs and/or a timestamp, large data transfer between services is an antipattern, and loosely coupled event-driven systems are best.
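A lean event, in this spirit, carries identifiers and a timestamp and lets the consumer fetch whatever detail it needs. The hypothetical OrderPlaced event below is one way to express that in Python.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OrderPlaced:
    """Lean event: IDs and a timestamp only; no bloated order snapshot inside."""
    order_id: str
    customer_id: str
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))

print(OrderPlaced(order_id="o-1", customer_id="c-42"))
```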

Some of the examples with this architectural style include edge computing and IoT traffic. It works great for automations that rely heavily on asynchronous backend processing, and it is useful when order, retries, and dead-letter queues must be maintained.

Monday, December 6, 2021

Big Compute vs Big Data architectural styles for implementing a cloud service

 

A web service for the cloud must be well suited for the business purpose it serves, not only in its functionality but also in the non-functional aspects recorded in the Service-Level Agreements. The choice of architecture for a web service has a significant contribution to this effect. We review the choices between the Big Compute and Big Data architectural styles.

The Big Compute architectural style refers to the requirement for many cores to handle the compute for the business, such as for image rendering, fluid dynamics, financial risk modeling, oil exploration, drug design, and engineering stress analysis. The scale-out of the computational tasks is achieved by their discrete, isolated, and finite nature, where some input is taken in raw form and processed into an output. The scale-out can be adjusted to suit the demands of the workload, and the outputs can be combined, as is customary with map-reduce problems. Since the tasks run independently and in parallel, they remain loosely coupled. Network latency for message exchanges between tasks is kept to a minimum. The commodity VMs used from the infrastructure are usually at the higher end of the compute tier. Simulations and number crunching, such as astronomical calculations, involve hundreds if not thousands of such cores.
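The idea scales down to a local sketch: fan independent, finite tasks out across cores and combine the outputs at the end. The multiprocessing version below is only a stand-in for what a VM pool (for example, Azure Batch) would do across machines; the "tile rendering" work is a placeholder.

```python
from multiprocessing import Pool

def render_tile(tile_id: int) -> tuple[int, float]:
    """A discrete, isolated, finite task: raw input in, one output out."""
    value = float(sum(i * i for i in range(50_000)) % 97)  # placeholder number crunching
    return tile_id, value

if __name__ == "__main__":
    tiles = range(1_000)
    # Scale-out: run the tasks independently and in parallel across cores...
    with Pool() as pool:
        results = pool.map(render_tile, tiles)
    # ...then combine the outputs, as is customary with map-reduce problems.
    combined = dict(results)
    print(f"rendered {len(combined)} tiles")
```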

Some of the benefits of this architecture include the following: 1) high performance due to the parallelization of tasks, 2) the ability to scale out to an arbitrarily large number of cores, 3) the ability to utilize a wide variety of compute units, and 4) dynamic allocation and deallocation of compute.

Some of the challenges faced with this architecture include the following: Managing the VM architecture, the volume of number crunching, the provisioning of thousands of cores on time and getting diminishing returns from additional cores.

Some of the best practices demonstrated by this style: expose a well-designed API to the client, auto-scale to handle changes in load, cache semi-static data, use a CDN to host static content, use polyglot persistence when appropriate, and partition data to improve scalability, reduce contention, and optimize performance.

Some of the examples with this architectural style include applications that leverage the Azure Batch managed service to use a VM pool with uploaded code and data artifacts. In this case, Azure Batch provisions the VMs, assigns the tasks, and monitors the progress. It can automatically scale out the VMs in response to the workload. When HPC Pack is used, the HPC cluster can burst to Azure to handle peak workloads.

Sunday, December 5, 2021

The architectural styles for implementing a cloud service. (Continued)

 

Let’s compare the architectural style described in the previous post with the N-Tier architectural style of building services. This style involves many logical layers and physical tiers. It comprises a web tier, messaging, and a middle tier, and it may or may not involve a front end. In the closed style, a layer can call only the layer immediately below it, and in the open style, a layer can call any of the layers below it, as the sketch below illustrates.
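A minimal in-process illustration of the closed style: each layer holds a reference only to the layer directly beneath it, so the web tier never reaches the data tier directly. In the open style, the web tier would be permitted to call the data tier as well. The class and method names are illustrative.

```python
class DataTier:
    """Stand-in for the database layer (e.g. SQL Server in its own subnet)."""
    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "name": "Contoso"}

class MiddleTier:
    """Business logic; the only layer allowed to talk to the data tier."""
    def __init__(self, data: DataTier) -> None:
        self._data = data

    def customer_summary(self, customer_id: str) -> str:
        customer = self._data.get_customer(customer_id)
        return f"{customer['name']} ({customer['id']})"

class WebTier:
    """Closed style: holds a reference to the middle tier only."""
    def __init__(self, middle: MiddleTier) -> None:
        self._middle = middle

    def handle_request(self, customer_id: str) -> str:
        return self._middle.customer_summary(customer_id)

print(WebTier(MiddleTier(DataTier())).handle_request("42"))
```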

Some of the benefits of this style include the following: there is portability between cloud and on-premises, and between cloud platforms; there is a smaller learning curve for most developers; it is a natural evolution from the traditional application model; and it is open to heterogeneous environments (Windows/Linux).

Some of the challenges faced with this architectural style include: the middle tier can degenerate into a data access layer that just does CRUD operations on the database, which introduces unnecessary latency. A monolithic design prevents independent deployment of features. Managing an IaaS application is more work than managing an application that uses only managed services. It can be difficult to manage network security in a large system.

Some of the best practices with this architectural style include: scaling out to handle changes in load, decoupling tiers with asynchronous messaging, caching semi-static data, configuring the database tier for high availability using a solution such as SQL Server Always On availability groups, placing a web application firewall (WAF) between the front end and the Internet, placing each tier in its own subnet and using subnets as a security boundary, and restricting access to the data tier.

Some examples include a simple web application, an application migrated from on-premises to Azure with minimal refactoring, and unified development of on-premises and cloud applications.

Conclusion: Both these styles serve the purpose of a cloud service very well.

Saturday, December 4, 2021

The architectural styles for implementing a cloud service.

 


Introduction:

A web service for the cloud must be well suited for the business purpose it serves, not only in its functionality but also in the non-functional aspects recorded in the Service-Level Agreements. The choice of architecture for a web service has a significant contribution to this effect. We review the choices between the Web-Queue-Worker architectural style and the N-Tier architectural style.

The Web-Queue-Worker style can absorb the latencies from events because user actions are translated into messages. It decouples the front end from the worker so that the front end and API layer stay responsive to users. All the actions taken by the user can be mapped to one or another form of message that is sent to the message queue, which is usually the Service Bus. The queue can hold plenty of messages, and the worker can scale out to catch up with the items in the queue. Each message is handled by a dedicated handler, and this one-to-one mapping makes the flow easy to follow.
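A minimal sketch of the front-end and worker halves follows, assuming the azure-servicebus Python package (v7-style API), a queue named "orders", and a connection string in an environment variable; the queue name and setting are placeholders, and the calls should be checked against the SDK version in use.

```python
import os
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = os.environ["SERVICE_BUS_CONNECTION_STR"]  # placeholder setting
QUEUE = "orders"                                     # placeholder queue name

def enqueue_order(order_id: str) -> None:
    """Front end: translate the user action into a message and return immediately."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE) as sender:
            sender.send_messages(ServiceBusMessage(order_id))

def run_worker() -> None:
    """Worker: drain the queue at its own pace; scale out by running more instances."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
            for message in receiver:
                print("processing order", str(message))
                receiver.complete_message(message)

if __name__ == "__main__":
    enqueue_order("order-123")
    run_worker()
```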

Some of the benefits of this architecture include the following: 1) it is a relatively simple architecture that is easy to understand, 2) it is easy to deploy and manage, 3) there is a clear separation of concerns, 4) the front end is decoupled from the worker using asynchronous messaging, and 5) the front end and the worker can be scaled independently.

Some of the challenges faced with this architecture include the following: the front end and the worker can both become arbitrarily large, monolithic components, which increases maintenance costs. The design may also hide dependencies if the front end and worker share data schemas or code modules.

Some of the best practices demonstrated by this style: expose a well-designed API to the client, auto-scale to handle changes in load, cache semi-static data, use a CDN to host static content, use polyglot persistence when appropriate, and partition data to improve scalability, reduce contention, and optimize performance.

Some of the examples with this architectural style include applications with a relatively simple domain, applications with some long-running workflows or batch operations, and deployments that use managed services rather than infrastructure as a service (IaaS).