This is a continuation of the article introduced here: https://1drv.ms/w/s!Ashlm-Nw-wnWhKdsT81vgXlVl38AfA?e=qzxyuJ.
Specifically, it discusses the technology choices from the Application
Architecture guide for Azure Public Cloud.
1. Choosing a candidate service (summarized in the sketch after this list):
a. If full control of the compute is required, a virtual machine or a virtual machine scale set is appropriate.
b. If it is an HPC workload, Azure Batch is helpful.
c. If it has a microservices architecture, Azure App Service can host it.
d. If it has an event-driven workload with short-lived processes, Azure Functions suit the task.
e. If full-fledged orchestration is not required, Azure Container Instances are helpful.
f. If a managed service is needed for the .NET Framework, Azure Service Fabric is helpful.
g. If Spring Boot applications are required, Azure Spring Cloud is helpful.
h. If Red Hat OpenShift is required, Azure Red Hat OpenShift is dedicated to this purpose.
i. If a managed Kubernetes infrastructure is required, Azure Kubernetes Service does the job.
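The decision tree above can be read as a simple chain of checks. Here is a minimal Python sketch covering a few of the branches; the candidate_service helper and the requirement flags are hypothetical names used only for illustration, and the fallback to Azure App Service is a simplification.

# Hypothetical requirement flags; not an Azure API.
def candidate_service(req: dict) -> str:
    if req.get("full_control_of_compute"):
        return "Virtual Machine / VM Scale Set"
    if req.get("hpc_workload"):
        return "Azure Batch"
    if req.get("event_driven_short_lived"):
        return "Azure Functions"
    if req.get("spring_boot"):
        return "Azure Spring Cloud"
    if req.get("red_hat_openshift"):
        return "Azure Red Hat OpenShift"
    if req.get("managed_kubernetes"):
        return "Azure Kubernetes Service"
    return "Azure App Service"  # simplified default for managed web hosting

print(candidate_service({"hpc_workload": True}))  # Azure Batch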
There are two ways to migrate on-premises compute to the cloud. The first is the ‘lift and shift’ pattern, a strategy for migrating a workload to the cloud without redesigning it, also called the ‘rehosting’ pattern. The second involves refactoring an application to take advantage of cloud-native features.
2. A microservices architecture can have further options for deployment. There are two
approaches here: The first involves a
service orchestrator that manages services running on dedicated nodes (VMs) and
the second involves a serverless architecture using Functions-as-a-service
(FaaS). When microservices are deployed as binary executables, a.k.a. Reliable
Services, the Reliable Services programming model makes use of Service Fabric
programming APIs to query the system, report health, receive notifications on
configuration and code changes, and discover other services. This is
tremendously advantageous for building stateful services using so-called
Reliable Collections. AKS and Mesosphere provide alternative infrastructures for
deployment.
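To make the FaaS approach concrete, the sketch below shows an HTTP-triggered Azure Function written with the Python programming model; the greeting logic is placeholder content, and the trigger binding configuration (function.json or decorators) is omitted.

import azure.functions as func

# HTTP-triggered handler: the platform provisions and scales the hosts,
# so only this function body is deployed.
def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}", status_code=200)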
3. The Kubernetes-at-the-edge compute option is excellent for keeping operational costs low, easily configuring and deploying a cluster, retaining flexibility with existing infrastructure at the edge, and running a mixed node cluster with both Linux and Windows nodes. Of the three options, Kubernetes on bare metal, Kubernetes on Azure Stack Edge, and AKS on Azure Stack HCI, the last is the easiest to work with.
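A mixed node cluster exposes its Linux and Windows nodes through the standard kubernetes.io/os label, which is what workload node selectors key off. A small sketch with the official Kubernetes Python client, assuming a kubeconfig for the edge cluster is already available:

from kubernetes import client, config

# Assumes a kubeconfig pointing at the (edge) cluster is configured locally.
config.load_kube_config()
v1 = client.CoreV1Api()

# Print each node with its operating-system label.
for node in v1.list_node().items:
    os_label = (node.metadata.labels or {}).get("kubernetes.io/os", "unknown")
    print(node.metadata.name, os_label)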
4. Choosing an identity service involves comparing the options for self-managed Active Directory services, Azure Active Directory, and managed Azure Active Directory Domain Services. Azure Active Directory does away with hosting one’s own directory service, leveraging the cloud-provided one instead.
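In practice, an application then acquires tokens from Azure Active Directory rather than binding to a self-managed directory. A minimal sketch with the azure-identity Python package, assuming a managed identity, environment variables, or an Azure CLI login is already in place:

from azure.identity import DefaultAzureCredential

# Resolves a credential from managed identity, environment variables,
# or a local Azure CLI login.
credential = DefaultAzureCredential()

# Request an access token for the Azure Resource Manager scope.
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)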
5. Choosing a data store is not always easy. The
term polyglot persistence is used to describe solutions that use a mix of data
store technologies. Therefore, it is important to understand the main storage
models and their tradeoffs.
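As a toy example of polyglot persistence, one workload might keep order documents in Azure Cosmos DB while parking large attachments in Blob Storage. The account names, keys, and container names below are placeholders:

from azure.cosmos import CosmosClient
from azure.storage.blob import BlobServiceClient

# Placeholder endpoints and keys; real values would come from configuration.
cosmos = CosmosClient("https://<account>.documents.azure.com:443/", credential="<cosmos-key>")
orders = cosmos.get_database_client("shop").get_container_client("orders")
orders.upsert_item({"id": "order-1", "status": "created"})

blobs = BlobServiceClient("https://<account>.blob.core.windows.net", credential="<storage-key>")
blobs.get_blob_client("attachments", "order-1/invoice.pdf").upload_blob(b"...", overwrite=True)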
6. Choosing an analytical data store is not always about Big Data or lambda architectures, with their speed layer for incremental data processing, batch processing layer, and serving layer, although both require an analytical store. The driving force varies on a case-by-case basis.
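For reference, in a lambda architecture the serving layer answers queries by merging the batch layer’s precomputed view with the speed layer’s incremental view. A toy Python sketch of that merge, with hypothetical in-memory views standing in for the real stores:

# Hypothetical precomputed views: totals from the batch layer plus
# recent increments from the speed (incremental processing) layer.
batch_view = {"page_a": 1000, "page_b": 250}
speed_view = {"page_a": 12, "page_c": 3}

def serve(key: str) -> int:
    # The serving layer merges both views at query time.
    return batch_view.get(key, 0) + speed_view.get(key, 0)

print(serve("page_a"))  # 1012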
7. Similarly, AI/ML services can be leveraged directly from the Azure Cognitive Services portfolio, but their applicability varies on a case-by-case basis as well.
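For instance, sentiment analysis can be consumed straight from a Cognitive Services Language resource without training a model. A sketch with the azure-ai-textanalytics Python package; the endpoint and key are placeholders:

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Cognitive Services Language resource.
client = TextAnalyticsClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)

result = client.analyze_sentiment(["The deployment went smoothly."])
print(result[0].sentiment)  # e.g. "positive"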