Thursday, March 31, 2022

 

Service Fabric (continued)     

Part 2 compared Paxos and Raft, Part 3 discussed SF-Ring, Part 4 discussed its architecture, and Part 5 described compute planning and scaling. This article describes Service Fabric security best practices.

Azure Service Fabric makes it easy to package, deploy, and manage scalable and reliable microservices. It helps with developing and managing cloud applications. These applications and services can be stateless or stateful, and they are run with high efficiency and load balancing. Service Fabric supports real-time data analysis, in-memory computation, parallel transactions, and event processing in the applications.

The security best practices are described at various levels. At the level of an instance of Service Fabric, the Azure Resource Manager templates and the Service Fabric PowerShell modules create secure clusters. X.509 certificates must be used to secure the instance. Security policies must be configured, and the Reliable Actors security configuration must be implemented. TLS must be configured so that all communications are encrypted. Users must be assigned to roles, and role-based access control (RBAC) must be used to secure all control-plane access.
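As a minimal sketch of the certificate and TLS guidance above, the following Python snippet builds a TLS context that presents an X.509 client certificate and requires TLS 1.2 or later before calling a cluster management endpoint. The certificate file paths and the cluster address are placeholders, and the health query is shown only for illustration.

```python
# Minimal sketch: certificate-secured call to a Service Fabric management
# endpoint. File paths and the cluster address are placeholders.
import ssl
import urllib.request

# Build a TLS context that presents an X.509 client certificate and verifies
# the server certificate chain, so traffic is encrypted and mutually authenticated.
context = ssl.create_default_context(cafile="cluster-ca.pem")      # CA trusted by the cluster
context.load_cert_chain(certfile="client-cert.pem", keyfile="client-key.pem")
context.minimum_version = ssl.TLSVersion.TLSv1_2                   # require TLS 1.2+

opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=context))
url = "https://mycluster.example.com:19080/$/GetClusterHealth?api-version=6.0"  # illustrative endpoint
with opener.open(url) as resp:
    print(resp.status, resp.read()[:200])
```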

At the level of a cluster, certificates continue to secure the cluster and client access, and both read-only and admin access are secured by Azure Active Directory. Automated deployments use scripts to generate, deploy, and roll over the secrets. The secrets are stored in Azure Key Vault, and Azure AD is used for all other client access. Authentication is required from all users. The cluster must be configured to create perimeter networks by using Azure Network Security Groups, and cluster virtual machines must be accessed via jump servers with Remote Desktop Connection.
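A minimal sketch of pulling a cluster secret out of Key Vault at deployment time, assuming the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders, not real values.

```python
# Retrieve a deployment secret from Azure Key Vault instead of embedding it
# in scripts. The vault URL and secret name below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()            # Azure AD identity (managed identity, CLI login, etc.)
client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)

secret = client.get_secret("sfCertPassword")     # hypothetical secret name
print(secret.name, "retrieved; roll the secret over by writing a new version to the vault")
```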

Within the cluster, there are three scenarios for implementing cluster security with various technologies.

Node-to-node security: This scenario secures communication between the VMs and the computers in the cluster. Only computers that are authorized to join the cluster can host applications and services in the cluster.

Client-to-node security: This scenario secures communication between a Service Fabric client and the individual nodes in the cluster.

Service Fabric role-based access control: This scenario uses separate identities for each administrator and user client role that accesses the cluster. The role identities are specified when the cluster is created.
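The sketch below is purely conceptual and not the Service Fabric configuration format: it illustrates the admin versus read-only client split by mapping hypothetical client-certificate thumbprints to the operations they are allowed to perform.

```python
# Conceptual sketch only: map client certificate thumbprints to the two client
# roles (admin and read-only user) and gate operations accordingly.
ADMIN_THUMBPRINTS = {"AA11..."}                  # placeholder thumbprints
USER_THUMBPRINTS = {"BB22..."}

READ_ONLY_OPERATIONS = {"GetClusterHealth", "GetApplicationList"}   # illustrative operation names

def is_allowed(thumbprint: str, operation: str) -> bool:
    if thumbprint in ADMIN_THUMBPRINTS:
        return True                                   # admins may read and modify
    if thumbprint in USER_THUMBPRINTS:
        return operation in READ_ONLY_OPERATIONS      # users are limited to reads
    return False                                      # unknown identities are rejected

assert is_allowed("AA11...", "StartApplicationUpgrade")
assert not is_allowed("BB22...", "StartApplicationUpgrade")
```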

A detailed checklist for security and compliance is also included for reference: https://1drv.ms/b/s!Ashlm-Nw-wnWzR4MPnriBWYTlMY6  

 

 

 

 

Tuesday, March 29, 2022

Service Fabric (continued)    

Part 2 compared Paxos and Raft. Part 3 discussed SF-Ring and Part 4 discussed its architecture. This article describes compute planning and scaling.

Service Fabric supports a wide variety of business applications and services. These applications and services can be stateless or stateful. They are run with high efficiency and load balancing. It supports real-time data analysis, in-memory computation, parallel transactions, and event processing in the applications. Applications can be scaled in or out depending on the changing resource requirements.

Service Fabric hosts stateful services that must support large scale and low latency. It can help process data on millions of devices where the data for the device and the computation are co-located. It is equally effective for both core and edge services and scales to IoT traffic. Apps and services are all deployed in the same Service Fabric cluster through the Service Fabric deployment commands, and yet each of them is independently scaled and made reliable with guarantees for resources. This independence improves agility and flexibility.

Scalability considerations depend on the initial configuration and on whether scaling is needed for the number of nodes of each node type or for the services themselves.

Initial cluster configuration is important for scalability. When the Service Fabric cluster is created, the node types are determined, and each node type can scale independently. A node type can be created for each group of services that have different scalability or resource requirements. A node type for the system services must be configured first; separate node types can then be created for public or front-end services, and other node types as necessary for the back end. Placement constraints can be specified so that services are only deployed to the intended node types.
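A conceptual sketch of that last idea follows: hypothetical node-type properties are matched against a placement-constraint-style filter so that a service lands only on intended node types. The names and properties are illustrative, not the Service Fabric constraint syntax.

```python
# Conceptual sketch: pick eligible node types for a service using
# placement-constraint-style key/value properties. Names are made up.
NODE_TYPES = {
    "system":   {"NodeTypeName": "system"},
    "frontend": {"NodeTypeName": "frontend", "HasPublicIP": "true"},
    "backend":  {"NodeTypeName": "backend", "HasSSD": "true"},
}

def eligible_node_types(constraint: dict) -> list[str]:
    """Return node types whose properties satisfy every constraint."""
    return [name for name, props in NODE_TYPES.items()
            if all(props.get(k) == v for k, v in constraint.items())]

# A front-end service constrained to public-facing nodes:
print(eligible_node_types({"HasPublicIP": "true"}))   # ['frontend']
```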

The durability tier for each node type represents the ability of Service Fabric to influence virtual machine scale set updates and maintenance operations. Production workloads require the Silver durability tier or higher. If the Bronze durability tier is used, additional steps are required for scale-in.

Each node type can have a maximum of 100 nodes; anything more than that requires additional node types. A virtual machine scale set does not scale instantaneously, so the delay must be tolerated during autoscaling. Automatic scale-in to reduce the node count requires the Silver or Gold durability tier.

Scaling services depends on whether the services are stateful or stateless. Stateless services can be autoscaled by using the average partition load trigger or by setting the instance count to -1 in the service manifest. Stateful services require that each node gets adequate replicas. Dynamic creation or deletion of services, or of whole application instances, is also supported.

The average partition load trigger adds or removes service instances based on the load reported for a partition, while an instanceCount of -1 in the service manifest runs one instance on every node, so instances are created and deleted automatically as nodes are added or removed.
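The sketch below illustrates the shape of an average-load trigger rather than the Service Fabric API: when the average load across a service's partitions crosses an upper or lower threshold, the instance count is nudged up or down within configured bounds. All thresholds and increments are invented.

```python
# Conceptual autoscaling trigger: scale out when average partition load is
# high, scale in when it is low, within min/max bounds. Values are illustrative.
def next_instance_count(partition_loads: list[float], current: int,
                        lower: float = 20.0, upper: float = 70.0,
                        increment: int = 1, min_count: int = 1,
                        max_count: int = 10) -> int:
    average = sum(partition_loads) / len(partition_loads)
    if average > upper:
        return min(current + increment, max_count)   # scale out
    if average < lower:
        return max(current - increment, min_count)   # scale in
    return current                                    # stay put

print(next_instance_count([80.0, 90.0, 75.0], current=3))   # 4
print(next_instance_count([10.0, 15.0], current=3))         # 2
```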

 

 

 

Monday, March 28, 2022

 Service Fabric (continued)    

Part 2 compared Paxos and Raft. Part 3 discussed SF-Ring and Part 4 discussed its architecture. This article describes compute planning and scaling.

Service Fabric supports a wide variety of business applications and services. These applications and services can be stateless or stateful. They are run with high efficiency and load balancing. It supports real-time data analysis, in-memory computation, parallel transactions, and event processing in the applications. Applications can be scaled in or out depending on the changing resource requirements.

Service Fabric hosts stateful services that must support large scale and low latency. It can help process data on millions of devices where the data for the device and the computation are co-located. It is equally effective for both core and edge services and scales to IoT traffic. Apps and services are all deployed in the same Service Fabric cluster through the Service Fabric deployment commands, and yet each of them is independently scaled and made reliable with guarantees for resources. This independence improves agility and flexibility.

Capacity and scaling are two different considerations for Service Fabric and must be reviewed individually. Key cluster capacity considerations include: the initial number and properties of cluster node types; the durability level of each node type, which determines the Service Fabric VM privileges within the Azure infrastructure; and the reliability level of the cluster, which determines the stability of Service Fabric system services and overall cluster function.

A cluster requires at least one node type. A node type defines the size, number, and properties for a set of nodes (virtual machines) in the cluster. Every node type that is defined in a Service Fabric cluster maps to a virtual machine scale set (VMSS). A primary node type is reserved to run critical system services, while non-primary node types are used for back-end and front-end services.

Node type planning considerations depend on whether the application has multiple services, whether those services have different infrastructure needs such as more RAM or more CPU cycles, whether any of the application services need to scale out beyond 100 nodes, and whether the cluster spans availability zones.
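As a rough illustration of this planning step, the following sketch groups hypothetical services by resource profile so that each distinct profile maps to its own node type (and therefore its own scale set), with the primary node type reserved for system services. The numbers and service names are invented.

```python
# Illustrative node-type planning: one node type per distinct resource profile,
# plus the primary node type for system services. All values are made up.
from collections import defaultdict

SERVICES = {
    "gateway":   {"cpu_cores": 2, "ram_gb": 4},
    "orders":    {"cpu_cores": 8, "ram_gb": 32},
    "reporting": {"cpu_cores": 8, "ram_gb": 32},
}

def plan_node_types(services: dict) -> dict:
    plan = defaultdict(list)
    plan["primary (system services)"] = []          # reserved for Service Fabric system services
    for name, profile in services.items():
        key = f"{profile['cpu_cores']}c/{profile['ram_gb']}GB"
        plan[key].append(name)                      # services with the same profile share a node type
    return dict(plan)

print(plan_node_types(SERVICES))
# {'primary (system services)': [], '2c/4GB': ['gateway'], '8c/32GB': ['orders', 'reporting']}
```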

Sunday, March 27, 2022

 

Service Fabric (continued)    

Part 2 compared Paxos and Raft. Part 3 discussed SF-Ring and Part 4 discussed its architecture. This article describes its usage scenarios.

Service Fabric supports a wide variety of business applications and services. These applications and services can be stateless or stateful. They are run with high efficiency and load balancing. It supports real-time data analysis, in-memory computation, parallel transactions, and event processing in the applications. Applications can be scaled in or out depending on the changing resource requirements.

Service Fabric hosts stateful services that must support large scale and low latency. It can help process data on millions of devices where the data for the device and the computation are co-located. It is equally effective for both core and edge services and scales to IoT traffic.

Service Fabric is also useful for scenarios that require low-latency reads and writes, such as online gaming or instant messaging. Applications can be built to be interactive and stateful without having to create a separate store or cache.

Applications that must reliably process events or streams of data run well on Service Fabric with its optimized reads and writes. Service Fabric supports application processing pipelines, where results must be reliable and passed on to the next processing stage without any loss. These pipelines include transactional and financial systems, where data consistency and computation guarantees are essential.

Stateful applications that perform intensive data computation and require the colocation of processing and data also benefit from Service Fabric. Because computation and state live together, stateful Service Fabric services eliminate the latency of reaching out to external storage, enabling more optimized reads and writes. As an example, real-time recommendation selections for customers, which require a round-trip latency of less than a hundred milliseconds, are handled with ease.

Service Fabric also supports highly available services and provides fast failover by creating multiple secondary service replicas. If a node, process, or individual service goes down due to hardware or other failure, one of the secondary replicas is promoted to a primary replica with minimal loss of service.
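The sketch below mirrors that failover behavior conceptually, not the actual Service Fabric implementation: a replica set tracks one primary and several secondaries, and when the primary's node fails, a secondary is promoted.

```python
# Conceptual replica failover: one primary, several secondaries; promote a
# secondary when the primary's node fails. Node names are illustrative.
class ReplicaSet:
    def __init__(self, nodes: list[str]):
        self.primary = nodes[0]
        self.secondaries = list(nodes[1:])

    def on_node_failure(self, failed_node: str) -> None:
        if failed_node in self.secondaries:
            self.secondaries.remove(failed_node)      # simply drop a failed secondary
        elif failed_node == self.primary:
            self.primary = self.secondaries.pop(0)    # promote a secondary to primary

rs = ReplicaSet(["node1", "node2", "node3"])
rs.on_node_failure("node1")
print(rs.primary, rs.secondaries)   # node2 ['node3']
```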

Individual services can also be partitioned, with the partitions hosted on different nodes, and services can be created and removed on the fly. Services can be scaled from a few instances on a few nodes to thousands of instances on many nodes and dialed back down again; Service Fabric manages the complete life cycle.
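A small sketch of ranged partitioning in the spirit of the description above: a numeric partition key is mapped to one of N contiguous key ranges, each of which would be served by its own replicas on different nodes. The key space and partition count are illustrative.

```python
# Conceptual ranged partitioning: map a numeric key to one of N contiguous ranges.
def partition_for_key(key: int, partition_count: int = 4,
                      low: int = 0, high: int = 2**32 - 1) -> int:
    span = (high - low + 1) // partition_count
    index = (key - low) // span
    return min(index, partition_count - 1)            # clamp the last range

print(partition_for_key(10))           # 0
print(partition_for_key(2**31))        # 2
```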

Examples of stateless services include Azure Cloud Services; examples of stateful microservices that must maintain authoritative state beyond the request and its response include ASP.NET and Node.js services. Service Fabric provides high availability and consistency of state through simple APIs that provide transactional guarantees backed by replication.

Stateful services in Service Fabric bring high availability to all types of applications, not just those that depend on a database or a data store. This covers both relational and big data stores. Applications can have both their state and data managed for additional performance gains without sacrificing reliability, consistency, or availability.

Apps and services are all deployed in the same Service Fabric cluster through the Service Fabric deployment commands and yet each of them is independently scaled and made reliable with guarantees for resources. This independence improves agility and flexibility.

Stateful microservices simplify application design because they remove the need for the additional queues and caches that have traditionally been required to address the availability and latency requirements of purely stateless applications. Service Fabric's Reliable Services and Reliable Actors programming models reduce application complexity while achieving high throughput and low latency.
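The sketch below captures the idea behind those transactional guarantees in plain Python, not the Reliable Collections API: changes are staged in a transaction and only become visible on commit, which is where a real implementation would also replicate to the secondaries.

```python
# In-memory sketch of a transactional dictionary: staged writes become visible
# only on commit. Illustrative only; not the Service Fabric API.
class ReliableDict:
    def __init__(self):
        self._committed = {}

    def transaction(self):
        return _Transaction(self)

class _Transaction:
    def __init__(self, store: ReliableDict):
        self._store = store
        self._staged = {}

    def set(self, key, value):
        self._staged[key] = value                 # staged, not yet visible to readers

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self._store._committed.update(self._staged)   # commit (a real system replicates here)
        return False                                      # on error, staged changes are discarded

counts = ReliableDict()
with counts.transaction() as tx:
    tx.set("visits", 1)
print(counts._committed)   # {'visits': 1}
```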

 

 

 

Saturday, March 26, 2022

 

Service Scalability and Reliability

These are some observations about the scalability and reliability of a cloud-based service.

The primary consideration is between the tradeoffs for compute versus data optimizations.

The scale-out of the computational tasks is achieved by their discrete, isolated, and finite nature, where some input is taken in raw form and processed into an output. The scale-out can be adjusted to suit the demands of the workload, and the outputs can be conflated, as is customary with map-reduce problems. Since the tasks run independently and in parallel, they are loosely coupled, and network latency for any message exchanges between tasks is kept to a minimum.
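A minimal sketch of that pattern using only the standard library: independent tasks are fanned out across processes and their outputs are conflated by a reduce step.

```python
# Independent, isolated tasks processed in parallel; results conflated map-reduce style.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def task(chunk: list[int]) -> int:
    return sum(x * x for x in chunk)              # each task transforms its raw input independently

if __name__ == "__main__":
    chunks = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(task, chunks))     # scale out across cores
    total = reduce(lambda a, b: a + b, partial_results)    # conflate the outputs
    print(total)   # 285
```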

Compute-oriented improvements have the following benefits: 1) high performance due to the parallelization of tasks, 2) the ability to scale out to an arbitrarily large number of cores, 3) the ability to utilize a wide variety of compute units, and 4) dynamic allocation and deallocation of compute.

Some of the best practices demonstrated by this approach include the following: It exposes a well-designed API to the client. It can auto-scale to handle changes in the load. It caches semi-static data. It uses polyglot persistence when appropriate. It partitions data to improve scalability, reduce contention, and optimize performance. There are Kusto endpoints for read-only data in USNat, USSec and public cloud.

The storage approach leans on multiple persistence stores and larger volumes of data so that services can stage processing and analysis, each of which has different access patterns. The data continues to be made available in real time, but there is a separation of read-only and read-write access. A copy-on-write mechanism is provided by default, and versioning is supported.
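The following sketch illustrates copy-on-write with versioning in the abstract (it is not tied to any particular store): writers never mutate a published version; they copy it, modify the copy, and publish the result as a new version, so read-only consumers keep stable snapshots.

```python
# Copy-on-write with versioning: each write publishes a new immutable snapshot.
class VersionedStore:
    def __init__(self):
        self._versions = [{}]                      # version 0 is empty

    def read(self, version: int | None = None) -> dict:
        return self._versions[-1 if version is None else version]

    def write(self, updates: dict) -> int:
        snapshot = dict(self._versions[-1])        # copy on write
        snapshot.update(updates)
        self._versions.append(snapshot)
        return len(self._versions) - 1             # new version number

store = VersionedStore()
v1 = store.write({"region": "west"})
v2 = store.write({"region": "east"})
print(store.read(v1), store.read(v2))   # {'region': 'west'} {'region': 'east'}
```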

Some of the benefits of this approach include the following: The ability to mix technology choices, achieving performance through efficiency in data processing, queuing on the service side, and interoperability with existing service technology stacks. 

Some of the best practices with this architectural style are to leverage parallelism, partition data, apply schema-on-read semantics, process data in place, balance utilization and time costs, separate cluster resources, orchestrate data ingestion, and scrub sensitive data.

Some of the architectural guidance and best practices for implementing cloud services can be found in the reference documentation online and as presented in this article.

 

Friday, March 25, 2022

 

Service Fabric (continued)    

Part 2 compared Paxos and Raft. Part 3 discussed SF-Ring and Part 4 discussed its architecture. This article describes its support for microservices. 

 

Service Fabric provides an infrastructure to build, deploy, and upgrade microservices efficiently, with options for autoscaling, managing state, monitoring health, and restarting services in case of failure. It helps developers and administrators focus on the implementation of workloads that are scalable, reliable, and manageable by avoiding the issues that are regularly caused by complex infrastructures. The major benefits it provides include deploying and evolving services at very low cost and high velocity, lowering the cost of responding to changing business requirements, exploiting the widespread skills of developers, and decoupling packaged applications from workflows and user interactions.

 

Service Fabric follows an application model where an application is a collection of microservices. The application is described in an application manifest file that defines the different types of service contained in that application, and pointers to the independent service packages. The application package also usually contains parameters that serve as overrides for certain settings used by the services. Each service package has a manifest file that describes the physical files and folders that are necessary to run that service, including binaries, configuration files, and read-only data for that service. Services and applications are independently versioned and upgradable.
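As a hedged illustration of these relationships (the real manifests are XML, and the names and versions below are invented), the following data model shows an application manifest referencing independently versioned service manifests, each listing its code, configuration, and data packages, with application parameters acting as overrides.

```python
# Illustrative data model of the application/service manifest relationships,
# not the actual manifest schema. Names and versions are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ServiceManifest:
    name: str
    version: str
    code_packages: list[str] = field(default_factory=list)
    config_packages: list[str] = field(default_factory=list)
    data_packages: list[str] = field(default_factory=list)

@dataclass
class ApplicationManifest:
    app_type: str
    version: str
    services: list[ServiceManifest] = field(default_factory=list)
    parameters: dict = field(default_factory=dict)      # overrides for service settings

app = ApplicationManifest(
    app_type="ShoppingApp", version="1.2.0",
    services=[ServiceManifest("CartService", "1.1.0", ["Cart.exe"], ["Settings.xml"])],
    parameters={"Cart_InstanceCount": "-1"},
)
print(app.app_type, [s.name for s in app.services])
```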

 

A package can deploy more than one application, but if one service fails to upgrade, the entire application is rolled back. For this reason, a microservices architecture is best served by multiple packages. If a set of services shares the same resources and configuration or has the same lifecycle, those services can be placed in the same application type.

Service Fabric programming models can be chosen depending on whether the services are stateful or stateless.

 

Service Fabric distinguishes itself with support for strong consistency and for stateful microservices. Each of the SF components offers strong consistency behavior. There were two ways to do this: build consistent applications on top of inconsistent components, or use consistent components from the ground up. The end-to-end principle dictates that if a functionality is worth the performance cost, then it can be built into the middle. If consistency were instead built only at the application layer, each distinct application would bear significant costs for maintenance and reliability. Supporting consistency at each layer instead allows higher-layer designs to focus on their relevant notion of consistency and allows both weakly consistent and strongly consistent applications to be built on top of Service Fabric. This is easier than building consistent applications over an inconsistent substrate.

 

A stateless service is chosen when it must scale and the data or state can be stored externally. There is also an option to run an existing service as a guest executable, and it can be packaged in a container with all its dependencies. Service Fabric models both containers and guest executables as stateless services.

 

An API gateway (ingress) sits between external clients and the microservices and acts as a reverse proxy, routing requests from clients to microservices. As an HTTP proxy, it can handle authentication, SSL termination, and rate limiting.
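A minimal sketch of two of those gateway responsibilities, routing by path prefix and rate limiting with a token bucket; the backend addresses are placeholders, and authentication and SSL termination are omitted here.

```python
# Conceptual gateway: rate-limit incoming requests with a token bucket, then
# route them to a backend by path prefix. Backend addresses are illustrative.
import time

ROUTES = {"/orders": "http://orders.internal", "/users": "http://users.internal"}

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, burst=10)

def route(path: str) -> str:
    if not bucket.allow():
        return "429 Too Many Requests"
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return f"forward to {backend}{path}"       # reverse-proxy the request
    return "404 Not Found"

print(route("/orders/42"))   # forward to http://orders.internal/orders/42
```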