Wednesday, March 8, 2023

Data in motion – IoT solution and data replication

Moving data from edge sensors to the cloud is a data engineering pattern that is not always well served by the boilerplate event-driven architectural design proposed by the public clouds, because much of the fine tuning is left to the choice of resources, event hubs, and infrastructure involved in streaming the events. This article explores the design and data-in-motion considerations for an IoT solution, beginning with an introduction to the design proposed by the public clouds, the choices between products, and the considerations for handling and tuning distributed, real-time data streaming systems, with particular emphasis on data replication for business continuity and disaster recovery. A sample use case is continuous event processing for geospatial analytics in fleet management, where the data can include web logs from driverless vehicles.

Event-driven architecture consists of event producers and event consumers. Producers generate a stream of events, and consumers listen for those events. The right choice of architectural style plays a big role in the total cost of ownership of a solution involving events.

The scale-out can be adjusted to suit the demands of the workload, and events can be responded to in real time. Producers and consumers are isolated from one another. IoT requires events to be ingested at very high volumes. The producer-consumer design has scope for a high degree of parallelism since consumers run independently and in parallel; they are coupled only to the event contract rather than to the producers. Because there is no request-response exchange between producers and consumers, network latency overhead for message exchanges is kept to a minimum. Consumers can be added as necessary without impacting existing ones.

Some of the benefits of this architecture include the following: The publishers and subscribers are decoupled. There are no point-to-point integrations. It is easy to add new consumers to the system. Consumers can respond to events immediately as they arrive. The architecture is highly scalable and distributed. Subsystems can maintain independent views of the event stream.
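
As a minimal illustration of this decoupling, the following sketch models a producer and a consumer that communicate only through an event stream. The in-memory queue standing in for the broker, the sensor names, and the sentinel convention are assumptions of this example, not part of any particular cloud product.

# Minimal sketch of producer/consumer decoupling; an in-memory queue
# stands in for the event broker (an assumption of this example).
import queue
import threading

event_stream = queue.Queue()          # stand-in for a topic or event hub

def producer(sensor_id, readings):
    # The producer only knows about the stream, not about any consumer.
    for value in readings:
        event_stream.put({"sensor": sensor_id, "value": value})
    event_stream.put(None)            # sentinel to signal end of stream

def consumer(name):
    # The consumer only knows about the stream, not about any producer.
    while True:
        event = event_stream.get()
        if event is None:
            break
        print(f"{name} processed {event}")

t1 = threading.Thread(target=producer, args=("sensor-1", [21.5, 22.0, 22.4]))
t2 = threading.Thread(target=consumer, args=("consumer-A",))
t1.start(); t2.start()
t1.join(); t2.join()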

Some of the challenges faced with this architecture include the following: Guaranteed delivery can be difficult because some event ingestion systems tolerate event loss, while IoT traffic mandates guaranteed delivery. Ordering is another challenge when events must be processed in exactly the order they arrive. Each consumer type typically runs in multiple instances for resiliency and scalability, which poses a problem if the processing logic is not idempotent or if the events must be processed in order.
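
One common way to cope with at-least-once delivery and redelivery is to make the consumer idempotent, for example by de-duplicating on an event identifier. The sketch below is illustrative only; the event shape and the handler are hypothetical, and a production system would keep the seen-ID set in durable storage.

# Hedged sketch: an idempotent consumer that de-duplicates on event id.
processed_ids = set()                 # durable storage in a real deployment

def handle_event(event):
    if event["id"] in processed_ids:  # already seen: redelivery is a no-op
        return
    processed_ids.add(event["id"])
    # ... apply the business logic exactly once per event id ...
    print("applied", event["id"])

for e in [{"id": "e1"}, {"id": "e2"}, {"id": "e1"}]:   # e1 redelivered
    handle_event(e)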

The benefits and the challenges suggest some best practices. Events should be lean and not bloated. Services should share only IDs and/or a timestamp; transferring large data payloads between services is an antipattern. Loosely coupled event-driven systems are best.
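
A lean event might carry only identifiers and a timestamp, with the bulky payload kept in a shared store and referenced by ID. The field names below are hypothetical.

# Hedged sketch of a lean event: identifiers and a timestamp only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class VehicleEvent:
    vehicle_id: str          # reference to the vehicle, not the full record
    reading_id: str          # ID of the bulky telemetry stored elsewhere
    recorded_at: datetime    # timestamp for ordering and windowing

event = VehicleEvent("veh-42", "rd-0017", datetime.now(timezone.utc))
print(event)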

IoT solutions can be built either with an event-driven stack involving open-source technologies or with a dedicated, optimized storage product such as a relational engine geared towards edge computing. Either way, modern IoT applications expect capabilities to stream, process, and analyze data. IoT systems vary in flavor and size, and not all of them have the same certifications or capabilities.

When these IoT resources are shared, the isolation model, the impact of scaling on performance, state management, and the security of the IoT resources become complex. Scaling resources helps meet the changing demand from a growing number of consumers and an increase in the amount of traffic, and we might need to increase the capacity of the resources to maintain an acceptable performance rate. Scaling depends on the number of producers and consumers, the payload size, the partition count, the egress request rate, and the usage of capture, schema registry, and other advanced features of the IoT hubs. When additional IoT capacity is provisioned or a rate limit is adjusted, the multitenant solution can perform retries to overcome transient failures of requests. When the number of active users or the volume of traffic decreases, IoT resources can be released to reduce costs. Data isolation depends on the scope of isolation. Even when the storage for the IoT data is a relational database server, the solution can still make use of an IoT hub for ingestion. Varying levels and scopes of sharing of IoT resources demand simplicity from the architecture. Patterns such as the deployment stamp pattern, the IoT resource consolidation pattern, and the dedicated IoT resources pattern help to optimize operational cost and management with little or no impact on usage.
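
Retrying transient failures is typically paired with exponential backoff so that a throttled tenant does not make the contention worse. The sketch below is a simplification: send_to_iot_hub is a hypothetical placeholder that randomly fails, not a real SDK call.

# Hedged sketch: retrying a transient failure (e.g. a throttled request)
# with exponential backoff; send_to_iot_hub is a hypothetical placeholder.
import random
import time

class TransientError(Exception):
    pass

def send_to_iot_hub(payload):
    # Placeholder that fails randomly to simulate throttling.
    if random.random() < 0.3:
        raise TransientError("throttled")
    return "accepted"

def send_with_retries(payload, attempts=5, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return send_to_iot_hub(payload)
        except TransientError:
            time.sleep(base_delay * (2 ** attempt))   # exponential backoff
    raise RuntimeError("gave up after repeated transient failures")

print(send_with_retries({"deviceId": "veh-42", "speed": 57}))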

Edge computing relies heavily on asynchronous backend processing. Some form of message broker becomes necessary to maintain ordering between events and to handle retries and dead-letter queues. The storage for the data must follow data partitioning guidance so that partitions can be managed and accessed separately; horizontal, vertical, and functional partitioning strategies must be applied where suitable. In the analytics space, a typical scenario is to build solutions that integrate data from many IoT devices into a comprehensive data analysis architecture to improve and automate decision making.
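
For horizontal partitioning, a common approach is to hash a partition key, such as the device ID, so that events from the same device always land on the same partition and stay ordered there. The sketch below assumes a fixed partition count and a hypothetical key format.

# Hedged sketch of horizontal partitioning by hashing a device id.
import hashlib

PARTITION_COUNT = 8

def partition_for(device_id: str) -> int:
    digest = hashlib.md5(device_id.encode()).hexdigest()
    return int(digest, 16) % PARTITION_COUNT     # same device -> same partition

for device in ["veh-1", "veh-2", "veh-1"]:
    print(device, "->", partition_for(device))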

Event Hubs, blob storage, and IoT hubs can collect data on the ingestion side, while the results are distributed after analysis via alerts and notifications, dynamic dashboards, data warehousing, and storage or archival. The fan-out of data to different services is itself a value addition, but the ability to transform events into processed events also opens up more possibilities for downstream usage, including reporting and visualizations.
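
At its simplest, the fan-out amounts to dispatching each processed event to every registered downstream handler. The sink functions below are hypothetical stand-ins for alerting, dashboarding, and warehousing services.

# Hedged sketch of fan-out: one processed event dispatched to several sinks.
def alerting_sink(event):      print("alert check:", event)
def dashboard_sink(event):     print("dashboard update:", event)
def warehouse_sink(event):     print("warehouse append:", event)

downstream = [alerting_sink, dashboard_sink, warehouse_sink]

def fan_out(event):
    for sink in downstream:    # each sink sees the same processed event
        sink(event)

fan_out({"vehicle_id": "veh-42", "geofence": "zone-7", "speed": 57})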

One of the main considerations for data pipelines with ingestion capabilities at IoT scale is the business continuity and disaster recovery scenario, which is addressed with replication. A broker stores messages in a topic, which is a logical group of one or more partitions. The broker guarantees message ordering within a partition and provides a persistent log-based storage layer whose append-only logs inherently guarantee message ordering. By deploying brokers across more than one cluster, geo-replication is introduced to address disaster recovery strategies.

Each partition is associated with an append-only log, so messages appended to the log are ordered by time and are described by important offsets: the first available offset in the log, the high watermark, which is the offset of the last message that was successfully written and committed to the log by the brokers, and the log end offset, where the last message was written, which can be ahead of the high watermark. When a broker goes down, durability and availability must be addressed with replicas. Each partition has multiple replicas that are evenly distributed across brokers, but one replica is elected as the leader and the rest are followers. The leader is where all produce and consume requests go, and followers replicate the writes from the leader.
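
The relationship between these offsets can be modelled with a toy append-only log. The class below is a simplification for illustration, not any broker's actual implementation.

# Hedged toy model of a partition log and its offsets.
class PartitionLog:
    def __init__(self):
        self.messages = []         # append-only
        self.log_start_offset = 0  # first available offset
        self.high_watermark = 0    # offset just past the last committed message

    def append(self, message):
        self.messages.append(message)      # advances the log end offset

    @property
    def log_end_offset(self):
        return self.log_start_offset + len(self.messages)

    def commit(self, offset):
        # The high watermark only moves once replicas have caught up.
        self.high_watermark = min(offset, self.log_end_offset)

log = PartitionLog()
for m in ["m0", "m1", "m2"]:
    log.append(m)
log.commit(2)                                    # m2 written but not yet committed
print(log.log_end_offset, log.high_watermark)    # 3 2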

A pull-based replication model is the norm for brokers, where dedicated fetcher threads periodically pull data between broker pairs. Each replica is a byte-for-byte copy of the others, which makes this replication offset preserving. The number of replicas is determined by the replication factor. The leader maintains a list called the in-sync replica (ISR) set, and a message is committed by the leader only after all replicas in the ISR set have replicated it. Global availability demands that brokers be deployed in different deployment modes; two popular modes are 1) a single cluster that stretches over multiple data centers or availability zones and 2) a federation of connected clusters.
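
A rough sketch of that bookkeeping: the leader tracks each in-sync follower's fetch position and only advances the committed offset to the minimum position across the ISR. The replica names and positions below are illustrative.

# Hedged sketch: the leader commits only up to the slowest in-sync replica.
def committed_offset(leader_end_offset, isr_positions):
    # isr_positions maps replica id -> next offset that replica will fetch
    return min([leader_end_offset] + list(isr_positions.values()))

isr = {"broker-2": 120, "broker-3": 118}   # follower fetch positions
print(committed_offset(121, isr))          # 118: the high watermark lags the leader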

Some replicas are asynchronous by nature and are called observers. They do not participate in the in-sync replica set or become a partition leader by default, but they can restore availability to the partition and allow producers to produce data again. Connected clusters might span distinct geographic regions and usually involve linking between the clusters. Linking is an extension of the replica fetching protocol that is inherent to a single cluster, and a link contains all the connection information necessary for the destination cluster to connect to the source cluster. A topic on the destination cluster that fetches data over the cluster link is called a mirror topic. A mirror topic may have the same or a prefixed name, and it carries synced configurations, a byte-for-byte copy of the data, consumer offsets, and access control lists.

Managed services built on top of these brokers deliver more value to the business than standalone broker deployments because cluster sizing, over-provisioning, failover design, and infrastructure management are automated, and they typically raise availability to a 99.99% uptime service-level agreement. Often they involve a replicator, which is a worker that executes a connector and its tasks to coordinate data streaming between source and destination broker clusters. A replicator has a source consumer that consumes the records from the source cluster and then passes these records to the Connect framework, whose built-in producer then produces them to the destination cluster. It might also have dedicated clients to propagate overall metadata updates to the destination cluster.
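
The core of that consume-from-source, produce-to-destination loop can be sketched as below, assuming a Kafka-compatible broker and the confluent_kafka Python client; the bootstrap addresses, group ID, and topic name are placeholders, and a real replicator also handles offsets, metadata, and error reporting.

# Hedged sketch of a replicator-style loop between two clusters.
from confluent_kafka import Consumer, Producer

source = Consumer({
    "bootstrap.servers": "source-cluster:9092",
    "group.id": "replicator-sketch",
    "auto.offset.reset": "earliest",
})
destination = Producer({"bootstrap.servers": "destination-cluster:9092"})

source.subscribe(["vehicle-telemetry"])
try:
    while True:
        msg = source.poll(1.0)
        if msg is None or msg.error():
            continue
        # Re-produce the record as-is on the destination cluster.
        destination.produce(msg.topic(), key=msg.key(), value=msg.value())
        destination.poll(0)          # serve delivery callbacks
finally:
    destination.flush()
    source.close()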

In a geographically distributed replication for business continuity and disaster recovery, the primary region has the active cluster that producers and consumers write to and read from, while the secondary region has read-only clusters with replicated topics for read-only consumers. It is also possible to configure two clusters to replicate to each other so that both have their own sets of producers and consumers, but even in that case the replicated topic on either side will only have read-only consumers. Fan-in and fan-out are other possible arrangements for such replication.

Disaster recovery almost always occurs with a failover of the primary active cluster to a secondary cluster. When disaster strikes, the maximum amount of data that can be lost after a recovery, usually measured in terms of time, is minimized by virtue of this replication; this is referred to as the Recovery Point Objective (RPO). The targeted duration until the service level is restored to the expectations of the business process is referred to as the Recovery Time Objective (RTO). The recovery helps the system to be brought back to operational mode. Cost, business requirements, use cases, and regulatory and compliance requirements mandate this replication, and the considerations made for the data in motion during replication often stand out as best practices for the overall solution.
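
As a back-of-the-envelope illustration, the worst-case RPO of an asynchronously replicated cluster is bounded by the replication lag at the moment of failover, and the RTO by the time to detect the outage plus the time to redirect producers and consumers. The numbers below are purely illustrative assumptions.

# Hedged back-of-the-envelope calculation of RPO and RTO (illustrative numbers).
replication_lag_seconds = 30        # data not yet replicated when disaster hits
detection_seconds = 120             # time to detect the outage
failover_seconds = 300              # time to redirect producers and consumers

rpo_seconds = replication_lag_seconds
rto_seconds = detection_seconds + failover_seconds
print(f"worst-case RPO ~ {rpo_seconds}s, RTO ~ {rto_seconds}s")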

 
