Introspection analytics
A Flink job might be dedicated to performing periodic analysis of introspection data or to collecting information from sensors. The job can also consolidate data from other sources that are internal to the stream store and hidden from users.
Batching and statistics are some of the chores the analytics job can help with. Simple aggregate queries per time window, such as sum(), min(), and max(), can produce more meaningful events to persist in the stream store.
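As a minimal sketch of such a windowed aggregation, assuming a hypothetical Metric event type and an inline source (the real introspection schema and stream-store connector are not specified here), a Flink DataStream job could compute sum, min, and max per component per minute:

    import org.apache.flink.api.common.functions.AggregateFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class IntrospectionStats {

        // Hypothetical introspection event; the real schema is an assumption here.
        public static class Metric {
            public String component;
            public double value;
            public Metric() {}
            public Metric(String component, double value) {
                this.component = component;
                this.value = value;
            }
        }

        // Per-window summary to be persisted back to the stream store.
        public static class Summary {
            public double sum;
            public double min = Double.MAX_VALUE;
            public double max = -Double.MAX_VALUE;
            @Override public String toString() {
                return "sum=" + sum + " min=" + min + " max=" + max;
            }
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // A static collection keeps the sketch self-contained; in practice the
            // source would be a stream-store connector.
            DataStream<Metric> metrics = env.fromElements(
                    new Metric("segment-store", 3.0),
                    new Metric("segment-store", 7.0),
                    new Metric("controller", 1.0));

            metrics.keyBy(m -> m.component)
                   .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                   .aggregate(new AggregateFunction<Metric, Summary, Summary>() {
                       @Override public Summary createAccumulator() { return new Summary(); }
                       @Override public Summary add(Metric m, Summary acc) {
                           acc.sum += m.value;
                           acc.min = Math.min(acc.min, m.value);
                           acc.max = Math.max(acc.max, m.value);
                           return acc;
                       }
                       @Override public Summary getResult(Summary acc) { return acc; }
                       @Override public Summary merge(Summary a, Summary b) {
                           a.sum += b.sum;
                           a.min = Math.min(a.min, b.min);
                           a.max = Math.max(a.max, b.max);
                           return a;
                       }
                   })
                   .print(); // stand-in for writing summaries back to the stream store

            env.execute("introspection-window-stats");
        }
    }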
The Flink job may have network connectivity to read events from external data stores, including events published by sensors. In the current version of the system, those events usually make their way to the stream store regardless of whether an introspection store or an analytics job exists. In some cases, it is helpful for the analytics jobs to glance at the backlog in the stream store and its rate of accumulation, as well as the overall throughput and latency of all the components taken together. Calculating and persisting such diagnostic events is helpful for trend analysis and later investigations.
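As one illustration of such a diagnostic, assuming a hypothetical BacklogReading event that periodically reports the pending bytes behind a stream, a keyed process function can derive the rate of accumulation from consecutive readings:

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    // Emits a rate-of-accumulation diagnostic for each pair of consecutive
    // readings of the same stream.
    public class BacklogRate
            extends KeyedProcessFunction<String, BacklogRate.BacklogReading, String> {

        // Hypothetical periodic reading; not part of any published schema.
        public static class BacklogReading {
            public String streamName;
            public long timestampMillis;
            public long backlogBytes;
            public BacklogReading() {}
        }

        private transient ValueState<BacklogReading> previous;

        @Override
        public void open(Configuration parameters) {
            previous = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("previous-reading", BacklogReading.class));
        }

        @Override
        public void processElement(BacklogReading current, Context ctx, Collector<String> out)
                throws Exception {
            BacklogReading prev = previous.value();
            if (prev != null && current.timestampMillis > prev.timestampMillis) {
                double bytesPerSecond = (current.backlogBytes - prev.backlogBytes) * 1000.0
                        / (current.timestampMillis - prev.timestampMillis);
                out.collect(ctx.getCurrentKey() + " accumulating at " + bytesPerSecond + " B/s");
            }
            previous.update(current);
        }
    }

Wired up with something like readings.keyBy(r -> r.streamName).process(new BacklogRate()), this would emit one diagnostic event per reading interval, which can then be persisted for trend analysis.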
The use of a Flink job dedicated to the introspection store immensely improves the querying capability. Almost all the querying techniques outlined in Stream Processing with Apache Flink (O'Reilly Media) can be applied for this purpose.
Distributed collection agents
As with any store, not just the introspection store, data can arrive from many different sources. A collection agent for each type of source makes it convenient to transfer the data to the store.
The collection agents do not themselves need to be monitored. The data they send may be lossy, but whatever is sent should arrive at the store. The store is considered a singleton local instance; it may not even reside in the same system as the one it serves. The rest of the store may be global and shared, but the data transferred from a collection agent does not have to go directly to the global shared storage. If it helps for all collection agents to share the same local destination, the introspection store can be kept outside the global storage. In either case, the streams are managed centrally by the stream store, and the storage refers to tier-2 persistence.
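Under these assumptions a collection agent can be little more than a best-effort sender. The sketch below assumes a hypothetical local HTTP endpoint for the introspection store; the point is the unmonitored, fire-and-forget delivery, not the transport:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    // Minimal fire-and-forget agent: delivery is best effort, so failures are
    // logged and dropped rather than retried or monitored.
    public class CollectionAgent {

        // The local introspection endpoint is an assumption for this sketch.
        private static final URI LOCAL_STORE = URI.create("http://localhost:9090/introspection");

        private final HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        public void send(String eventJson) {
            HttpRequest request = HttpRequest.newBuilder(LOCAL_STORE)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                    .build();
            // Asynchronous, lossy send: the agent never blocks its data source.
            client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                  .exceptionally(err -> {
                      System.err.println("dropped event: " + err.getMessage());
                      return null;
                  });
        }
    }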
Distributed destinations
Depending on the mode of deployment, the collection agents can be either lean or bulky. In the latter case, they come configured with their own storage so that events can be batched within the resource restrictions of the site where the agent is deployed. Those batched events can then be pushed periodically to the introspection store. This is particularly useful when certain components of the system do not even share the cluster or host on which the stream store is deployed. The collection agents are usually placed as close to the data source as possible, so designing them to keep going regardless of whether the rest of the system is reachable is prudent, given that certain sites might even be dark. Under such circumstances, the ability to propagate remotely collected events for introspection of the data collection agents, as sketched below, will be very helpful to administrators as and when they like.
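A minimal sketch of such a "bulky" agent follows; the IntrospectionStoreClient interface, the buffer bound, and the flush interval are all assumptions made for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Events accumulate in a bounded local buffer and are flushed to the
    // introspection store on a schedule. The store client is left abstract
    // because its API is an assumption here.
    public class BatchingAgent {

        public interface IntrospectionStoreClient {
            void append(List<String> batch) throws Exception;
        }

        private static final int MAX_BUFFERED = 10_000; // site resource restriction

        private final List<String> buffer = new ArrayList<>();
        private final IntrospectionStoreClient store;
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public BatchingAgent(IntrospectionStoreClient store) {
            this.store = store;
            // Push whatever has accumulated once a minute.
            scheduler.scheduleAtFixedRate(this::flush, 1, 1, TimeUnit.MINUTES);
        }

        public synchronized void record(String event) {
            if (buffer.size() < MAX_BUFFERED) {
                buffer.add(event); // overflow beyond the bound is dropped (lossy)
            }
        }

        private synchronized void flush() {
            if (buffer.isEmpty()) {
                return;
            }
            try {
                store.append(new ArrayList<>(buffer));
                buffer.clear();
            } catch (Exception unreachable) {
                // The site may be dark or the store unreachable; keep the batch
                // and try again at the next interval.
            }
        }
    }

Because the agent swallows delivery failures and retains the batch, it keeps going when the rest of the system is unreachable and catches up once connectivity returns.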