Having described the collection of metrics from streams that carry only primitive information about events, let us now see how the variety and customization of metrics can be expanded. Here, the field extraction performed when parsing historical events provides additional datapoints that are helpful in generating metrics. For example, metrics can now be based on field names and their values as they occur throughout the stream, giving information on the priority and severity of certain events. The summary information about the stream then includes metrics on criteria pertaining to the events themselves.
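As a rough sketch of this idea, the snippet below tallies the values that a named field (here `severity`, an illustrative choice) takes across the events of a stream. The `key=value` payload format and the parsing logic are assumptions made for the example, not the store's actual parser.

```java
import java.util.*;
import java.util.stream.*;

public class FieldMetrics {
    // Parse a simple "key=value key=value ..." event payload into fields.
    static Map<String, String> parseFields(String event) {
        return Arrays.stream(event.split("\\s+"))
                .map(token -> token.split("=", 2))
                .filter(kv -> kv.length == 2)
                .collect(Collectors.toMap(kv -> kv[0], kv -> kv[1], (a, b) -> b));
    }

    // Count how often each value of the given field occurs across the events.
    static Map<String, Long> metricByField(List<String> events, String field) {
        return events.stream()
                .map(FieldMetrics::parseFields)
                .filter(fields -> fields.containsKey(field))
                .collect(Collectors.groupingBy(fields -> fields.get(field), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> events = List.of(
                "ts=100 severity=ERROR msg=disk_full",
                "ts=101 severity=INFO msg=started",
                "ts=102 severity=ERROR msg=timeout");
        System.out.println(metricByField(events, "severity")); // {ERROR=2, INFO=1}
    }
}
```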
The features mentioned earlier are independent of each other, which allows them to be implemented in stages or evaluated for priority. The reference to metadata is only a suggestion that it pertains to a given stream and the events in that stream. The container for such interpreted data might instead be implemented as a global stream. This gives the ability to collect summary information across all streams in a scope, or to dedicate one stream to all the operational data of the stream store.
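For illustration only, this placement choice could be captured by a naming convention such as the hypothetical one below, where interpreted data lands either in a per-stream companion or in a single operational stream for the scope; the stream names and scheme are assumptions for the sketch, not a store convention.

```java
public class MetadataRouting {
    enum Placement { PER_STREAM, GLOBAL_PER_SCOPE }

    // Resolve the name of the stream that receives summary/metric events.
    static String metadataStreamFor(String scope, String dataStream, Placement placement) {
        switch (placement) {
            case PER_STREAM:
                return scope + "/" + dataStream + "-metadata";
            case GLOBAL_PER_SCOPE:
            default:
                return scope + "/_operational";
        }
    }

    public static void main(String[] args) {
        System.out.println(metadataStreamFor("analytics", "orders", Placement.PER_STREAM));
        // analytics/orders-metadata
        System.out.println(metadataStreamFor("analytics", "orders", Placement.GLOBAL_PER_SCOPE));
        // analytics/_operational
    }
}
```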
There are a few cases where the metadata might be sparse or absent, such as encrypted data and binary data. The bulk of the use cases for the stream store, however, deal with textual data, so the benefits of the features mentioned above are not marginal.
Client applications can do without such metadata support from the stream store, but it is cumbersome for them to inject intermittent events carrying the summary data they would like to publish for their events. These special events can be skipped during data reads from the stream, although, as with any events, they still need to be iterated over.
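A minimal sketch of that client-side workaround, assuming a reserved prefix marks the summary events (the prefix and the payload shapes are made up for this example):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SummaryEventFilter {
    // Hypothetical convention: summary events carry a reserved prefix
    // so readers interested only in data events can recognize and skip them.
    static final String SUMMARY_PREFIX = "__summary__:";

    static String wrapSummary(String summaryJson) {
        return SUMMARY_PREFIX + summaryJson;
    }

    static boolean isSummary(String event) {
        return event.startsWith(SUMMARY_PREFIX);
    }

    // Readers still iterate every event; they simply discard the summaries.
    static List<String> dataEventsOnly(List<String> allEvents) {
        return allEvents.stream().filter(e -> !isSummary(e)).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> stream = List.of(
                "order placed id=1",
                wrapSummary("{\"errors\":0,\"count\":1}"),
                "order placed id=2");
        System.out.println(dataEventsOnly(stream)); // [order placed id=1, order placed id=2]
    }
}
```

Note that the readers still pay the cost of iterating past the summary events, which is the inconvenience a store-managed alternative avoids.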
Clients would find it inconvenient to write different types of events to the same stream unless they use different payloads within the same envelope or data type. It is easier for clients to have readers and writers that handle a single type of event per stream. This favors a model where clients store processing-related information in a dedicated stream. The stream store, however, can create internal streams specifically for the purpose of metadata. This latter approach ties the setup and teardown of the internal streams to the data stream and becomes a managed approach.
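A minimal sketch of the managed approach, assuming a store-side facade that provisions and removes an internal metadata stream alongside each data stream (the `_internal/` naming and the facade itself are illustrative, not an actual store API):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch: a store-side facade that pairs every data stream
// with an internal metadata stream and manages both together.
public class ManagedStreams {
    private final Set<String> streams = new LinkedHashSet<>();

    private static String internalNameFor(String dataStream) {
        return "_internal/" + dataStream + "-metadata";
    }

    // Creating the data stream also provisions its internal metadata stream.
    public void createStream(String dataStream) {
        streams.add(dataStream);
        streams.add(internalNameFor(dataStream));
    }

    // Tearing down the data stream removes the metadata stream with it.
    public void deleteStream(String dataStream) {
        streams.remove(dataStream);
        streams.remove(internalNameFor(dataStream));
    }

    public static void main(String[] args) {
        ManagedStreams store = new ManagedStreams();
        store.createStream("orders");
        System.out.println(store.streams); // [orders, _internal/orders-metadata]
        store.deleteStream("orders");
        System.out.println(store.streams); // []
    }
}
```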