Thursday, June 25, 2020

Use of cache by stream store clients (continued from previous article)


With the design mentioned yesterday, the stitching of segment ranges can be performed in the cache instead of the stream store. Even a message queue broker or a web-accessible storage can benefit from the conflation of streams.


As long as the historical readers follow the same order as the segmentRanges in the source stream, the events are guaranteed to be written in the same order in the destination stream. This calls for the historical readers to send their payloads to a blocking queue, and for the writer to follow the same order of writes as the order in which the entries appear in the queue. The blocking queue backed by the cache becomes a serializer of the different segmentRanges to the destination stream, and the writes can be very fast when all the events for a segmentRange are available in memory.
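
The following is a minimal sketch of that serializer in Java. The SegmentRange type, the queue capacity, and the destination writer are hypothetical stand-ins chosen for illustration; they are not taken from any particular stream store client library.

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SegmentRangeSerializer {

    // A batch of events read by a historical reader for one segment range.
    static class SegmentRange {
        final List<String> events;
        SegmentRange(List<String> events) { this.events = events; }
    }

    // Sentinel telling the writer thread that no more ranges will arrive.
    private static final SegmentRange POISON = new SegmentRange(List.of());

    // The cache-backed blocking queue: readers enqueue segment ranges in
    // source-stream order, so the single writer drains them in that order.
    private final BlockingQueue<SegmentRange> queue = new ArrayBlockingQueue<>(64);

    // Called by each historical reader, in the order the segmentRanges
    // appear in the source stream.
    public void enqueue(SegmentRange range) throws InterruptedException {
        queue.put(range);
    }

    public void finish() throws InterruptedException {
        queue.put(POISON);
    }

    // The single writer: the order of writes equals the order of the queue.
    public void drainTo(java.util.function.Consumer<String> destinationWriter)
            throws InterruptedException {
        while (true) {
            SegmentRange range = queue.take();
            if (range == POISON) break;
            // All events for the range are already in memory, so this
            // inner loop is a fast, purely sequential append.
            range.events.forEach(destinationWriter);
        }
    }
}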


The stitching of streams into a single stream at the stream store differs considerably from the same logic in the clients. The stream store is not only the best place to do so but also the authoritative one. The clients can conflate a stream from a source to a destination regardless of whether the destination is the same store from which the stream was read. The stream store, on the other hand, can maintain the integrity of the streams as they are stitched together, and the resulting stream is guaranteed to be the true one as compared to the result of any other actor.


The conflation of streams at the stream store is an efficient operation because the events in the original streams become part of the conflated stream when the original stream does not need to retain its identity. The events are not copied one by one in this case. The entire segmentRange of each stream to be conflated simply has its metadata replaced with what corresponds to the new stream. The rewrite of metadata across all the participating streams in the conflation is then a low-cost, very fast operation.
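
A small sketch may make the metadata rewrite concrete. The StreamMetadata and SegmentRangeRef types below are hypothetical bookkeeping that a real stream store would hold internally; the point is that conflation touches only the range references, never the events themselves.

import java.util.ArrayList;
import java.util.List;

public class ConflationByMetadata {

    // Points at a stored range of events; the events never move during
    // conflation.
    record SegmentRangeRef(String segmentId, long beginOffset, long endOffset) {}

    static class StreamMetadata {
        final String streamName;
        final List<SegmentRangeRef> ranges = new ArrayList<>();
        StreamMetadata(String streamName) { this.streamName = streamName; }
    }

    // The destination adopts the source's segment-range references, so the
    // cost is proportional to the number of ranges, not the number of events.
    static void conflate(StreamMetadata destination, StreamMetadata source) {
        destination.ranges.addAll(source.ranges);
        source.ranges.clear(); // the source gives up its identity
    }
}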


The stream store does not need anything more complicated than a copyStream operation, the result of which can then be appended to a stream builder. Each stream already knows how to append an event at its tail. If a collection of events cannot be appended directly, a copyStream operation will enable the events to be duplicated, and since the copy does not have an identity, it can be stitched into the builder stream by rewriting the metadata. This can also be done on an event-by-event basis, but that is already exposed via the reader and writer interfaces. Instead, the copyStream operation on the stream manager can handle a collection of events at a time.
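
One way the stream manager surface might look with such an operation is sketched below. The interface name, method signatures, and the CopiedEvents handle are assumptions for illustration, not part of any existing client API.

public interface ConflatingStreamManager {

    // Duplicates all events of the source stream as an anonymous,
    // identity-free collection that a builder stream can later absorb
    // by rewriting metadata.
    CopiedEvents copyStream(String scope, String sourceStream);

    // Appends a previously copied collection to the stream being built,
    // preserving the order of the copied events.
    void appendToBuilder(String builderStream, CopiedEvents events);

    // Opaque handle for a copied collection of events.
    interface CopiedEvents {}
}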
