A previous article explained our approach to UAV swarm video sensing; this article explains the differences between stream and batch processing of the input. Most video sensing applications use one form or the other, depending on how intensive the analytics must be or how quickly the images must be studied, for example to track an object. Our approach is a platform that spans use cases, combining deep data analysis with resource scheduling to beat the usual trade-offs between latency and flexibility. It also brings the benefits of improved data quality, offline features, minimal supervision, greater efficiency, and simplified processes, along with the ability to query with both structured query operators and natural-language queries.
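To make the dual querying modes above concrete, the following is a minimal sketch. The record schema, the sample data, and the idea that a natural-language question is translated into the same structured filter are all illustrative assumptions, not the platform's actual API.

```python
# Illustrative sketch: querying extracted detection metadata with a
# structured operator; a natural-language query such as "how many red
# cars?" would be translated into the same filter. The Detection schema
# and sample records are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # e.g. "car"
    color: str       # e.g. "red"
    lat: float       # latitude of the detection
    lon: float       # longitude of the detection
    frame_ts: float  # capture time in seconds

detections = [
    Detection("car", "red", 37.77, -122.41, 10.0),
    Detection("car", "blue", 37.78, -122.40, 10.5),
    Detection("roof", "gray", 37.77, -122.42, 11.0),
]

# Structured query operator, equivalent to:
# SELECT COUNT(*) FROM detections WHERE label='car' AND color='red'
red_cars = [d for d in detections if d.label == "car" and d.color == "red"]
print(len(red_cars))  # → 1
```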
This is not to say that stream processing must be avoided, but rather that analyzing each and every image as a datapoint can be avoided with little loss of fidelity in the responses to queries from video sensing applications. Stream processing manages an endless flow of data while swiftly identifying and retaining the most important information, with security and scalability; use cases that strictly require it are out of scope, though we do extend the boundary in that direction. We believe the gains from batch processing characteristics, such as being less time-critical, more fault-tolerant, simpler to implement and extend, and flexible in how batches are defined, reduce the Total Cost of Ownership in a way that frees the video sensing application from infrastructure concerns.
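The "flexibility in defining batches" mentioned above can be sketched as follows. This is a hedged illustration, not the platform's implementation: frames are represented only by their capture timestamps, and the windowing policy (fixed time windows) is one assumed choice among many.

```python
# Illustrative sketch: grouping an incoming frame sequence into batches
# by a fixed time window, instead of processing every frame on arrival.
# Representing frames as bare timestamps is an assumption for brevity.

def window_batches(frame_timestamps, window_seconds):
    """Group monotonically increasing timestamps into fixed time windows."""
    batches, current, window_start = [], [], None
    for ts in frame_timestamps:
        if window_start is None:
            window_start = ts
        if ts - window_start >= window_seconds:
            batches.append(current)
            current, window_start = [], ts
        current.append(ts)
    if current:
        batches.append(current)
    return batches

# Frames captured at ~2.5 fps, batched into one-second windows:
timestamps = [0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4]
print(window_batches(timestamps, 1.0))
# → [[0.0, 0.4, 0.8], [1.2, 1.6, 2.0], [2.4]]
```

A downstream analyzer can then sample one representative frame per batch rather than scoring every frame, which is the fidelity-for-cost trade discussed above.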
In this regard, we summarize the comparisons in the following tables:
Use cases                                              | Latency from Streaming | Latency from Batch
Occurrences of object                                  |                        |
Object description such as circular roof               |                        |
Distance between objects                               |                        |
Location information                                   |                        |
Tracking of objects                                    |                        | N/A
Color-based search of objects such as red car          |                        |
Shape-based search such as triangular parking/building |                        |
Time lapse of a location                               | N/A                    |
Use cases                                              | Cost from Streaming (tokens, API calls, size of digital footprint) | Cost from Batch (tokens, API calls, size of digital footprint)
Occurrences of object                                  |                        |
Object description such as circular roof               |                        |
Distance between objects                               |                        |
Location information                                   |                        |
Tracking of objects                                    |                        |
Color-based search of objects such as red car          |                        |
Shape-based search such as triangular parking/building |                        |
Time lapse of a location                               |                        |
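The cost comparison above, in terms of API calls, can be sketched with simple arithmetic. The cost model below is an assumption for illustration: streaming is taken to analyze every frame, while batch analyzes a small number of sampled representatives per batch. The function names and parameters are hypothetical, not part of the platform.

```python
# Illustrative cost model (an assumption, not measured data):
# streaming pays one analysis call per frame; batching pays only for
# sampled representative frames within each batch.

def stream_cost(num_frames: int, cost_per_frame: int = 1) -> int:
    """Streaming analyzes every frame as it arrives."""
    return num_frames * cost_per_frame

def batch_cost(num_frames: int, batch_size: int,
               samples_per_batch: int = 1, cost_per_frame: int = 1) -> int:
    """Batching groups frames and analyzes only sampled representatives."""
    num_batches = -(-num_frames // batch_size)  # ceiling division
    return num_batches * samples_per_batch * cost_per_frame

# 30 fps for one minute = 1800 frames; batches of 30 frames (one per second):
print(stream_cost(1800))     # → 1800 analysis calls
print(batch_cost(1800, 30))  # → 60 analysis calls
```

Under these assumptions, batch processing cuts per-frame analysis calls by the batch size, at the cost of coarser temporal resolution, which is why rows like tracking favor streaming while the remaining use cases tolerate batching.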