We were discussing event storage, such as for telemetry and introspection.
The form that the logic for telemetry and introspection takes can differ from product to product. Some teams like to write standalone tools, while others prefer to incorporate the logic as diagnostic APIs and runtime queries. This leads to a collection of utilities, services, and programs that evolve on a case-by-case basis. Some planning for the growth of these utilities, such as creating an inventory of components and layers and devising a common framework for querying their health, helps in the long run. This system architecture is a core value proposition of the patent application and is best known to the maker.
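As a minimal sketch of what such a common framework might look like, the Python outline below defines a uniform health query over an inventory of components. The names HealthQueryable, ComponentInventory, and HealthReport are hypothetical types invented here for illustration; they are assumptions, not an implementation from the product.

```python
# Hypothetical sketch: every component implements the same health-query
# contract, and an inventory of components answers one common query.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class HealthReport:
    component: str
    healthy: bool
    details: str = ""


class HealthQueryable(ABC):
    """Contract each component or layer implements for introspection."""

    @abstractmethod
    def query_health(self) -> HealthReport:
        ...


class ComponentInventory:
    """Inventory of components and layers, queried through one framework."""

    def __init__(self) -> None:
        self._components: Dict[str, HealthQueryable] = {}

    def register(self, name: str, component: HealthQueryable) -> None:
        self._components[name] = component

    def query_all(self) -> List[HealthReport]:
        # One call fans out the same health query to every registered component.
        return [c.query_health() for c in self._components.values()]
```

The point of the sketch is only that the querying surface stays uniform as the inventory grows, rather than each utility inventing its own diagnostic entry point.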
Among the implementations available for telemetry and introspection, one of the arguments against making it a part of the product has traditionally been that it is essentially a reporting stack: even if the data is continuous, as in a stream, much of the analysis is read-only and independent from the performance-oriented system that is busy with the product's read-write operations. This calls for an event infrastructure framework, built on a stream store, that can serve as an independent analytical platform, available even as a separate stack that can be standardized as a published plugin for many products. This holds for all logs, metrics, and events that are generated continuously and in real time from the products and are available to be read by these stacks. It follows a push model of health information from the products. The pull model of retrieving data from the products, by contrast, requires expertise from each component; in that case, the logic is part of the product and exposed via the packaging of logic referenced earlier. Both the push and the pull model have their respective usages. The discussion in this document is an argument for improving the pull model with consistency, innovation, and sound system architecture while working well with other products in the ecosystem that relay the information.
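As a minimal sketch of the push model, assuming a hypothetical in-memory StreamStore as a stand-in for a real durable stream store, the product appends logs, metrics, and events as they are generated, while a separate read-only analytics stack consumes them without touching the product's read-write path. The store and the event shapes here are illustrative assumptions, not the API of any particular stream store.

```python
# Hypothetical sketch of the push model: the product appends health events
# to a stream store; an independent analytics stack reads them separately.
import json
import time
from collections import deque
from typing import Deque, Dict, Iterator


class StreamStore:
    """Stand-in for a durable, append-only stream store (illustrative only)."""

    def __init__(self) -> None:
        self._events: Deque[str] = deque()

    def append(self, event: Dict) -> None:
        self._events.append(json.dumps(event))

    def read(self) -> Iterator[Dict]:
        for raw in list(self._events):
            yield json.loads(raw)


# The product pushes logs, metrics, and events as they are generated ...
store = StreamStore()
store.append({"ts": time.time(), "type": "metric", "name": "queue_depth", "value": 42})
store.append({"ts": time.time(), "type": "log", "level": "INFO", "msg": "flush complete"})

# ... while the read-only analytics stack consumes them independently.
for event in store.read():
    print(event["type"], event)
```

In the pull model, by contrast, the analytics side would call component-specific diagnostic APIs such as the health-query framework sketched earlier, which is why that logic ships as part of the product.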