Saturday, January 27, 2018

One of the trends in operational practice is to rely on tools that set thresholds and raise alerts. This translates to incident response instead of active and strenuous polling. As part of the response, we search the logs. Most of these searches are interactive command line executions, but each step may be time consuming due to the volume of the logs. One way to mitigate this is to run a sequential batch script that repeats the commands on smaller chunks of data. This, however, means we lose the aggregations unless we store intermediary data. Fortunately this is possible using files. However, most log archive systems are read-only, so intermediary files cannot be kept there. This also restricts parallelizing tasks with libraries such as Celery, because those require network access to a message broker and the only access allowed is ssh. One way to overcome this is to scatter and gather data across multiple ssh sessions. This is easier to automate because the controller does not have to be local to the log server.
Another option is to leave the log server as-is and draw all the data into a log index. Then the search and reporting stacks can use the index. Since the index is designed to grow to arbitrary size, we can put all the logs in it. Also, the search stack enables as many search sessions as necessary to perform the task. These may even be made available via API, SDK and UI, which enables applications to leverage parallelism as appropriate. For example, the SDK can be used with task-parallel libraries such as Celery so that the same processing can be done in batches of partitioned data. The data can be partitioned along the historical timeline or by other attributes. The log index server also helps the application preserve search artifacts so that they can be used later or in other searches. The reporting stack sits over the search stack because the input to the reporting dashboard is the result of search queries. These search queries may be optimized, parallelized or parameterized so that they have near real-time performance. The presence of separate search and reporting stacks in new log indexing products indicates that these are distinct areas of concern which cannot be mixed with the conventional log readers into a monolithic console session.
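The scatter-and-gather approach from the first paragraph, as an added example that is not part of the original post: the time range is partitioned into windows, each window is searched independently (here an in-process function stands in for the command run over one ssh session; that substitution is an assumption of the example), and the partial aggregates are merged by the controller, so no intermediary files need to be written on the read-only log server. The log lines are constructed sample data.

```python
"""Scatter-gather over partitioned log data while preserving aggregations.

Each window is searched on its own (the stand-in for one ssh session) and
returns a partial aggregate; the controller merges the partials, so the
aggregation survives chunking without storing intermediary files.
"""
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta

# Constructed sample log lines (syslog-like); not from a real system.
SAMPLE_LOGS = [
    "2018-01-27T10:00:05 sshd[1]: Failed password for root from 10.0.0.5",
    "2018-01-27T10:02:11 sshd[1]: Failed password for admin from 10.0.0.5",
    "2018-01-27T10:40:09 sshd[1]: Accepted publickey for ops from 10.0.0.9",
    "2018-01-27T11:15:44 sshd[1]: Failed password for root from 10.0.0.7",
    "2018-01-27T11:59:59 sshd[1]: Failed password for root from 10.0.0.5",
    "2018-01-27T12:30:00 sshd[1]: Failed password for guest from 10.0.0.7",
]

def partition(start, end, step):
    """Split [start, end) into consecutive half-open windows of width `step`.
    Half-open windows ensure an event on a boundary is counted exactly once."""
    windows = []
    while start < end:
        windows.append((start, min(start + step, end)))
        start += step
    return windows

def search_chunk(window, lines=SAMPLE_LOGS):
    """Partial aggregate for one window: failed logins per source address.
    In a real deployment this body is the command executed over one ssh session."""
    lo, hi = window
    counts = Counter()
    for line in lines:
        ts = datetime.fromisoformat(line.split(" ", 1)[0])
        if lo <= ts < hi and "Failed password" in line:
            counts[line.rsplit(" ", 1)[1]] += 1   # last token is the source IP
    return counts

def scatter_gather(start, end, step, workers=4):
    """Fan out one search per window, then merge the partial Counters."""
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(search_chunk, partition(start, end, step)):
            total.update(partial)
    return total

def example_run():
    """Three one-hour windows over the constructed sample."""
    t0 = datetime(2018, 1, 27, 10)
    return scatter_gather(t0, t0 + timedelta(hours=3), timedelta(hours=1))

if __name__ == "__main__":
    print(example_run().most_common())  # [('10.0.0.5', 3), ('10.0.0.7', 2)]
```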
