Today we discuss another application for Splunk. I want to spend the next few days reviewing the implementation of some core components of Splunk, but for now I want to talk about API monitoring. Splunk exposes a REST API for its features, and those endpoints are called both from the UI and by the SDKs. These API calls are logged in the web access log. The same APIs can also be called from mobile applications on Android devices and iPhones/iPads. The purpose of this application is to derive statistics from API calls, such as the percentage of calls that encountered an error, the number of internal server errors, and the number and distribution of timeouts. With those statistics gathered, we can set up alerts that fire when thresholds are exceeded. Essentially, this is along the same lines as the Mashery API management solution. Where the APIs monitored by Mashery help study traffic from all devices to the API providers, here we are talking about traffic to a Splunk instance from enterprise users. Mobile apps are not currently available for Splunk, but when they are, this kind of application would help troubleshoot them as well, because it would show the differences between those devices and other callers.
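As a sketch of the statistics-gathering step, here is a minimal Python example that tallies response codes for REST endpoint hits from access log lines. The combined-log-style format and the `/services/` URI prefix are assumptions for illustration; the actual fields logged depend on the Splunk deployment.

```python
import re
from collections import Counter

# Assumed combined-log-style line layout; field positions in a real
# web access log may differ.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) \S+" (?P<status>\d{3}) \S+'
)

def api_stats(lines):
    """Tally response codes for REST API calls and derive error rates."""
    codes = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        # Only count hits on REST endpoints (assumed to live under /services/).
        if not m or not m.group("uri").startswith("/services/"):
            continue
        codes[int(m.group("status"))] += 1
    total = sum(codes.values()) or 1
    errors = sum(n for c, n in codes.items() if c >= 400)
    return {
        "total": sum(codes.values()),
        "error_pct": 100.0 * errors / total,     # % of calls with an error
        "internal_errors": codes[500],           # HTTP 500 count
        "timeouts": codes[504] + codes[408],     # gateway/request timeouts
    }
```

An alerting layer would then just compare `error_pct` or `timeouts` against configured thresholds on each run.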
Mashery works by interposing an HTTP/HTTPS proxy. Here, by contrast, we rely on the logs directly, assuming that all the data we need is available in them. The difference between simply searching the logs and running this application is that the application provides continuous visualization and fires alerts.
This kind of application also differs from a REST modular input: the latter indexes the responses from the APIs, whereas here we are not interested in the response bodies but in the response codes. At the same time, we are interested in the user-agent and other such header information to enrich our statistics, so long as they are logged.
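To illustrate that enrichment, a rough sketch of grouping error counts by a coarse client class derived from the User-Agent header. The trailing quoted referer/user-agent fields and the substrings used to classify clients (e.g. `splunk-sdk`) are assumptions; whether these headers appear in the log at all depends on configuration.

```python
import re
from collections import defaultdict

# Assumed combined-log-format tail: status, bytes, quoted referer,
# then quoted user-agent.
UA_PATTERN = re.compile(r'" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"')

def errors_by_client(lines):
    """Group call and error counts by a coarse client class from User-Agent."""
    stats = defaultdict(lambda: {"calls": 0, "errors": 0})
    for line in lines:
        m = UA_PATTERN.search(line)
        if not m:
            continue
        agent = m.group("agent")
        if "Android" in agent or "iPhone" in agent or "iPad" in agent:
            client = "mobile"
        elif "splunk-sdk" in agent.lower():   # hypothetical SDK UA marker
            client = "sdk"
        else:
            client = "browser"
        stats[client]["calls"] += 1
        if int(m.group("status")) >= 400:
            stats[client]["errors"] += 1
    return dict(stats)
```

This is the kind of breakdown that would expose differences between mobile callers and other clients once mobile apps exist.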
Caching is a service available in Mashery or from products such as AppFabric, but it is more likely a candidate feature for Splunk itself than for this application, given the application's type of input: caching works well when requests and responses are intercepted, whereas this application is expected to use the log as its input.