Tuesday, May 13, 2014

In this post we will continue to look at LogParser and Splunk. Consider a case where Splunk can translate searches to SQL queries. This would mean that the operators Splunk enables on the search bar, such as regex, rex, top, stats, multikv, collect etc., all work the same whether the data resides in Splunk indexes or with a SQL data provider. Splunk would seamlessly run search operations on the data without importing it into its indexes. In such cases there would have to be a translation of Splunk search operators into LINQ or SQL depending on where the data resides. A cursory look at the operators suggests that the predicates should be pushed down as close to the data as possible. Splunk itself keeps its index and its data operators as close together as possible; if the operators were to target data on an external source, several copies of the data and several translations would be involved. This is similar to the pipe operation in Splunk. Splunk exposes several semantics that work well in pipe operations, which is very helpful to the IT world for administration as well as for automation. What Splunk provides for analysis is significantly improved by its search language. While we can generalize the same operators to other data sources, the search language works especially well on Splunk data because of field extraction and event indexing.
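To make the translation idea concrete, here is a hypothetical pairing. The sourcetype, field names, table name and the exact SQL are illustrative assumptions, not anything Splunk actually emits:

# Splunk search over web access events
sourcetype=access_combined status>=500 | top limit=5 uri

-- a rough SQL equivalent a translator might produce against a relational copy of the events
SELECT uri, COUNT(*) AS count
FROM access_combined
WHERE status >= 500
GROUP BY uri
ORDER BY count DESC
LIMIT 5

Note how the WHERE clause carries the pushed-down predicate, so the filtering happens at the data source rather than after all the rows have been shipped back.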

LINQ on Splunk


Language-Integrated Query (LINQ) is a set of features introduced in Visual Studio 2008 that extends powerful query capabilities for use by applications. LINQ is widely used in web applications and services in a variety of industries such as finance, retail, telecommunication and insurance. LINQ provides two features. It enables query constructs in the language that have become very popular with web developers. And it abstracts the underlying data source while enabling the same query logic to operate against any data source. The query syntax is available as standard query operators, which are close in spirit to the ANSI SQL standard with its established foundation in data storage and retrieval. While SQL is ubiquitously used for relational data, the standard query operators are not restricted to working against a database. LINQ can be executed over XML, plain text, CSV, databases etc., so long as the standard query operators can see a sequence of objects on which to perform their operations such as selection, conversion, merging etc. This enumeration of the items in a sequence is defined by an interface popularly known as IEnumerable. The behavior demanded from collections implementing this interface is that the items can be extracted via iteration. At the same time, the data providers implement an interface called IQueryable that can take an expression tree and execute it on the data source. Together these interfaces connect an application to the data source with the help of powerful querying capabilities.
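LINQ itself is a C# feature, but the deferred-enumeration idea behind IEnumerable can be sketched in Python with generators; the event data here is made up for illustration:

# A LINQ-style pipeline: each stage consumes a lazy sequence
# (the analog of IEnumerable) and yields another lazy sequence.
events = [
    {'host': 'web01', 'status': 500},
    {'host': 'web02', 'status': 200},
    {'host': 'web01', 'status': 503},
]

# 'Where' and 'Select' analogs: nothing runs until the result is iterated
errors = (e for e in events if e['status'] >= 500)   # Where
hosts = (e['host'] for e in errors)                  # Select

for h in hosts:    # enumeration drives the whole pipeline
    print(h)       # prints web01, web01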
Splunk also provides programmable interfaces to its data source via what are known as Entities and Collections, which provide a common mechanism for a plethora of resources. Most of the SDK makes REST API calls and hence operates on a single resource or a collection. Entity and EntityCollection both derive from the common base class Resource. Note that this pattern exposes the resources per se and not their querying capabilities. When objects are instantiated they are local snapshots of values as read and copied from the server. Changes made on the server are not visible until a refresh is called. Compare this with object-relational mapping (ORM) software that uses LINQ patterns: an ORM can detect that the state of an object has become 'dirty' from local updates and persist it seamlessly, while an observer pattern notifies the application of updates made to the server data.
Splunk does have its own powerful querying capabilities. Splunk search operators are similar to Unix-style piped operators and are also available from the same SDK. These are exposed as searches and jobs, which take a string containing the search query and execute it on the server. The results are then returned to the caller. Searches can be either one-shot or real-time. The one-shot search is a synchronous API that returns a stream. Real-time searches return live events as they are indexed, and this type of search continues to run as events arrive; to view the results from a real-time search, we view the preview results. Some parameters to the Search Job enable this mode. In the normal execution mode, a Search Job is first created with a search string and then the job is polled for completion. A built-in XML parser can then render the stream of results.
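As a rough sketch of the normal execution mode against Splunk's REST endpoints, using the Python requests library; the host, credentials and search string are placeholders, and error handling is omitted:

import time
import requests

BASE = 'https://localhost:8089'      # default Splunk management port
AUTH = ('admin', 'changeme')         # placeholder credentials

# 1. Create a search job
r = requests.post(BASE + '/services/search/jobs',
                  auth=AUTH, verify=False,
                  data={'search': 'search error | head 10',
                        'output_mode': 'json'})
sid = r.json()['sid']

# 2. Poll the job until it completes
while True:
    r = requests.get(BASE + '/services/search/jobs/' + sid,
                     auth=AUTH, verify=False,
                     params={'output_mode': 'json'})
    if r.json()['entry'][0]['content']['isDone']:
        break
    time.sleep(1)

# 3. Fetch the results
r = requests.get(BASE + '/services/search/jobs/' + sid + '/results',
                 auth=AUTH, verify=False,
                 params={'output_mode': 'json'})
print(r.json()['results'])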
Thus we note that there is a mismatch between LINQ and Splunk search in the way items are retrieved from the server to the application. That said, the power of Splunk can be made available through adapters that follow LINQ patterns.

Monday, May 12, 2014

LogParser and Splunk
We will cover some topics common to LogParser and Splunk and see how we can integrate them.
LogParser is a versatile and powerful tool that enables SQL-style searches over XML, CSV and other formats. It provides universal query access to text-based data.
LogParser can take a SQL expression on the command line and output the results that match the query, for example: LogParser "SELECT TOP 10 SourceName, COUNT(*) AS Total FROM System GROUP BY SourceName ORDER BY Total DESC" -i:EVT.
Splunk has rich, Unix-style piped query operators that go beyond merely retrieving results.
The advantage of a common LINQ expression for querying is that it can work with any data source. If we consider LogParser and Splunk as two different query-based data providers, then arguably there is a way to support LINQ-style querying over Splunk.
Let us, on the other hand, look at integrating LogParser and Splunk directly. While one could simply feed the output of one into the other, it is preferable that Splunk take LogParser as a modular input, as sketched below.
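Here is a minimal sketch of that idea as a scripted input that shells out to LogParser and prints each result row for Splunk to index; the query, the LogParser.exe location and the CSV handling are assumptions for illustration:

import subprocess

# Hypothetical scripted input: run LogParser and emit one event per row.
# Assumes LogParser.exe is on PATH; the query is just an example.
QUERY = "SELECT TimeGenerated, SourceName, Message FROM System"

out = subprocess.check_output(
    ['LogParser.exe', QUERY, '-i:EVT', '-o:CSV', '-stats:OFF'],
    universal_newlines=True)

rows = out.strip().splitlines()
for row in rows[1:]:        # skip the CSV header row
    print(row)              # each stdout line becomes an indexed event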
Architecturally, there is considerable difference between LogParser and Splunk.
Splunk-to-SQL connector apps already read relational data. There are plenty of apps that can connect to different SQL sources and run SQL queries.
However, there are very few apps that read LogParser output.
LogParser can read many kinds of data, not just XML or CSV; for example, it can read the Windows event log.
The ability to query over such data is a major benefit contributing to its popularity on Windows systems.
Splunk could translate user queries to SQL and in this way gain access not only to LogParser's capabilities but also to the data sources that are typically used with LogParser, as in the sketch below.
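A toy translation layer might look like the following sketch; it handles only a '<term> | top <field>' pipeline, and the table and column names are assumptions:

def splunk_to_sql(search, table='events', limit=10):
    """Translate a tiny subset of Splunk search syntax to SQL.

    Supports only: '<term> | top <field>'.
    """
    head, _, top = search.partition('|')
    term = head.strip()
    field = top.strip().split()[-1]       # e.g. 'top host' -> 'host'
    return ("SELECT {f}, COUNT(*) AS count FROM {t} "
            "WHERE raw LIKE '%{term}%' "
            "GROUP BY {f} ORDER BY count DESC LIMIT {n}"
            .format(f=field, t=table, term=term, n=limit))

print(splunk_to_sql('error | top host'))
# SELECT host, COUNT(*) AS count FROM events
#   WHERE raw LIKE '%error%' GROUP BY host ORDER BY count DESC LIMIT 10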
class Employee:
    def __init__(self, id, mgrId, name):
        self.id = id
        self.mgrId = mgrId
        self.name = name

# sample data; an employee who is their own manager (mgrId == id) is a root
# E = [Employee(1, 0, 'ABC'), Employee(2, 1, 'DEF'), Employee(3, 1, 'GHI')]
E = [Employee(1, 1, 'ABC'), Employee(4, 4, 'IJK'), Employee(2, 1, 'DEF'),
     Employee(5, 4, 'MNO'), Employee(3, 1, 'GHI')]

def directReportsBFS(E, s):
    # breadth-first traversal from manager s, printing each employee's
    # level in the hierarchy together with their direct reports
    Q = [(s, 0)]                 # queue of (employee, level) pairs
    while len(Q) > 0:
        c, level = Q.pop(0)
        Reports = [e for e in E if e.mgrId == c.id and e.mgrId != e.id]
        print('Name:' + c.name + ' Level:' + str(level) +
              ' Reports:' + '[{0}]'.format(', '.join(r.name for r in Reports)))
        Q.extend((r, level + 1) for r in Reports)   # enqueue all reports

# run the traversal from every root manager (self-managed employee)
[directReportsBFS(E, e) for e in E if e.mgrId == e.id]
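For the sample data above, ABC (id 1) and IJK (id 4) are the roots, and the traversal prints:

Name:ABC Level:0 Reports:[DEF, GHI]
Name:DEF Level:1 Reports:[]
Name:GHI Level:1 Reports:[]
Name:IJK Level:0 Reports:[MNO]
Name:MNO Level:1 Reports:[]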

Sunday, May 11, 2014

In this blog post, I talk about matrix operations from the book on Algorithms and Data Structures. I'm covering these as part of the series of earlier posts from that book. The book says operations on matrices are at the heart of scientific computing, and efficient algorithms for working with matrices are therefore discussed. One important issue that arises in practice is numerical stability: rounding errors introduced during computation can grow and distort the results, and algorithms that allow this are considered numerically unstable.
A matrix is a rectangular array of numbers. The element of a matrix in row i and column j is aij. The set of all m * n matrices with real-valued entries is denoted R^(m*n). The transpose of a matrix A is the matrix A' obtained by exchanging the rows and columns of A.
A vector is a one-dimensional array of numbers. A column vector is an n * 1 matrix; a row vector is a 1 * n matrix. The transpose of a column vector is a row vector.
The unit vector ei is the vector whose ith element is 1 and all other elements are 0; the size of a unit vector is usually clear from the context. A zero matrix is a matrix whose every entry is 0.
Square n * n matrices arise frequently. Several special cases of square matrices are of particular interest.
A diagonal matrix has aij = 0 whenever i != j. An upper triangular matrix is one for which uij = 0 if i > j; all entries below the diagonal are zero. A lower triangular matrix is one for which lij = 0 if i < j; all entries above the diagonal are zero.
Operations on matrices include matrix addition, where cij = aij + bij.
Matrix subtraction is the addition of a negated matrix.
In matrix multiplication, we start with two matrices A and B that are compatible, in the sense that the number of columns of A equals the number of rows of B. We then form C = AB, where cik = Sum for j = 1 to n of (aij * bjk).
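As a quick illustration of the formula, here is a plain-Python matrix multiply (no libraries assumed):

def matmul(A, B):
    # A is m x n, B is n x p; the result C is m x p with
    # c[i][k] = sum over j of a[i][j] * b[j][k]
    n = len(B)
    assert all(len(row) == n for row in A), 'columns of A must equal rows of B'
    p = len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(n))
             for k in range(p)]
            for i in range(len(A))]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]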
Matrix multiplication satisfies identity, associative, and distributive properties.
Identity matrices are identities for matrix multiplication: ImA = AIn = A for any m * n matrix A.
Matrix multiplication is associative:
A(BC) = (AB)C
and it distributes over matrix addition:
A(B + C) = AB + AC
(B + C)D = BD + CD
The inverse of an n x n matrix A is the n x n matrix A^(-1) such that AA^(-1) = A^(-1)A = In, the n x n identity matrix.
If a matrix has an inverse, it is called invertible or non-singular; a matrix with no inverse is singular.
The vectors x1, x2, ..., xn are linearly dependent if there exist coefficients c1, c2, ..., cn, not all of which are zero, such that c1x1 + c2x2 + ... + cnxn = 0; otherwise they are linearly independent.
The ijth minor of an n x n matrix A, for n > 1, is the (n-1) x (n-1) matrix A[ij] obtained by deleting the ith row and the jth column of A. The determinant of an n x n matrix A can be defined recursively in terms of its minors by:
det(A) = a11, if n = 1
det(A) = Sum for j = 1 to n of (-1)^(1+j) * a1j * det(A[1j]), if n > 1
The term (-1)^(i+j) * det(A[ij]) is known as the cofactor of the element aij.
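A direct transcription of the recursion in Python (fine for small matrices; cofactor expansion does O(n!) work):

def det(A):
    # determinant by cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor A[1j]: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24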
The determinant of square matrix A has the following properties:
If any row or any column of A is zero, then det(A) is zero
The det(A) is multiplied by lambda if the entries of any one row or any one column are all multiplied by lambda.
The determinant of A is unchanged if the entries in one row are added to those in another row.
The det(A) equals the det(A transpose)
The det(A) is multiplied by -1 if any two rows or any two columns are exchanged.
For any two n x n matrices A and B, det(AB) = det(A) * det(B).

In this post we look at a pseudo-random number generator algorithm; we will revert to the Queue implementation shortly. Specifically, we discuss the Mersenne Twister algorithm. The method is based on a Mersenne prime number, hence its name; the prime has the value 2 ^ 19937 - 1. This method is commonly used as the default generator in many languages and tools: Python, Ruby, Pascal, PHP, Maple, MATLAB, etc.
The commonly used version of this algorithm is called MT19937, which produces a sequence of 32-bit integers.
The method has a very long period of 2 ^ 19937 - 1, meaning the sequence does not repeat until that many numbers have been drawn. A long period does not by itself guarantee randomness, but a short period is a definite weakness.
It is k-distributed to 32-bit accuracy for every 1 <= k <= 623.
A pseudo-random sequence xi of w-bit integers with period P is said to be k-distributed to v-bit accuracy if, when we form tuples from the leading v bits of k consecutive outputs, each of the 2 ^ kv possible bit combinations occurs the same number of times within a period (discounting the all-zero combination, which occurs once less often). Essentially, the outputs still look uniformly spread even if we take just a portion of their bits: with 32-bit numbers all 32 bits contribute to making values distinct, but if we take only the leading v bits of k consecutive values and every combination still appears equally often, the sequence is k-distributed.
Also, this method passes numerous tests for statistical randomness.
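Python's random module documents its core generator as MT19937, so we can observe the generator's behavior directly, for instance its reproducibility under a fixed seed:

import random

random.seed(19937)                      # fixed seed -> reproducible stream
first = [random.getrandbits(32) for _ in range(3)]

random.seed(19937)                      # reseeding replays the same stream
again = [random.getrandbits(32) for _ in range(3)]

print(first == again)                   # True
print(random.random())                  # uniform float in [0, 1)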

Saturday, May 10, 2014

In this post, we talk about the rules in the queue alerts module design (http://1drv.ms/1m5CYRQ).

When the rules are updated, the queue manager informs the workers. Since there is more than one worker, the queue manager can decide which rules to push to which worker. The workers all do the same task regardless of the rules: they evaluate the messages per the rules and defer the filtered messages to be forwarded or acted upon. The workers could maintain a list of messages, or they could just put them in a queue for the manager to read; I've denoted this with SendMessages(). The manager may have a common queue across all workers, such as an IO completion port, and the messages may even be stamped with a worker id to denote the application context. How we scale out the work between the manager and the workers is left to performance considerations, but we note that there should not be more than one data structure, such as the IO completion port, for communication between the manager and the workers. Ideally, the workers would take on the beefier role while the manager manages the rules and retrieves the messages. If the manager could GetAllMessages() without the workers' involvement, that would be yet another improvement. A small sketch of this arrangement follows.
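A minimal sketch of this design in Python, with a single shared queue standing in for the IO completion port; the toy substring rules and the names mirroring SendMessages()/GetAllMessages() are assumptions taken from the description above:

import queue
import threading

shared = queue.Queue()    # the single shared structure, like an IO completion port

def worker(worker_id, rules, messages):
    # evaluate each message against this worker's rules and send
    # matches to the manager, stamped with the worker id (SendMessages)
    for msg in messages:
        if any(rule in msg for rule in rules):   # toy rule: substring match
            shared.put((worker_id, msg))

def manager():
    # GetAllMessages: drain the common queue without involving the workers
    while True:
        try:
            worker_id, msg = shared.get(timeout=1)
        except queue.Empty:
            break
        print('worker', worker_id, 'matched:', msg)

# the manager decides which rules go to which worker
batches = {1: (['error'], ['error: disk full', 'ok']),
           2: (['warn'],  ['warn: retry', 'info'])}
threads = [threading.Thread(target=worker, args=(wid, r, m))
           for wid, (r, m) in batches.items()]
for t in threads: t.start()
for t in threads: t.join()
manager()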