Monday, April 25, 2022

Improving queries part 9


This is a continuation of the best practices in writing Kusto queries as introduced here and continued with the log queries for analytics, such as with Azure Monitor Logs. While parts 5, 6, and 7 covered optimizations, best practices, and the management aspects of Kusto queries, this part discusses expressions that come in useful in log queries spanning workspaces and apps. Such queries give a system-wide view of the data. The following section describes the translations between SQL queries and Kusto.

Kusto supports a subset of the SQL language, and these translations do not necessarily need to be worked out by hand. The Kusto keyword explain, when prefixed to a SQL statement, translates it into the corresponding Kusto query.
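For instance, as a minimal sketch, assuming the StormEvents sample table available in the Azure Data Explorer demo environment:

explain
select count_big(*) from StormEvents

Instead of query results, this returns the equivalent KQL, roughly StormEvents | summarize count(), which can then be copied and refined as a native Kusto query.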

Some operators are now called out with tips for best performance. Selecting data from a SQL table usually involves a row set or a column set. These correspond in KQL to the where operator for filtering rows and the project operator for selecting columns. The rule referred to in the earlier parts of this article about aggressively reducing the size of the data still holds.
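As a sketch, again assuming the StormEvents sample table with State, StartTime, and EventType columns, the KQL analogue of SELECT StartTime, EventType FROM StormEvents WHERE State = 'FLORIDA' is:

StormEvents
| where State == "FLORIDA"
| project StartTime, EventType

Placing the where filter first trims the row set before any further processing.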

The popular test for data to be null can be tried with the isnotnull() function. Such data checks should come toward the last, after the query has utilized the schema to narrow down the result set.
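For example, assuming the same sample table, the schema-based filter on State comes first and the null check last:

StormEvents
| where State == "FLORIDA"
| where isnotnull(EndTime)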

Comparisons with dates are simplified with operators like ago(1d). A datetime range can be described with the <= and >= operators. The time part of the query should be applied earliest in the set of filters.
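A short sketch against the assumed StormEvents sample table, with the time filter leading:

StormEvents
| where StartTime >= datetime(2007-01-01) and StartTime <= datetime(2007-02-01)
| where State == "FLORIDA"

The relative form | where StartTime >= ago(1d) expresses "within the last day" without hardcoding dates.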

Tests for data containing literals and patterns also have equivalents with the has and contains operators and related approximate string-matching operators.
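For illustration, assuming an EventNarrative string column on the sample table:

StormEvents
| where EventNarrative has "tornado"      // whole-term match, can use the term index
| where EventNarrative contains "torn"    // substring match, slower

Preferring has over contains is a common performance tip, since has matches whole indexed terms rather than scanning for substrings.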

Grouping operations are substituted with the summarize operator. The dcount() aggregation function approximates distinct counts using the HyperLogLog algorithm. These can be computationally expensive, so the data spread should be reduced whenever possible.
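A minimal sketch of the SQL GROUP BY with COUNT and COUNT(DISTINCT ...) pattern, assuming the same sample table:

StormEvents
| summarize Events = count(), DistinctTypes = dcount(EventType) by State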

Column aliases and calculations are made possible with the extend operator.
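For example, assuming datetime StartTime and EndTime columns, a computed column in hours:

StormEvents
| extend DurationHours = (EndTime - StartTime) / 1h
| project State, EventType, DurationHours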

The ordering operators (sort by and its alias order by) and their behavior remain similar to SQL's ORDER BY.
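For instance:

StormEvents
| where State == "FLORIDA"
| order by StartTime desc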

Packing and unpacking data between JSON and tabular form is made possible with operators and functions such as pack_all, mv-expand, bag_unpack, and extractjson.

Convert to JSON:

T
| extend PackedRecord = pack_all()
| summarize Result = make_list(PackedRecord)

And back with:

let result =
print myDynamicValue = dynamic(<jsonArray>)
| mv-expand myDynamicValue
| evaluate bag_unpack(myDynamicValue);
result

 

JSONPath notation is supported with '$' for the root object, '.' or '[]' for a child object, and '[]' for an array subscript.
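A small example of extractjson with a JSONPath, where the path and the inline JSON literal are just illustrative values:

print Value = extractjson("$.a[1].b", '{"a":[{"b":1},{"b":2}]}', typeof(long))

Here $.a[1].b selects the second element of array a and then its child b, yielding 2.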

Reference: Sample queries
