This is a continuation of the article on operational engineering for software deployments in the public cloud. It lists some of the queries that are reused across projects, teams, and environments.
·
Organize counts of incidents, alerts, breaches, and usages by the account, subscription, resource-type, and resource hierarchy.
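As a minimal sketch, assuming incidents have been exported as comma-separated rows of the form account,subscription,resourceType,resource,incidentId, the rollup can be done with uniq or with datamash:

    # hypothetical export: account,subscription,resourceType,resource,incidentId
    cut -d, -f1-3 incidents.csv | sort | uniq -c | sort -rn
    # the same rollup with datamash, counting incidents per account, subscription, and resource type
    datamash -t, -s groupby 1,2,3 count 5 < incidents.csv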
·
Determine load across stamps, resource types, and resources at each level of the region, availability zone, datacenter, cluster, node, and container hierarchy.
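A similar sketch, assuming a load extract with columns region,zone,cluster,node,container,requestsPerSecond, sums and averages the load at any level of that hierarchy:

    # hypothetical extract: region,zone,cluster,node,container,requestsPerSecond
    datamash -t, -s groupby 1,2 sum 6 mean 6 < load.csv
    # busiest nodes, by total requests per second
    datamash -t, -s groupby 4 sum 6 < load.csv | sort -t, -k2 -rn | head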
·
Determine the users who used two-step verification and their success rate - The growing popularity of one-time passcodes over captcha and other challenge questions can be plotted as a trend by tagging the requests that performed these operations. One of the reasons one-time passcodes are popular is that, unlike other forms of challenge, they have less chance of going wrong: the user is guaranteed to get a fresh code, and the code will successfully authenticate the user. OTP is used in many workflows for this purpose.
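As an illustration, assuming each authentication request is logged as whitespace-separated fields of date, challenge type (such as otp or captcha), and result, awk can produce the per-day counts and success rates that feed such a trend graph:

    # hypothetical log fields: date challengeType result
    awk '{ total[$1","$2]++; if ($3 == "success") ok[$1","$2]++ }
         END { for (k in total) printf "%s,%d,%.2f\n", k, total[k], ok[k]/total[k] }' auth.log | sort
    # output: date,challengeType,attempts,successRate - ready to plot as a trend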
·
Searching for login attempts - The above scenario also leads us to evaluate the conditions under which customers ended up re-attempting because the captcha or their interaction on the page did not work. A histogram of failure reasons and their counts will determine which of these is a significant source of error. One of the outcomes is that we may discover that some forms of challenge are not suitable for the user. In these cases, it is easier to migrate the user to other workflows.
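A sketch of that histogram, assuming failed attempts are logged with a failure reason as, say, the fifth whitespace-separated field:

    # hypothetical log fields: timestamp user challengeType result failureReason
    grep ' failure ' login.log | awk '{ print $5 }' | sort | uniq -c | sort -rn
    # the top entries identify which challenge interactions are a significant source of error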
·
Modifications made to account state - One of the best indicators of fraudulent activity is the pattern of access to account state, whether to read it or to write it. For example, the address, zip code, and payment methods on the account change less frequently than the user's password. If these do change often for a user, and from different sources, they can contribute to fraud detection.
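A sketch of such a check, assuming an audit log of account changes exported as account,field,sourceIp rows, flags accounts whose rarely-changed fields were modified from many distinct sources:

    # hypothetical audit rows: account,field,sourceIp
    awk -F, '$2 == "address" || $2 == "zip" || $2 == "paymentMethod"' changes.csv |
      datamash -t, -s groupby 1 count 3 countunique 3 |
      awk -F, '$3 > 3'
    # prints accounts with such changes from more than three distinct source addresses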
· Additional lines around a match in a log provide additional attributes that may be searched directly for information or indirectly tagged and counted towards the tally for the corresponding labels.
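For example, grep can pull in the surrounding lines with -B and -A, and an attribute found there can then be tallied; the match string and field name here are only illustrative:

    # three lines of context before and after each match, then count a nearby attribute
    grep -B3 -A3 'PaymentDeclined' app.log | grep -o 'correlationId=[^ ]*' | sort | uniq -c | sort -rn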
·
Pivoting – Request parameters that are logged can be numerous and often span large text, such as tokens. Pivoting the parameters and aggregating the requests on them becomes necessary to explore the range, count, and sum of the values of these parameters. To do this, we use awk and datamash.
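A minimal sketch of this, assuming requests are logged one per line with key=value parameters, pivots two hypothetical parameters out of each line and then aggregates one by the other:

    # pivot region= and latencyMs= out of each request line
    awk '{
      region = ""; latency = "";
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^region=/)    { split($i, a, "="); region = a[2] }
        if ($i ~ /^latencyMs=/) { split($i, b, "="); latency = b[2] }
      }
      if (region != "" && latency != "") print region "," latency
    }' requests.log > pivoted.csv
    # count, range, and sum of latency per region
    datamash -t, -s groupby 1 count 2 min 2 max 2 sum 2 < pivoted.csv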
·
Grouping selections and counting is enhanced with awk and datamash because we now have transformed data in addition to the logs. For example, if we are searching for HTTP requests grouped by parameters, with one row for each request, then we can include the pivoted parameters in aggregations that match a given criterion.
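Continuing the sketch above, the pivoted rows can be filtered on a criterion before they are grouped; the 500 ms threshold is only an assumption:

    # keep only slow requests and count them per region
    awk -F, '$2 > 500' pivoted.csv | datamash -t, -s groupby 1 count 2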
·
In the absence of already existing tags for these pivoted request parameters and their aggregations, we can now create new tags with a search-and-replace command, following the same logic as above but as a piped operation.
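As a sketch, a sed substitution in the same pipeline turns each aggregated group into a named tag; the tag prefix is only an assumption:

    # label each slow-region bucket as a tag such as tag:slow-us-east-1,count
    awk -F, '$2 > 500' pivoted.csv |
      datamash -t, -s groupby 1 count 2 |
      sed 's/^/tag:slow-/'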
·
Logs, clickstreams, and metrics are only some of the ways to gain insight into customer activities related to their identity. While there may be other ways in which customers or their application usage can be differentiated, these are intended to provide a unique perspective for any holistic troubleshooting.