Wednesday, February 14, 2018

Blockchain technology has been used to decentralize security and proves well suited to identity management. A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Since it is resistant to tampering, it can serve as an open, distributed ledger for recording transactions between two parties. In identity management it avoids the use of an authentication server and a password database. Each device or user is given a private key that is guaranteed to be unique, and an action from that device, such as the click of a button, becomes the equivalent of signing in. The identity management user interface therefore becomes far simpler than what users are frustrated with today. Even so, security is only improved when the user supplies something only she knows. As stated earlier, security is about knowing as well as having. Therefore the user interface, even with blockchain technology, could do with some visual aid to the process of signing in. This is particularly meaningful where user interactions matter and the identity management is not for automation. Here the interface is a visual equivalent to securing the private key with a password.
One such user interface could be a captcha-like challenge that no machine, and nobody other than the owner, can answer. As an example, we enter one-time passcodes in the form of six-digit numbers. If instead there were a panel of nine tiles from which only the individual can select six, in a sequence that is unique and known only to her, then that selection becomes the equivalent of a password. Even one-time passcodes relayed from the authentication server could be considered a passwordless equivalent to the conventional login. Here we are merely making it easier for the user to answer based on a habit of selection rather than on remembering a password. In this case, we have
    // Appends the id of a clicked tile to the password field,
    // toggling its visibility at the start and end of the sequence.
    function append(id) {
        var text = $("#password").val();
        if (!text) {
            toggle();
            $("#password").val(id);
        } else if (text.length < 6) {
            $("#password").val(text + id);
        } else {
            toggle();
        }
    }

    // Switches the password field between masked and clear text,
    // preserving its current value.
    function toggle() {
        var passwordField = document.getElementById('password');
        var value = passwordField.value;
        if (passwordField.type == 'password') {
            passwordField.type = 'text';
        } else {
            passwordField.type = 'password';
        }
        passwordField.value = value;
    }

Tuesday, February 13, 2018


We were looking at some of the search queries that are collected from the community of those using logs from an identity provider:

We were discussing how the additional lines around a match provide additional attributes that may be searched directly for information, or indirectly tagged and counted towards the tally for their labels.
In the logs, we can leverage protocols other than HTTP and OAuth. For example, if we use SAML or other encrypted but shared parameters, we can use them for correlations. Similarly, user agents generally give a lot of information about the origin and can be used to selectively filter the requests. In addition to protocols, applications and devices contributing request parameters, cookies may also store information that can be searched once they make it to the logs. Most mobile devices also come with app stores from which packet capture applications for those devices can be downloaded and installed. Although the use of a simulator and live debugging does away with the need for packet capture applications, these captures certainly form a source of information.

The logs for mobile devices can also be shared, especially if they are kept small and limited to a finite number of entries.

48) Pivoting – Request parameters that are logged can be numerous and often span large text, such as tokens. Pivoting on these parameters and aggregating the requests by them becomes necessary to explore the range, count and sum of their values. To do this, we use awk and datamash operators.

49) Grouping selections and counting is enhanced with awk and datamash because we now have transformed data in addition to the logs. For example, if we are searching for HTTP requests grouped by parameters, with one parameter per request, then we could include the pivoted parameters in aggregations that match a given criterion.
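As a minimal sketch, assuming a log where each line carries key=value request parameters (the field names client_id and latency here are made up), the pivot can be an awk pass that extracts the columns of interest and then aggregates them per group:

```shell
# Hypothetical log lines: one request per line, key=value parameters.
printf '%s\n' \
  'GET /token client_id=abc latency=12' \
  'GET /token client_id=abc latency=30' \
  'GET /token client_id=xyz latency=7' |
# Pivot with awk: pull out (client_id, latency) pairs per request,
# then aggregate the count and sum of latencies per client_id.
awk '{
  for (i = 1; i <= NF; i++) {
    split($i, kv, "=")
    if (kv[1] == "client_id") c = kv[2]
    if (kv[1] == "latency")   l = kv[2]
  }
  n[c]++; s[c] += l
}
END { for (c in n) print c, n[c], s[c] }' | sort
# The same aggregation can be handed to datamash (if installed), fed the
# pivoted "client_id latency" pairs instead:
#   ... | datamash -t' ' --sort groupby 1 count 2 sum 2
```

Each output row then carries the group key followed by the count and sum of the pivoted values, ready for further filtering.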

50) In the absence of already existing tags for these pivoted request parameters and their aggregations, we can create new tags with a search-and-replace command using the same logic as above, but with a piping operation.
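A sketch of such tag creation, on hypothetical log lines and with the arbitrary rule that any three-or-more-digit latency counts as slow:

```shell
# Hypothetical log lines; no existing tag distinguishes slow requests.
printf '%s\n' \
  'req=1 latency=45' \
  'req=2 latency=230' |
# Search and replace in the pipe: append tag=slow where latency has 3+ digits.
sed -E 's/(latency=[0-9]{3,})/\1 tag=slow/'
# The synthesized tag can then be counted like any other:
#   ... | grep -c 'tag=slow'
```

The tag lives only in the stream, so the underlying log stays untouched while the downstream grouping and counting see a first-class attribute.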

#codingexercise:

Determine the fourth-order Fibonacci series:
T(n) = Fib(Fib(Fib(Fib(n))))
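A brute-force sketch in shell; the recursion is naive and the composed values grow explosively, so this is practical only for very small n:

```shell
# Classic Fibonacci with Fib(1) = Fib(2) = 1.
fib() {
  if [ "$1" -le 2 ]; then
    echo 1
  else
    echo $(( $(fib $(( $1 - 1 ))) + $(fib $(( $1 - 2 ))) ))
  fi
}

# Fourth-order term: apply fib four times.
T() { fib "$(fib "$(fib "$(fib "$1")")")"; }

T 4   # fib(4)=3, fib(3)=2, fib(2)=1, fib(1)=1, so this prints 1
```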

generate maze

var rand = new Random();
for (int i = 1; i < ROWS; i++) {
  for (int j = 1; j < COLS; j++) {
    // choose a wall segment at random: vertical or horizontal
    string c = (rand.Next(2) == 0) ? "|" : "__";
    Console.Write(c);
  }
  Console.WriteLine();
}

Monday, February 12, 2018

We were looking at some of the search queries that are collected from the community of those using logs from an identity provider:

In particular, we were looking for a few lines above and below a match to include associated event attributes. This is easy with a streaming operation in the shell command "grep -C <N> literal file". In SQL this becomes slightly complicated, involving a recursive common table expression. A nested query might work too, provided the identifiers are contiguous.
For example:
SELECT a.*
FROM Table1 as a,
(SELECT id FROM Table1 WHERE message LIKE '%hello%') as b
WHERE a.id BETWEEN b.id-N AND b.id+N;
On the other hand, by using max(b.id) < id and min(b.id) > id as sentinels, we can advance them row by row in a recursive query to always include a determined number of lines above and below the match.
For example:
with sentinels(prevr, nextr, lvl) as (
  select nvl((select max(e.employee_id)
              from   hr.employees e
              where  e.employee_id < emp.employee_id),
              emp.employee_id) prevr,
         nvl((select min(e.employee_id)
              from   hr.employees e
              where  e.employee_id > emp.employee_id),
              emp.employee_id) nextr,
         1 lvl
  from   hr.employees emp
  where  last_name = @lastname
  union all
  select nvl((select max(e.employee_id)
              from   hr.employees e
              where  e.employee_id < prevr),
              prevr
         ) prevr,
         nvl((select min(e.employee_id)
              from   hr.employees e
              where  e.employee_id > nextr),
              nextr
         ) nextr,
         lvl+1 lvl
  from   sentinels
  where  lvl+1 <= @lvl
)
  select e.employee_id, e.last_name
  from   hr.employees e
  join   sentinels b
  on     e.employee_id between b.prevr and b.nextr
  and    b.lvl = @lvl
  order  by e.employee_id; 
(adapted from an Oracle blog post by Chris Saxon)

The additional lines around a match provide additional attributes that may be searched directly for information, or indirectly tagged and counted towards the tally for their labels.

In the logs, we can leverage protocols other than HTTP and OAuth. For example, if we use SAML or other encrypted but shared parameters, we can use them for correlations. Similarly, user agents generally give a lot of information about the origin and can be used to selectively filter the requests. In addition to protocols, applications and devices contributing request parameters, cookies may also store information that can be searched once they make it to the logs. Most mobile devices also come with app stores from which packet capture applications for those devices can be downloaded and installed. Although the use of a simulator and live debugging does away with the need for packet capture applications, these captures certainly form a source of information.
The logs for mobile devices can also be shared, especially if they are kept small and limited to a finite number of entries.

Sunday, February 11, 2018


We were looking at some of the search queries that are collected from the community of those using logs from an identity provider:

Some other interesting events for identity include:

45) Looking for a few lines above and below a match to include associated event attributes. This is easy with a streaming operation in the shell command "grep -C <N> literal file". In SQL this becomes slightly complicated, involving a recursive common table expression. A nested query might work too, provided the identifiers are contiguous.
For example:
SELECT a.*
FROM Table1 as a,
(SELECT id FROM Table1 WHERE message LIKE '%hello%') as b
WHERE a.id BETWEEN b.id-N AND b.id+N;
On the other hand, by using max(b.id) < id and min(b.id) > id as sentinels, we can advance them row by row in a recursive query to always include a determined number of lines above and below the match.

46) Grouping selections and counting now works with the above logic. For example, if we are searching a log for HTTP requests that span multiple lines, one line for each request parameter, then we could include the associated parameters corresponding to the matching requests as tags to group the requests. For example:

grep -C7 match file | grep tag | cut -d"=" -f1 | sort | uniq -c | sort -nr

47) In the absence of already existing tags, we can now create new tags with a search-and-replace command using the same logic as above, but with a piping operation.

Saturday, February 10, 2018


We were looking at some of the search queries that are collected from the community of those using logs from an identity provider:

Some other interesting events for identity include:

41) Device access calls – When mobile applications make requests and receive responses from the server, they are harder to debug live because the code is usually tried on a simulator first. Both iOS and Android allow applications to be simulated and debugged so that they perform the same on an actual device. However, logs provide a convenient mechanism to track the conversations with the server, so long as the conversation can be narrowed down by device, application, customer and session.

42) Device access without customer – Devices may have to do handshakes before a customer data flow can be initiated. Fortunately, most applications and devices now follow a similar OAuth protocol to handle this. They use a client identifier and secret that are specific to the application and the device. A device-based authorization flow also differs from other OAuth workflows in that it uses no user context. These calls are therefore easily searchable by their OAuth parameters.
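As a sketch, with made-up token-endpoint log lines, the device flow stands out by its grant type (the URN below is the one registered for the OAuth device authorization grant):

```shell
# Hypothetical token-endpoint log lines.
printf '%s\n' \
  'POST /token grant_type=urn:ietf:params:oauth:grant-type:device_code client_id=tv-app' \
  'POST /token grant_type=authorization_code client_id=web-app user=alice' |
# Device-flow calls carry no user context, only the device grant type.
grep 'grant-type:device_code'
```

Only the first line survives the filter, isolating the device handshakes from the user-context flows.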

43) Device with customer context – When devices engage in OAuth conversations with customer context, they usually carry an access token or a refresh token. Refresh tokens are exchanged old for new, so we can enumerate all such conversations based on the old and new tokens issued during the conversation. This line of search is very helpful across all API calls made with OAuth: the calls are usually short lived while the access token spans more than one call, so searching for other calls in the vicinity of a call becomes just a regular expression or literal search.

44) Long-lived customer context – When devices engage in conversations on behalf of the customer, and the user agent sessions do not last up to an hour but there is cross-domain access, the number of API calls increases significantly even for a narrowed conversation. In such cases, we shift to higher-level identifiers such as session tokens for single sign-on or identifiers for client context.
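A sketch of shifting to the higher-level identifier, with a hypothetical sso_session= field standing in for the single sign-on session token:

```shell
# Hypothetical API log: access tokens churn, the SSO session persists.
printf '%s\n' \
  'api=/profile sso_session=sess42 access_token=t1' \
  'api=/orders  sso_session=sess42 access_token=t2' \
  'api=/profile sso_session=sess99 access_token=t3' |
# Narrow the conversation by session rather than by short-lived tokens.
grep 'sso_session=sess42'
```

The two calls belonging to sess42 come out together even though their access tokens differ.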

Friday, February 9, 2018

Web Assets as a software update

Introduction:
Any application with a web interface requires resources in the form of markup, stylesheets and scripts. Although these may represent code for the interaction with the end user, they don’t necessarily have to be maintained on the server side and treated the same way as server-side code. This document argues for using an update service for any code that is not maintained on the server side. The update service automatically downloads and installs the latest code update to a device or relay server by a pull mechanism, rather than the conventional pipeline-based push mechanism.

Description:
Content Delivery Networks are widely popular for making web application assets available to a web page, regardless of whether it is hosted on mobile, desktop or software as a service. They serve many purposes but primarily function as a set of proxy servers distributed over geographical locations, such that the web page may readily find the assets and download them at high speed regardless of when, where and how the page is displayed. An update service, on the other hand, is generally a feature of a software platform by which tenants can download the latest update from their publisher. The server model, by contrast, keeps a single source of code at a single point of origin, usually gated by a pipeline, with every consuming device or application pointing to this server via web redirects.

These three software publishing conventions place no restrictions on the size or granularity of individual releases; generally these are determined by what can be achieved within a timeline. Since the most recent update is guaranteed to be compatible with previous versions of the host or device ecosystem, and updates are mostly forward progressive, there is very little testing required to ensure that new releases mix and match well on a particular host. Moreover, a number of request-responses are already being made to load a web page, so there is no necessity for these downloads to be of a minimum size. This brings us to a point where we view assets not as a bundle but as something discrete that can be versioned and made available over a content delivery network. The rules for publishing assets to a set of proxy servers are similar to the rules for releasing code to a virtual server.

Conclusion:
Software may be viewed both in terms of server-side logic and client-updated assets. The granularity of releases for both can be fine grained and independently verified. The distribution may be finely balanced so that the physical representation of what makes a web application is much more modular and an opt-in for every consumer.

Thursday, February 8, 2018

We were looking at some of the search queries that are collected from the community of those using logs from an identity provider:

Some other interesting events for identity include:

37) Cross-API calls – In the API sequence across layers, such as HTTP filters, we discussed how to walk down the chain in the logs to find out which layer responded with an error. The mention here is for same-layer cross-API calls, which determine the response from this layer. Sometimes the information for responses is gathered via cross-API calls, and determining their failures requires inspection of the responses formed in this layer.

38) State sharing between APIs – Most callers and callees share state or keys for each other, and this helps in tracking or studying them in the logs. The count of unique such states indicates the number of distinct conversations between APIs. We can even re-use this to find the exact input or output for a particular customer. Often the customerId is shared in the request parameters itself, so listing all APIs by customerId should cover this case, but this is not necessarily true for APIs from different departments that may not follow the same rules. In such cases, translating the customerId to the corresponding key/state helps find the API calls.
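For instance, counting the distinct conversations by their shared state (the state= field name and the log lines are assumptions about the log format):

```shell
# Hypothetical log lines from caller/callee exchanges.
printf '%s\n' \
  'POST /callback state=s1 step=authorize' \
  'POST /callback state=s1 step=token' \
  'POST /callback state=s2 step=authorize' |
# Each unique state value marks one distinct conversation.
grep -o 'state=[^ ]*' | sort -u | wc -l
```

Two unique states appear across the three lines, hence two conversations.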

39) Incorrect API responses – One of the most notorious failures in services is when an API fails without an exception. An exception is very helpful for diagnosis and troubleshooting because it determines a point of failure. In its absence, reconstructing the point of failure by studying requests and responses at the API becomes very difficult. Tracing the API activity may help here, but because production logs are rarely at debug level, it would behoove the API to log incorrect responses as well. In such cases, the results are easier to diagnose when compared with other, successful calls.

40) State pass-through – One of the most successful techniques is when APIs capture and append state that will be helpful downstream. In the example cited above, the logs had to be enhanced to improve diagnosability. Here the data speaks for itself: it carries all the information we need subsequently, and the operation at any particular layer merely has to look at this state.

#codingexercise
Generate the nth Newman–Conway sequence number. The sequence begins
1 1 2 2 3 4 4 4 5 6 7 7
It is defined by the recursion
P(n) = P(P(n - 1)) + P(n - P(n - 1))
with base cases
P(1) = 1
P(2) = 1

int GetNCS(int n)
{
    if (n == 1 || n == 2)
        return 1;
    return GetNCS(GetNCS(n - 1)) + GetNCS(n - GetNCS(n - 1));
}
n = 3:
P(P(2)) + P(3 - P(2))
= P(1) + P(2) = 2
n = 4:
P(P(3)) + P(4 - P(3))
= P(2) + P(2) = 2