Monday, April 9, 2018

#InterestingCodingExercise
We are given a binary tree serialized level by level, without level sentinels: a null node is denoted by "#", and every other node is represented by an integer. The integer is a positive weight associated with that node, not its identifier; the index into the serialization array can serve as the identifier if one is needed. We need to find the maximum sum of weights such that no two chosen weights are directly related, where two nodes are directly related if they lie in adjacent levels. This criterion may vary with the problem setter; for example, "not directly related" could instead mean that a node is neither the left nor the right child of the node above it. We will use the level criterion here:

static int findMax(String tree)
{
        String[] elements = tree.split("\\s+");
        // number of levels in a level-order array of this length:
        // floor(log2(length)) + 1, computed without floating point
        int maxLevel = 0;
        for (int count = elements.length; count > 0; count >>= 1) {
            maxLevel++;
        }
        int[] sums = new int[maxLevel];
        int odd = 0;
        int even = 0;
        for (int level = 0; level < maxLevel; level++) {
            // a full level occupies indices [2^level - 1, 2^(level+1) - 1)
            for (int i = (int) Math.pow(2, level) - 1;
                 i < Math.pow(2, level + 1) - 1 && i < elements.length; i++) {
                // this assumes a full level; otherwise see the variant further below
                if (!elements[i].equals("#")) {
                    sums[level] += Integer.parseInt(elements[i]);
                }
            }
        }
        // no two adjacent levels may both be chosen, so take the better of
        // the even-level sum and the odd-level sum
        for (int i = 0; i < sums.length; i++)
        {
            if (i % 2 == 0) {
                even += sums[i];
            } else {
                odd += sums[i];
            }
        }

        return Math.max(odd, even);
}

     3
  4     5
3   1   3

we get the result 10: the even levels sum to 3 + 3 + 1 + 3 = 10, while the odd level sums to 4 + 5 = 9.
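For a quick check, a hypothetical driver might look like the following; the serialization string is an assumption about how the example tree above would be encoded, with the trailing "#" standing in for the missing fourth leaf:

// Hypothetical driver; assumes findMax above is in scope.
public static void main(String[] args)
{
        // level-order serialization of the example tree, no level sentinels
        System.out.println(findMax("3 4 5 3 1 3 #")); // prints 10
}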

If we do not have a full representation at each level, we can track how many non-null nodes each level contains to find where the next level begins:
// Assumes a nodes[] array parallel to elements[], where each Node has int
// fields left and right initialized to Integer.MIN_VALUE ("not yet assigned");
// Integer.MAX_VALUE marks a null child, and otherwise we store the child's index.
int start = 0;
int end = 0;
int prev = 0;    // non-null nodes seen so far in the current level
int current = 0; // parent whose children are currently being read
for (int i = 0; i < elements.length; i++)
{
        if (i == 0) { start = 1; end = 3; continue; } // root, plus its two child slots
        if (i == end) { // first index of a new level: two slots per non-null parent above
                start = end;
                end = start + 2 * prev;
                prev = 0;
        }
        if (nodes[current].left == Integer.MIN_VALUE) {
                if (elements[i].equals("#")) {
                        nodes[current].left = Integer.MAX_VALUE; // null child
                } else {
                        nodes[current].left = i; // record the child's index
                        prev++;
                }
        } else if (nodes[current].right == Integer.MIN_VALUE) {
                if (elements[i].equals("#")) {
                        nodes[current].right = Integer.MAX_VALUE;
                } else {
                        nodes[current].right = i;
                        prev++;
                }
                current++; // both child slots consumed; move to the next parent
                // null parents contribute no child slots, so skip them
                while (current < elements.length && elements[current].equals("#")) current++;
        }
}

Sunday, April 8, 2018

Automated code reviews

Introduction: Software is written incrementally in the form of well-tested features. This typically involves additional code and tests. Before the code makes its way through the pipeline to a release for use by customers, it has a chance to be vetted at many stages. This writeup tries to articulate the benefits of adding an automated code review as a criterion for such validations.
Description: When software is compiled, tested, peer reviewed, and checked into the pipeline, it has already gone through some form of static validation as well as automated and manual review. After it enters the pipeline, several tests run against it, including both unit tests and content tests. This works well for new software, but legacy software is often large and complex. Many software products span millions of lines of code that require feverish rituals for mere compilation. Moreover, covering all of that surface area with adequate tests has traditionally fallen short, regardless of how many engineering resources are poured in. The problem perhaps lies in the cost-benefit ratio of time dedicated to new development versus clearing technical debt. Therefore, it can safely be assumed that legacy software often cannot meet the stringent combing applied to newer software. Even compile-time checks are loosened for legacy software in favor of acceptance criteria. Furthermore, newer software is written in managed languages, which are far easier to validate and keep consistent than older but still widely used languages.
In this context, additional processes are tolerated in the build and release of legacy software as long as they are advisory in nature and do not trigger a failure or interruption. Compile-time and static code analysis tools have come in very handy for providing the earliest feedback on defects in the code. Together with annotations on parameters and methods, these analysis tools can detect a wide variety of defects such as buffer overruns, unguarded access, and initialization mistakes. Annotations work slightly differently from instrumentation of the code: the latter usually does not make it into the release, while the former remains in the release as information only. The takeaway is that annotations come directly from developers in their attempt to use the tool to improve their own code health. On the other hand, interactions between components are hardly known until at least runtime. Code reviews help here in that peers can flag potential oversights as they shed light from multiple perspectives. However, the code itself is not the only place to look for the most defect-prone areas. That kind of information comes from stack traces, which are usually hashed, or from defect databases where the stack trace may be one of the attributes. Consequently, the defect database, and specifically its known stack traces, could provide an alternate source of information that translates into checks at build time. These additional checks may be package specific or branch specific, and therefore become a form of automated code review, as sketched below.
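As a rough illustration, here is a minimal sketch, not a real tool: the CSV export format, the file arguments, and the class name AdvisoryReview are all assumptions. It maps hashed stack-trace signatures exported from a defect database to source packages and prints advisory, non-failing warnings for packages touched by the current change set:

import java.nio.file.*;
import java.util.*;

public class AdvisoryReview {
    public static void main(String[] args) throws Exception {
        // Assumed export from the defect database: lines of "frameHash,package,hitCount".
        // args[0] is that export; args[1] lists packages touched by the change set.
        Map<String, Integer> hitsByPackage = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            String[] parts = line.split(",");
            hitsByPackage.merge(parts[1], Integer.parseInt(parts[2]), Integer::sum);
        }
        Set<String> changed = new HashSet<>(Files.readAllLines(Paths.get(args[1])));
        // Advisory only: warn, never fail the build or interrupt the pipeline.
        for (String pkg : changed) {
            Integer hits = hitsByPackage.get(pkg);
            if (hits != null) {
                System.out.println("ADVISORY: " + pkg + " matches " + hits
                        + " known defect stack traces; consider extra review.");
            }
        }
    }
}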
Conclusion: Software code reviews could lean on information gained from history and defect tracking databases whenever available. In short, static code vetting need not stand alone anymore.

Saturday, April 7, 2018

Dumb Clients and Smart Cloud?
 
 
Introduction:   
As cloud computing becomes ubiquitous, compute and storage have become elastic for backend processing, which facilitates moving much of the logic away from applications and mobile clients. This writeup questions whether clients need to retain any logic at all on their end.
Description: 
Software evolved to become a service, offered via browsers in the software-as-a-service (SaaS) model. At the same time, serverless computing and containerization technologies have matured. Because SaaS was usually powered by a single service, typically a monolith sitting somewhere in an enterprise, its migration to the cloud was somewhat straightforward: take the service that powers the SaaS and move it to the cloud.
There were two problems with this approach. Changes to the service needlessly impacted the rest of the software's functionality. At the same time, the front-end evolved to consume more and more services, so it became fatter and fatter.
Neither of these two approaches has taken advantage of modular, fine-grained allocation of resources in the form of containers, services, and even serverless computing. The idea behind serverless computing is that a modular chunk of code can spin up on-demand resources for its execution without affecting any of the existing production support requirements. If the mashup of services and the portal could be achieved in the backend, and the applications or clients consuming the services merely used native software development kits or a browser-rendered combination of markup, stylesheets, and script, then they could be leaner, meaner, and more efficient in their processing.
The benefit of a homogeneous, thin application or client is that it does not need to do any processing on its end: it can treat the data returned by service operations as viewmodels, which are sufficient for the front-end, be it an application or a client.
The Model-View-Controller pattern is widely credited with separating its respective concerns. Here we are separating services into fanned-out serverless compute that is composed at the backend before the associated viewmodel is sent to the front-end for rendering.
The notion of services and resources is never lost. Likewise, there seems to be no loss of fidelity in a thin overall mashup of services or serverless computing, as long as it is done on the backend: there is only one set of endpoints for the application or client to talk to, and anything behind that is entirely at the discretion of the provider. Internal services can be upgraded to serverless computing without any change on the front-end, which prepares a path for migration. A sketch of such a backend gateway follows.
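Here is a minimal sketch of the idea, assuming two hypothetical internal services at http://internal/profile and http://internal/orders; the class name, endpoint, and URLs are all made up for illustration. The gateway fans out to the internal services and returns one composed viewmodel, so the thin client only renders:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ViewModelGateway {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // One endpoint for the thin client; everything behind it is the provider's concern.
        server.createContext("/dashboard", exchange -> {
            try {
                // Fan out to internal services (these URLs are placeholders).
                String profile = fetch(client, "http://internal/profile");
                String orders = fetch(client, "http://internal/orders");
                // Compose a single viewmodel; the client only renders it.
                byte[] body = ("{\"profile\":" + profile + ",\"orders\":" + orders + "}")
                        .getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            } catch (Exception e) {
                exchange.sendResponseHeaders(502, -1); // upstream failure
            }
        });
        server.start();
    }

    static String fetch(HttpClient client, String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}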
Conclusion:  
Newer applications have embraced webAPI frameworks, but these frameworks by themselves have not enforced a consistently thinner, simpler organization of clients and applications.

Friday, April 6, 2018

Today we will continue to discuss Microsoft Dynamics AX. We briefly reviewed the user interface features to get to know Dynamics AX. Then we looked at the variety of tools for viewing and analyzing business data. Data is usually presented in reports, which can be standard reports, auto reports, or ad hoc reports. Tasks might take longer to execute; sometimes it's best to let them run elsewhere and at some other time, where they can be prioritized, queued, and executed. Documents may be attached to data, which is helpful when we need to add notes and attachments. The enterprise portal enables web-based access to the Dynamics instance on a server installed in the enterprise network. Access is role based, for roles such as employees, sales representatives, consultants, vendors, and customers. This completes a brief overview of the user interface.
The General Ledger is probably the core financial section of the user interface. It is used to record fiscal activities for accounts for a certain period. For example, it has accounts receivable and accounts payable sections.  The entries in the ledger may have several parameters, defaults, sequences, dimensions, dimension sets and hierarchies. In addition, tax information and currencies are also included.
Cost accounting is another key functionality within Dynamics AX. The costs of overheads and the indirect costs to the entities can be set up as cost categories, which can be defined for use within cost accounting. These cost categories can link to ledger accounts, so that ledger transactions show up under the appropriate category. Costs can even be redistributed among other categories and dimensions so that the cost structure of the business can be analyzed. The dimensions of a cost usually include the cost center and the purpose.
#codingexercise
Recently I was asked how to avoid a while loop when ensuring that a random number generator does not repeat the same number. In C#, for example, an instance of the Random class has a Next method that already draws from a uniform distribution, so the number of retries needed to avoid a duplicate can be bounded to a small finite number with high probability. Otherwise, we can avoid retries altogether by shuffling: walk the range and exchange each position with a random candidate from the rest of the range, as sketched below.
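A minimal sketch in Java of that exchange idea, in its standard Fisher-Yates form; the method name distinctRandoms is made up for illustration. It yields n distinct values from [0, range) with no retry loop:

import java.util.Random;

static int[] distinctRandoms(int range, int n)
{
        int[] pool = new int[range];
        for (int i = 0; i < range; i++) pool[i] = i; // candidate values, each exactly once
        Random rand = new Random();
        for (int i = 0; i < n; i++) {
                // exchange position i with a random candidate from the remaining range
                int j = i + rand.nextInt(range - i);
                int tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
        }
        int[] result = new int[n];
        System.arraycopy(pool, 0, result, 0, n);
        return result;
}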
Also a note on format specifiers in C++:
#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

int main()
{
    string s;
    double d;
    cin >> s;
    cout << s << " World" << endl;
    cin >> d;
    // fixed: no scientific notation; setw(10) pads to width 10 with the fill
    // character 'x'; setprecision(6) prints six digits after the decimal point
    cout << fixed << setfill('x') << setw(10) << setprecision(6) << d << endl;
    return 0;
}
Given the inputs "Hello" and "3.14", the program prints:
Hello World
xx3.140000

Another quick exercise: the maximum sum a pacman can collect from the top-left cell of a board when it may only move down or right:

int pacmanMoves(const vector<vector<int>>& board, int i, int j)
{
    int rows = board.size();
    int cols = board[0].size();
    if (i >= rows || j >= cols) return 0; // walked off the board
    int down = board[i][j] + pacmanMoves(board, i + 1, j);
    int right = board[i][j] + pacmanMoves(board, i, j + 1);
    return (down > right) ? down : right;
}