Thursday, November 30, 2017

We resume our discussion about correlation versus regression. We saw that one of the best advantages of a linear regression is prediction with respect to time as an independent variable. When the data points have many factors contributing to their occurrence, a linear regression gives an immediate ability to predict where the next occurrence may fall. This is far easier than coming up with a model that is a good fit for all the data points. It gives an indication of the trend, which is generally more helpful than the data points themselves. Also, a scatter plot shows the change in only one dependent variable in conjunction with the independent variable, which lets us pick the dimension we fit the linear regression to independently of the others. Lastly, the linear regression also gives an indication of how closely the data adheres to the trend via the estimation of errors.
We also saw how model parameters for linear regressions are computed, and how the best values for the model parameters can be determined from the data by minimizing the sum of squared errors.
The correlation coefficient describes the strength of the association between two variables. If the two variables increase together, the correlation coefficient tends to +1. If one decreases as the other increases, the correlation coefficient tends to -1. If they are not related to one another, the correlation coefficient stays near zero. In addition, the correlation coefficient can be related to the results of the regression. This is helpful because we now find a correlation not between parameters but between our notions of cause and effect. It also lets us use correlation between any x and y that are not necessarily independent and dependent variables. This follows from the fact that the correlation coefficient (denoted by r) is symmetric in x and y, which differentiates the coefficient from the regression.
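That symmetry can be seen directly from the formula for r. A minimal sketch (the function name pearson_r and the sample data are ours, purely illustrative):

```python
def pearson_r(xs, ys):
    # r = sum((x - mean_x)(y - mean_y)) / sqrt(sum((x - mean_x)^2) * sum((y - mean_y)^2))
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# r tends to +1 when the variables increase together, to -1 when one
# decreases as the other increases, and swapping x and y changes nothing
increasing = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
decreasing = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
symmetric = pearson_r([1, 2, 3, 4], [2, 4, 5, 9]) == pearson_r([2, 4, 5, 9], [1, 2, 3, 4])
```

Since cov, var_x and var_y each swap roles cleanly when x and y are exchanged, r(x, y) equals r(y, x), unlike the regression slope.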
Non-linear equations can also be "linearized" by selecting a suitable change of variables. This is quite popular because it makes the analysis simpler, but reducing the dimensions tends to distort the error structure. It is an oversimplification of the model: it violates key assumptions and impacts the resulting parameter values. All of this contributes toward incorrect predictions and is best avoided. Non-linear least squares analysis has well-defined techniques that are not too difficult with computing, so it is better to perform non-linear least squares analysis when dealing with non-linear inverse models.
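As a small worked example of such a change of variables (a sketch with synthetic, noise-free data of our own): the exponential model y = a·e^(bx) becomes the straight line ln y = ln a + b·x after taking logarithms, which an ordinary least squares fit can handle.

```python
import math

def fit_line(xs, ys):
    # ordinary least squares for y = intercept + slope * x
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return mean_y - slope * mean_x, slope

# synthetic data from y = 2 * exp(0.5 x)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

# linearize: ln y = ln a + b x, then fit the straight line to the logs
ln_a, b = fit_line(xs, [math.log(y) for y in ys])
a = math.exp(ln_a)   # recovers a and b on noise-free data
```

On noisy data, though, the same log transform shrinks large-y errors and inflates small-y errors, which is exactly the error-structure distortion described above.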
#codingexercise
Given three sorted arrays, find one element from each array such that the element is closest to the given element. All the elements should be from different arrays.
For Example :-
A[] = {1, 4, 10}
B[] = {2, 15, 20}
C[] = {10, 12}
Given input: 10
Output: 10 15 10
10 from A, 15 from B and 10 from C
List<int> GetClosestToGiven(List<int> A, List<int> B, List<int> C, int value)
{
    assert (A.Count > 0 && B.Count > 0 && C.Count > 0);
    var ret = new List<int>();
    ret.Add(GetClosest(A, value)); // using binary search
    ret.Add(GetClosest(B, value));
    ret.Add(GetClosest(C, value));
    return ret;
}

int GetClosest(List<int> items, int value)
{
    int start = 0;
    int end = items.Count - 1;
    int closest = items[start];
    while (start < end)
    {
        closest = Math.Abs(items[start] - value) < Math.Abs(items[end] - value) ? items[start] : items[end];
        int mid = (start + end) / 2;
        if (mid == start) return closest;
        if (mid == end) return closest;
        if (items[mid] == value)
        {
            return value;
        }
        if (items[mid] < value)
        {
            start = mid;
        }
        else
        {
            end = mid;
        }
    }
    return closest;
}

Wednesday, November 29, 2017

We were discussing detecting accounts owned by a user and displaying the last signed in activity. We referred only to the last-signed-in feature without a description of its implementation. This information can be persisted as a single column in the identity table. Most tables generally have Created and Modified timestamps, so in this case we could re-purpose the Modified timestamp. However, the identity record may also be modified for purposes other than signing in. In addition, the last-signed-in activity is more informational when it describes the device from which the user signed in. Therefore, keeping a devices table joined with the login time will help in this case. An entry with the device id and timestamp is sufficient, and we only need to keep track of one per login used by the owner. Translation to domain objects can then proceed with Object Relational Mapping. Finally, we merely add an attribute on the view model for display on the page.
The data is all read and write by the system and therefore has no special security considerations. It is merely for display purposes.
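The shape of that data can be sketched as follows (the names here are hypothetical stand-ins, not an actual schema): one entry per (account, device) pair, overwritten on each login, is enough to drive the display.

```python
from datetime import datetime, timezone

# hypothetical in-memory stand-in for a devices/logins table
last_sign_in = {}   # (account_id, device_id) -> last login timestamp

def record_sign_in(account_id, device_id):
    # one entry per device per account; later logins overwrite earlier ones
    last_sign_in[(account_id, device_id)] = datetime.now(timezone.utc)

def last_sign_ins_for(account_id):
    # what the view model attribute would render on the account page
    return {device: ts for (acct, device), ts in last_sign_in.items()
            if acct == account_id}
```

A relational version would simply be an upsert keyed on (account_id, device_id), which the ORM maps to a domain object as described above.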

#codingexercise
Print all possible palindromic partitions of a string

We can thoroughly exhaust the search space by enumerating all substrings
with starting position from 0 to Length-1
and substring size from a single character up to the last possible character.
For each of these substrings, we can determine whether it is a palindrome or not
bool isPalindrome(string A)
{
    if (A.Length == 0) return false;
    int start = 0;
    int end = A.Length - 1;
    while (start <= end)
    {
        if (A[start] != A[end])
        {
            return false;
        }
        start++;
        end--;
    }
    return true;
}
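The same check can drive the actual enumeration of partitions. A sketch in Python (our own names, and here the empty string is treated as having one empty partition so the recursion bottoms out): for each palindromic prefix, recurse on the remaining suffix.

```python
def is_palindrome(s):
    return s == s[::-1]

def palindromic_partitions(s):
    # every way to cut s so that each piece is a palindrome
    if not s:
        return [[]]
    result = []
    for i in range(1, len(s) + 1):
        prefix = s[:i]
        if is_palindrome(prefix):
            for rest in palindromic_partitions(s[i:]):
                result.append([prefix] + rest)
    return result
```

For example, palindromic_partitions("aab") yields [["a", "a", "b"], ["aa", "b"]].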

Tuesday, November 28, 2017

Detecting accounts owned by a user and displaying last signed in.
We were talking about networking devices in the earlier post. While shared devices seem to be shrinking in number and personal devices seem to be growing in number, we face all the more questions about whether those devices are secured. Identity is used to log in to a device or site and usually consists of a username and a credential. With devices becoming personal, a user may remain signed in to the device since it is physically secured by her. 

An owner may want to sign in with different credentials if she wants to separate concerns by authenticating and performing actions under a different login. This may not be typical in everyday use, but having more than one account with your blog or email provider or another company website is altogether very common.

When an owner wants more than one identity created, usually she gives them different names. While many take the precaution of using a common prefix or suffix to recognize these different accounts, such conventions are not required. Consequently, grouping the different accounts of the same person is not easy except by the owner when she recalls all that she created. On the other hand, when the account is created, if we can leave an annotation or allow the account creation process to read the previously used account from the device, some associations may be set up. This is very helpful for grouping the accounts. The step does not need to be taken only at registration time but can be taken at any time after that, should the owner want to tag different accounts. Otherwise we are left to discover these related accounts by matches in first name, last name and some other fields such as recovery email.
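For that discovery fallback, grouping by a shared field such as recovery email can be sketched as follows (the field names are illustrative, not a real schema):

```python
from collections import defaultdict

def group_by_recovery_email(accounts):
    # accounts: list of dicts with at least "username" and "recovery_email"
    groups = defaultdict(list)
    for account in accounts:
        groups[account["recovery_email"]].append(account["username"])
    # only emails shared by more than one account suggest a related group
    return {email: names for email, names in groups.items() if len(names) > 1}
```

The same grouping could be repeated over first name and last name, with the intersection of the candidate groups giving higher confidence.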

Discovering related accounts is one thing. Presenting those to the user for actions such as deletion is another. Just as we display last-signed-in activity for different devices as a security measure for the user, we could also display related accounts for account hygiene by the owner.

For businesses, the use case for displaying the last-signed-in-on-device activity is perhaps more relevant than showing related accounts but this may change quickly with the ability to switch accounts when shown on the login page.

#codingexercise
We were discussing the bridge and torch problem and how the participants can be either on the left or the right side of the bridge.
If we use a bitmask for their presence on one side of the bridge, we can quickly calculate the other side as follows.
To get the left mask value from the right mask we can use:
int GetLeftMask(int rightmask, int n) // n is the number of people
{
    // take 1s in all n positions of the number and xor it with the right mask
    return ((1 << n) - 1) ^ rightmask;
}

Monday, November 27, 2017

Wireless Access Point in base station mode, relay mode and remote mode: 
Wireless access points are ubiquitous in home and office. They are often called wireless routers and usually connect to a wired cable that lets the router connect to the internet. PCs, laptops, phones and iPads connect to it wirelessly using the protocols of the 802.11 family which enable mobility for the person wanting to access the internet.  
However, wireless routers have a limited range. When the devices can find a sufficient signal from the router, connectivity to the internet is a joy: we are happy to browse and stream data over the connection. When the connectivity is poor, there are a few options available, and we discuss these. The typical course of action is to buy an upgraded router – one preferably with a better antenna – and replace the existing router. This has had a remarkable impact in most usages, not only from the improved hardware but also from the improved protocols. The family of data protocols used with the wireless router to establish and maintain a wireless connection, also called Wi-Fi protocols (short for Wireless Fidelity), has undergone several iterations with improvements in data transmission rates, power management and so on. These Wi-Fi protocols were labeled alphabetically, with ‘b’, ‘g’ and ‘n’ becoming notable revisions. Together, this alphabet soup of protocols came out of the box and gave added power to the user. The range, however, does not extend automatically. 
The protocol however allows the wireless access points to work in one of three following modes: 
  1. As a base station to connect to the internet over a LAN cable or Ethernet. 
  2. As a relay base station to relay data between other base stations. 
  3. As a remote base station that allows clients to connect but passes the data to 2) or 1) for connectivity. 
While commercial devices allow the functionality of 1), and the protocols make extending the range with 2) and 3) technically feasible, users seldom leverage the ability of the access point to operate in relay or remote mode. Wireless companies don’t make it any easier to leverage these functionalities. On the other hand, they sell separate devices for those with large homes and call them wireless extenders. These wireless extenders are not only sold separately, they are even bundled with signal amplifiers and traffic snooping capabilities. A dedicated wireless repeater is also sold separately. This contains two wireless routers, where one of them picks up the existing Wi-Fi network and then transfers the signal to the other router with boosted signals. This technique of joining one network with another is called bridging, and the definition is expanded to include cases where one of the networks is wired. Bridging can even be done on networks that share similar infrastructure. Wired Ethernet cables are not the only ones that can conduct network traffic: even the electrical circuits of the house can be reused to create a link from the Wi-Fi router to your device, as for example with a Powerline Ethernet kit. While extenders and repeaters improve coverage, they still load the existing main base station, which reduces speed in some cases. Consequently, bridging is favored over extenders. If we want to convert an existing older-model router to bridge or repeat, we are possibly out of luck even with reconfiguration of the device. Perhaps the devices of tomorrow can be made more open to begin with in their corresponding areas of operation. 

Sunday, November 26, 2017

#codingexercise
We were discussing a sample problem of crossing the bridge.
There are 4 persons (A, B, C and D) who want to cross a bridge in night.
A takes 1 minute to cross the bridge.
B takes 2 minutes to cross the bridge.
C takes 5 minutes to cross the bridge.
D takes 8 minutes to cross the bridge.
There is only one torch with them and the bridge cannot be crossed without the torch. There cannot be more than two persons on the bridge at any time, and when two people cross the bridge together, they must move at the slower person’s pace.
Can they all cross the bridge in 15 minutes ?
Solution: A and B cross the bridge. A comes back. Time taken 3 minutes. Now B is on the other side.
C and D cross the bridge. B comes back. Time taken 8 + 2 minutes. Now C and D are on the other side.
A and B cross the bridge. Time taken is 2 minutes. All are on the other side.
Total time spent is 3 + 10 + 2 = 15 minutes.
Next we wanted to generalize this.
The combination chosen works for this example by observing the tradeoff between keeping at least one fast member available on the right to come back and pairing the slow members on the left to cross together so that their times are not counted individually.
We noted that they have overlapping subproblems.
We have left and right sides. The number of people on the left side can vary between 0 and a large number. The next move can either be from the left side to the right side or vice versa. Therefore we can maintain a dynamic programming table with that many rows and two columns. At any time, this table stores the minimum time it takes for that many people on the left side given that move, so we need not recalculate it. Also, we don't just store the number of people; we actually store a bitmask so as to give the positions of the people present on the left side. With this we can immediately know who is present on the right side. Given that any one of the n people can make the move, we pick the one that yields the minimum time on recursion. We evaluate this for both moves separately, since two can go in one direction and only one on return. We try every pair, and given that we have already exhausted the cases for indices less than the current iteration i from 0 to n-1, we try pairing with indices between i+1 and n-1. Finally, we return the minimum time.

To get the right mask value we can use:
int GetRightMask(int leftmask, int n) // n is the number of people
{
    // take 1s in all n positions of the number and xor it with the left mask
    return ((1 << n) - 1) ^ leftmask;
}
For the return path we are only selecting one person. We can find this one by iterating over those on the right side to find the recursive minimum time. For the forward path, we have to pick a pair. The pair can be formed by selecting any one of the n people together with any candidate whose index is greater than i but less than n.
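The whole search described above can be sketched end to end with memoization over a (bitmask, torch side) state. This is our own Python sketch (names like min_total_time are ours); it searches the same state space as the table described, sending a pair forward and one person back:

```python
from functools import lru_cache

times = [1, 2, 5, 8]          # crossing times for A, B, C, D
n = len(times)
ALL_ON_LEFT = (1 << n) - 1

@lru_cache(maxsize=None)
def min_total_time(left_mask, torch_on_left):
    # left_mask has bit i set when person i is still on the left bank
    if left_mask == 0:
        return 0
    if torch_on_left:
        people = [i for i in range(n) if left_mask & (1 << i)]
        if len(people) == 1:
            return times[people[0]]   # last person crosses alone
        best = float("inf")
        # send a pair across; they move at the slower person's pace
        for i in range(len(people)):
            for j in range(i + 1, len(people)):
                a, b = people[i], people[j]
                crossed = left_mask & ~(1 << a) & ~(1 << b)
                best = min(best, max(times[a], times[b]) + min_total_time(crossed, False))
        return best
    # torch is on the right: one person brings it back
    best = float("inf")
    for i in range(n):
        if not (left_mask & (1 << i)):
            best = min(best, times[i] + min_total_time(left_mask | (1 << i), True))
    return best
```

For the four crossing times in the example, min_total_time(ALL_ON_LEFT, True) recovers the 15-minute answer. Restricting forward moves to pairs (a lone crossing only when one person remains) keeps the recursion finite without losing any optimal schedule, since sending one person forward while others wait never helps.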

Saturday, November 25, 2017

Today we start talking about correlation instead of regression. We saw that one of the best advantages of a linear regression is prediction with respect to time as an independent variable. When the data points have many factors contributing to their occurrence, a linear regression gives an immediate ability to predict where the next occurrence may fall. This is far easier than coming up with a model that is a good fit for all the data points. It gives an indication of the trend, which is generally more helpful than the data points themselves. Also, a scatter plot shows the change in only one dependent variable in conjunction with the independent variable, which lets us pick the dimension we fit the linear regression to independently of the others. Lastly, the linear regression also gives an indication of how closely the data adheres to the trend via the estimation of errors.
We also saw how model parameters for linear regressions are computed. We saw how the best values for the model parameters can be determined from the
The correlation coefficient describes the strength of the association between two variables. If the two variables increase together, the correlation coefficient tends to +1. If one decreases as the other increases, the correlation coefficient tends to -1. If they are not related to one another, the correlation coefficient stays near zero. In addition, the correlation coefficient can be related to the results of the regression. This is helpful because we now find a correlation not between parameters but between our notions of cause and effect. It also lets us use correlation between any x and y that are not necessarily independent and dependent variables. This follows from the fact that the correlation coefficient (denoted by r) is symmetric in x and y, which differentiates the coefficient from the regression.
Non-linear equations can also be "linearized" by selecting a suitable change of variables. This is quite popular because it makes the analysis simpler, but reducing the dimensions tends to distort the error structure. It is an oversimplification of the model: it violates key assumptions and impacts the resulting parameter values. All of this contributes toward incorrect predictions and is best avoided. Non-linear least squares analysis has well-defined techniques that are not too difficult with computing, so it is better to perform non-linear least squares analysis when dealing with non-linear inverse models.
#codingexercise
There are 4 persons (A, B, C and D) who want to cross a bridge in night.
A takes 1 minute to cross the bridge.
B takes 2 minutes to cross the bridge.
C takes 5 minutes to cross the bridge.
D takes 8 minutes to cross the bridge.
There is only one torch with them and the bridge cannot be crossed without the torch. There cannot be more than two persons on the bridge at any time, and when two people cross the bridge together, they must move at the slower person’s pace.
Can they all cross the bridge in 15 minutes ?
Solution: A and B cross the bridge. A comes back. Time taken 3 minutes. Now B is on the other side.
C and D cross the bridge. B comes back. Time taken 8 + 2 minutes. Now C and D are on the other side.
A and B cross the bridge. Time taken is 2 minutes. All are on the other side.
Total time spent is 3 + 10 + 2 = 15 minutes.
The combination chosen works for this example by observing the tradeoff between keeping at least one fast member available on the right to come back and pairing the slow members on the left to cross together so that their times are not counted individually.
However, for the general case we need to try all combinations, and these have overlapping subproblems.

Friday, November 24, 2017

We were talking about linear regression. One of the best advantages of a linear regression is prediction with respect to time as an independent variable. When the data points have many factors contributing to their occurrence, a linear regression gives an immediate ability to predict where the next occurrence may fall. This is far easier than coming up with a model that is a good fit for all the data points. It gives an indication of the trend, which is generally more helpful than the data points themselves. Also, a scatter plot shows the change in only one dependent variable in conjunction with the independent variable, which lets us pick the dimension we fit the linear regression to independently of the others. Lastly, the linear regression also gives an indication of how closely the data adheres to the trend via the estimation of errors.
Non-linear equations can also be "linearized" by selecting a suitable change of variables. This is quite popular because it makes the analysis simpler, but reducing the dimensions tends to distort the error structure. It is an oversimplification of the model: it violates key assumptions and impacts the resulting parameter values. All of this contributes toward incorrect predictions and is best avoided. Non-linear least squares analysis has well-defined techniques that are not too difficult with computing, so it is better to perform non-linear least squares analysis when dealing with non-linear inverse models.
#codingexercise
double maximizeScalarProductWhenSwappingIsAllowed(List<int> A, List<int> B)
{
    assert (A.Count == B.Count);
    // by the rearrangement inequality, pairing both lists in the same
    // sorted order maximizes the scalar product
    A.Sort();
    B.Sort();
    A.Reverse();
    B.Reverse();
    return GetScalarProduct(A, B);
}

double GetScalarProduct(List<int> A, List<int> B)
{
    double result = 0;
    for (int i = 0; i < A.Count; i++)
    {
        result += A[i] * B[i];
    }
    return result;
}

Thursday, November 23, 2017

We continue our discussion on inverse modeling to represent the system. An inverse model is a mathematical model that fits experimental data. It aims to provide a best fit to the data. We use the least squares error minimization to fit the data points. Minimizing Chi Square requires that we evaluate the model based on the parameters. One way to do this is to find where the derivatives with regard to the parameters are zero.  This results in a general set of non-linear equations.
We talk about linear regression analysis next. This kind of analysis fits a line to a scatter plot of data points. The same least squares error discussed earlier also helps center the line on the data, which we call a good fit. The model parameters are adjusted to determine this fit by minimizing error.
To determine the best parameters for the slope and intercept of the line, we calculate the partial derivatives with respect to them and set them to zero. This yields two equations to be solved for two unknowns. The standard error of the estimate quantifies the standard deviation of the data at a given value of the independent variable. Standard errors of the slope and intercept can be used to place confidence intervals.
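Solving those two equations gives the familiar closed forms: slope = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2) and intercept = mean_y - slope * mean_x. A minimal sketch (our own helper and illustrative data):

```python
def fit_line(xs, ys):
    # slope and intercept from the normal equations of least squares
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# points lying exactly on y = 2x + 1 recover slope 2 and intercept 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

With noisy data the fitted line minimizes the sum of squared vertical residuals, which is the error being minimized throughout this discussion.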
One of the best advantages of a linear regression is prediction with respect to time as an independent variable. When the data points have many factors contributing to their occurrence, a linear regression gives an immediate ability to predict where the next occurrence may fall. This is far easier than coming up with a model that is a good fit for all the data points. It gives an indication of the trend, which is generally more helpful than the data points themselves. Also, a scatter plot shows the change in only one dependent variable in conjunction with the independent variable, which lets us pick the dimension we fit the linear regression to independently of the others.
Lastly, the linear regression also gives an indication of how closely the data adheres to the trend via the estimation of errors.

A MagicValue X is defined as the largest X that satisfies the inequality KX <= Sum over I from M to N of Factorial(I) x Fibonacci(I).
Given M, N, and K, approximate X.
        static double Fib(int x)
        {
           if (x <= 1) return 1;
           return Fib(x-1) + Fib(x-2);
        }
        static double Fact(int x)
        {
            double result = 1;
            while (x > 0)
            {
                result *= x;
                x--;
            }
            return result;
        }
        static int GetMagicValue(int N, int M, int K)
        {
            double P = 0;
            // accumulate Factorial(I) x Fibonacci(I) for I from M to N
            for (int k = M; k <= N; k++)
            {
                double fib = Fib(k);
                double fact = Fact(k);
                P += fib * fact;
            }
            // the largest X with K*X <= P is floor(P / K)
            double result = P / K;
            return Convert.ToInt32(Math.Floor(result));
        }

Wednesday, November 22, 2017

We were discussing Sierpinski triangles earlier. An equilateral white triangle is split into four equilateral sub-triangles and the one at the center is colored red. This process is repeated for every remaining white triangle in each iteration. You are given an integer m for the number of lines that follow, and on each of those lines an integer n for the number of iterations for which we want an answer.  
What is the total number of triangles after each iteration

We said the recursive solution is as follows:
        static double GetRecursive(int n) 
        { 
         if (n == 0) return 1; 
         return GetRecursive(n-1) + GetRecursive(n-1) + GetRecursive(n-1) 
                    + 1 // for the different colored sub-triangle 
                    + 1; // for the starting triangle
        } 

The same problem can be visualized as one where the previous step's triangle becomes one of the three sub-triangles in the next step.
In this case, we have

    double GetCountRepeated(int n) 
   { 
              double result = 1; 
              for (int i = 0; i < n; i++) 
              { 
                    result = 3 * result 
                                + 1 // for inner triangle 
                                + 1; // for outer triangle
              } 
              return result; 
   }
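The recurrence behind both versions, f(n) = 3·f(n-1) + 2 with f(0) = 1, also has a closed form, f(n) = 2·3^n - 1, which is easy to check against the iterative count. A sketch in Python (our own names, mirroring the C# above):

```python
def triangle_count_iterative(n):
    # mirrors GetCountRepeated: result = 3 * result + 2 per iteration
    result = 1
    for _ in range(n):
        result = 3 * result + 2
    return result

def triangle_count_closed_form(n):
    # solving the recurrence f(n) = 3 f(n-1) + 2, f(0) = 1
    return 2 * 3 ** n - 1
```

For n = 1 both give 5: the four sub-triangles plus the original outer triangle.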