Saturday, November 8, 2014

In this blog post, I discuss a few implementations of authentication for web applications, particularly those written in Python. Several are available for NodeJs, and many companies already use them in their applications, often relying on OAuth and SSO. I haven't seen CAS used in companies, but I have seen it in educational institutions. Today I want to quickly review some of the design behind the registration packages for django. There are quite a few available, namely:
django-social-auth, which makes social authentication simpler;
Pinax, a collection of reusable django apps that is popular for building websites;
django-allauth, an integrated set of applications addressing authentication, registration, and account management, as well as third-party social accounts;
django-userena, which makes user accounts simpler;
django-social-registration, which combines OpenID, OAuth, and Facebook Connect;
django-registration, which is probably the most widely used registration framework;
django-email-registration, which claims to be very simple to use; and other such packages.
These implementations essentially facilitate user account registration via templated views backed by a database or another membership-provider backend.

There are other implementations as well, such as EngineAuth, SimpleAuth, and AppEngine-OAuth-Library. EngineAuth performs multi-provider authentication and saves the user id to a cookie. SimpleAuth supports OAuth and OpenID. AppEngine-OAuth-Library provides user authentication against third-party websites.


However, one of the things I'm looking for is a NodeJs-style implementation that treats providers as strategies. If we look at the passport implementation, for example, I like that we can easily swap the strategy to target the provider of choice, and the interface makes this quite clear.
You use methods like:
app.get('/login', function(req, res, next) {
  passport.authenticate('AuthBackendOfChoice', function(err, user, info) {
    // ...
  })(req, res, next);
});
You also use methods like the following:
var passport = require('passport'),
    OAuthStrategy = require('passport-oauth').OAuthStrategy;

passport.use('provider', new OAuthStrategy({
    requestTokenURL: 'https://www.provider.com/oauth/request_token',
    accessTokenURL: 'https://www.provider.com/oauth/access_token',
    userAuthorizationURL: 'https://www.provider.com/oauth/authorize',
    consumerKey: '123-456-789',
    consumerSecret: 'shhh-its-a-secret',
    callbackURL: 'https://www.example.com/auth/provider/callback'
  },
  function(token, tokenSecret, profile, done) {
    User.findOrCreate(..., function(err, user) {
      done(err, user);
    });
  }
));
I haven't found a django-passport implementation in the repositories, or for that matter any python-passport implementation, but Netor Technologies has a post about something by the same name that is an interesting read.
For example, they create tables for application_info and user_info. The application_info table is similar to the client in the OAuth protocol, in that it keeps track of the registered applications. The user_info table keeps track of usernames and passwords, and user_applications is the mapping between users and applications.
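To make the schema concrete, here is a minimal sketch of those three tables as django models; the model and field names are my own guesses, not the actual schema from the Netor post:

from django.db import models

class ApplicationInfo(models.Model):
    # analogous to the OAuth client: one row per registered application
    name = models.CharField(max_length=100, unique=True)

class UserInfo(models.Model):
    # stores the username along with the salted password hash
    username = models.CharField(max_length=100, unique=True)
    password_hash = models.CharField(max_length=128)
    password_salt = models.CharField(max_length=32)

class UserApplication(models.Model):
    # the mapping between users and applications
    user = models.ForeignKey(UserInfo, on_delete=models.CASCADE)
    application = models.ForeignKey(ApplicationInfo, on_delete=models.CASCADE)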
Authentication is handled using a challenge-response scheme. The server responds with the user's password salt along with a newly generated challenge salt and a challenge id. The client sends back the hash resulting from hash(hash(password + salt) + challenge). These are read by the server and deleted after use; there's no need to keep them.
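As a rough Python sketch of that exchange (the post doesn't specify a hash function; SHA-256 and hex digests are my assumptions):

import hashlib
import os

def h(s):
    # hex digest of a string
    return hashlib.sha256(s.encode('utf-8')).hexdigest()

# server side: stores h(password + salt), issues a fresh random challenge per login
challenge = os.urandom(16).hex()

# client side: proves knowledge of the password without sending it
def client_response(password, salt, challenge):
    return h(h(password + salt) + challenge)

# server side: computes the same value from the stored hash and compares
def server_expected(stored_password_hash, challenge):
    return h(stored_password_hash + challenge)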
The code for creating a user looks like this:
def create(store, user, application=None):
    if application is None:
        application = Application.findByName(unicode('passport'))
    result = UserLogin.findExisting(user.userid, application.applicationid)
    if result is None:
        result = store.add(UserLogin(user, application))
        store.commit()
    return result
and the authentication methods are handled in the controllers: the BaseController has the method to get the user login, and the ServiceController has the method to authenticate via a challenge.
This seems like a clean example of doing basic registration of user accounts and integrating it with an application.
However, I'm wondering why we don't have the passport library ported to Python yet.
The implementation would be relatively easy.
First, we would define classes for Strategy and Passport. Next, we would implement the authenticate method and dispatch to the appropriate strategy. An out-of-box SessionStrategy could also be provided. If we look at the authenticate method, we issue a challenge and attempt to validate against each of the strategies until none are left. In fact, the passport framework implements just the initialize and authenticate methods.
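Here is a speculative sketch of what such a port might look like; this is not an existing library, just the shape of the classes described above:

class Strategy:
    def authenticate(self, request):
        raise NotImplementedError  # each provider-specific strategy overrides this

class SessionStrategy(Strategy):
    # out-of-box strategy that trusts an already-established session
    def authenticate(self, request):
        return request.session.get('user')

class Passport:
    def __init__(self):
        self._strategies = {}

    def use(self, name, strategy):
        self._strategies[name] = strategy

    def initialize(self):
        self.use('session', SessionStrategy())

    def authenticate(self, names, request):
        # try each named strategy until one yields a user or none are left
        for name in names:
            user = self._strategies[name].authenticate(request)
            if user is not None:
                return user
        return None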
I've listed EngineAuth, SimpleAuth, and AppEngine-OAuth above, but none of them offer this kind of strategy-based interface.

Friday, November 7, 2014

We now look at the Gram-Schmidt conjugation technique, which gives a simple way to generate a set of A-orthogonal search directions d(i). We begin with a set of n linearly independent vectors u(i), which could be the coordinate axes or something better. To construct d(i), we take u(i) and subtract out any components that are not A-orthogonal to the previous d vectors. Consider two linearly independent vectors u0 and u1. Set d0 = u0. The vector u1 is composed of two components: u+, along the direction of d0, and u*, which is A-orthogonal to d0. After conjugation, only the A-orthogonal portion remains, and d1 = u*. At the next step we have two A-orthogonal vectors d0 and d1, and we construct the new direction from u2 by finding its components along each of the A-orthogonal vectors found so far and subtracting them out. Each component is some length in the direction of a previously found vector, determined the same way we determined the step length earlier for a pair of A-orthogonal vectors. After conjugation, only the A-orthogonal portion remains to be included in our set. We repeat until we have generated a full set of n A-orthogonal vectors.

import numpy as np

def gram_schmidt(set_u, A):
    # conjugate each u(i) against the previously built A-orthogonal directions
    set_d = []
    for i in range(len(set_u)):
        u = np.asarray(set_u[i], dtype=float)
        d = u.copy()
        for k in range(i):
            dk = set_d[k]
            beta = -(u @ A @ dk) / (dk @ A @ dk)  # -(u^T A dk) / (dk^T A dk)
            d = d + beta * dk
        set_d.append(d)
    return set_d
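As a quick check, conjugating the coordinate axes against a small symmetric positive-definite matrix of my choosing:

A = np.array([[3.0, 2.0], [2.0, 6.0]])
d0, d1 = gram_schmidt([np.array([1.0, 0.0]), np.array([0.0, 1.0])], A)
print(d0 @ A @ d1)   # ~0.0, so d0 and d1 are A-orthogonal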


Let us take a closer look at the conjugate directions method. If the search vectors are constructed by conjugating the axial unit vectors as above, the method becomes equivalent to performing Gaussian elimination. If we took the orthogonal directions method with the same axial vectors and stretched the space as described earlier, it would look very much like conjugate directions.

def gaussian_elimination(A, b, n):
    # forward elimination: reduce A to an upper triangular matrix,
    # applying the same row operations to b (equivalent to augmenting A with b)
    for j in range(n - 1):                      # for each pivot column
        # pick a row at or below the diagonal with a nonzero entry in column j
        rows = [r for r in range(j, n) if A[r][j] != 0]
        if len(rows) == 0:
            raise ValueError('bad input: singular matrix')
        row = rows[0]
        A[j], A[row] = A[row], A[j]             # swap the pivot row into place
        b[j], b[row] = b[row], b[j]
        for i in range(j + 1, n):               # eliminate entries below the pivot
            if A[i][j] != 0:
                ratio = A[i][j] / A[j][j]
                for k in range(j, n):           # don't forget the augmented column b
                    A[i][k] = A[i][k] - ratio * A[j][k]
                b[i] = b[i] - ratio * b[j]
    # back-substitution: we now have an upper triangular matrix, so
    # substitute the value from the last row into the rows above, solving upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][k] * x[k] for k in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
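A quick usage sketch on a small system of my choosing (3x + 2y = 2 and 2x + 6y = -8):

A = [[3.0, 2.0], [2.0, 6.0]]
b = [2.0, -8.0]
print(gaussian_elimination(A, b, 2))   # prints [2.0, -2.0]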
                 


Thursday, November 6, 2014

#coding exercise
#include <stdbool.h>
#include <stddef.h>

// Note: unlike the standard library memcmp, this returns true when the
// first n bytes of the two buffers are equal.
bool memcmp(const char* src, const char* dest, size_t n)
{
    if (!src || !dest || n == 0) return false; // size_t is unsigned, so test n == 0
    while (n)
    {
        if (*dest != *src) return false;
        dest++;
        src++;
        n--;
    }
    return true;
}

We will continue our discussion of the steepest descent method and conjugate directions. The steepest descent method often finds itself taking steps in the same direction as earlier steps. It would have been better if the steps had been taken right the first time. How do we correct this? We pick a set of orthogonal search directions and take exactly one step along each, with just the right length, so that after a fixed number of steps we are done. This is what we will discuss in the conjugate directions method.

As an example, we can use the coordinate axes as search directions. The first step leads to the correct x-coordinate, and the second step is the vertical step to reach the center. For each step we choose a point that is a step length along the direction d(i). We use the condition that e(i+1) is orthogonal to d(i), so that we never need to step in the direction of d(i) again. Using this we want to determine the step length, but we have a catch-22: we don't know the step length without knowing e(i), and if we knew e(i), we wouldn't have to compute at all. To overcome this, we don't use orthogonal search directions but instead A-orthogonal, or conjugate, directions. To picture conjugate directions, imagine stretching the space so that the concentric ellipses of the quadratic form become concentric circles. Our new requirement is that e(i+1) be A-orthogonal to d(i). The benefit of this new orthogonality condition is that it is equivalent to finding the minimum point along the search direction d(i), as in steepest descent; we can prove this by setting the directional derivative to zero. Now consider the step length when the search directions are A-orthogonal. It can be calculated the same way we derived it for steepest descent, and it comes out expressed in terms of the residual. In fact, if the search vector were the residual, this step length would be the same as in steepest descent.
While the method of orthogonal directions works only when we already know the final destination, the method of conjugate directions converges in n iterations. The first step is taken along some direction d(0), and the minimum point x(1) is chosen by the constraint that e(1) must be A-orthogonal to d(0). The initial error can be expressed as a sum of A-orthogonal components, and each step of conjugate directions eliminates one of these components.
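Here is a minimal numpy sketch of the method as just described, assuming A is symmetric positive-definite and that directions is a full set of n A-orthogonal vectors (for instance, from the Gram-Schmidt conjugation above):

import numpy as np

def conjugate_directions(A, b, x, directions):
    # one pass over a full set of n A-orthogonal directions converges exactly
    for d in directions:
        r = b - A @ x                    # residual at the current iterate
        alpha = (d @ r) / (d @ A @ d)    # the step length derived above
        x = x + alpha * d                # eliminates one A-orthogonal error component
    return x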

Wednesday, November 5, 2014

Today we discuss a new idea for an application. This application provides a personal box for you to use online. You can store digital content or share it. The box ages and collects the items so that you don't have to bother about cleaning up. Moreover, you can mark items to be burned after some time. You can also safely and securely share items with others using tiny urls or a variety of data formats, knowing that the share will be temporary. The idea is to bring the garbage collector to you. However, it provides very specific functionality and may need to be assessed for appeal and business value. More on this perhaps later.

Tuesday, November 4, 2014

#codingexercise matrix addition
int[,] addition(int[,] left, int[,] right)
{
    if (left == null || right == null) return null;
    int lr = left.GetLength(0);   // rows of left
    int lc = left.GetLength(1);   // columns of left
    int rr = right.GetLength(0);  // rows of right
    int rc = right.GetLength(1);  // columns of right
    if (lr != rr || lc != rc) return null;
    var ret = new int[lr, lc];
    for (int r = 0; r < lr; r++)
    {
        for (int c = 0; c < lc; c++)
        {
            ret[r, c] = left[r, c] + right[r, c];
        }
    }
    return ret;
}
#codingexercise matrix subtraction

int[,] Subtraction(int[,] left, int[,] right)
{
    if (left == null || right == null) return null;
    int lr = left.GetLength(0);   // rows of left
    int lc = left.GetLength(1);   // columns of left
    int rr = right.GetLength(0);  // rows of right
    int rc = right.GetLength(1);  // columns of right
    if (lr != rr || lc != rc) return null;
    var ret = new int[lr, lc];
    for (int r = 0; r < lr; r++)
    {
        for (int c = 0; c < lc; c++)
        {
            ret[r, c] = left[r, c] - right[r, c];
        }
    }
    return ret;
}

In continuation of our discussion of the convergence of the steepest descent method, we will see that there can be instant convergence even when the error is a combination of eigenvectors. For a symmetric matrix A, there exists a set of n orthogonal eigenvectors of A. Since we can scale eigenvectors arbitrarily, each is chosen to be unit length, and the error term is expressed as a linear combination of these eigenvectors. The residual can then be expressed as the sum of the corresponding eigenvector components. We saw that when the error has only one eigenvector component, convergence is achieved in one step by choosing the step length as the inverse of the eigenvalue. Here, if all the components share a common eigenvalue, the same choice again leads to one-step convergence.

The lowest value of the function is at the minimum of the paraboloid.

Monday, November 3, 2014

#codingexercise
int[,] Transpose(int[,] matrix, int row, int col)
{
    var ret = new int[col, row];
    for (int i = 0; i < row; i++)
        for (int j = 0; j < col; j++)
        {
            ret[j, i] = matrix[i, j];
        }
    return ret;
}

// sanity check: transposing twice gives back the original matrix, element for element
Assert(matrix == Transpose(Transpose(matrix, row, col), col, row));

We will continue to look at the convergence of the steepest descent method. Let's first consider the case where e(i) is an eigenvector with eigenvalue lambda(e). Then the residual, which is A applied to the negated error, is also an eigenvector. Since the position at the next step is determined from the current position by a step length and a direction, we see that when the error is an eigenvector, choosing the step length as the inverse of the eigenvalue makes the next residual zero. This implies that it takes only one step to converge to the exact solution. Geometrically, the current position lies on one of the axes of the ellipsoid, so the residual points directly to the center of the ellipsoid, and this choice of step length gives us instant convergence.
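For reference, a minimal numpy sketch of the steepest descent iteration, assuming A is symmetric positive-definite; when the error is an eigenvector with eigenvalue lambda, the computed step length works out to 1/lambda and the loop exits after a single step:

import numpy as np

def steepest_descent(A, b, x, max_iterations=100, tol=1e-10):
    for _ in range(max_iterations):
        r = b - A @ x                    # the residual is the direction of steepest descent
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ A @ r)    # the step length that minimizes f along r
        x = x + alpha * r
    return x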

#codingexercise
Matrix multiplication
int[,] multiply(int[,] left, int[,] right)
{
    if (left == null || right == null) return null;
    int lr = left.GetLength(0);   // rows of left
    int lc = left.GetLength(1);   // columns of left
    int rr = right.GetLength(0);  // rows of right
    int rc = right.GetLength(1);  // columns of right
    if (lc != rr) return null;    // inner dimensions must agree
    var ret = new int[lr, rc];
    for (int r = 0; r < lr; r++)
    {
        for (int c = 0; c < rc; c++)
        {
            for (int k = 0; k < lc; k++)
                ret[r, c] += left[r, k] * right[k, c];
        }
    }
    return ret;
}

Sunday, November 2, 2014

We resume our discussion on matrix operations. In this post, we will pick up the Jacobi iteration. The Jacobi method solves the same linear equation as before. The matrix A is split into two parts: D, whose diagonal elements are identical to those of A and whose off-diagonal elements are zero, and E, whose diagonal elements are zero and whose off-diagonal elements are identical to those of A. Thus A = D + E. The usefulness of D is that it is a diagonal matrix and so it is trivially inverted.
As compared to the steepest descent method, this does not have to search for a step length that drives a value to zero. Using the linear equation and the split of the matrix into its diagonal and off-diagonal parts, we can now write the equation as an iteration where the next step is calculated as a constant matrix times the previous step plus a constant vector. Each iteration leads closer to the final solution, which is called a stationary point because the next iteration at the solution is the same as the previous step. Let us take a closer look at what each iteration does. If we express each iterate (current step) as the sum of the exact solution and an error term, the equation becomes a simple transformation of the previous error term. In other words, each iteration does not affect the correct part of the iterate but only the error term as the iterations converge to the stationary point. Hence the initial vector x0 has no effect on the inevitable outcome, though it does affect the number of iterations required to converge to a given tolerance.
We will next discuss the spectral radius, which determines the speed of convergence of this method. Suppose that vj is the eigenvector of B with the largest eigenvalue in magnitude; the spectral radius of B is this largest eigenvalue. Now the error can be expressed as a linear combination of eigenvectors, one of which is in the direction of vj, and this component will be the slowest to converge, since each iteration scales every eigenvector component of the error by its eigenvalue. The rate of convergence depends on this spectral radius, which in turn depends on A. Unfortunately, the technique does not converge for every A. However, this method illustrates the usefulness of eigenvectors: the eigenvector components of each successive error term determine the path of convergence, each converging at the rate defined by its eigenvalue.
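Here is a minimal numpy sketch of the split and the iteration, with the spectral radius computed as a convergence check (the iteration converges only when it is less than 1):

import numpy as np

def jacobi(A, b, x, iterations=100):
    D = np.diag(np.diag(A))                  # diagonal part of A
    E = A - D                                # off-diagonal part, so A = D + E
    B = -np.linalg.inv(D) @ E                # iteration matrix
    z = np.linalg.inv(D) @ b
    rho = np.max(np.abs(np.linalg.eigvals(B)))  # spectral radius of B
    for _ in range(iterations):
        x = B @ x + z                        # only the error component of x changes
    return x, rho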
#coding exercise
Given a matrix like the ones in the coding exercises described earlier, where each row is sorted, print the elements in one-dimensional sorted order.
List<int> GetSortedElements(int[,] matrix, int row, int col)
{
    // assuming parameter validation, and that each row is sorted ascending
    var ret = new List<int>();
    var front = new List<int>(); // next unread column index for each row
    for (int k = 0; k < row; k++) { front.Add(0); } // start at the first column of every row
    while (ret.Count < row * col)
    {
        // extract the minimum among the current front elements of all rows
        var candidates = new List<int>();
        for (int k = 0; k < row; k++)
            candidates.Add(front[k] < col ? matrix[k, front[k]] : int.MaxValue);
        int min = candidates.Min(); // requires System.Linq
        int i = candidates.IndexOf(min);
        front[i] = front[i] + 1; // advance the front of the winning row
        ret.Add(min);
    }
    return ret;
}
Sample input (each row sorted):
23 34 45 46 68 89
27 35 47 69 97 110
32 46 48 65 98 112