Saturday, March 31, 2018


Driver-Rider Real-Time Monitoring
Introduction:
Problem Statement: When parents carpool for their kids' daily commutes, the location of the vehicle is generally unknown to others for the duration of the ride. Map-based mobile applications may enable driver-rider location sharing, but their real-time notifications are extremely expensive for both the device in the moving vehicle and the handheld of the stationary observer. This document attempts to provide an efficient solution for elastic scaling of observers and rides that is not only performant but also convenient, requiring nothing more than a browser interface for both the publisher and the subscriber.


Differentiation:
Google Maps and Uber/Lyft applications enable location sharing, but they heat up the mobile device when turned on. Moreover, neither application allows an observer to follow more than one ride or participant at a time.
Design:
Without the use of native mobile applications, a browser-based application that queries the local mobile operating system's location-sharing primitives is sufficient to publish data to a cloud-based service.
A message-queue broker in the cloud handles the exchange of messages required to share location information between a publisher and an elastic group of observers. A global cloud database keeps track of all the information regarding rides and observers.
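The broker-mediated fan-out can be sketched in process with Python's standard queue module. This is purely illustrative: the actual design relies on a cloud message-queue service, and the message fields (ride_id, lat, lon) are assumptions made here for the example.

```python
from queue import Queue

def publish(update, observer_queues):
    # In-process stand-in for the cloud message broker: each observer
    # subscribes with a queue, and the publisher fans a location
    # update out to all of them.
    for q in observer_queues:
        q.put(update)

observers = [Queue() for _ in range(3)]
publish({"ride_id": "r1", "lat": 47.61, "lon": -122.33}, observers)
received = [q.get() for q in observers]
```

Because the observers are decoupled behind queues, the set of observers can grow or shrink without any change to the publisher, which is what makes the design elastic.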
Architecture:
Standard cloud-based technologies are sufficient for this purpose, and the web interface for browser display can be based on server-side page rendering.
Performance:
The use of cloud-based technologies such as a queue service and a cloud database is sufficient to meet the performance requirements. Standalone, superfast Erlang-based applications are not required.
Security:
Access control is based on row-level security in the database, enabling granular control over all necessary assets.
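The effect of row-level security can be illustrated with a small in-application predicate. In a real deployment this filter lives in the database as a per-row security policy; the rides/observers schema below is assumed purely for illustration.

```python
def visible_rides(rides, observer_id):
    # Row-level security idea: an observer sees only the rides they
    # are subscribed to. The database enforces this per row; this
    # function just illustrates the rule.
    return [r for r in rides if observer_id in r["observers"]]

rides = [
    {"id": "r1", "observers": {"alice", "bob"}},
    {"id": "r2", "observers": {"bob"}},
]
```

For example, `visible_rides(rides, "alice")` returns only the ride alice is subscribed to, without the application ever holding rows she should not see.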
Testing:
All server-side code, regardless of the tier involved, can be unit-tested and integration-tested with Selenium browser-based tests.
Conclusion:
A publisher-subscriber application for location sharing by mobile units can be applied to a variety of domains.

#codingexercise

Given a triangular arrangement of numbers, find the minimum sum path from top to bottom:

Solution: A valid path includes exactly one member per level, and the member chosen at each level must be adjacent to the one chosen above it, so greedily picking the minimum of each level does not generally produce a valid path.

The exhaustive recursive approach is:

int GetMinSumPathTopToBottom(int[,] A, int rows, int cols, int i, int j)
{
    if (i == rows) { return 0; }
    Debug.Assert(0 <= i && i < rows && 0 <= j && j < cols);
    // From (i, j) the path may only continue to the adjacent
    // members (i+1, j) and (i+1, j+1) on the next level.
    int left = GetMinSumPathTopToBottom(A, rows, cols, i + 1, j);
    int right = GetMinSumPathTopToBottom(A, rows, cols, i + 1, j + 1);
    return A[i, j] + Math.Min(left, right);
}
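The same problem also admits an O(n^2) bottom-up dynamic-programming solution; a minimal sketch in Python, where the triangle is given as a list of rows:

```python
def min_path_sum(triangle):
    # Fold the triangle upward: best[j] holds the minimum path sum
    # achievable from position j of the row below.
    best = list(triangle[-1])
    for row in reversed(triangle[:-1]):
        best = [v + min(best[j], best[j + 1]) for j, v in enumerate(row)]
    return best[0]
```

For example, `min_path_sum([[2], [3, 4], [6, 5, 7], [4, 1, 8, 3]])` follows the adjacent path 2, 3, 5, 1 for a total of 11.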

Friday, March 30, 2018

Today onwards we will start discussing Microsoft Dynamics AX from the Dynamics suite of products. Until now, we have talked about reporting solutions and dashboards from a variety of technology stacks such as Grafana, Splunk and SSRS, but it would be interesting to see such techniques applied to customer relationship management. Dynamics 365 is more than a personal address book for your customers; it provides a professional with the best tools for managing their data, updating records, and tracking status both online and offline.
Dynamics AX enables us to capture our expense transactions and receipt information. It also helps us create and submit timesheets. ERP software used to be expensive until Dynamics AX came along.
Some of the major benefits from AX include the following:
1) While most enterprise resource planning software was a traditional enterprise product, Dynamics AX is built with cloud services. This facilitates ubiquity and reachability from any device anywhere. Not only does this simplify production deployments for the product offerings, it also enables continuous availability and updates. Along with all the benefits of cloud computing such as elastic scaling, load balancing and regional availability, the services powering Dynamics AX also mean improved handling of data. Improvements in data handling were previously mentioned here: https://goo.gl/n4G2TU
2) Another advantage of Dynamics AX is the integrations it can perform via connectors, plugins and data sources. More data generally means better reports and more meaningful insight, not just from a statistics perspective but from several analysis techniques such as PowerBI.
3) Faster development and deployment - The entire software development life-cycle with off-the-shelf products such as AX now becomes far easier and continuous without any compromise in control, governance and compliance. This is a big win for Common Criteria certification.
4) Improved development environment - The development environment could not be any better than the Visual Studio and .NET Framework integration, along with the rich set of tools that come with that IDE. The entire Application Object Tree (AOT) is available to browse in Visual Studio. Development is in the X++ language, which is specific to accounting and business management systems. It is considered on par with managed languages such as C#.
5) Scoping - Earlier users were granted access via role centers. Dynamics AX uses the equivalent of namespaces called workspaces which enable the users to focus on the most important aspects of their tasks.
6) Web Interface - Cloud services power a rich user interface via the web browser, making it nearly ubiquitous to access and perform daily activities.

Thursday, March 29, 2018

Software application metric for aging
Introduction: 
The software development life cycle is often used to describe the ritual of planning, creating, testing and deploying an information system. As the cycle repeats, a lot of time is spent in maintenance. As the software ages, it spends more time in this stage. This essay tries to articulate a metric for determining the age of software and how to keep it fresh. 
Description: 
Software quality metrics, generally, fall in three categories. They are either: 
Product based – These capture the characteristics of the product such as size, complexity, design features, performance and quality level. 
Process based – These attempt to track the activities taken for development and maintenance of software. 
Project based – These attempt to describe the resources, timeline, cost, schedule and productivity associated with the software project.  
The metrics associated with software quality are more typically product based and process based rather than project based. Yet they do not indicate whether the software has matured – a term loosely used to describe the stage where the cost of fixing defects rises until the cost of maintenance activities is well over the cost of newly written software.  
Periodic refactoring and rewriting of modules and components alleviates the need for massive rewrites by replacing smaller chunks of the overall code. Most applications are well organized and written from the start to allow flexibility with little or only controlled change to the code. Yet a single method might become overwhelmingly complex over time with the addition of more and more branching logic. For example, a method to draw a shape might require different handling depending on the parameters passed to it. Consequently the code becomes so convoluted with handling these different cases that it is termed spaghetti code. A metric for branching is called cyclomatic complexity; it counts the number of linearly independent paths through the code – effectively how many decisions are taken before reaching a result. In some sense, metrics such as cyclomatic complexity determine the current state of the software, but there is no metric that keeps track of the progress of this complexity over time.  
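As a rough sketch, cyclomatic complexity can be approximated as one plus the number of decision points in a function. The node types counted below are a simplification of the full McCabe definition, shown here only to make the idea concrete:

```python
import ast

def cyclomatic_complexity(source):
    # Approximate McCabe complexity as 1 + number of decision points.
    # Counting these AST node types is a simplification; boolean
    # operators and comprehensions are ignored for brevity.
    decisions = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, decisions) for n in ast.walk(tree))
```

A function with one `if` and one `for` scores 3, while a straight-line function scores 1; watching that score climb release after release is exactly the kind of trend the text says current metrics fail to track.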
In this regard, a metric known as entropy can be used to indicate the level of maintenance involved in any module or organizational unit of software. A metric that progresses monotonically over time while remaining stable and beyond compromise becomes a great indicator for aging. With the help of vectors and features, software organizational units can be represented and classified with the same rigor as in a vector space model. A multidimensional metric involving multiple scalars then becomes a convenient indicator of the age of the software.  
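One concrete, and entirely hypothetical, instantiation of such an entropy metric is the Shannon entropy of how changes scatter across the units of a module over a period: zero when all changes stay localized to one unit, and maximal when maintenance touches everything equally.

```python
import math

def change_entropy(changes_per_unit):
    # Shannon entropy of the distribution of changes across the
    # organizational units of a module. 0 means all changes were
    # localized to one unit; log2(n) means fully scattered changes.
    total = sum(changes_per_unit.values())
    probs = [c / total for c in changes_per_unit.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Tracked release over release, a rising value suggests maintenance effort is spreading across the codebase rather than staying contained, which is one way to read "aging".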
Conclusion 
A software metric for aging continues to be a challenge, but advances in using the vector space model along with neural networks can help determine the pain points better, allowing the overall software to remain young. 


Wednesday, March 28, 2018

We were discussing the advantages of a JavaScript SDK over JSP pagelets. The separation of an SDK from an API only helps client-side development. This does not pose any disruption to the existing services and models of the service provider. The SDK forms a separate layer over and on top of the services so that the business clients can choose to use the existing services or develop newer modes of content display using the JavaScript SDK. Powerful jQuery plugins and those exported by the service provider provide an immense combination to not only offload the customization of display but also enhance integration into business workflows by replacing the existing interruption mechanisms with seamless workflows. The SDK need not be in JavaScript alone; other languages can be used, facilitating a broader ecosystem. Moreover, a command-line interface may also be built on the same REST APIs that power the SDK. Samples, or even exportable SDKs, can be developed by the service provider by consuming the same APIs.
The service Provider may also maintain its own user interface also powered by consuming the REST APIs and JavaScript SDK that it exports to its clients. In other words, the native user interface from the service provider as well as the self-customized interface from the client can exist side by side serving disjoint audience but maintained together as identity resources with the service provider. 
Moreover, with the adoption of cloud-based technologies, PaaS platforms and containers, the client-side technologies may be freed up and made popular with their developer community. This lets the service provider develop more and more widgets and consolidate the best practices that others may not want to invest in. Furthermore, the service provider may embrace partnerships with vendors for different workflows and interfaces while consolidating the server-side APIs across data types. 
To list the disadvantages of the JavaScript approach relative to JSP (that is, what server-side rendering provides), we can include:
1) single point maintenance facilitated by all server side code
2) consolidation and consistency in views and all customizations via parameters
3) tight control of client side displays and customizations
4) arguably improved security through less surface area.

Tuesday, March 27, 2018

Trade-offs between Javascript SDK and Java pagelets from a service provider:
Service providers can ship a Javascript SDK to improve customization and programmability.
This does not pose any disruption to the existing services and models of the service provider. The SDK forms a separate layer over and on top of the services so that the business clients can choose to use the existing services or develop newer modes of content display using the JavaScript SDK. Powerful jQuery plugins and those exported by the service provider provide an immense combination to not only offload the customization of  display but also enhance their integration into business workflows by replacing the existing interruption mechanisms with seamless business workflows. 
There is a tradeoff in exporting a JavaScript SDK from the service provider instead of the service provider providing the client-side display. It has to rely on the clients to send the data securely without compromise. This is generally difficult to do without an all-in approach. However, the way the service provider may pass the data between its services is similar to how an external service might send the credentials to the service provider. Therefore, these services and the UI can also be hosted as single-origin for the JavaScript SDK while the service provider is exclusively API based. 
The service Provider may also maintain its own user interface also powered by consuming the REST APIs and JavaScript SDK that it exports to its clients. In other words, the native user interface from the service provider as well as the self-customized interface from the client can exist side by side serving disjoint audience but maintained together as identity resources with the service provider.  
Is it safe for the data to be sent over HTTPS across several external networks? This question is not really solved by the pagelet technology from the service provider. That said, it's true that data can be compromised when transferred from network to network. It's also true that procuring and processing data only with server-side technology reduces surface area and client involvement. However, pagelet technologies do not decentralize the development of the interface or the technologies that are used to deploy them. Moreover, with the adoption of cloud-based technologies, PaaS platforms and containers, the client-side technologies may be freed up and made popular with their developer community. This lets the service provider develop more and more widgets and consolidate the best practices that others may not want to invest in. Furthermore, the identity provider may embrace partnerships with vendors for different workflows and interfaces while consolidating the server-side APIs across data types.  
To list the advantages of JavaScript over JSP, we can include:
1) writing and debugging via browser is easier 
2) no more compilation required 
3) performance at par or even better with modular and refactored code 
4) standard REST interface adoption 

5) JsUnit is available for unit testing, so existing testing practices are not lost with JavaScript. 
Login screen enhancement: https://1drv.ms/w/s!Ashlm-Nw-wnWtWiupMGBf_WbEJxS

Monday, March 26, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
We were discussing the Euclidean distance and the distance matrix. As we know with Euclidean distance, the chi-square measure, which is the sum of squares of errors, gives a good indication of how close the objects are to the mean. Therefore it is a measure for the goodness of fit. The principle is equally applicable to the embedding space. By using a notion of errors, we can make sure that the shapes and images embedded in the space do not violate the intra-member distances. The only variation the authors applied to this measure is the use of the Sammon error instead of the chi-square, because it encourages the preservation of the structure of local neighborhoods while embedding. The joint embedding space is a Euclidean space of lower dimension, while the shapes and the images are represented in the original high-dimensional space.
The Sammon error is a weighted sum of squared differences between the original pairwise distances and the embedding pairwise distances. Dissimilar shapes are far apart, and the differences at those large distances are weighted down.
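In symbols (writing d*_ij for the original pairwise distance between items i and j, and d_ij for their distance in the embedding), the Sammon error is commonly given as:

```latex
E \;=\; \frac{1}{\sum_{i<j} d^{*}_{ij}} \sum_{i<j} \frac{\left(d^{*}_{ij} - d_{ij}\right)^{2}}{d^{*}_{ij}}
```

Dividing each squared difference by d*_ij is what down-weights the large distances between dissimilar shapes and so preserves the structure of local neighborhoods.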
The embedding of shapes and images proceeds by minimizing the Sammon error using non-linear multi-dimensional scaling (MDS). MDS is a means of visualizing the level of similarity of individual cases in a dataset by placing the items in an N-dimensional space so as to preserve the intra-member distances.
We saw how the embedding space is created.  Mapping new shapes is slightly more effort. The space was originally constructed with a set of 3D shapes. They were jointly embedded. Introducing a new shape requires us to find an embedding point. The steps for this include:
First, a feature vector is computed.
Second, pairwise distances are computed.
Third, we minimize the Sammon error, but this time applying the Liu-Nocedal method, which is a large-scale optimization method that combines BFGS steps and conjugate direction steps. BFGS is an iterative method for solving unconstrained non-linear optimization problems.
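The placement step can be sketched numerically. Note the caveats: plain gradient descent stands in for the Liu-Nocedal (BFGS-style) optimizer here, numpy is assumed, and the function name is an invention for this illustration.

```python
import numpy as np

def embed_new_point(X, d_star, steps=5000, lr=0.02):
    # Place one new point into an existing embedding X (n x dim) so
    # that its distances to the embedded points approximate the
    # target distances d_star. Plain gradient descent on the
    # Sammon-style terms stands in for the Liu-Nocedal optimizer.
    y = X.mean(axis=0).copy()                 # start at the centroid
    for _ in range(steps):
        diff = y - X                          # shape (n, dim)
        d = np.linalg.norm(diff, axis=1) + 1e-12
        # gradient of sum_i (d_i - d*_i)^2 / d*_i with respect to y
        grad = (2 * (d - d_star) / (d_star * d))[:, None] * diff
        y -= lr * grad.mean(axis=0)
    return y
```

For instance, with X at the corners (0,0), (1,0), (0,1) and target distances (sqrt(2), 1, 1), the recovered point approaches (1,1), the only location consistent with those distances.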
The 3D shapes in the embedding space have abundant information to train the CNN and also to perform data generation. The shapes are represented as clean and complete meshes, which allows control and flexibility. Many images can be generated from the shapes using a rendering process. This is called image synthesis.  In the embedding space, a shape is mapped to a point. For each image, its association with a shape is automatically known.  The collection of images and shapes forms the training data for the CNN.
CNN models can approximate high-dimensional and non-linear functions; as we recall, the feature vector has a large number of attributes and the Sammon error minimization objective is a non-linear function. CNN can infer millions of parameters and therefore can be precise and informative once it is trained on a large amount of data. If the data is not proper, the CNN cannot learn enough latent information and the results overfit. When the images are generated with rich variation in lighting and viewpoint and superimposed on random backgrounds, the CNN has sufficient data. Approximately 1 million images are synthesized per category.

Sunday, March 25, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
We were discussing the Euclidean distance and the distance matrix. As we know with Euclidean distance, the chi-square measure, which is the sum of squares of errors, gives a good indication of how close the objects are to the mean. Therefore it is a measure for the goodness of fit. The principle is equally applicable to the embedding space. By using a notion of errors, we can make sure that the shapes and images embedded in the space do not violate the intra-member distances. The only variation the authors applied to this measure is the use of the Sammon error instead of the chi-square, because it encourages the preservation of the structure of local neighborhoods while embedding. The joint embedding space is a Euclidean space of lower dimension, while the shapes and the images are represented in the original high-dimensional space.
The Sammon error is a weighted sum of squared differences between the original pairwise distances and the embedding pairwise distances. Dissimilar shapes are far apart, and the differences at those large distances are weighted down.
The embedding of shapes and images proceeds by minimizing the Sammon error using non-linear multi-dimensional scaling (MDS). MDS is a means of visualizing the level of similarity of individual cases in a dataset by placing the items in an N-dimensional space so as to preserve the intra-member distances.

We saw how the embedding space is created.  Mapping new shapes is slightly more effort. The space was originally constructed with a set of 3D shapes. They were jointly embedded. Introducing a new shape requires us to find an embedding point. The steps for this include:
First, a feature vector is computed.
Second, pairwise distances are computed.
Third, we minimize the Sammon error, but this time applying the Liu-Nocedal method, which is a large-scale optimization method that combines BFGS steps and conjugate direction steps. BFGS is an iterative method for solving unconstrained non-linear optimization problems.

#proposal for login screens ui enhancements:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWrw5g03hYU5CBKL

Saturday, March 24, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
We were discussing the Euclidean distance and the distance matrix. As we know with Euclidean distance, the chi-square measure, which is the sum of squares of errors, gives a good indication of how close the objects are to the mean. Therefore it is a measure for the goodness of fit. The principle is equally applicable to the embedding space. By using a notion of errors, we can make sure that the shapes and images embedded in the space do not violate the intra-member distances. The only variation the authors applied to this measure is the use of the Sammon error instead of the chi-square, because it encourages the preservation of the structure of local neighborhoods while embedding. The joint embedding space is a Euclidean space of lower dimension, while the shapes and the images are represented in the original high-dimensional space.
The Sammon error is a weighted sum of squared differences between the original pairwise distances and the embedding pairwise distances. Dissimilar shapes are far apart, and the differences at those large distances are weighted down.
The embedding of shapes and images proceeds by minimizing the Sammon error using non-linear multi-dimensional scaling (MDS). MDS is a means of visualizing the level of similarity of individual cases in a dataset by placing the items in an N-dimensional space so as to preserve the intra-member distances. The goal of MDS is to get the co-ordinate matrix. It uses the notion that the co-ordinate matrix can easily be formed from the eigenvalue decomposition of the scalar product matrix B. The steps of a classical MDS algorithm are as follows:
1. Form the pairwise distance matrix D that gives the proximity between pairs
2. Double-center the squared proximities with the centering matrix J = I - (1/n)11^T, that is B = -1/2 J D^(2) J, to get the scalar product matrix B
3. Extract the m largest positive eigenvalues and the corresponding m eigenvectors
4. Form the co-ordinate matrix from the eigenvectors and the square roots of the diagonal matrix of eigenvalues
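The four steps above can be sketched with numpy (assumed here). For a truly Euclidean distance matrix the recovered coordinates reproduce the pairwise distances exactly, up to rotation and translation:

```python
import numpy as np

def classical_mds(D, m=2):
    # Classical (metric) MDS: recover m-dimensional coordinates from
    # an n x n pairwise distance matrix D.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # scalar product matrix
    w, V = np.linalg.eigh(B)              # eigenvalues, ascending
    idx = np.argsort(w)[::-1][:m]         # m largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Clamping the eigenvalues at zero guards against small negative values that arise from floating-point error (or from distances that are not perfectly Euclidean).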

#proposal for login screens ui enhancements:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWrw5g03hYU5CBKL

Friday, March 23, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
CNN has the ability to separate an image into various layers of abstraction while capturing different features and elements. This lets CNN be utilized for different learning tasks where the tasks may differ in the focus they require. It is this adaptive ability of CNN that is leveraged for joint embedding.  The CNN is first trained to map an image depicting an object similar to a shape to a corresponding point in the embedding space, such that the position of the point for the image is closer to the point for the shape. During this training, the CNN discovers the latent connection that exists between an image and the object it features. Then when a test image is presented, the latent connection helps to place that image in the embedding space closer to the object it features.
Moreover, CNN can generalize from different tasks. This makes it useful to repurpose a well-trained network. Since it learns from a high dimensional space, CNN can differentiate even similar images for a variety of tasks.
We were discussing the Euclidean distance and the distance matrix. As we know with Euclidean distance, the chi-square measure, which is the sum of squares of errors, gives a good indication of how close the objects are to the mean. Therefore it is a measure for the goodness of fit. The principle is equally applicable to the embedding space. By using a notion of errors, we can make sure that the shapes and images embedded in the space do not violate the intra-member distances. The only variation the authors applied to this measure is the use of the Sammon error instead of the chi-square, because it encourages the preservation of the structure of local neighborhoods while embedding. The joint embedding space is a Euclidean space of lower dimension, while the shapes and the images are represented in the original high-dimensional space.
The Sammon error is a weighted sum of squared differences between the original pairwise distances and the embedding pairwise distances. Dissimilar shapes are far apart, and the differences at those large distances are weighted down.
The embedding of shapes and images proceeds by minimizing the Sammon error using non-linear multi-dimensional scaling (MDS). MDS is a means of visualizing the level of similarity of individual cases in a dataset by placing the items in an N-dimensional space so as to preserve the intra-member distances. The goal of MDS is to get the co-ordinate matrix. It uses the notion that the co-ordinate matrix can easily be formed from the eigenvalue decomposition of the scalar product matrix B. The steps of a classical MDS algorithm are as follows:
1. Form the pairwise distance matrix D that gives the proximity between pairs
2. Double-center the squared proximities with the centering matrix J = I - (1/n)11^T, that is B = -1/2 J D^(2) J, to get the scalar product matrix B
3. Extract the m largest positive eigenvalues and the corresponding m eigenvectors
4. Form the co-ordinate matrix from the eigenvectors and the square roots of the diagonal matrix of eigenvalues

#proposal for login screens ui enhancements:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWrw5g03hYU5CBKL

Thursday, March 22, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
The 2D distance matrix formed from word embeddings in text documents that is dimensionality reduced and classified using the softmax function  is similarly put to use with the distance matrix between 3D models although the feature vector, distance calculation, algorithm and error function are different. Neural nets are applied to embedding in both text documents and images.
CNN has the ability to separate an image into various layers of abstraction while capturing different features and elements. This lets CNN be utilized for different learning tasks where the tasks may differ in the focus they require. It is this adaptive ability of CNN that is leveraged for joint embedding.  The CNN is first trained to map an image depicting an object similar to a shape to a corresponding point in the embedding space, such that the position of the point for the image is closer to the point for the shape. During this training, the CNN discovers the latent connection that exists between an image and the object it features. Then when a test image is presented, the latent connection helps to place that image in the embedding space closer to the object it features.
Moreover, CNN can generalize from different tasks. This makes it useful to repurpose a well-trained network. Since it learns from a high dimensional space, CNN can differentiate even similar images for a variety of tasks.

As we know with Euclidean distance, the chi-square measure, which is the sum of squares of errors, gives a good indication of how close the objects are to the mean. Therefore it is a measure for the goodness of fit. The principle is equally applicable to the embedding space. By using a notion of errors, we can make sure that the shapes and images embedded in the space do not violate the intra-member distances. The only variation the authors applied to this measure is the use of the Sammon error instead of the chi-square, because it encourages the preservation of the structure of local neighborhoods while embedding. The joint embedding space is a Euclidean space of lower dimension, while the shapes and the images are represented in the original high-dimensional space.

#proposal for login screens:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWiX5uxOG6zc4a8K
Thumbnail images instead of literals can also enhance the login screens. Avatars are an example of this.

Wednesday, March 21, 2018

We continue discussing Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training phase, and the final testing phase. In the first phase, a collection of 3D images is embedded into a common space. In the second phase, the training data is synthesized using 3D shapes in a rendering process which yields annotations as well.  In the third phase, a network is trained to learn the mapping between images and 3D shape induced embedding space. Lastly, the trained network is applied on new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The embedding space is where both real-world images and shapes co-exist.  The space organizes latent objects between images and shapes. In order to do this, the objects are initialized from a set of 3D models. They are pure and complete representations of objects. They don't suffer from the noise in images. The distance between 3D models is both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes by the aggregate of similarities among corresponding views. This method is called the light field descriptor. The input is a set of 3D shapes, although two would do. The shapes are aligned by applying a transformation using a rotation matrix and a translation vector. Then they are projected from k viewpoints to generate projection images.
The CNN uses this distance metric to form a pairwise comparison between the 3D models. Since the metric is informative and accurate, the models can be organized in space along increasing dimensions.
The 2D distance matrix formed from word embeddings in text documents that is dimensionality reduced and classified using the softmax function  is similarly put to use with the distance matrix between 3D models although the feature vector, distance calculation, algorithm and error function are different. Neural nets are applied to embedding in both text documents and images.
CNN has the ability to separate an image into various layers of abstraction while capturing different features and elements. This lets CNN be utilized for different learning tasks where the tasks may differ in the focus they require. It is this adaptive ability of CNN that is leveraged for joint embedding.  The CNN is first trained to map an image depicting an object similar to a shape to a corresponding point in the embedding space, such that the position of the point for the image is closer to the point for the shape. During this training, the CNN discovers the latent connection that exists between an image and the object it features. Then when a test image is presented, the latent connection helps to place that image in the embedding space closer to the object it features.
Moreover, CNN can generalize from different tasks. This makes it useful to repurpose a well-trained network. Since it learns from a high dimensional space, CNN can differentiate even similar images for a variety of tasks.
#proposal for login screens:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWiX5uxOG6zc4a8K
Thumbnail images instead of literals can also enhance the login screens. Avatars are an example of this.

Tuesday, March 20, 2018

Today also we continue discussing  Convolutional Neural Network (CNN) in image and object embedding in a shared space, shape signature and image retrieval.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training phase, and the final testing phase. In the first phase, a collection of 3D images is embedded into a common space. In the second phase, the training data is synthesized using 3D shapes in a rendering process which yields annotations as well.  In the third phase, a network is trained to learn the mapping between images and 3D shape induced embedding space. Lastly, the trained network is applied on new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The embedding space is where both real-world images and shapes co-exist.  The space organizes latent objects between images and shapes. In order to do this, the objects are initialized from a set of 3D models. They are pure and complete representations of objects. They don't suffer from the noise in images. The distance between 3D models is both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes by the aggregate of similarities among corresponding views. This method is called Light field descriptor. The input is a set of 3D shapes although two would do.The shapes are aligned by applying a transformation using a rotation matrix and a translation vector. Then they are projected from k viewpoints to generate projection images
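A minimal sketch of this view-based comparison, under stated assumptions: random vectors stand in for the rendered projection images, and the rotation alignment is assumed to have been done already, so corresponding views can be compared directly:

```python
import numpy as np

def shape_distance(views_a, views_b):
    """Aggregate the distances between corresponding view projections."""
    return sum(float(np.linalg.norm(a - b)) for a, b in zip(views_a, views_b))

rng = np.random.default_rng(1)
k = 20                                            # number of viewpoints
shape_a = [rng.normal(size=8) for _ in range(k)]  # stand-in view features
shape_b = [v + 0.01 * rng.normal(size=8) for v in shape_a]  # near-duplicate
shape_c = [rng.normal(size=8) for _ in range(k)]  # unrelated shape

# The near-duplicate scores a much smaller distance than the unrelated shape.
print(shape_distance(shape_a, shape_b) < shape_distance(shape_a, shape_c))
```

The real light field descriptor also searches over rotations to find the best view correspondence before aggregating; that search is omitted here.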
The CNN uses this distance metric to form a pairwise comparison between the 3D models. Since the metric is informative and accurate, the models can be organized in space along increasing dimensions.
The 2D distance matrix formed from word embeddings in text documents, which is dimensionality-reduced and classified using the softmax function, is put to similar use as the distance matrix between 3D models, although the feature vectors, distance calculations, algorithms, and error functions differ. Neural nets are applied to embeddings in both text documents and images.
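One concrete stand-in for that dimensionality-reduction step is classical multidimensional scaling, which recovers low-dimensional coordinates from a pairwise distance matrix (the softmax classification stage is omitted in this sketch, and the distances are made-up toy data):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates whose pairwise distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]       # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Distances among four points on a line; a 1D embedding reproduces them.
D = np.abs(np.subtract.outer([0.0, 1.0, 2.0, 4.0], [0.0, 1.0, 2.0, 4.0]))
X = classical_mds(D, dim=1)
print(np.allclose(np.abs(np.subtract.outer(X[:, 0], X[:, 0])), D))  # True
```

The same routine applies whether D comes from word-embedding distances or from the shape distance metric between 3D models.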
#proposal for login screens:  https://1drv.ms/w/s!Ashlm-Nw-wnWtWiX5uxOG6zc4a8K

Monday, March 19, 2018

Today we continue discussing the Convolutional Neural Network (CNN) for image and object embedding in a shared space, shape signatures, and image retrieval.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The embedding space is where both real-world images and shapes co-exist. The space organizes the latent objects between images and shapes. To do this, the objects are initialized from a set of 3D models, which are pure and complete representations of objects: they do not suffer from the noise in images, and the distances between 3D models are both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes as the aggregate of similarities among corresponding views. This method is called the light field descriptor. The input is a set of 3D shapes, although two would do. The shapes are aligned by applying a transformation consisting of a rotation matrix and a translation vector, and are then projected from k viewpoints to generate projection images.
The CNN uses this distance metric to form a pairwise comparison between the 3D models. Since the metric is informative and accurate, the models can be organized in space along increasing dimensions.
The 2D distance matrix formed from word embeddings in text documents, which is dimensionality-reduced and classified using the softmax function, is put to similar use as the distance matrix between 3D models, although the feature vectors, distance calculations, algorithms, and error functions differ. Neural nets are applied to embeddings in both text documents and images.

Sunday, March 18, 2018

Today we continue discussing the Convolutional Neural Network (CNN) for image and object embedding in a shared space, shape signatures, and image retrieval.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The embedding space is where both real-world images and shapes co-exist. The space organizes the latent objects between images and shapes. To do this, the objects are initialized from a set of 3D models, which are pure and complete representations of objects: they do not suffer from the noise in images, and the distances between 3D models are both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes as the aggregate of similarities among corresponding views. This method is called the light field descriptor. The input is a set of 3D shapes, although two would do. The shapes are aligned by applying a transformation consisting of a rotation matrix and a translation vector, and are then projected from k viewpoints to generate projection images.
The CNN uses this distance metric to form a pairwise comparison between the 3D models. Since the metric is informative and accurate, the models can be organized in space along increasing dimensions.
The 2D distance matrix formed from word embeddings in text documents, which is dimensionality-reduced and classified using the softmax function, is put to similar use as the distance matrix between 3D models, although the feature vectors, distance calculations, algorithms, and error functions differ.

Saturday, March 17, 2018

Today we continue discussing the Convolutional Neural Network (CNN) for image and object embedding in a shared space, shape signatures, and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The shared embedding space allows new images to be introduced at any time: the CNN merely takes the new image as input and uses the output. If we have to embed new shapes, however, we must preserve the pairwise distances between the added shape and the existing shapes in the space. The embedding space is constructed from an initial collection of 3D shapes, and introducing new shapes afterwards tends to violate its structure. If we instead treat this as an optimization problem, we can preserve the criteria for the embedding space.
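A minimal sketch of that optimization view, on assumed toy data: place one new shape by gradient descent so that its distances to the fixed, already-embedded points match the measured shape distances:

```python
import numpy as np

def embed_new_shape(points, target_dists, steps=500, lr=0.05):
    """Position a new point so its distances to `points` match `target_dists`."""
    x = points.mean(axis=0).copy()        # start at the centroid
    for _ in range(steps):
        diffs = x - points
        dists = np.linalg.norm(diffs, axis=1)
        dists = np.maximum(dists, 1e-9)   # avoid division by zero
        # gradient of sum((dist - target)^2) with respect to x
        grad = (2 * (dists - target_dists) / dists)[:, None] * diffs
        x -= lr * grad.sum(axis=0)
    return x

points = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # existing embedding
true_pos = np.array([1.0, 1.0])
targets = np.linalg.norm(points - true_pos, axis=1)      # measured distances
x = embed_new_shape(points, targets)
print(np.allclose(x, true_pos, atol=1e-3))  # recovers the consistent position
```

Because the existing points stay fixed, the original embedding space is untouched; only the stress of the newcomer's distances is minimized.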
The embedding space is where both real-world images and shapes co-exist. The space organizes the latent objects between images and shapes. To do this, the objects are initialized from a set of 3D models, which are pure and complete representations of objects: they do not suffer from the noise in images, and the distances between 3D models are both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes as the aggregate of similarities among corresponding views. This method is called the light field descriptor. The input is a set of 3D shapes, although two would do. The shapes are aligned by applying a transformation consisting of a rotation matrix and a translation vector, and are then projected from k viewpoints to generate projection images.
The CNN uses this distance metric to form a pairwise comparison between the 3D models. Since the metric is informative and accurate, the models can be organized in space along increasing dimensions.

Friday, March 16, 2018

Today we continue discussing the Convolutional Neural Network (CNN) for image and object embedding in a shared space, shape signatures, and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The shared embedding space allows new images to be introduced at any time: the CNN merely takes the new image as input and uses the output. If we have to embed new shapes, however, we must preserve the pairwise distances between the added shape and the existing shapes in the space. The embedding space is constructed from an initial collection of 3D shapes, and introducing new shapes afterwards tends to violate its structure. If we instead treat this as an optimization problem, we can preserve the criteria for the embedding space.
The embedding space is where both real-world images and shapes co-exist. The space organizes the latent objects between images and shapes. To do this, the objects are initialized from a set of 3D models, which are pure and complete representations of objects: they do not suffer from the noise in images, and the distances between 3D models are both informative and precise. With the help of 3D models, the embedding space becomes robust.
The shape distance metric computes the similarity between two shapes as the aggregate of similarities among corresponding views. This method is called the light field descriptor. The input is a set of 3D shapes, although two would do. The shapes are aligned by applying a transformation consisting of a rotation matrix and a translation vector, and are then projected from k viewpoints to generate projection images.

Thursday, March 15, 2018

Today we continue discussing signature verification methods. We reviewed the stages involved in signature verification and enumerated the feature extraction techniques. After that, we compared online and offline verification techniques. We also discussed the limitations of image processing and the adaptations for video processing. Then we proceeded to discuss image embedding in general.
We discussed the Convolutional Neural Network (CNN) for image and object embedding in a shared space, followed by shape signatures and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The shared embedding space allows new images to be introduced at any time: the CNN merely takes the new image as input and uses the output. If we have to embed new shapes, however, we must preserve the pairwise distances between the added shape and the existing shapes in the space. The embedding space is constructed from an initial collection of 3D shapes, and introducing new shapes afterwards tends to violate its structure. If we instead treat this as an optimization problem, we can preserve the criteria for the embedding space.
The embedding space is where both real-world images and shapes co-exist. The space organizes the latent objects between images and shapes. To do this, the objects are initialized from a set of 3D models, which are pure and complete representations of objects: they do not suffer from the noise in images, and the distances between 3D models are both informative and precise. With the help of 3D models, the embedding space becomes robust.

Wednesday, March 14, 2018

We continue discussing signature verification methods. We reviewed the stages involved in signature verification and enumerated the feature extraction techniques. After that, we compared online and offline verification techniques. We also discussed the limitations of image processing and the adaptations for video processing. Then we proceeded to discuss image embedding in general.
We discussed the Convolutional Neural Network (CNN) for image and object embedding in a shared space, followed by shape signatures and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.
The CNN approach consists of four major components: embedding space construction, training image synthesis, CNN training, and a final testing phase. In the first phase, a collection of 3D shapes is embedded into a common space. In the second phase, training data is synthesized from the 3D shapes in a rendering process, which yields annotations as well. In the third phase, a network is trained to learn the mapping between images and the 3D-shape-induced embedding space. Lastly, the trained network is applied to new images to obtain an embedding into the shared space. This facilitates image and shape retrieval.
The shared embedding space allows new images to be introduced at any time: the CNN merely takes the new image as input and uses the output. If we have to embed new shapes, however, we must preserve the pairwise distances between the added shape and the existing shapes in the space. The embedding space is constructed from an initial collection of 3D shapes, and introducing new shapes afterwards tends to violate its structure. If we instead treat this as an optimization problem, we can preserve the criteria for the embedding space.

Tuesday, March 13, 2018

We continue discussing signature verification methods. We reviewed the stages involved in signature verification and enumerated the feature extraction techniques. After that, we compared online and offline verification techniques. We also discussed the limitations of image processing and the adaptations for video processing. Then we proceeded to discuss image embedding in general.
We discussed the Convolutional Neural Network (CNN) for image and object embedding in a shared space, followed by shape signatures and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.


#codingexercise
Get the maximum contiguous product for short integer arrays
double maxProduct(List<int> A)
{
    double max = Double.MinValue;
    if (A == null || A.Count == 0)
        return max;
    for (int i = 0; i < A.Count; i++)
    {
        double product = A[i];
        max = Math.Max(product, max);
        // extend the contiguous subarray ending at i leftwards
        for (int j = i - 1; j >= 0; j--)
        {
            product = product * A[j];
            max = Math.Max(product, max);
        }
    }
    return max;
}

Monday, March 12, 2018

An essay in the techniques of innovation
I have explored some common themes for innovation in design, as shown in the documents in the references section. I would like to enumerate some that resonate with me:
Simplicity – A jigsaw puzzle, a screwdriver, or a simple user interface like Bose's speaks eloquently to its functionality by virtue of simplicity. Sometimes minimal design works well not only to highlight the purpose of the item but also to conserve what doesn’t need to be expressed.
Multi-functionality – Nothing delights a customer more than offering value beyond what was required from the product. A Swiss army knife is handy and sought after for this reason. This is probably the direct opposite of the example cited above, but it is as essential to design as the first. Japanese restrooms genuinely feel like a spaceship. Combining values into a product also delights customers.
Automation – Anything that improves convenience for a user and improves the experience is a welcome design idea. This does not mean reducing the touchpoints with the customer, but rewarding those the customer likes and reducing the others that get in the way or increase the burden. A fidget spinner or a top is popular because it entices the player to try it again with minimal involvement.
Elegance – This comes from thinking through all the usages of the product and playing up the features that matter most in an aesthetic way. I borrow this idea from professionals who go above and beyond to realize a design. As an example, perhaps any stick would do for a door handle, but a rubbery grip does more for the user even when the door and the users remain the same.
There must be many textbooks and curated articles online on the techniques of innovation. I found them exhaustive; these are merely the ones I could apply.
References:
1. https://1drv.ms/w/s!Ashlm-Nw-wnWs130GlgiZX7PI1RR
2. https://1drv.ms/w/s!Ashlm-Nw-wnWsXwBts-27OY5yxqw
3. https://1drv.ms/w/s!Ashlm-Nw-wnWsXDw5eIxh5v_TSsQ
4. https://1drv.ms/w/s!Ashlm-Nw-wnWkTzXCEGrAysjDkzk

#codingexercise
Get the maximum contiguous product for short integer arrays
double maxProduct(List<int> A)
{
    double max = Double.MinValue;
    if (A == null || A.Count == 0)
        return max;
    for (int i = 0; i < A.Count; i++)
    {
        double product = A[i];
        max = Math.Max(product, max);
        // extend the contiguous subarray starting at i rightwards
        for (int j = i + 1; j < A.Count; j++)
        {
            product = product * A[j];
            max = Math.Max(product, max);
        }
    }
    return max;
}

Sunday, March 11, 2018

#codingexercise
A step array is an array of integers where each element differs from its neighbor by at most k. Given a key x, find the index of x in the array; if the key occurs multiple times, return the index of any occurrence.
Input : arr[] = {4, 5, 6, 7, 6}
           k = 1
           x = 6
Output : 2


int GetIndex(List<int> A, int x, int k)
{
    int i = 0;
    while (i < A.Count)
    {
        if (A[i] == x)
            return i;
        // neighbors differ by at most k, so the key cannot appear
        // closer than |A[i] - x| / k positions away
        i = i + Math.Max(1, Math.Abs(A[i] - x) / k);
    }
    return -1;
}

Get the nth number of the Golomb sequence

int a(uint n)
{
    if (n == 0)
        return 0;
    if (n == 1)
        return 1;
    return 1 + a(n - a(a(n - 1)));
}
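The recursion above recomputes the same subproblems many times; a memoized sketch (in Python, for brevity) applies the same recurrence and makes it practical for larger n:

```python
from functools import lru_cache

# Golomb sequence: a(1) = 1 and a(n) = 1 + a(n - a(a(n - 1))).
@lru_cache(maxsize=None)
def golomb(n):
    if n == 1:
        return 1
    return 1 + golomb(n - golomb(golomb(n - 1)))

print([golomb(n) for n in range(1, 10)])  # [1, 2, 2, 3, 3, 4, 4, 4, 5]
```

Each value k appears a(k) times in the sequence, which the memoized version reproduces without redundant calls.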


Saturday, March 10, 2018

 Get Max product from an integer array
public int maxProduct(List<int> input)
{
    int lo = 1;
    int hi = 1;
    int result = 1;
    for (int i = 0; i < input.Count; i++)
    {
        if (input[i] == 0)
        {
            lo = 1;
            hi = 1;
        }
        else
        {
            int temp = hi * input[i];
            if (input[i] > 0)
            {
                hi = temp;
                lo = Math.Min(lo * input[i], 1);
            }
            else
            {
                // a negative factor swaps the extremes; compute the new hi
                // from the old lo before overwriting lo
                hi = Math.Max(lo * input[i], 1);
                lo = temp;
            }
        }
        if (result < hi)
            result = hi;
    }
    return result;
}
This is for contiguous subsequence.
We could also write the above if-else in dynamic programming with recursion, choosing at each step to either include the current element or exclude it. If an element is not included, we multiply by 1; if it is included, we multiply the element by the maximum found by the recursion, as, for example, in a positive integer array:
int getMaxProduct(List<int> A, int index)
{
    assert(A.All(x => x >= 0));
    if (index == A.Count)
        return 1;
    int remaining = getMaxProduct(A, index + 1);
    return Math.Max(A[index] * remaining, 1 * remaining);
}
which is the same as saying: replace all 0s with 1s in the positive integer array.
Therefore, for a general integer array, we can replace all 0s with 1s and, when the count of negatives is odd, drop the negative closest to zero so that the remaining negatives pair up:
double GetMaxProduct(List<int> A)
{
    // zeros contribute nothing to a product; treat them as 1
    var B = A.Select(x => x == 0 ? 1 : x).ToList();
    int count = B.Count(x => x < 0);
    if (count % 2 == 1)
    {
        // an odd count of negatives makes the product negative; drop the
        // negative closest to zero to leave an even count
        B[B.IndexOf(B.Where(x => x < 0).Max())] = 1;
    }
    return B.Aggregate(1.0, (product, x) => product * x);
}

Friday, March 9, 2018

We were discussing signature verification methods. We reviewed the stages involved in signature verification and enumerated the feature extraction techniques. After that, we compared online and offline verification techniques. We also discussed the limitations of image processing and the adaptations for video processing. Then we proceeded to discuss image embedding in general.
We discussed the Convolutional Neural Network (CNN) for image and object embedding in a shared space, followed by shape signatures and image retrieval.
The purpose of the embedding is to map an image to a point in the embedding space so that it is close to a point attributed to a 3D model of a similar object. A large amount of training data consisting of images synthesized from 3D shapes is used to train the CNN.
By using synthesized images, the embedding space is computed from clean 3D models without noise, which yields better object similarities. In addition, 2D shape views are added, which boosts shape matching. This use of an embedding space is a novel approach and provides a better domain for subsequent image and shape retrievals. Moreover, the embedding space does away with linear classifiers, yielding robust comparison of real-world images to 3D models. Previously, this comparison was susceptible to nuisance factors in real-world images, and linear classifiers could not keep up. The use of a CNN mitigates this because it does better with image invariance learning, a technique that focuses on the salient invariant embedded objects rather than the noise.

#codingexercise
We covered a few probability distributions earlier - uniform, normal, exponential. Let us describe the binomial distribution for x successes in n trials with success probability p each (the Poisson distribution arises from it as the limit for large n and small p):
double GetProbabilityXevents(double n, double x, double p)
{
    Debug.Assert(n >= 0 && x >= 0 && p >= 0 && p <= 1);
    return GetNChooseX(n, x) * Math.Pow(p, x) * Math.Pow(1 - p, n - x);
}
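The same mass function can be sketched in Python with the standard library's comb, along with a quick check that the probabilities normalize:

```python
from math import comb

def binomial_pmf(n, x, p):
    """Probability of exactly x successes in n trials, success probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Probabilities over all possible success counts sum to 1.
total = sum(binomial_pmf(10, x, 0.3) for x in range(11))
print(abs(total - 1.0) < 1e-12)  # True
```

For large n and small p with n*p held fixed, these values approach the Poisson probabilities.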