Thursday, May 24, 2018

We were discussing dimensionality reduction using both linear and non-linear transformations.
This technique eliminates noise by choosing the most salient features as the retained dimensions, and it lowers cost without significant loss of information. Given a data set X with n data points that needs to be reduced to d dimensions, a linear transformation proceeds by selecting a matrix V whose d columns are the chosen directions and multiplying its transpose with each of the points in X to get Y. Since V has d columns, each transformed point also has d dimensions. Non-linear dimensionality reduction techniques may even learn an internal model within the data, as in the case of manifold learning. In this case, a high-dimensional data set is projected onto fewer dimensions while trying to preserve the structure of inter-point distances from the high-dimensional space in the lower-dimensional projection. It is called non-linear because the mapping cannot be represented as a linear combination of the original variables.
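As a small sketch of the linear case (assuming numpy and scikit-learn are available; the names X, V, Y mirror the description above, and taking V from the top-d right singular vectors of the centered data is just one common choice), with Isomap as one non-linear counterpart:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # n = 100 points in 10 dimensions
d = 2                                     # target number of dimensions

Xc = X - X.mean(axis=0)                   # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:d].T                              # 10 x d matrix of salient directions

Y = Xc @ V                                # each point mapped to d dimensions
print(Y.shape)                            # (100, 2)

# Non-linear counterpart (assumes scikit-learn is installed): a manifold
# method that tries to preserve inter-point distances in the embedding.
from sklearn.manifold import Isomap
Y_nl = Isomap(n_components=d).fit_transform(X)
print(Y_nl.shape)                         # (100, 2)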
Different sets of dimensions for the reduced space result in different perspectives, which yield different visualizations.
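For example (assuming matplotlib is available; this continues the same kind of projection as above), picking two different pairs of directions gives two different two-dimensional views of the same data:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, (i, j) in zip(axes, [(0, 1), (2, 3)]):
    P = Xc @ Vt[[i, j]].T                 # project onto the chosen pair of directions
    ax.scatter(P[:, 0], P[:, 1], s=10)
    ax.set_title("directions %d and %d" % (i, j))
plt.show()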
Interestingly, graphs can also be reduced. This is called hypergraph coarsening, which produces an approximation of the original structure of the graph. Coarsening can be iterative: a succession of smaller hypergraphs makes incremental progress towards a coarser graph with less overall loss of information. There are several methods. Pairs of vertices that are similar can be merged. Skip edges may be overlaid on the same graph to represent the coarser graph. Centrality may be adjusted based on the weights of the edges removed along with the dropped vertices, without loss of path or connectivity among the fewer remaining vertices.
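A minimal sketch of one coarsening pass that merges similar vertex pairs, here on an ordinary weighted graph rather than a full hypergraph, with "similar" taken to mean "joined by the heaviest edge" (the adjacency layout and function name are illustrative, not from any particular library):

def coarsen_once(adj):
    """adj maps vertex -> {neighbour: edge weight}; returns a coarser graph."""
    matched, mapping = set(), {}
    for u in adj:
        if u in matched:
            continue
        # merge u with its heaviest still-unmatched neighbour, if any
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)
            matched.update({u, v})
            mapping[u] = mapping[v] = (u, v)       # u and v collapse into one vertex
        else:
            matched.add(u)
            mapping[u] = (u,)
    coarse = {}
    for u, nbrs in adj.items():
        cu = mapping[u]
        for v, w in nbrs.items():
            cv = mapping[v]
            if cu != cv:                           # drop edges internal to a merged pair
                bucket = coarse.setdefault(cu, {})
                bucket[cv] = bucket.get(cv, 0) + w # accumulate weights of parallel edges
    return coarse, mapping

g = {"a": {"b": 3, "c": 1},
     "b": {"a": 3, "c": 2},
     "c": {"a": 1, "b": 2, "d": 4},
     "d": {"c": 4}}
coarse, _ = coarsen_once(g)
print(coarse)    # {('a', 'b'): {('c', 'd'): 3}, ('c', 'd'): {('a', 'b'): 3}}

Applying coarsen_once repeatedly gives the succession of smaller graphs mentioned above.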
#book summary : https://1drv.ms/b/s!Ashlm-Nw-wnWtnZC3FVGlkh-m47E
