Wednesday, May 23, 2018

Graph analysis is often compared with vector analysis. In the latter case,
dimensionality reduction is a common step: adding dimensions often brings only marginal benefit, while the cost savings from working with fewer dimensions can be quite significant.
Dimensionality reduction can involve both linear and non-linear transformations.
This technique also reduces noise by keeping only the dimensions that correspond to salient features. Given a data set X with n data points that needs to be reduced to d dimensions, a linear transformation proceeds by selecting a matrix V whose d columns span the target space and multiplying its transpose with each point in X to get Y. Since V has d columns, each transformed point also has d dimensions. Non-linear dimensionality reduction techniques may even learn an internal model of the data, as in the case of manifold learning. Here a high-dimensional data set is projected onto a lower dimension while trying to preserve the structure of inter-point distances from the high-dimensional space in the lower-dimensional projection. It is called non-linear because the mapping cannot be represented as a linear combination of the original variables.
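The linear case above can be sketched in a few lines of NumPy. Here V is simply a random orthonormal basis standing in for whatever d directions one would actually choose (e.g. the top principal components); the names X, V, Y follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d = 100, 10, 3
X = rng.normal(size=(n, D))        # n data points, each with D dimensions

# V (D x d): d chosen directions; QR gives orthonormal columns.
# In practice V would capture salient structure, not be random.
V, _ = np.linalg.qr(rng.normal(size=(D, d)))

# Multiplying each point by V^T (here done row-wise as X @ V) yields Y.
Y = X @ V
print(Y.shape)                     # (100, 3): each point now has d dimensions
```

Since V has orthonormal columns, the projection preserves lengths along the chosen directions and simply discards the rest.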
Different choices of dimensions for the reduced space result in different perspectives, which yield different visualizations.
Vectors have the advantage that they can undergo a change of basis (a change of point of reference), which again can be used to improve visualization.
Eigenvalues and eigenvectors can be computed for the data's covariance matrix, which gives an entirely different view that is often simpler to interpret.
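As an illustrative sketch (the data here is synthetic), the eigenvectors of the covariance matrix provide exactly such a change of reference frame; keeping the leading eigenvectors is the familiar PCA projection.

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated synthetic data: 200 points in 5 dimensions.
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))
Xc = X - X.mean(axis=0)            # center before forming the covariance

cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: covariance is symmetric

# Sort eigenpairs descending and keep the top 2 directions for a 2-D view.
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:2]]
Y = Xc @ W                         # coordinates in the new frame of reference
print(Y.shape)                     # (200, 2)
```

The two columns of W are orthonormal, so Y is the same data seen from a rotated point of reference rather than a distortion of it.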
Dimensionality can also be reduced at multiple levels.
Interestingly, graphs can also be reduced. One such technique is hypergraph coarsening, which produces an approximation of the original structure of the graph. Coarsening can be iterative: a succession of smaller hypergraphs makes incremental progress toward a coarser graph with less overall loss of information. There are several methods. Pairs of vertices that are similar can be merged. Skip edges may be overlaid on the same graph to represent the coarser graph. Centrality may be adjusted based on the weights of edges incident on removed vertices, so that paths and connectivity are preserved among the fewer remaining vertices.
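The vertex-merging method can be sketched as a greedy matching pass; the function and variable names here are illustrative, not from any particular library. Merged edge weights are summed so the coarse graph approximates the original connectivity.

```python
from collections import defaultdict

def coarsen(edges):
    """One coarsening pass. edges: dict {(u, v): weight} with u < v.
    Returns (coarse_edges, mapping) where mapping sends each original
    vertex to its representative in the coarser graph."""
    vertices = {v for e in edges for v in e}
    mapping, matched = {}, set()
    # Greedy matching: pair unmatched endpoints, heaviest edges first.
    for (u, v), _ in sorted(edges.items(), key=lambda e: -e[1]):
        if u not in matched and v not in matched:
            matched |= {u, v}
            mapping[u] = mapping[v] = u      # merged pair keeps u's label
    for v in vertices - matched:
        mapping[v] = v                       # unmatched vertices survive as-is
    coarse = defaultdict(float)
    for (u, v), w in edges.items():
        cu, cv = mapping[u], mapping[v]
        if cu != cv:                         # edges inside a merged pair vanish
            coarse[tuple(sorted((cu, cv)))] += w
    return dict(coarse), mapping

edges = {("a", "b"): 3.0, ("b", "c"): 1.0, ("c", "d"): 2.0, ("a", "c"): 1.0}
coarse, mapping = coarsen(edges)
print(coarse)    # {('a', 'c'): 2.0}: b merged into a, d merged into c
```

Repeating the pass on the returned coarse graph gives the iterative succession of smaller graphs described above.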
Thus graph analysis yields a largely fixed form of visualization, while dimensionality reduction and vector analysis can enable myriad forms of visualization.
Courtesy Saad from umn.edu
