Sunday, April 7, 2019

Today we take a break from discussing best practices in storage engineering.

Sequence Analysis:
Data is growing faster than ever, and algorithms for data analysis must become more robust, efficient, and accurate to keep up. Specialized databases and higher-end processors suited to artificial intelligence have contributed to improvements in data analysis. Data mining techniques discover patterns in the data and are useful for predictions, but they tend to require traditional databases.
Sequence databases are highly specialized. Even though they can be backed by the same B-Tree data structure that many contemporary databases use, they tend to be larger than many commercial databases. In addition, algorithms for mining sequential rules focus on generating all such rules, which produces an enormous number of redundant ones. The large number not only makes mining inefficient, it also hampers iteration. Such algorithms depend on patterns obtained from earlier frequent-pattern mining algorithms. However, if the rules are normalized and redundancies removed, they become efficient to store and use with a sequence database.
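One common normalization step, sketched below, is to drop a rule X -> Y when a more general rule (smaller antecedent, larger consequent) achieves at least the same confidence. The Rule class, its field names, and the quadratic scan are illustrative assumptions for this post, not any specific published algorithm.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical rule representation; field names are assumptions.
class Rule {
       final Set<String> antecedent;   // left-hand side X
       final Set<String> consequent;   // right-hand side Y
       final double confidence;
       Rule(Set<String> antecedent, Set<String> consequent, double confidence) {
               this.antecedent = antecedent;
               this.consequent = consequent;
               this.confidence = confidence;
       }
}

class RulePruner {
       // Keep a rule only if no other rule with a subset antecedent and a
       // superset consequent matches or beats its confidence.
       // Assumes the input rules are distinct.
       static List<Rule> removeRedundant(List<Rule> rules) {
               List<Rule> kept = new ArrayList<>();
               for (Rule r : rules) {
                       boolean redundant = false;
                       for (Rule other : rules) {
                               if (other != r
                                               && r.antecedent.containsAll(other.antecedent)
                                               && other.consequent.containsAll(r.consequent)
                                               && other.confidence >= r.confidence) {
                                       redundant = true;
                                       break;
                               }
                       }
                       if (!redundant) {
                               kept.add(r);
                       }
               }
               return kept;
       }
}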
The data structures used for sequence rules have evolved. A dynamic bit vector is now an alternative, and the mining process typically involves a prefix tree. Early processing stages prune, clean, and canonicalize the data, and these steps have reduced the number of rules.
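As a rough illustration of the bit-vector idea, and not any particular paper's layout, each pattern can keep a bit set whose i-th bit records whether sequence i of the database contains the pattern; intersecting two bit sets then bounds the support of any combination of the two patterns. The class and method names below are assumptions.

import java.util.BitSet;

// Sketch: bit i is set when the pattern occurs in sequence i of the database.
class PatternBits {
       final BitSet occurrences;

       PatternBits(BitSet occurrences) {
               this.occurrences = occurrences;
       }

       // Support is simply the number of sequences containing the pattern.
       int support() {
               return occurrences.cardinality();
       }

       // Any extension of two patterns can occur only in sequences containing
       // both, so the AND of the bit sets gives an upper bound on its support.
       PatternBits intersect(PatternBits other) {
               BitSet merged = (BitSet) occurrences.clone();
               merged.and(other.occurrences);
               return new PatternBits(merged);
       }
}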
In the context of text mining, sequences have had limited application because the ordering of words has traditionally not mattered for determining topics. However, salient keywords that are coherent enough to form a topic, regardless of their locations, tend to form sequences rather than unordered groups. Finding semantic information with word vectors does not help with this ordering; the two are independent of each other. Moreover, word vectors are formed with a predetermined set of dimensions, and those dimensions do not grow significantly as the text progresses. There is no method for redefining the vectors with increasing dimensions as the text grows.
The number and scale of dynamic groups of word vectors can be arbitrarily large, while the ordering of the words can remain alphabetical. These words can then map into a word vector table whose features are predetermined, giving the table a fixed number of columns rather than leaving it as an open-ended big table.
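A minimal sketch of such a fixed-column table, assuming a simple in-memory map and a dimension chosen up front (both illustrative choices):

import java.util.HashMap;
import java.util.Map;

// Every word maps to a vector of the same, predetermined dimension, so the
// table has a fixed number of columns rather than growing into a big table.
class WordVectorTable {
       private final int dimensions;
       private final Map<String, float[]> table = new HashMap<>();

       WordVectorTable(int dimensions) {
               this.dimensions = dimensions;
       }

       void put(String word, float[] vector) {
               if (vector.length != dimensions) {
                       throw new IllegalArgumentException(
                                       "expected " + dimensions + " columns, got " + vector.length);
               }
               table.put(word, vector);
       }

       float[] get(String word) {
               return table.get(word);
       }
}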
Since a lot of content uses similar words and phrases, and almost everyone writes in simple, everyday English, some of these groups are likely to stand out in usage and frequency. Once frequently co-occurring words have been exhaustively collected and persisted in a groupopedia, as interpreted from a large corpus, with no limit on the number of words in a group and the groups persisted in sorted order of their frequencies, we gain a two-fold ability: first, to shred a given text into predetermined groups, thereby instantly recognizing topics; and second, to add pseudo word vectors, where groups translate as vectors in the vector table.
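A rough sketch of the shredding step, assuming the groups are word sets already sorted by descending corpus frequency and that a group matches when all of its words occur in the text (both assumptions of this illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class Groupopedia {
       // Groups of frequently co-occurring words, most frequent first.
       private final List<Set<String>> groups;

       Groupopedia(List<Set<String>> groups) {
               this.groups = groups;
       }

       // Return the groups fully covered by the words of the text, most
       // frequent first; the matches stand in for instantly recognized topics.
       List<Set<String>> shred(String text) {
               Set<String> words = new HashSet<>(
                               Arrays.asList(text.toLowerCase().split("\\W+")));
               List<Set<String>> matches = new ArrayList<>();
               for (Set<String> group : groups) {
                       if (words.containsAll(group)) {
                               matches.add(group);
                       }
               }
               return matches;
       }
}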

#codingexercise
Yesterday's coding question continued:
Since A2 is a small array, we can use it directly to order the elements:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public List<Integer> relativeSort(List<Integer> A1, List<Integer> A2) {
       List<Integer> result = new ArrayList<>();
       for (Integer a : A2) {
               // Append every occurrence of a in A1, scanning left to right.
               int start = 0;
               for (int index = findFirst(A1, a, start); index != -1; index = findFirst(A1, a, start)) {
                       result.add(A1.get(index));
                       start = index + 1;
               }
       }
       // Elements of A1 that never appear in A2 go at the end, in ascending order.
       if (result.size() < A1.size()) {
               List<Integer> remainder = new ArrayList<>(A1);
               remainder.removeAll(A2); // drops every occurrence of each A2 value
               Collections.sort(remainder);
               result.addAll(remainder);
       }
       return result;
}

public int findFirst(List<Integer> A1, int a, int start) {
       for (int i = start; i < A1.size(); i++) {
               if (A1.get(i) == a) {
                       return i;
               }
       }
       return -1;
}
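For example, with A1 = [2, 3, 1, 3, 2, 4, 6, 7, 9, 2, 19] and A2 = [2, 1, 4, 3, 9, 6], relativeSort returns [2, 2, 2, 1, 4, 3, 3, 9, 6, 7, 19]: the occurrences follow A2's order, and 7 and 19, which are absent from A2, are appended in ascending order.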

