Monday, January 4, 2021

Performing Association Data mining on IT Service requests continued ...

Other forms of associations, including sequential associations, can also be mined, but association rules are a sort of recommendation: something lateral and helpful to users who are otherwise single-mindedly focused on their own case. Users generally don't have access to past requests and mitigations to see the help that others received for cases like theirs. Cases opened with IT contain internal and confidential information and number in the hundreds if not thousands. A public-facing database of request descriptions and resolutions, or a set of knowledge-base articles, is helpful too, but these are usually time-consuming and secondary to the case deluge that IT teams face. Building an association rule set and evaluating it against each incoming request, on the other hand, requires less effort and time.

Sample query to see the association set for problem type: 

SELECT TOP 10 Node_Support, Node_Name, Node_Caption
FROM Association
WHERE Node_Type = 7;

To perform this analysis, we will need a one-hot encoding. We rearrange the data so that each problem/product category is one-hot encoded and there is one transaction/service request per row. One-hot encoding refers to a data transformation technique where categorical values are converted into columns: if a categorical value is present in a transaction, its column is given a value of 1, and 0 otherwise. This is just like pivoting, and the column set expands from the original by a number equal to the number of categories.
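As a minimal sketch of this pivot, assuming the Transactions table holds one row per RequestId and Product (the category names 'network', 'database', and 'application' are hypothetical), the encoded table can be produced as follows:

SELECT RequestId,
       MAX(CASE WHEN Product = 'network' THEN 1 ELSE 0 END) AS category_network,
       MAX(CASE WHEN Product = 'database' THEN 1 ELSE 0 END) AS category_database,
       MAX(CASE WHEN Product = 'application' THEN 1 ELSE 0 END) AS category_application
INTO ONE_HOT_ENCODED_Transactions
FROM Transactions
GROUP BY RequestId;

Each row of the result represents one service request, and each category column holds a 1 when that category applies to the request.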

Support can be determined for any category as follows (the CAST avoids integer division):

SELECT CAST(COUNT(*) AS FLOAT) / (SELECT COUNT(*) FROM ONE_HOT_ENCODED_Transactions) AS Support
FROM ONE_HOT_ENCODED_Transactions
WHERE category_A = 1;

Each of the metrics described in the introductory post (support, confidence, and lift) can be calculated with sample SQL queries against the one-hot encoded table, as shown below for a hypothetical pair of categories x and y:

-- Support(x): fraction of transactions containing x
SELECT CAST(SUM(category_x) AS FLOAT) / COUNT(*) AS Support_x
FROM ONE_HOT_ENCODED_Transactions;

-- Confidence(x -> y) = Support(x, y) / Support(x)
SELECT CAST(SUM(CASE WHEN category_x = 1 AND category_y = 1 THEN 1 ELSE 0 END) AS FLOAT)
       / SUM(category_x) AS Confidence_x_y
FROM ONE_HOT_ENCODED_Transactions;

-- Lift(x -> y) = Confidence(x -> y) / Support(y)
SELECT (CAST(SUM(CASE WHEN category_x = 1 AND category_y = 1 THEN 1 ELSE 0 END) AS FLOAT)
        / SUM(category_x))
       / (CAST(SUM(category_y) AS FLOAT) / COUNT(*)) AS Lift_x_y
FROM ONE_HOT_ENCODED_Transactions;

 

The candidate pairs themselves come from a Cartesian product of the products (the aliases are required because SELECT INTO cannot emit two columns with the same name):

SELECT A.name AS antecedent, B.name AS consequent
INTO associations
FROM Products AS A
CROSS JOIN Products AS B
WHERE A.name != B.name;

Evaluating the three metrics for each of these candidate associations results in an associations table in which every product pair has its support, confidence, and lift. The associations can then be filtered to those with a lift > 1.0.
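Putting the pieces together, here is a consolidated sketch that computes all three metrics for every ordered pair in one statement. It again assumes one Transactions row per RequestId and Product; the association_metrics name is illustrative:

-- n: total number of service requests
WITH totals AS (
    SELECT CAST(COUNT(DISTINCT RequestId) AS FLOAT) AS n FROM Transactions
),
-- per-product request counts
item AS (
    SELECT Product, COUNT(DISTINCT RequestId) AS cnt
    FROM Transactions
    GROUP BY Product
),
-- co-occurrence counts for every ordered pair of distinct products
pair AS (
    SELECT a.Product AS antecedent, b.Product AS consequent,
           COUNT(DISTINCT a.RequestId) AS cnt
    FROM Transactions AS a
    JOIN Transactions AS b
      ON a.RequestId = b.RequestId AND a.Product != b.Product
    GROUP BY a.Product, b.Product
)
SELECT p.antecedent,
       p.consequent,
       p.cnt / t.n AS support,
       CAST(p.cnt AS FLOAT) / x.cnt AS confidence,
       (CAST(p.cnt AS FLOAT) / x.cnt) / (y.cnt / t.n) AS lift
INTO association_metrics
FROM pair AS p
JOIN item AS x ON x.Product = p.antecedent
JOIN item AS y ON y.Product = p.consequent
CROSS JOIN totals AS t
WHERE (CAST(p.cnt AS FLOAT) / x.cnt) / (y.cnt / t.n) > 1.0;

The WHERE clause applies the lift filter at creation time; it could equally be deferred to when the rules are displayed.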

Sunday, January 3, 2021

Performing Association Data mining on IT Service requests


  

Introduction: Service requests are opened by customers who report a problem and request mitigation. The IT department is a magnet for virtually all computing, storage, and networking related tasks requested by its customers, and the volume is usually far more than the IT team can resolve easily. In this regard, IT teams look for automation that can provide self-service capabilities to users. Association data mining allows these users to see helpful messages such as "users who opened a ticket for this problem type also opened a ticket for this other problem type". This article describes the implementation aspects of this data mining technique.

Description: 

The centerpiece of this solution is the computation of two columns, namely Support and Probability. Support defines the percentage of cases in which a rule must be found before it is considered valid; we require that a rule be found in at least 1 percent of cases.

Probability defines how likely an association must be before it is considered valid. We will consider any association with a probability of at least 10 percent. 
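Treating probability as the confidence of the rule, these two thresholds translate into a simple filter. This is a sketch against a hypothetical association_metrics table of precomputed pair metrics (one such table is built in the continuation post above):

SELECT antecedent, consequent, support, confidence
FROM association_metrics
WHERE support >= 0.01      -- rule found in at least 1 percent of cases
  AND confidence >= 0.10;  -- probability of at least 10 percent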

Bayesian conditional probability and confidence can also be used. Association rules are formed from a pair of antecedent and consequent itemsets, so named because we want to find the value of taking one item with another. Let I be a set of items and T be a set of transactions. An association S1 is then a subset of I whose items occur together in T, and Support(S1) is the fraction of T containing S1. Let S1 and S2 be subsets of I. The association rule S1 -> S2 has support(S1 -> S2) = Support(S1 union S2) and confidence(S1 -> S2) = Support(S1 union S2) / Support(S1). A third metric, Lift, is defined as Confidence(S1 -> S2) / Support(S2) and is preferred because a popular consequent S2 yields high confidence for almost any S1; dividing by Support(S2) corrects that, so lift exceeds 1.0 only when S1 and S2 co-occur more often than they would independently. For example, if S1 appears in 20 percent of transactions, S2 in 10 percent, and both together in 5 percent, then confidence(S1 -> S2) = 0.05 / 0.20 = 0.25 and lift = 0.25 / 0.10 = 2.5, a strong positive association.

Certain databases allow the creation of association models that can be persisted and evaluated against each incoming request. Usually, a training/testing data split of 70/30% is used in this regard. 

Without such predictions, association rules can be generated with a Cartesian product of all known problem types, evaluating the probability and support of each pair. The static rules can then be selected as the top ten by support and can even be included in the display to the customers.
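A sketch of this static selection, again against the hypothetical association_metrics table:

SELECT TOP 10 antecedent, consequent, support, confidence, lift
FROM association_metrics
WHERE lift > 1.0
ORDER BY support DESC;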

Saturday, January 2, 2021

How to perform text summarization with sequence-to-sequence RNNs

Recurrent Neural Networks (RNNs) are a special kind of neural network that works with sequences rather than the individual symbols that constitute them. In fact, this technique does not need to know what the parts of the sequence represent, whether they are words or video frames; it can infer the meaning of those symbols. When raw data is shredded into sequences, the RNN keeps state information per sequence that it infers from that sequence. This state is the essence of the sequence. Using this state, the RNN can simply translate input sequences (text) to output sequences (summary). It can also be used to interpret the input sequence to generate an output sequence (like a chatbot). The RNN encoder-decoder model was proposed by Bahdanau et al. in 2014, and it can be used to write any kind of decoder that generates custom output. Text summarization merely restricts the scope of this machine-translation approach through its use of the decoder.

There are a few differences between machine translation and summarization as sequence-to-sequence tasks. Summarization is a lossy conversion in which only the key concepts are retained, and it restricts the size of the output regardless of the size of the input; machine translation is a lossless translation with no restriction on size. Rush et al., in 2015, proposed a convolutional model that encodes the source and uses a context-sensitive, additional feed-forward neural network to generate the summary.

The annotated Gigaword corpus has been popular for training the models used in both the 2014 and 2015 work. Mikolov's 2013 word2vec model makes use of a different dataset for creating a word-embeddings matrix, but these 100-dimensional word embeddings can still be updated further by training on the Gigaword corpus. This was the approach of Nallapati et al. in 2016 for a similar task, with the deviation that the input size is not restricted to one or two sentences from each sample. The RNN itself uses a 200-dimensional hidden state, with the encoder being bidirectional and the decoder unidirectional. The vocabularies for the source and target sequences can be kept separate, although the words from the source, along with some frequent words, may reappear in the target vocabulary. Reusing the words from the source cuts down the number of epochs considerably.

The summary size is usually set to a maximum of about 30 words, while the input size may vary. The encoder itself can be hierarchical, with a second bidirectional RNN layer running at the sentence level. The use of pseudo-words and of sentences as sequences is left outside the scope of this article.

The sequence length for this model is recommended to be in the 10-20 range; since the timesteps are per word, it is best to sample the sequences from a few sentences.

 

 

Friday, January 1, 2021

Introduction: TensorFlow.js is a machine learning framework for JavaScript applications. It helps us build models that can be used directly in the browser or in a Node.js server. We use this framework to build an application that can find similar requests so that they might be used for prediction.

Description: The JavaScript application uses data from a CSV that has categorizations of requests and their resolution times. The attributes of each request include a category_id, a customer id, a pseudo parameter attribute, and the execution time. The data used in this sample has 1200 records, but the attributes are kept to a minimum to keep the application simple.

As with any machine learning example, the data is split into a 70% training set and a 30% test set. There is no order to the data, so the split is taken over a random sample.

The model chosen is a Recurrent Neural Network model, which is used here for finding groups via paths in sequences. A sequence clustering algorithm is like an ordinary clustering algorithm, but instead of finding groups based on similar attributes, it finds groups based on similar paths in a sequence. A sequence is a series of events; for example, a series of web clicks by a user is a sequence. A sequence can also be keyed by the IDs of any sortable data maintained in a separate table. Usually, there is support for a sequence column: the sequence data has a nested table that contains a sequence ID, which can be of any sortable data type.

This is very useful for finding sequences of service requests opened across customers. Generally, a network failure could result in a database connection failure, which could in turn lead to an application failure. Determining such sequences in a data-driven manner helps find new sequences and target them proactively, even suggesting them to the customers who open the requests so that they can be better prepared.
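As an illustrative sketch, assuming a hypothetical RequestEvents table with one row per request holding a CustomerId, a consecutive integer SequenceId per customer, and a ProblemType, common problem-type transitions can be surfaced with a self-join:

-- count how often one problem type is immediately followed by another
SELECT a.ProblemType AS first_problem,
       b.ProblemType AS next_problem,
       COUNT(*) AS occurrences
FROM RequestEvents AS a
JOIN RequestEvents AS b
  ON b.CustomerId = a.CustomerId
 AND b.SequenceId = a.SequenceId + 1
GROUP BY a.ProblemType, b.ProblemType
ORDER BY occurrences DESC;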

As described in the post above, Recurrent Neural Networks (RNNs) work with sequences rather than the individual symbols that constitute them. The RNN keeps state information per sequence that it infers from that sequence; this state is the essence of the sequence, and with the encoder-decoder model proposed by Bahdanau et al. in 2014, it can be used to translate input sequences to output sequences and to write any kind of decoder that generates custom output.

TensorFlow makes it easy to construct this model using its layers API. The output is only available after the model is executed; in this case, the model must be run before the weights are available. The shape and parameters of each layer can be printed using the summary() method.

With the model and training/test sets defined, it is now easy to evaluate the model and run the inference. The model can also be saved and restored. It executes faster when a GPU is added to the computing.

The features are available with the feature_extractor. The model is configured with model.compile(), trained on the training set with model.fit(), and can then be called on a test input. Additionally, if a specific layer is to be evaluated, we can call just that layer on the test input.

When the model is tested, it predicts the resolution time for the given attributes of category_id and the parameter attribute.

Conclusion: Tensorflow.js is becoming a standard for implementing machine learning models. Its usage is simple, but the choice of model and the preparation of data take significantly more time than setting it up, evaluating it, and using it.
https://1drv.ms/w/s!Ashlm-Nw-wnWw1gSFq5VLqlNswb5?e=dyLu7I