Thursday, December 31, 2020

Applying the Naïve Bayes data mining technique to IT service requests

The Naïve Bayes algorithm is a statistical, probability-based data mining algorithm and is considered easier to understand and visualize than others in its family.

A probability is simply the fraction of interesting cases out of all cases. A Bayesian probability is a conditional probability: it adjusts the probability based on a premise. If we take a factor into account, we get one conditional probability; if we leave that factor out, we get another. Naïve Bayes builds on conditional states across attributes and is easy to visualize. This allows experts to show the reasoning process and allows users to judge the quality of the prediction. Like the other algorithms in this family, it needs training data for our use case, but Naïve Bayes uses that data for exploration and for predictions based on earlier requests, for example to determine whether self-help was useful or not, evaluating both probabilities conditionally.
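For illustration with made-up counts: suppose there are 100 earlier requests, of which 40 were resolved by self-help; 30 of those 40 had a KB article attached, and 15 of the remaining 60 did. Without any premise, the probability that self-help is useful is 40/100 = 0.40. Conditioned on a KB article being present, it is 30/(30 + 15) ≈ 0.67. The premise changes the estimate, and that change is exactly what this algorithm measures.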

Naïve Bayes is widely used for cases where conditions apply, especially binary conditions such as with or without. If the input variables are independent, if their states can be calculated as probabilities, and if there is at least one predictable output, this algorithm can be applied. The simplicity of computing states by counting, per class and per input variable, and then displaying those states against those variables for a given value makes this algorithm easy to visualize, debug, and use as a predictor, as the sketch below illustrates.
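A minimal sketch of this counting approach in JavaScript, assuming a simplified request record with a boolean kbPresent attribute and a resolvedBySelfHelp label (both field names are made up for illustration):

// Train by counting states per class, then score a new request.
function trainNaiveBayes(requests) {
  const counts = { true: { total: 0, kb: 0 }, false: { total: 0, kb: 0 } };
  for (const r of requests) {
    const c = counts[r.resolvedBySelfHelp];
    c.total += 1;
    if (r.kbPresent) c.kb += 1;
  }
  return counts;
}

function predictSelfHelpUseful(counts, kbPresent) {
  const total = counts.true.total + counts.false.total;
  // P(class) * P(kbPresent | class) for each class, then normalize.
  const score = (c) => {
    const pClass = c.total / total;
    const pKbGivenClass = (kbPresent ? c.kb : c.total - c.kb) / c.total;
    return pClass * pKbGivenClass;
  };
  const sTrue = score(counts.true);
  const sFalse = score(counts.false);
  return sTrue / (sTrue + sFalse);
}

With more input variables, the same per-class counts are kept for each attribute and the per-attribute conditional probabilities are multiplied together, which is the "naïve" independence assumption.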

The conditional probability can be used both for exploration and for prediction. The algorithm calculates a state for each input column in the dataset, which is then used to assign a state to the predictable column. For example, the availability of a Knowledge Base article might show a distribution of input values significantly different from the others, which indicates that it is a potential predictor.

The viewer also provides values for the distribution, so that KB articles that suggest opening service requests with specific attributes are easier to follow and act upon, and those requests are easier to resolve. The algorithm can then compute a probability both with and without that criterion.

All that the algorithm requires is a single key column, input columns (the independent variables), and at least one predictable column. A sample query for viewing the information maintained by the algorithm for a particular attribute would look like this:

SELECT NODE_TYPE, NODE_CAPTION, NODE_PROBABILITY, NODE_SUPPORT, NODE_SCORE FROM NaiveBayes.CONTENT WHERE ATTRIBUTE_NAME = 'KB_PRESENT';

The use of Bayesian conditional probability is not restricted just to this classifier. It can be used in Association data mining as well.

Implementation:

https://jsfiddle.net/za52wjkv/

Wednesday, December 30, 2020

Building a k-nearest neighbors tensorflow.js application:

Introduction: TensorFlow.js is a machine learning framework for JavaScript applications. It helps us build models that can be used directly in the browser or on a Node.js server. We use this framework to build an application that can find similar requests so that they can be used for prediction.

Description: The JavaScript application uses data from a CSV that has categorizations of requests and their resolution time. The attributes of each request include a category_id, a pseudo-parameter attribute, and the execution time. The data used in this sample has 1200 records, but the attributes are kept to a minimum to keep the application simple.

As with any machine learning example, the data is split into a 70% training set and a 30% test set. There is no order to the data, and the split is taken over a random shuffle.

The model chosen is a KNN model. This model is appropriate for finding the k nearest neighbors to the examples it was previously shown; the default number of neighbors is 3. It is suitable for one input and one output, where the examples are distinct and do not affect each other. The output consists of the label with the highest confidence (the share of the k neighbors that support it), a class index, and a set of confidence scores, one per label, as sketched below.
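A minimal sketch of how such a classifier can be wired up with the @tensorflow-models/knn-classifier package; the field names and the bucketing of resolution times into labels are assumptions for illustration, and the code is meant to run inside an async function:

// Build a KNN classifier from the training set.
const classifier = knnClassifier.create();
for (const r of trainingSet) {
  const example = tf.tensor1d([r.category_id, r.parameter]);
  classifier.addExample(example, r.resolutionBucket); // e.g. 'fast', 'medium', 'slow'
  example.dispose();
}

// Predict with k = 3 neighbors (the default mentioned above).
const query = tf.tensor1d([testRequest.category_id, testRequest.parameter]);
const result = await classifier.predictClass(query, 3);
console.log(result.label, result.classIndex, result.confidences);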

TensorFlow makes it easy to construct this model using an API. It can only present the output after the model is executed; in this case, the model must be run before the weights are available. The output of each layer can be printed using the summary() method.

With the model and the training/test sets defined, it is now easy to evaluate the model and run the inference. The model can also be saved and restored. It executes faster when a GPU is available for the computation.

The features are available via the feature_extractor. The model is evaluated on the training set using model.compile() and model.fit(). It can then be called on a test input. Additionally, if a specific layer is to be evaluated, we can call just that layer on the test input.

When the model is trained, it can be done in batches of a predefined size. The number of passes over the entire training dataset, called epochs, can also be set up front. It is helpful to visualize the training with a Highcharts chart that is updated with the loss after each epoch.

When the model is tested, it predicts the resolution time for the given attributes of category_id and the parameter attribute.

Conclusion: Tensorflow.js is becoming a standard for implementing machine learning models. Its usage is fairly simple, but the choice of model and the preparation of data take significantly more time than setting it up, evaluating it, and using it.

 

Tuesday, December 29, 2020

Building a tensorflow.js application with Highcharts visualization for service request analysis:


Introduction: TensorFlow.js is a machine learning framework for JavaScript applications that can work with both tensors/vectors and scalars. It helps us build models that can be used directly in the browser or on a Node.js server. We use this framework to build an application that can predict request resolution time.

Description: This JavaScript application uses data from a CSV that has categorizations of service requests, with their attributes and the resolution time. The attributes of each request include a category_id, a pseudo-parameter attribute, and the execution time, and these help define the tensor. This data is sampled from an inventory and simplified to keep the analysis simple.
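One way to load such a file, assuming a hypothetical requests.csv with category_id, parameter, and resolution_time columns, is the tf.data.csv API:

// Load the request data and mark resolution_time as the label column.
// The file name and column names are assumptions for illustration.
const csvDataset = tf.data.csv('file://./requests.csv', {
  columnConfigs: {
    resolution_time: { isLabel: true }
  }
});

// Convert each row into feature and label arrays for later batching.
const rows = csvDataset.map(({ xs, ys }) => ({
  xs: Object.values(xs),
  ys: Object.values(ys)
}));

In the browser the source would be an http(s) URL instead of the file:// path used here.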

As with any machine learning example, the data is split into a 70% training set and a 30% test set. There is no order to the data, and the split is taken over a random shuffle.
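A simple way to do this split, assuming the rows have already been collected into an array named records, might be:

// Shuffle the rows in place and take 70% for training, 30% for testing.
tf.util.shuffle(records);
const splitIndex = Math.floor(records.length * 0.7);
const trainingSet = records.slice(0, splitIndex);
const testSet = records.slice(splitIndex);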

The model chosen is a Sequential model. This model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor. Layers act in succession, taking the output of one as the input of the next. Thus, this model is suitable for one input and one output, where the layers are distinct and do not share inputs.

Another model that can be used is one that is more generic and loads an acyclic graph. A sequential model only uses a linear stack of layers. The input or output of the layers must be specified. A convolutional layer creates a convolution kernel, which is a small matrix of weights. The kernel slides over the input layer and performs an element-wise multiplication with the part of the input the kernel is on. Each resulting scalar forms one element of a new matrix, often called the feature map, which is the output of the convolution. The graph model and the sequential model can both be used with a tensor or a scalar.
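As an aside, this is roughly how a convolutional layer would be declared in tensorflow.js; the shapes here are illustrative, and the resolution-time model in this post does not actually need convolutions:

// A convolutional layer with a 3x3 kernel sliding over a 28x28 single-channel input.
const convLayer = tf.layers.conv2d({
  inputShape: [28, 28, 1],  // height, width, channels (illustrative)
  kernelSize: 3,            // the 3x3 matrix of weights, i.e. the convolution kernel
  filters: 8,               // number of kernels, producing 8 feature maps
  activation: 'relu'
});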

TensorFlow makes it easy to construct this model using the tf.sequential() API. It can only present the hidden weight matrix after the model is executed; in this case, the model must be run before the weights are available. The output shape and parameter count of each layer can be printed using the summary() method.
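A minimal sketch of such a model for the two numeric request attributes, with assumed layer sizes:

// A small sequential model mapping [category_id, parameter] to a resolution time.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [2], units: 16, activation: 'relu' }));
model.add(tf.layers.dense({ units: 1 }));   // single output: the predicted resolution time
model.summary();                            // prints each layer's output shape and parameter count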

With the model and the training/test sets defined, it is now easy to evaluate the model and run the inference. The model can also be saved and restored. It executes faster when a GPU is available for the computation.

The features are available via the feature_extractor. The model is evaluated on the training set using model.compile() and model.fit(). It can then be called on a test input. Additionally, if a specific layer is to be evaluated, we can call just that layer on the test input.
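With illustrative hyperparameters, and assuming xTrain and yTrain tensors built from the training set, the training and prediction steps look roughly like this (inside an async function):

// Compile, train, and predict; the optimizer and loss are illustrative choices.
model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
await model.fit(xTrain, yTrain, { epochs: 50, batchSize: 32 });

// Predict the resolution time for a single test request.
const prediction = model.predict(tf.tensor2d([[testRequest.category_id, testRequest.parameter]]));
prediction.print();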

When the model is trained, it can be done in batches of a predefined size. The number of passes over the entire training dataset, called epochs, can also be set up front. It is helpful to visualize the training with a Highcharts chart that is updated with the loss after each epoch.
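The same model.fit() call can also report progress after each epoch; assuming a pre-created Highcharts chart object named lossChart with a single series, a callback can push the loss into it:

// Push the training loss into a Highcharts series after every epoch.
await model.fit(xTrain, yTrain, {
  epochs: 50,
  batchSize: 32,
  callbacks: {
    onEpochEnd: (epoch, logs) => {
      lossChart.series[0].addPoint(logs.loss);
    }
  }
});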

When the model is tested, it predicts the resolution time for the given attributes of category_id and the parameter attribute.

Creating the Highcharts chart is a simple JavaScript call to their API:

Highcharts.chart('container', {
    chart: {
        type: 'bar'
    },
    title: {
        text: 'relief time'
    },
    subtitle: {
        text: 'Source: <a href="">Response time</a>'
    },
    xAxis: {
        title: {
            text: 'Time (seconds)'
        }
    },
    yAxis: {
        min: 0,
        title: {
            text: 'Service requests',
            align: 'high'
        },
        labels: {
            overflow: 'justify'
        }
    },
    tooltip: {
        valueSuffix: ' time'
    },
    plotOptions: {
        bar: {
            dataLabels: {
                enabled: true
            }
        }
    },
    legend: {
        layout: 'horizontal',
        align: 'right',
        verticalAlign: 'top',
        x: -40,
        y: 80,
        floating: true,
        borderWidth: 1,
        backgroundColor:
            Highcharts.defaultOptions.legend.backgroundColor || '#FFFFFF',
        shadow: true
    },
    credits: {
        enabled: false
    },
    series: [{
        name: 'Service requests',
        data: [13, 19, 17, 123, 116, 14, 13, 18, 12, 17, 19, 110, 112, 16, 15, 11, 17]
    }]
});

Conclusion: Tensorflow.js is becoming a standard for implementing machine learning models. Its usage is fairly simple, but the choice of model and the preparation of data take significantly more time than setting it up, evaluating it, and using it.