Thursday, January 7, 2021

Applying Naive Bayes ... (continued)

Compared with the decision tree and time-series algorithms, which work row by row, the Naïve Bayes classifier works with the attributes of each row, treating the last column as the predictor. Linear regression is useful for predicting a variable, but Naïve Bayes builds on conditional probabilities across attributes and is easy to visualize, which lets experts show the reasoning process and lets users judge the quality of a prediction. All of these algorithms need training data in our use case, but Naïve Bayes uses it both for exploration and for prediction based on earlier requests – for example, to determine whether the self-help was useful or not by evaluating both probabilities conditionally.
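The "useful or not" evaluation above is just Bayes' rule applied to one attribute. A minimal sketch in Python, where the prior and likelihood values are invented for illustration and not figures from our data:

```python
# Hypothetical ticket history: did the user find the self-help useful?
prior_useful = 0.6          # P(useful) - assumed prior
prior_not = 0.4             # P(not useful)

# Likelihood of one attribute ("a KB article was attached") per outcome
p_kb_given_useful = 0.8     # P(KB | useful) - assumed
p_kb_given_not = 0.3        # P(KB | not useful) - assumed

# Bayes' rule: P(useful | KB) = P(KB | useful) * P(useful) / P(KB)
evidence = p_kb_given_useful * prior_useful + p_kb_given_not * prior_not
posterior_useful = p_kb_given_useful * prior_useful / evidence
print(round(posterior_useful, 3))   # -> 0.8
```

With these assumed numbers, seeing a KB article raises the probability that the self-help was useful from 0.6 to 0.8; the complementary probability (not useful) is evaluated the same way with the other likelihood.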

The conditional probability can be used both for exploration and for prediction. For each input column in the dataset, the algorithm calculates the distribution of states, which is then used to assign a state to the predictable column. For example, the availability of a Knowledge Base article might show a distribution of input values significantly different from the others, indicating that it is a potential predictor.

The viewer also provides values for the distribution, so KB articles that suggest opening service requests with specific attributes are easier to follow, act on and resolve. The algorithm can then compute the probability both with and without that criterion.
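Computing the probability "with and without that criterion" is a simple conditional split. A sketch over toy service-request rows (the data and column meanings are invented for illustration):

```python
# Toy rows: (kb_present, resolved) - invented data, not from the real dataset
rows = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def p_resolved(rows, kb_present):
    """P(resolved | kb_present) estimated from the rows."""
    subset = [resolved for kb, resolved in rows if kb == kb_present]
    return sum(subset) / len(subset)

with_kb = p_resolved(rows, True)      # resolution rate when a KB article exists
without_kb = p_resolved(rows, False)  # resolution rate when it does not
print(with_kb, without_kb)            # -> 0.75 0.25
```

The gap between the two conditional probabilities is exactly what flags the KB-article attribute as a strong predictor in the viewer.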

All the algorithm requires is a single key column, input columns (the independent variables), and at least one predictable column. A sample query for viewing the information the algorithm maintains for a particular attribute would look like this:

SELECT NODE_TYPE, NODE_CAPTION, NODE_PROBABILITY, NODE_SUPPORT, NODE_SCORE
FROM NaiveBayes.CONTENT
WHERE ATTRIBUTE_NAME = 'KB_PRESENT';
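Conceptually, what such a content query returns per attribute is a table of conditional value distributions with their support counts. A rough Python sketch of those per-attribute statistics, with invented column names and rows:

```python
from collections import defaultdict

# Invented training rows - the column names mirror the query above
rows = [
    {"KB_PRESENT": "yes", "PRIORITY": "low",  "RESOLVED": "yes"},
    {"KB_PRESENT": "yes", "PRIORITY": "high", "RESOLVED": "yes"},
    {"KB_PRESENT": "no",  "PRIORITY": "high", "RESOLVED": "no"},
    {"KB_PRESENT": "no",  "PRIORITY": "low",  "RESOLVED": "no"},
    {"KB_PRESENT": "yes", "PRIORITY": "low",  "RESOLVED": "no"},
]

def attribute_stats(rows, attribute, target="RESOLVED"):
    """For each target state, the (support, probability) of each attribute value."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for r in rows:
        counts[r[target]][r[attribute]] += 1
        totals[r[target]] += 1
    return {t: {v: (n, n / totals[t]) for v, n in vals.items()}
            for t, vals in counts.items()}

stats = attribute_stats(rows, "KB_PRESENT")
# stats["yes"]["yes"] -> (2, 1.0): both resolved requests had a KB article
```

The (support, probability) pairs play the role of NODE_SUPPORT and NODE_PROBABILITY in the model content; this is only an analogy to make the query's output concrete, not the actual storage format.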
