Applications of Data Mining to a Reward Points Collection Service
Continuation of use cases: Outliers can also be detected with data mining algorithms. The similarity between rows can be measured with distance functions such as Euclidean distance, Manhattan distance, graph distance, and other Minkowski (Lp) metrics. Candidate aggregate dissimilarity measures include the distance to the k nearest neighbors, the density of the neighborhood falling outside the expected range, and the attribute differences with nearby neighbors. Outliers are important because they call for new strategies to encompass them: if outliers are numerous, they significantly increase organizational costs, and if they are few, the remaining patterns help identify efficiencies. A decision tree algorithm uses the attributes of the service requests to make a prediction, such as the relief time for a case resolution. The split at each level is easy to visualize, which highlights the importance of the corresponding attributes; this information is useful both for pruning the tree and for drawing it. Logistic regression helps determine user appreciation based on demographics and can be used to find repetitions in requests. Neural networks with a softmax classifier can classify appreciation terms in chat text on channels.
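As a concrete illustration of the distance-based approach, the sketch below scores each reward-point record by its mean distance to its k nearest neighbors and flags records whose score falls well outside the expected range. The feature columns (points granted and days to case resolution), the sample data, and the threshold are hypothetical choices, not part of the service.

```python
# Minimal sketch of k-nearest-neighbor outlier detection on reward-point records.
# The feature columns and sample values are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each row is one grant: [reward points, days taken to resolve the related case]
grants = np.array([
    [10, 2], [12, 3], [11, 2], [9, 4], [10, 3],
    [13, 2], [11, 3], [95, 1],   # the last row is an unusually large grant
], dtype=float)

k = 3
nn = NearestNeighbors(n_neighbors=k + 1, metric="euclidean").fit(grants)
distances, _ = nn.kneighbors(grants)

# Aggregate dissimilarity: mean distance to the k nearest neighbors
# (column 0 is the point itself, so it is skipped).
knn_score = distances[:, 1:].mean(axis=1)

# Flag rows whose score lies far outside the expected range
# (two standard deviations is an assumed cutoff for this toy data).
threshold = knn_score.mean() + 2 * knn_score.std()
outliers = np.where(knn_score > threshold)[0]
print("outlier row indices:", outliers)
```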
The Naive Bayes algorithm can be used for use cases where binary conditions apply. The policies that organizations define for granting reward points as employee appreciation are usually authored as a set of conditions, and these conditions can be maintained in the service. When the conditions pertain to attributes from the source where the reward points are published, their probabilities become relevant to this algorithm, especially when each input is treated as present or absent and the input variables are assumed independent. The simplicity of counting or summing the reward points that meet a binary condition, together with the ease of visualizing, debugging, and using it as a predictor, makes this algorithm quite popular. For example, reward points can be counted based on whether or not the appreciation came from a specific person.
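A minimal sketch of this idea, assuming hypothetical binary policy conditions (whether the appreciation came from a manager, whether it was posted on a public channel, whether it is a repeat appreciation), could use a Bernoulli Naive Bayes classifier:

```python
# Minimal sketch of Bernoulli Naive Bayes over binary policy conditions.
# The feature names, training data, and policy outcome are hypothetical placeholders.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row: [from a manager?, on a public channel?, repeat appreciation?]
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])
# 1 = reward points were granted under the policy, 0 = not granted
y = np.array([1, 1, 0, 0, 1, 0])

model = BernoulliNB()
model.fit(X, y)

# Predict the grant decision for a new appreciation that came from a manager,
# on a public channel, and is not a repeat.
print(model.predict([[1, 1, 0]]))        # predicted class
print(model.predict_proba([[1, 1, 0]]))  # class probabilities
```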
Collaborative filtering is another use case where binary conditions apply. It is particularly useful when there are multiple participants in a group whose opinions determine the best grant of reward points. The earlier approaches articulated explicit conditions; this algorithm avoids conditions and replaces them with ratings. The participants in the group can be selected to form either a diverse set or a cohesive set, depending on the purpose. Grants can then be calculated from existing reward points with the help of this opinion group, which avoids many of the pitfalls of condition-based logic, such as disclosure of the rules, taking advantage of them, and circumventing them.
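The following is a minimal sketch of user-based collaborative filtering, assuming a hypothetical ratings matrix in which each participant in the opinion group rates candidate grants; a missing rating is predicted from similar participants:

```python
# Minimal sketch of user-based collaborative filtering over participant ratings.
# The ratings matrix and its dimensions are hypothetical placeholders.
import numpy as np

# Rows: participants in the opinion group; columns: candidate grants.
# 0 marks a rating the participant has not provided yet.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    """Predict user's rating for item from similar participants who rated it."""
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and ratings[other, item] > 0:
            sims.append(cosine_sim(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    if not sims:
        return 0.0
    sims, vals = np.array(sims), np.array(vals)
    return float(sims @ vals / (sims.sum() + 1e-9))

# Estimate how participant 1 would rate candidate grant 2.
print(round(predict(1, 2), 2))
```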
Hierarchical clustering is helpful when we want to cluster the reward points to match the organizational hierarchy, so that a manager gets credit when their reporting employees do well, which is a standard practice in many companies. It may not be evident from the flat, independent grants assigned to individuals that the reward points can be grouped by the hierarchy to which each user belongs. The distance between members in the organizational hierarchy can also be used as the metric that determines the hierarchical clustering of reward point grants.
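One way to realize this, sketched below under an assumed toy org chart, is to use the number of reporting edges between two employees as the pairwise distance and feed it to an agglomerative clustering routine:

```python
# Minimal sketch of hierarchical clustering of grant recipients, using distance
# in the reporting tree as the dissimilarity measure. The org chart is hypothetical.
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

# manager of each employee; None marks the top of the hierarchy
manager = {"ceo": None, "mgr_a": "ceo", "mgr_b": "ceo",
           "emp_1": "mgr_a", "emp_2": "mgr_a", "emp_3": "mgr_b", "emp_4": "mgr_b"}
employees = sorted(manager)

def path_to_root(e):
    path = [e]
    while manager[e] is not None:
        e = manager[e]
        path.append(e)
    return path

def tree_distance(a, b):
    """Number of reporting edges between two employees in the org tree."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = set(pa) & set(pb)
    # distance from each side up to the lowest common ancestor
    return min(pa.index(c) + pb.index(c) for c in common)

# condensed pairwise distance vector in the order scipy's linkage expects
dists = [tree_distance(a, b) for a, b in combinations(employees, 2)]
Z = linkage(dists, method="average")

# cut the dendrogram so that employees within 2 reporting edges cluster together
labels = fcluster(Z, t=2, criterion="distance")
print(dict(zip(employees, labels)))
```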
Conclusion: Several data mining algorithms are applicable to the reward points repository.