Sunday, November 22, 2015




In Naïve Bayes, the classic algorithm is as follows:

Estimate the probability of a training vector given that the condition exists (a.k.a. the likelihood).

Estimate the probability of a training vector given that the condition does not exist.

We also calculate the probability that the condition exists (a.k.a. the prior) and the probability that it does not. These act as weights on the likelihoods above.

This is applied to the entire data set; a rough sketch of that batch step follows below.
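Here is a minimal sketch of that batch step in Python. It is not from the original post: it assumes binary (0/1) feature vectors, a binary condition (class 0 or 1), and Laplace smoothing, and the function and variable names are my own. It returns the raw counts as well, so they can be reused for the incremental update discussed next.

```python
def train_naive_bayes(X, y):
    """Batch Naive Bayes on binary feature vectors X with binary labels y."""
    n_features = len(X[0])
    # counts[c][j]: number of class-c training vectors in which feature j is present
    counts = {0: [0] * n_features, 1: [0] * n_features}
    class_totals = {0: 0, 1: 0}

    for xi, yi in zip(X, y):
        class_totals[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v

    n = len(y)
    # Likelihood: P(feature j present | condition c), with Laplace smoothing
    likelihood = {
        c: [(counts[c][j] + 1) / (class_totals[c] + 2) for j in range(n_features)]
        for c in (0, 1)
    }
    # Prior ("weight"): P(condition c)
    prior = {c: class_totals[c] / n for c in (0, 1)}
    return counts, class_totals, likelihood, prior
```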

Now we modify the previous algorithm to handle incremental data as follows:

For new data:

Calculate the counts as above, but only over the new batch.

For old data:

Sum the previously calculated counts for the target directly with the counts from the new batch.

This works because the denominator remains the same and the weights remain the same, so only the accumulated sums need to be combined.
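A minimal sketch of that incremental step, continuing the hypothetical code above: the stored counts (the sufficient statistics) from the old data are added to the counts from the new batch, and the likelihoods and priors are re-derived in exactly the same way as in the batch case.

```python
def update_naive_bayes(counts, class_totals, X_new, y_new):
    """Fold a new batch into previously accumulated Naive Bayes counts."""
    # Add the new batch's counts to the stored counts
    for xi, yi in zip(X_new, y_new):
        class_totals[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v

    n_features = len(counts[0])
    n = class_totals[0] + class_totals[1]
    # Re-derive likelihoods and priors from the combined counts,
    # exactly as in the batch step
    likelihood = {
        c: [(counts[c][j] + 1) / (class_totals[c] + 2) for j in range(n_features)]
        for c in (0, 1)
    }
    prior = {c: class_totals[c] / n for c in (0, 1)}
    return counts, class_totals, likelihood, prior
```

For example, calling train_naive_bayes on the old data and then update_naive_bayes on a new batch gives the same likelihoods and priors as retraining from scratch on the combined data, since everything is derived from simple sums.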






