Thursday, December 15, 2016

Today we continue to discuss the difference between the frequentist and the Bayesian approach. We said they perceive probability differently. A frequentist defines probability as the long-run frequency of a certain measurement or observation. The frequentist says there is a single source of truth, and measurement samples may be noisy instances of this truth. Bayesians define probability as the chance of a hypothesis given incomplete knowledge; they don't see data as a source of truth. A frequentist can calculate probabilities precisely, but often not the probabilities we want. A Bayesian can calculate the probabilities we want, but often cannot do so precisely.
The example most often used for this is repeated coin tossing. When we can't tell whether the coin was biased or fair, the probabilities calculated for k heads and n-k tails may fall far from the expected truth, and this is why we move into the Bayesian domain. The chance of getting, say, 13 heads and 7 tails with a fair coin is about 7.4%. Here's how: the number of ways to choose k heads in n tosses is the binomial coefficient n-choose-k, and since any one of these arrangements would suffice, we write the probability as n-choose-k times p^k times (1-p)^(n-k), where n-choose-k is n!/(k!(n-k)!). Substituting n = 20, k = 13 and p = 1/2, we get 7.4%. If we assumed on the other hand that the coin was biased and the long-run frequency of heads was 13/20 instead of 1/2, then the chance of getting 13 heads and 7 tails is about 18.4%, a factor of 2.5 higher compared to the fair coin.

To really tell whether the coin was biased or the experimental evidence just happened to come out this way, we need to include some prior idea that it is more likely that the coin was fair. In other words, to answer the question of the fairness of the coin, we need prior information about how likely we think fairness is. A Bayesian sees an experiment and uses it to test a hypothesis. He writes the probability that the coin is fair as a conditional probability: the probability of 13 heads and 7 tails under the fair-coin hypothesis, divided by the sum over all possible competing truths i of the probability of 13 heads and 7 tails under truth i, each weighted by its prior. This is equivalent to normalizing the probability calculated for a fair coin over all possible truths. While this adds complexity, it in fact articulates our beliefs.

The frequentist calculation was algebraic and therefore has application in significance testing, and the majority of statistics is taught with the frequentist approach. This does not mean the Bayesian approach is not algebraic; many would argue it is the more powerful framework, which is why it is considered advanced, and both the advanced and the simple approach will give the same answer in some simple cases. The Bayesian approach is considered more practical because it does not require the notion of repeating an experiment many times and it answers questions of practical pertinence. For example, if we have one set of measurements for the time period of a pendulum, then most likely the distribution is a Gaussian with a measured mean and spread, and this follows from making certain assumptions about the prior distributions. Since science is about gathering evidence that supports certain models of reality, the author Maarten Ambaum concludes that "the Bayesian viewpoint is very likely the most relevant probabilistic framework for science".
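The arithmetic above can be checked with a short sketch. The helper methods and the prior P(fair) = 0.9 below are illustrative assumptions of mine, not from the post:

using System;

class CoinFairness
{
    // C(n, k) computed iteratively to avoid factorial overflow
    static double Choose(int n, int k)
    {
        double result = 1;
        for (int i = 1; i <= k; i++)
            result = result * (n - k + i) / i;
        return result;
    }

    // P(k heads in n tosses | head probability p) = C(n,k) * p^k * (1-p)^(n-k)
    static double Binomial(int n, int k, double p)
    {
        return Choose(n, k) * Math.Pow(p, k) * Math.Pow(1 - p, n - k);
    }

    static void Main()
    {
        double fair = Binomial(20, 13, 0.5);      // about 0.074
        double biased = Binomial(20, 13, 0.65);   // about 0.184
        Console.WriteLine($"fair: {fair:F3}, biased: {biased:F3}, ratio: {biased / fair:F1}");

        // Bayes' rule with just two competing truths: fair, or biased at p = 13/20.
        // The prior P(fair) = 0.9 is an assumed value chosen for illustration.
        double priorFair = 0.9;
        double posteriorFair = fair * priorFair /
            (fair * priorFair + biased * (1 - priorFair));
        Console.WriteLine($"P(fair | 13 heads, 7 tails) = {posteriorFair:F2}");  // about 0.78
    }
}

With this strong prior for fairness, the evidence of 13 heads in 20 tosses lowers but does not overturn the belief that the coin is fair, which is exactly the kind of question the frequentist calculation alone cannot answer.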

#codingexercise
Given a binary tree, prune it such that the final tree contains only those root-to-leaf paths whose sums are greater than or equal to k.

Node PruneKHeavy(Node root, int k, ref int sum)
{
    if (root == null) return null;
    sum += root.data;                                // include this node in the running path sum
    root.left = PruneKHeavy(root.left, k, ref sum);
    root.right = PruneKHeavy(root.right, k, ref sum);
    if (root.left == null && root.right == null && sum < k)
    {
        // a leaf (or a node whose children were all pruned) ending a path
        // whose sum is below k gets removed
        sum -= root.data;
        return null;
    }
    sum -= root.data;                                // undo before returning to the parent
    return root;
}
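
A quick way to exercise this, assuming a minimal Node class (the post does not define one, so this definition is illustrative):

class Node
{
    public int data;
    public Node left, right;
    public Node(int data) { this.data = data; }
}

// Build:   1        Path sums are 1+2 = 3 and 1+3 = 4,
//         / \       so with k = 4 the left leaf is pruned
//        2   3      and the right leaf survives.
var root = new Node(1) { left = new Node(2), right = new Node(3) };
int sum = 0;
root = PruneKHeavy(root, 4, ref sum);
// root.left is now null, root.right is intact, and sum is back to 0.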



The same strategy works for the complementary k-light version as well; only the comparison at the leaf changes, as sketched below.
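
A minimal sketch of that variant, keeping only the paths whose sums are at most k (the name PruneKLight is mine):

Node PruneKLight(Node root, int k, ref int sum)
{
    if (root == null) return null;
    sum += root.data;
    root.left = PruneKLight(root.left, k, ref sum);
    root.right = PruneKLight(root.right, k, ref sum);
    // the only change: prune leaves whose path sum exceeds k
    if (root.left == null && root.right == null && sum > k)
    {
        sum -= root.data;
        return null;
    }
    sum -= root.data;
    return root;
}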
