Today we discuss the difference between the frequentist and Bayesian approaches. These are two camps that both use probabilities to make sense of the world but differ in how they interpret probability. I will state their well-known definitions up front and elaborate afterwards. A frequentist defines probability as the long-run frequency of a certain measurement or observation; the frequentist says there is a single source of truth, and measurement samples are noisy instances of that truth. A Bayesian defines probability as the chance of a hypothesis given incomplete knowledge. Bayesians do not see data as a source of truth; they see data only as evidence for or against a particular hypothesis.
The archetypal example is the repeated tossing of a coin to see what the long-run frequency of heads or tails is; this long-run frequency then asymptotes to the truth. Bayesians simply observe the series of coin tosses and use this information to make deductions about, for example, how likely it is that the coin is fair. The frequentist view is best challenged by asking whether there is a true fraction to which the frequency in the coin-tossing experiment should asymptote. The suggestion is that as frequentists we hold an idealized view, even if it is a specific one. It can be argued that the coin tosses happen under the same circumstances, but in such a way that the outcome is maximally random. There is a contradiction here. The coin tosses are the same in so far as each experiment must have the same validity and sample the same set of possible outcomes in the same way. But the coin tosses must also differ in some sense, otherwise the outcomes would all be the same. The long-run frequency for a real coin-tossing experiment is as much a function of the way the experiment is performed as it is a property of the coin. Therefore we need to allow for some uncertainty. If we get an asymptotic frequency of 1/2 for heads, then either we have done a truly randomized experiment with a fair coin, or we may have done a biased experiment with an unfair coin, and the frequentist view cannot differentiate between the two. In the Bayesian view, we may be able to. In twenty tosses of a coin, we may get 13 heads and 7 tails; with a fair coin, the chance of this outcome is about 7.4%. With a biased coin whose heads frequency is 13/20 instead of 1/2, the chance increases to 18.4%. The Bayesian includes non-frequentist prior information about how likely we think it is that the coin was fair to start with.
To state how fair the coin is, Bayesians use conditional probability: p(fair|13h7t) = p(13h7t|fair)p(fair)/p(13h7t), where the denominator is the sum over all hypotheses about the fairness of the coin. The frequentist gives the probability of an event given a truth, which in this case is p(13h7t|fair), and tries to use this information for any statement. The difference between the two camps is therefore often described as follows: a frequentist can calculate probabilities precisely, but often not the probabilities we want; a Bayesian can calculate the probabilities we want, but often cannot do so precisely.
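The Bayes' rule calculation above can be sketched numerically. This is a minimal illustration, not from the original text: it assumes just two candidate hypotheses (a fair coin and one biased to p = 0.65) and a subjective 50/50 prior between them.

```python
# A minimal sketch of the Bayesian calculation, assuming a discrete set of
# candidate biases and a prior that puts 50% mass on the coin being fair.
from math import comb

def likelihood(p, heads=13, tails=7):
    # p(13h7t | bias p): binomial probability of 13 heads in 20 tosses
    return comb(heads + tails, heads) * p**heads * (1 - p)**tails

# Hypotheses: the coin is fair (p = 0.5) or biased toward heads (p = 0.65).
# The prior p(fair) = 0.5 is a subjective choice, not derived from the data.
hypotheses = {0.5: 0.5, 0.65: 0.5}

# Denominator p(13h7t): sum over all hypotheses, weighted by the prior.
evidence = sum(likelihood(p) * prior for p, prior in hypotheses.items())

# Posterior p(fair | 13h7t) via Bayes' rule.
posterior_fair = likelihood(0.5) * hypotheses[0.5] / evidence
print(round(posterior_fair, 3))  # about 0.286
```

After seeing 13 heads in 20 tosses, the posterior belief that the coin is fair drops below the 50% prior, because the biased hypothesis explains the data better. A finer grid of candidate biases would refine the answer, but the structure of the calculation is the same.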
Courtesy: Maarten Ambaum
#puzzle
Assuming a fair coin and a controlled toss each time, what is the probability of getting k heads in n tosses?
The number of ways to choose which k of the n tosses come up heads is n-choose-k, the binomial coefficient. Since each such arrangement has probability p^k (1-p)^(n-k), the total probability is n-choose-k × p^k × (1-p)^(n-k), where n-choose-k = n!/(k!(n-k)!). For a fair coin, p = 1/2.
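The formula can be checked directly; this short sketch reproduces the figures quoted earlier for 13 heads in 20 tosses: about 7.4% for a fair coin and about 18.4% for a coin biased to p = 13/20.

```python
# Binomial probability of exactly k heads in n tosses with heads-probability p.
from math import comb

def prob_k_heads(n, k, p=0.5):
    # n-choose-k ways to place the heads, each arrangement having
    # probability p^k * (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(round(prob_k_heads(20, 13), 3))         # fair coin: 0.074
print(round(prob_k_heads(20, 13, 13/20), 3))  # biased coin: 0.184
```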
#codingexercise
Given a continuous stream of characters, determine whether the characters seen so far can be rearranged into a palindrome. The stream delivers one character at a time.
The key to solving this is to keep track of the count of each character seen.
If the total number of characters is even, every character must appear an even number of times.
Otherwise, at most one character may have an odd count.
bool IsPalindrome(StreamReader s)
{
    // Count how many times each character has appeared in the stream.
    var counts = new Dictionary<char, int>();
    int next;
    while ((next = s.Read()) != -1)
    {
        char c = (char)next;
        if (counts.ContainsKey(c))
            counts[c] += 1;
        else
            counts.Add(c, 1);
    }
    // The characters can be rearranged into a palindrome if and only if
    // at most one character occurs an odd number of times. (When the total
    // length is even, this forces every count to be even.)
    int oddCount = 0;
    foreach (int val in counts.Values)
    {
        if (val % 2 != 0)
            oddCount += 1;
        if (oddCount > 1)
            return false;
    }
    return true;
}
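Since the stream delivers one character at a time, the check can also be made incremental rather than waiting for end-of-stream. This is a hypothetical Python variant, not part of the original exercise: instead of re-scanning all counts, it maintains only the set of characters whose count is currently odd and reports after each character whether the prefix seen so far could be rearranged into a palindrome.

```python
# Incremental palindrome-candidate check over a character stream.
# Maintains the set of characters with an odd count so far; the prefix can
# be rearranged into a palindrome iff that set has at most one element.
def palindrome_candidates(stream):
    odd = set()  # characters whose count is currently odd
    for ch in stream:
        odd.symmetric_difference_update(ch)  # toggle membership of ch
        yield len(odd) <= 1

print(list(palindrome_candidates("abba")))  # [True, False, True, True]
```

Toggling set membership on each character makes every step O(1), so the check keeps up with the stream regardless of its length.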