We continue discussing the paper "Nested sampling for general Bayesian computation" by Skilling
With this technique, nested sampling estimates directly how the likelihood function relates to the enclosed prior mass. The evidence it produces can be used to compare two models of the same data set by comparing the posterior probability assigned to each model after the relevant evidence is taken into account.
This method computes the evidence directly. It simplifies the evidence calculation by not integrating over the parameters themselves but over the cumulated prior mass X that covers likelihood values greater than some threshold lambda. As lambda increases, the enclosed mass X decreases from one towards zero. Earlier we looked at this transformation, which turns the evidence into a one-dimensional integral over the prior mass instead of a multidimensional integral over the parameters. The paper essentially says: don't navigate the parameter space; it is sufficient to explore the likelihood as a function of the enclosed prior mass.
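The shrinking-prior-mass idea can be shown with a minimal sketch. Everything here is an assumption made for illustration, not the paper's own example: a uniform prior on [0,1] and a toy likelihood L(theta) = exp(-theta), chosen because L is monotone decreasing, so the constrained step "draw from the prior subject to L > L_worst" reduces to drawing theta uniformly below the current worst theta. The exact evidence for this toy problem is Z = 1 - 1/e.

```java
import java.util.Random;

public class NestedSamplingDemo
{
    // Toy problem (an assumption for illustration): uniform prior on [0,1],
    // likelihood L(theta) = exp(-theta), so the true evidence is
    // Z = integral over [0,1] of exp(-theta) d(theta) = 1 - 1/e ~= 0.6321.
    static double likelihood(double theta) { return Math.exp(-theta); }

    static double estimateEvidence(int nLive, int nIterations, long seed)
    {
        Random rng = new Random(seed);
        double[] theta = new double[nLive];
        for (int i = 0; i < nLive; i++)
            theta[i] = rng.nextDouble();      // draw the live points from the prior

        double z = 0.0;
        double xPrev = 1.0;                   // enclosed prior mass starts at 1
        for (int i = 1; i <= nIterations; i++)
        {
            // find the worst live point (lowest likelihood = largest theta here)
            int worst = 0;
            for (int j = 1; j < nLive; j++)
                if (theta[j] > theta[worst]) worst = j;

            // the enclosed prior mass shrinks geometrically: X_i ~ exp(-i / nLive)
            double x = Math.exp(-(double) i / nLive);
            z += likelihood(theta[worst]) * (xPrev - x);
            xPrev = x;

            // replace the worst point by a prior draw with L > L_worst;
            // because L is monotone decreasing, that means theta < theta_worst
            theta[worst] = rng.nextDouble() * theta[worst];
        }

        // add the remaining mass using the average likelihood of the live points
        double avg = 0.0;
        for (int j = 0; j < nLive; j++) avg += likelihood(theta[j]);
        z += xPrev * avg / nLive;
        return z;
    }

    public static void main(String[] args)
    {
        double z = estimateEvidence(200, 2000, 42L);
        System.out.println("estimated Z = " + z + ", exact = " + (1 - Math.exp(-1)));
    }
}
```

Note that the parameter space is only ever touched through prior draws; the evidence sum itself runs over the one-dimensional mass coordinate X.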
This method fits nicely into the frequentist-versus-Bayesian debate. As we know, these are two different schools of thought that vary in their methodology. A frequentist takes a model and reports the outcome. A Bayesian applies a priori information along with the model. But that's not all. A Bayesian defines a probability as a degree of belief in a proposition and therefore reports the possible outcomes as probabilities. He treats the observed data as given and the parameters as unknown quantities to be described probabilistically. A frequentist believes that probabilities represent the long-run frequencies with which events occur; the parameters stay fixed throughout the sampling, and he invents a fictitious population from which the particular situation at hand is considered a random sample.
There's more to the frequentist-versus-Bayesian argument, but an example might help.
Suppose we toss a coin repeatedly, with heads falling with probability p and tails with probability 1-p, and in 100 tosses we observe 71 heads. What is the probability that the next two consecutive tosses are both heads? A frequentist bases the answer on the observed frequency, whereas a Bayesian views the probability as a degree of belief in the proposition.
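The two answers can be computed side by side. This is a sketch under stated assumptions: the frequentist plugs in the observed frequency, and the Bayesian is assumed (my choice, not stated above) to start from a uniform Beta(1,1) prior, so the posterior after 71 heads and 29 tails is Beta(72, 31).

```java
public class CoinTossComparison
{
    public static void main(String[] args)
    {
        int heads = 71, tails = 29;

        // Frequentist: plug in the observed long-run frequency p-hat = 0.71
        double pHat = (double) heads / (heads + tails);
        double freqAnswer = pHat * pHat;

        // Bayesian (assuming a uniform Beta(1,1) prior): the posterior is
        // Beta(heads+1, tails+1), and the predictive probability of two more
        // heads is E[p^2] = a(a+1) / ((a+b)(a+b+1)) for a Beta(a, b) posterior
        double a = heads + 1, b = tails + 1;
        double bayesAnswer = a * (a + 1) / ((a + b) * (a + b + 1));

        System.out.println("frequentist: " + freqAnswer);  // ~0.504
        System.out.println("bayesian:    " + bayesAnswer); // ~0.491
    }
}
```

The Bayesian answer is slightly lower because the posterior averages p^2 over the remaining uncertainty in p rather than committing to the single point estimate 0.71.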
Monte Carlo integration has been criticized as being frequentist, but the same criticism does not hold for nested sampling because of the way it performs the multidimensional integration.
#codingexercise
Find the distinct palindrome substrings of a given string.
void printAllPalindromesIn(String a)
{
    for (int i = 0; i < a.length(); i++)
    {
        printPalindromesWithCenterAt(a, i, i);     // odd-length palindromes centered at i
        printPalindromesWithCenterAt(a, i, i + 1); // even-length palindromes centered between i and i+1
    }
}

void printPalindromesWithCenterAt(String a, int start, int end)
{
    // expand outward as long as the characters on both sides match
    while (start >= 0 && end < a.length() && a.charAt(start) == a.charAt(end))
    {
        System.out.println(a.substring(start, end + 1));
        start--;
        end++;
    }
    // collecting the substrings into a HashSet instead of printing keeps only the distinct ones
}