Monday, November 21, 2016

Today we start discussing the paper "Nested sampling for general Bayesian computation" by Skilling. Nested sampling directly estimates how the likelihood function relates to the enclosed prior mass. It can be used to compare two models for the same data set by comparing their evidence, the probability assigned to the data once the model assumptions are taken into account.
This has several advantages. First, the method computes the marginal likelihood directly by integration. Moreover, samples from the posterior distribution over the parameters, conditional on the observed data, can optionally be obtained as a by-product. The method relies on sampling within a hard constraint on the likelihood value, as opposed to the softened likelihood of annealing methods. The sampling proceeds based on the shape of the nested contours of likelihood, and not on the likelihood values themselves. This allows the method to overcome limitations that creep into annealing methods.
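To make the model comparison concrete: once the evidence has been computed for each candidate model, the ratio of evidences (the Bayes factor) weighs one model against the other on the same data. A minimal C# sketch, assuming the two evidence values have already been obtained, for instance by the nested sampling routine sketched later in this post:

// Hypothetical helper: z1 and z2 are the evidences of models M1 and M2
// for the same data set; priorOdds encodes any prior preference between them.
static double PosteriorOdds(double z1, double z2, double priorOdds = 1.0)
{
    double bayesFactor = z1 / z2;      // evidence ratio: support for M1 over M2
    return bayesFactor * priorOdds;    // posterior odds = Bayes factor × prior odds
}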
From Bayes' theorem, we often write the model in product form as
Likelihood × Prior = Evidence × Posterior
where each term is expressed using the parameters of the model.
The likelihood is the probability of the acquired data given the parameters and the model assumptions.
The prior represents the uncertainty over the unknown parameters given the model assumptions; it is specified before we have sampled any data.
The posterior represents the uncertainty over the unknown parameters after the data has been sampled. The posterior therefore involves the sampled data D, which was not considered in the prior. The prior and the posterior let us start with a few beliefs about the world, interact with it, and then update those beliefs: the computation of the posterior from sampled data is in fact an update of our beliefs about the world. The posterior is a distribution conditional on the sampled data, which lets us modulate the prior. The prior and the posterior are usually normalized to unit total. With the likelihood function and these beliefs, we can estimate the marginal likelihood of the observed data.
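As a toy illustration of this update, here is a small C# sketch with a made-up three-valued parameter; the numbers are arbitrary and only serve to show how Likelihood × Prior, once normalized, yields the posterior, with the normalizing constant being the evidence:

// Toy example: a parameter theta with three possible values (numbers made up).
static void BayesUpdateExample()
{
    double[] prior = { 0.5, 0.3, 0.2 };        // normalized to unit total
    double[] likelihood = { 0.1, 0.4, 0.7 };   // P(D | theta) for the sampled data D

    // Evidence Z = sum of Likelihood × Prior over all parameter values.
    double z = 0.0;
    for (int i = 0; i < prior.Length; i++)
        z += likelihood[i] * prior[i];         // 0.05 + 0.12 + 0.14 = 0.31

    // Posterior = Likelihood × Prior / Evidence, again normalized to unit total.
    var posterior = new double[prior.Length];
    for (int i = 0; i < prior.Length; i++)
        posterior[i] = likelihood[i] * prior[i] / z;

    System.Console.WriteLine($"Z = {z}, posterior = {string.Join(", ", posterior)}");
}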
When the equation is written in this product form, it lets us find the evidence as a summation over prior mass elements: the evidence Z is the sum of likelihood values L weighted by the elements of prior mass ΔX they enclose, Z = Σ L ΔX.
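A minimal C# sketch of that summation, in the spirit of Skilling's scheme: keep n "live" points drawn from the prior, repeatedly discard the one with the lowest likelihood L*, credit the evidence with L* times the prior mass shed at that step (estimated as X_i = exp(-i/n)), and replace the discarded point with a fresh prior draw inside the hard constraint L > L*. The two sampler delegates are hypothetical placeholders for the problem-specific machinery:

// Requires: using System; using System.Collections.Generic;
// Sketch only: samplePrior draws a parameter from the prior; sampleWithinConstraint
// draws from the prior restricted to the nested contour likelihood > lStar.
static double NestedSamplingEvidence(int n, int iterations,
    Func<double> samplePrior,
    Func<double, double> sampleWithinConstraint,
    Func<double, double> likelihood)
{
    var live = new List<double>();
    for (int i = 0; i < n; i++) live.Add(samplePrior());

    double z = 0.0, xPrev = 1.0;
    for (int i = 1; i <= iterations; i++)
    {
        // The worst live point defines the hard likelihood constraint L > L*.
        int worst = 0;
        for (int j = 1; j < live.Count; j++)
            if (likelihood(live[j]) < likelihood(live[worst])) worst = j;
        double lStar = likelihood(live[worst]);

        // Skilling's estimate of the prior mass still enclosed after i steps.
        double x = Math.Exp(-(double)i / n);
        z += lStar * (xPrev - x);   // evidence accumulated over one prior mass element
        xPrev = x;

        // Replace the worst point with a draw inside the nested contour.
        live[worst] = sampleWithinConstraint(lStar);
    }
    return z;
}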
#codingexercise
Find the maximum area of a rectangle under a histogram with bars of unit width
public static int MaxArea(List<int> h)
{
            if (h == null || h.Count == 0) return 0;
            int max = 0;
            // The largest rectangle containing bar i has height h[i] and spans
            // the maximal run of neighbours on both sides that are at least as tall.
            for (int i = 0; i < h.Count; i++)
            {
                int area = h[i];
                for (int k = i + 1; k < h.Count && h[k] >= h[i]; k++)
                    area += h[i];
                for (int k = i - 1; k >= 0 && h[k] >= h[i]; k--)
                    area += h[i];
                max = Math.Max(max, area);
            }
            return max;
}
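For example, with heights {2, 1, 5, 6, 2, 3} the method returns 10, from the rectangle of height 5 spanning the bars of heights 5 and 6.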
#Find max area under the histogram
// Equivalent version, factoring out the per-bar area computation:
static int MaxAreaOfARectangle(List<int> histogram)
{
    int max = 0;
    for (int i = 0; i < histogram.Count; i++)
    {
        int area = GetArea(histogram, i);
        if (area > max)
            max = area;
    }
    return max;
}
// Area of the widest rectangle whose height is histogram[center]: extend
// left and right while the neighbouring bars are at least as tall.
static int GetArea(List<int> histogram, int center)
{
    int area = histogram[center];
    for (int i = center - 1; i >= 0 && histogram[i] >= histogram[center]; i--)
        area += histogram[center];
    for (int i = center + 1; i < histogram.Count && histogram[i] >= histogram[center]; i++)
        area += histogram[center];
    return area;
}

Alternatively,
static int MaxArea(int[] h, int start, int end)
{
    if (start > end) return 0;
    if (start == end) return h[start];
    // Divide and conquer on the minimum bar: the best rectangle either spans
    // the whole range at the minimum height, or avoids the minimum entirely
    // and lies wholly on one of its sides.
    int minIndex = start;
    for (int i = start + 1; i <= end; i++)
        if (h[i] < h[minIndex]) minIndex = i;
    int spanning = h[minIndex] * (end - start + 1);
    int left = MaxArea(h, start, minIndex - 1);
    int right = MaxArea(h, minIndex + 1, end);
    return Math.Max(spanning, Math.Max(left, right));
}
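Splitting at the minimum bar gives O(n log n) time on average, degrading to O(n²) when the histogram is sorted, versus a uniform O(n²) for the scanning versions above.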
