Friday, May 31, 2013

Pixels in a photograph are translated to color codes, so the entire image can be considered a matrix. With matrix manipulation software such as Matlab, we can work with this matrix to identify edges and regions. We may need to apply smoothing and normalization to the image, and then use segmentation algorithms to transform it into something we can work with. We may not be able to eliminate all the noise, but we can do a lot with the transformed image.
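To make the matrix view concrete, here is a small sketch in Python with NumPy; the tiny 5x5 image and the 3x3 mean filter are just illustrative assumptions, not a fixed recipe:

```python
import numpy as np

# A tiny 5x5 grayscale "image": each entry is a pixel intensity (0-255).
# The bright 255 pixel in the middle acts as a speck of noise.
image = np.zeros((5, 5), dtype=float)
image[2, 2] = 255.0

# 3x3 mean filter: replace each interior pixel with the average of its
# neighborhood. This smooths the image and damps the noise spike.
smoothed = image.copy()
for i in range(1, 4):
    for j in range(1, 4):
        smoothed[i, j] = image[i - 1:i + 2, j - 1:j + 2].mean()

print(smoothed[2, 2])  # the spike is averaged down to 255/9
```

Larger smoothing windows damp noise more aggressively but also blur genuine edges, which is why smoothing is usually done before, and in balance with, the analysis steps that follow.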
Some examples of such image processing techniques include image enhancement, image restoration and image compression. Image enhancement is the technique by which different features of the image are accentuated so that they are prepared for further analysis. Contrast manipulation, gray-level adjustment, noise reduction, edge detection and sharpening, filtering, interpolation and magnification are all part of enhancing an image. Image restoration works the other way, in that it tries to undo the changes to the image by studying the extent and kinds of degradation that have happened. Image compression is about storing the image with a reduced number of bits. This is very helpful when image size is a concern, for example in the storage and retrieval of a large number of images for broadcasting, teleconferencing, medical imaging and other transmissions. If you have saved pictures in different formats, you would have noticed the significant reduction in size with the JPEG format. The Joint Photographic Experts Group came up with this format to reduce the image size among other things.
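As a tiny illustration of image enhancement, here is a contrast-stretching sketch in Python with NumPy; the 2x3 example matrix and the 0-255 target range are assumptions chosen for the demo:

```python
import numpy as np

# Low-contrast "image": intensities cramped into the 100-150 band.
image = np.array([[100, 110, 120],
                  [130, 140, 150]], dtype=float)

# Contrast stretching: linearly map [min, max] onto the full 0-255 range,
# so small intensity differences become easier to see.
lo, hi = image.min(), image.max()
stretched = (image - lo) / (hi - lo) * 255.0

print(stretched)  # darkest pixel becomes 0, brightest becomes 255
```

This is one of the simplest enhancement operations; it changes only how intensities are spread out, not the relative ordering of the pixels.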
Among the image enhancement techniques, edge detection is commonly used to quickly identify the objects of interest in a still image. It is also helpful in tracking the changes to the edges across a series of frames from a moving camera. Such cameras capture over 30 frames per second and the algorithms used for image processing are computationally costly, so we make trade-offs in what we want to detect and draw interpretations from that.
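A common way to detect edges is to estimate intensity gradients, for instance with the Sobel kernels. Here is a sketch in Python with NumPy; the synthetic half-dark, half-bright image and the magnitude threshold of 100 are assumptions for the demo:

```python
import numpy as np

# Synthetic image: dark left half (0), bright right half (200),
# giving a vertical edge down the middle.
image = np.zeros((6, 6), dtype=float)
image[:, 3:] = 200.0

# Sobel kernels estimate the horizontal and vertical intensity gradients.
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
ky = kx.T

def convolve3(img, k):
    """Apply a 3x3 kernel over the interior pixels (borders left at 0)."""
    out = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = (img[i - 1:i + 2, j - 1:j + 2] * k).sum()
    return out

gx, gy = convolve3(image, kx), convolve3(image, ky)
magnitude = np.hypot(gx, gy)  # gradient magnitude; large values mark edges
edges = magnitude > 100.0     # threshold the magnitude to get an edge map
```

The threshold is exactly the kind of adjustment mentioned above: raising it misses weak edges but suppresses noise, lowering it does the opposite.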
Another such application is region growing, where we decompose the picture into regions of interest. In the seeded region growing method, for example, a set of seed points is supplied along with the input image as starting points for the objects to be demarcated. The regions are grown iteratively by comparing the unallocated neighboring pixels and including the similar ones. The difference between a pixel's intensity and the region's mean is used as the measure of similarity. In this way pixels are added to the regions and the regions grow.
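The growing step above can be sketched as a breadth-first search in Python with NumPy; the toy image, the single seed, the 4-connected neighborhood and the tolerance of 50 are all illustrative assumptions:

```python
import numpy as np
from collections import deque

# Toy image: a bright 200-valued square on a dark background.
image = np.zeros((6, 6), dtype=float)
image[1:4, 1:4] = 200.0

def region_grow(img, seed, tol=50.0):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity is within `tol` of the region's running mean."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    total, count = img[seed], 1       # running sum and size of the region
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                    and not mask[ni, nj]
                    and abs(img[ni, nj] - total / count) <= tol):
                mask[ni, nj] = True   # pixel is similar: join the region
                total += img[ni, nj]
                count += 1
                queue.append((ni, nj))
    return mask

region = region_grow(image, (2, 2))
print(region.sum())  # the 9 pixels of the bright square
```

With several seeds, the same loop is run with one mask (or label) per seed, and each unallocated pixel joins the most similar adjacent region.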
Another interesting technique is the balanced histogram thresholding method, in which the image foreground and background are separated so that we see the outline of the foreground. The entire image is converted into a histogram of the intensities of all its pixels. The method then tries to find the threshold at which the histogram divides into two groups. It literally balances the histogram to find the threshold value: it weighs which of the two sides is heavier and adjusts the weights iteratively until the histogram balances. This method is appealing for its simplicity, but it does not work well with very noisy images because outliers distort the search for the threshold. We can work around this by ignoring the outliers.
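The balancing loop can be sketched in Python with NumPy as follows. The bimodal toy data is an assumption for the demo, and this sketch adds a pragmatic stopping condition, halting once all the histogram mass has been consumed, which the basic description above does not spell out:

```python
import numpy as np

def balanced_histogram_threshold(hist):
    """Balance the histogram: repeatedly trim one bin from the heavier
    side and let the fulcrum i_m drift until the two sides weigh the
    same. Returns the fulcrum bin index as the threshold."""
    i_s, i_e = 0, len(hist) - 1
    i_m = (i_s + i_e) // 2                      # fulcrum (candidate threshold)
    w_l = float(hist[i_s:i_m + 1].sum())        # weight left of the fulcrum
    w_r = float(hist[i_m + 1:i_e + 1].sum())    # weight right of the fulcrum
    while i_s <= i_e and (w_l + w_r) > 0:       # stop once all mass is used up
        if w_r > w_l:                           # right side heavier: trim right
            w_r -= hist[i_e]
            i_e -= 1
            if (i_s + i_e) // 2 < i_m:          # fulcrum moves one bin left
                w_r += hist[i_m]
                w_l -= hist[i_m]
                i_m -= 1
        else:                                   # left side heavier: trim left
            w_l -= hist[i_s]
            i_s += 1
            if (i_s + i_e) // 2 > i_m:          # fulcrum moves one bin right
                w_l += hist[i_m + 1]
                w_r -= hist[i_m + 1]
                i_m += 1
    return i_m

# Bimodal toy data: background intensities near 50, foreground near 200.
pixels = np.array([49, 50, 50, 51, 52, 199, 200, 200, 201])
hist = np.bincount(pixels, minlength=256)
t = balanced_histogram_threshold(hist)
# t lands between the two intensity clusters, separating them
```

The noise sensitivity mentioned above shows up here directly: a few stray pixels far from either cluster add weight to one side and drag the fulcrum toward them, which is why clipping the histogram's extreme bins first is a common workaround.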
Thus we have seen some interesting applications of image processing. 
