Edges characterise boundaries, which makes edge detection a problem of fundamental importance in image processing. Edge detection reduces the amount of data to be processed while preserving the structural content of the image. It is also useful in other image-processing routines, such as pattern recognition.
Before we jump to the main part of the article, let’s take a quick look at filters. In computing, filters are nothing but simple programs that take an input image, operate on it, and produce an output image. The filtering itself is done by convolving the image with another signal; for images, we use 2D convolution to apply the filters. The concept of 2D convolution is explained below.
To the computer, an image is a simple two-dimensional matrix. For example, a 640x480 grayscale image is a 2D matrix with 480 rows and 640 columns. The filter to be applied is another, much smaller, two-dimensional matrix. This small matrix is ‘run’ over the input image: it is centred at every pixel in the image, the corresponding elements are multiplied, and the products are summed up to give the new pixel value. Put simply, edge detection is applying a high-pass filter to the image. A few edge-detection algorithms are listed below.
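The ‘run the small matrix over the image’ step can be sketched in a few lines of NumPy. This is a minimal illustration, not a production routine: it zero-pads the border, and for the symmetric kernels used in this article it skips the kernel flip that distinguishes true convolution from correlation.

```python
import numpy as np

def convolve2d(image, kernel):
    """Centre `kernel` at every pixel, multiply the overlapping
    elements and sum them to get the new pixel value.
    Borders are handled by zero-padding the image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

# Sanity check: an identity kernel leaves the image unchanged.
identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
image = np.arange(12, dtype=float).reshape(3, 4)
print(np.allclose(convolve2d(image, identity), image))  # True
```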
Laplace: Differentiation, as we know, is a high-pass filter. The Laplace operator gives the second-order two-dimensional differentiation of the image. Its drawback is that the output edge map is not single-pixel thick. The convolution matrix for the Laplace operator is:
 0 -1  0
-1  4 -1
 0 -1  0
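As a quick sanity check on this matrix, the sketch below applies the Laplace kernel with a plain NumPy loop (‘valid’ region only, so the output shrinks by a one-pixel border): a flat region produces zero response, while an intensity step does not.

```python
import numpy as np

LAPLACE = np.array([[ 0, -1,  0],
                    [-1,  4, -1],
                    [ 0, -1,  0]], dtype=float)

def laplace_edges(image):
    """Apply the Laplace kernel over the 'valid' region of the image
    (no padding, so the result is 2 pixels smaller per dimension)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = np.sum(image[r:r + 3, c:c + 3] * LAPLACE)
    return out

flat = np.full((5, 5), 7.0)        # uniform intensity: no edges
step = np.zeros((5, 5))
step[:, 3:] = 10.0                 # sharp vertical edge
print(np.allclose(laplace_edges(flat), 0))   # True
print(np.any(laplace_edges(step) != 0))      # True
```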
Prewitt: This edge detector combines two edge maps: one is the original image differentiated along the horizontal axis, and the other differentiated along the vertical axis. Adding these two as perpendicular vectors gives the final edge map. The two differentiation matrices are:
Horizontal matrix:
-1  0  1
-1  0  1
-1  0  1
Vertical matrix:
-1 -1 -1
 0  0  0
 1  1  1
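The two differentiations and the ‘perpendicular vectors’ combination can be sketched as follows: the per-pixel magnitude sqrt(gx^2 + gy^2) of the two maps is the final Prewitt edge map. (Plain NumPy, valid region only, for illustration.)

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T  # vertical matrix is the transpose

def _convolve_valid(image, kernel):
    """3x3 kernel over the 'valid' region (no padding)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            out[r, c] = np.sum(image[r:r + 3, c:c + 3] * kernel)
    return out

def prewitt_edges(image):
    gx = _convolve_valid(image, PREWITT_X)  # horizontal gradient
    gy = _convolve_valid(image, PREWITT_Y)  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)       # combined edge map

flat = np.full((5, 5), 4.0)
step = np.zeros((5, 5))
step[:, 3:] = 1.0                            # vertical step edge
print(np.allclose(prewitt_edges(flat), 0))   # True
print(np.any(prewitt_edges(step) > 0))       # True
```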