Some Image Processing and Computational Photography: Convolution, Filtering and Edge Detection with Python and OpenCV

The following problems appeared as an assignment in the Coursera course Computational Photography (by Georgia Institute of Technology). The problem descriptions are taken directly from the assignment.

Introduction

In this article, we shall be playing around with images, filters, and convolution. We will begin by building a function that performs convolution. We will then experiment with constructing and applying a variety of filters. Finally, we will see one example of how these ideas can be used to create interesting effects in our photos, by finding and coloring edges in our images.

Filtering

In this section, let’s apply a few filters to some images. Filtering basically means replacing each pixel of an image by a linear combination of its neighbors. We need to understand the following concepts in this context:

  1. Kernel (mask) for a filter: defines which neighbors are considered and what weight each of them is given.
  2. Cross-Correlation vs. Convolution: determines how the kernel is applied to the neighboring pixels to compute the linear combination.

Convolution

The following figure describes the basic concepts of cross-correlation and convolution. Basically, convolution flips the kernel before applying it to the image.

im1.png

Cross-Correlation vs. Convolution
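
Here is a minimal sketch of such a convolution routine in NumPy, assuming a grayscale image and an odd-sized kernel (the function name and the edge-padding choice are my own, not part of the assignment's starter code):

```python
import numpy as np

def convolve2d(image, kernel):
    """2D convolution: flip the kernel, then slide it over the (edge-padded) image."""
    kernel = np.flipud(np.fliplr(kernel))        # flipping turns cross-correlation into convolution
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2                    # pad so the output has the same size as the input
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```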

The following figures show how a custom kernel applied to an impulse response image changes the image, with both cross-correlation and convolution.

The impulse response image is shown below:

impulse_response

The figure below shows the 3×3 custom kernel to be applied to the above impulse response image.

kernel_custom.png
The following figure shows the output images after applying cross-correlation and convolution with the above kernel to the impulse response image. As can be seen, convolution produces the desired output.

cc.png
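
The same comparison can be reproduced with OpenCV. Note that cv2.filter2D actually computes cross-correlation, so the kernel has to be flipped explicitly to obtain a true convolution (the 3×3 kernel values below are illustrative, not the exact kernel from the figure):

```python
import numpy as np
import cv2

# impulse response image: a single bright pixel at the center
impulse = np.zeros((9, 9), dtype=np.float32)
impulse[4, 4] = 1.0

# an asymmetric 3x3 kernel (illustrative values) so the difference is visible
kernel = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]], dtype=np.float32)

cross_corr = cv2.filter2D(impulse, -1, kernel)                 # cv2.filter2D = cross-correlation
convolution = cv2.filter2D(impulse, -1, cv2.flip(kernel, -1))  # flip both axes -> convolution
```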

The following figures show the application of the same kernel on some grayscale images and the output images after convolution.

all_conv_kernel_custom.png

Now let’s apply the 5×5 flat box kernel (box2) shown below.

kernel_box2.png

The following figures show the application of the above box kernel on some grayscale images and the output images after convolution. Notice how the output images are blurred.

all_conv_kernel_box2.png

The following figures show the application of box kernels of different sizes on the grayscale image lena and the output images after convolution. As expected, the blur effect increases as the size of the box kernel increases.

lena_conv_kernel_box.png
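
A quick sketch of this box blur, assuming the grayscale image is read from a hypothetical file lena.png:

```python
import numpy as np
import cv2

img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE)    # hypothetical file name

for k in (3, 5, 11, 17):
    box = np.ones((k, k), np.float32) / (k * k)        # flat box kernel, weights sum to 1
    blurred = cv2.filter2D(img, -1, box)               # box kernels are symmetric, so correlation == convolution
    cv2.imwrite('lena_box_%d.png' % k, blurred)
```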

Gaussian Filter

The following figure shows an 11×11 Gaussian kernel generated by taking the outer product of the densities of two i.i.d. 1D Gaussians with mean 0 and s.d. 3.

kernel_gaussian_5_3.png
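
The kernel can be constructed exactly as described, for example with cv2.getGaussianKernel (a sketch; img stands for a grayscale image loaded earlier, e.g. as in the box-filter example):

```python
import numpy as np
import cv2

sigma = 3
g1d = cv2.getGaussianKernel(11, sigma)    # 11x1 column of 1D Gaussian densities (mean 0, s.d. 3)
gaussian_kernel = np.outer(g1d, g1d)      # outer product -> 11x11 separable 2D Gaussian kernel
gaussian_kernel /= gaussian_kernel.sum()  # normalize so the weights sum to 1

smoothed = cv2.filter2D(img, -1, gaussian_kernel)   # img: a grayscale image, as before
```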

Here is how the impulse response image (enlarged) looks after applying the above Gaussian filter.

Impulse_gaussian_3.png

The next figure shows the effect of Gaussian filtering / smoothing (blur) on some images.

all_conv_kernel_gaussian3.png

The following figure shows 11×11 Gaussian kernels generated from i.i.d. 1D Gaussians with different bandwidths (s.d. values).

gaussian_kernels.png

The following figures show the application of Gaussian kernels of different bandwidths on a grayscale image and the output images after filtering. As expected, the blur effect increases as the bandwidth of the Gaussian kernel increases.

a_gaussian_kernel.png

Sharpen Filter

The following figure shows an 11×11 sharpen kernel generated by subtracting a Gaussian kernel (with s.d. 3) from a scaled impulse kernel (with 2 at the center).

sharpen_kernel.png
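
A sketch of this construction (2 × impulse − Gaussian), under the same assumptions as the Gaussian example above (img is a grayscale image):

```python
import numpy as np
import cv2

sigma = 3
g1d = cv2.getGaussianKernel(11, sigma)
gaussian_kernel = np.outer(g1d, g1d)        # 11x11 Gaussian kernel, sums to 1

impulse_kernel = np.zeros((11, 11))
impulse_kernel[5, 5] = 2.0                  # scaled impulse: 2 at the center

sharpen_kernel = impulse_kernel - gaussian_kernel   # 2*identity - blur = sharpen
sharpened = cv2.filter2D(img, -1, sharpen_kernel)   # img: a grayscale image, as before
```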

The next figure shows the effect of sharpen filtering on some images.

all_conv_kernel_sharpen3.png

The following figure shows 11×11 sharpen kernels constructed with different bandwidths (s.d. values).

sharpen_kernels.png

The following figures show the application of sharpen kernels of different bandwidths on a grayscale image and the output images after filtering. As expected, the sharpening effect increases as the bandwidth of the underlying Gaussian kernel increases.

mri_sharpen_kernel.png

Median filter

One last thing we shall get a feel for is nonlinear filtering. So far, we have been doing everything by multiplying the input image pixels by various coefficients and summing the results together. A median filter works in a very different way, by simply choosing a single value from the surrounding patch in the image.

The next figure shows the effect of median filtering on some images. As expected, with an 11×11 mask, some of the images get quite blurred.

all_median.png
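
In OpenCV, median filtering is a one-liner; cv2.medianBlur expects an odd aperture size, and 11 matches the mask size used above (the file name is illustrative):

```python
import cv2

img = cv2.imread('lena.png', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
median = cv2.medianBlur(img, 11)                     # each pixel replaced by the median of its 11x11 neighborhood
```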

Drawing with edges

At a high level, we will be finding the intensity and orientation of the edges present in the image using convolutional filters, and then visualizing this information in an aesthetically pleasing way.

Image gradients

The first step is to find the gradient of the image. A gradient is a fancy way of saying “rate of change”. The following figure shows the basic concepts of image gradients.

im6.png

Edge Detection

In order to find the edges in our image, we are going to look for places where pixels are rapidly changing in intensity. The following figure shows the concept:

im7.png

The Sobel Filter

There are a variety of ways of doing this, and one of the most standard is through the use of Sobel filters, which have the following kernels:

im2.png

If we think about the x-direction Sobel filter being placed on a strong vertical edge:

im3

Considering the yellow location, the values on the right side of the kernel get mapped to brighter, and thus larger, values. The values on the left side of the kernel get mapped to darker pixels which are close to zero. The response in this position will be large and positive.

Compare this with the application of the kernel to a relatively flat area, like the blue location. The values on both sides are about equal, so we end up with a response that is close to zero. Thus, the x-direction Sobel filter gives a strong response for vertical edges. Similarly, the y-direction Sobel filter gives a strong response for horizontal edges.

The steps for edge detection (a code sketch follows the list):

  1. Convert the image to grayscale.
  2. Blur the image with a Gaussian kernel. The purpose of this is to remove noise from the image, so that we find responses only to significant edges, instead of small local changes that might be caused by our sensor or other factors.
  3. Apply the two Sobel filters to the image.
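
A minimal sketch of these three steps (the file name and blur parameters are illustrative choices):

```python
import cv2

img = cv2.imread('butterfly.png')                     # illustrative file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)          # 1. convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)         # 2. Gaussian blur (5x5 kernel, sigma 1.5) to suppress noise
gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)    # 3. x-direction Sobel: responds to vertical edges
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)    #    y-direction Sobel: responds to horizontal edges
```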

Edge orientations and magnitudes

Now we have the rate at which the image is changing in the x and y directions, but it makes more sense to talk about images in terms of edges, and their orientations and intensities. As we will see, we can use some trigonometry to transform between these two representations.
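
Concretely, the edge magnitude and orientation can be recovered from the two Sobel responses gx and gy computed above:

```python
import numpy as np

magnitude = np.hypot(gx, gy)                        # edge strength: sqrt(gx^2 + gy^2)
orientation = np.degrees(np.arctan2(gy, gx)) % 180  # gradient angle folded into [0, 180) degrees
```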

First, let’s see how our Sobel filters respond to edges at different orientations, as shown in the following figures:

im4.png

The red arrow on the right shows the relative intensities of the response from the x and y Sobel filters. We see that as we rotate the edge, the intensity of the response slowly shifts from the x to the y direction. We can also consider what would happen if the edge got less intense:

im5.png

Let’s apply the Sobel filters to the following butterfly image (taken from the slides of the same computational photography course).

butterfly.png

The following figures show the result of applying the Sobel filters on the above butterfly image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high (F is the image).

butterfly_sobel.png

Again, let’s apply the Sobel filters to the following tiger image (taken from the slides of the same computational photography course).

tiger.png

The following figures show the result of applying the Sobel filters on the above tiger image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high.

tiger_sobel.png

Next, let’s apply the Sobel filters to the following zebra image (taken from the slides of the same computational photography course).

zebra.png

The following figures show the result of applying the Sobel filters on the above zebra image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high.

zebra_sobel.png

Finally, let’s apply the Sobel filters to the following image of mine.

me.png

The following figures show the result of applying the Sobel filters on my image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high.

me_sobel.png

The following figure shows the horizontal (H_x) and vertical (H_y) kernels for a few more filters, such as Prewitt and Roberts, along with Sobel.

im8
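
For reference, here is a sketch of the Prewitt and Roberts kernels applied with cv2.filter2D (one common sign convention; conventions differ between sources, and the file name is illustrative):

```python
import numpy as np
import cv2

# Prewitt kernels (one common sign convention)
prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
prewitt_y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float32)

# Roberts cross kernels
roberts_x = np.array([[1, 0], [0, -1]], dtype=np.float32)
roberts_y = np.array([[0, 1], [-1, 0]], dtype=np.float32)

gray = cv2.imread('butterfly.png', cv2.IMREAD_GRAYSCALE)   # illustrative file name
gx = cv2.filter2D(gray.astype(np.float32), -1, prewitt_x)  # horizontal gradient response
gy = cv2.filter2D(gray.astype(np.float32), -1, prewitt_y)  # vertical gradient response
```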

The Prewitt Filter

The following figures show the result of applying the Prewitt filters on the butterfly image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high (F is the image).

butterfly_prewitt.png

The Roberts Filter

The following figures show the result of applying the Roberts filters on the butterfly image. Notice that the red arrows show the directions of a few edge vectors for which the edge magnitude was high (F is the image).

butterfly_roberts.png

The LOG Filter

The following figure shows Gaussian kernels with different bandwidths and a sharpen kernel, along with the LOG (Laplacian of Gaussian) kernel, which is very useful for edge detection, and the DOG (Difference of Gaussians) kernel.

gaussian_kernels_cont.png

The following figures show the results of applying the LOG filter on the above images.

all_LOG.png
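
One common way to approximate the LOG response is to smooth with a Gaussian and then apply the Laplacian; a sketch with illustrative parameters and file name:

```python
import cv2

img = cv2.imread('tiger.png', cv2.IMREAD_GRAYSCALE)    # illustrative file name
smoothed = cv2.GaussianBlur(img, (11, 11), 3)          # Gaussian smoothing first (s.d. 3)
log = cv2.Laplacian(smoothed, cv2.CV_64F, ksize=3)     # then the Laplacian -> Laplacian of Gaussian
```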

The DOG Filter


The Difference of Gaussians (DOG) kernel can also be used to find edges. The following figure shows the results of applying DOG (with s.d. 2 and 5) on the above images:

all_og_2_5.png
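
A sketch of the DOG computation with s.d. 2 and 5 (passing ksize=(0, 0) lets OpenCV derive the kernel size from sigma; the file name is illustrative):

```python
import cv2

img = cv2.imread('zebra.png', cv2.IMREAD_GRAYSCALE)           # illustrative file name
g2 = cv2.GaussianBlur(img, (0, 0), 2).astype(float)           # blur with s.d. 2
g5 = cv2.GaussianBlur(img, (0, 0), 5).astype(float)           # blur with s.d. 5
dog = g2 - g5                                                 # difference of Gaussians highlights edges
```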

The Canny Edge Detector

The following figure shows the Canny Edge Detection algorithm:

im9.png

The following figures show the results of applying the Canny Edge Detection algorithm on the above images (with intensity thresholds of 50 and 100, respectively).

all_canny.png
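
In OpenCV, Canny edge detection with those two thresholds is a single call (the file name is illustrative):

```python
import cv2

gray = cv2.imread('me.png', cv2.IMREAD_GRAYSCALE)   # illustrative file name
edges = cv2.Canny(gray, 50, 100)                    # lower / upper hysteresis thresholds: 50 and 100
```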

Mapping to color

We now have the magnitude and orientation of the edge present at every pixel. However, this is still not really in a form that we can examine visually. The angle varies from 0 to 180 degrees, and we don’t know the scale of the magnitude. What we have to do is figure out a way to transform this information into color values to place at each pixel.

The edge orientations can be pooled into four bins: edges that are roughly horizontal, edges that are roughly vertical, and edges at 45 degrees to the left and to the right. Each of these bins is assigned a color (vertical is yellow, etc.). Then the magnitude is used to dictate the intensity of the edge. For example, a roughly vertical edge of moderate intensity would be set to a medium yellow color, or the value (0, 100, 100).
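
A sketch of this binning and coloring scheme, assuming the magnitude and orientation arrays computed earlier; the exact bin-to-color assignment and scaling here are illustrative choices, not the assignment's reference values:

```python
import numpy as np

def color_edges(magnitude, orientation):
    """Map gradient orientation to one of four colors and magnitude to intensity (a sketch)."""
    # one BGR color per angle bin; a gradient angle near 0 means a roughly vertical edge -> yellow
    bin_colors = np.array([[0, 255, 255],    # ~0/180 deg: roughly vertical edges  -> yellow
                           [0, 255, 0],      # ~45 deg                             -> green (illustrative)
                           [255, 0, 0],      # ~90 deg: roughly horizontal edges   -> blue (illustrative)
                           [0, 0, 255]],     # ~135 deg                            -> red (illustrative)
                          dtype=np.float64)
    bins = ((orientation + 22.5) // 45).astype(int) % 4    # pool angles (0-180 deg) into 4 bins
    strength = magnitude / (magnitude.max() + 1e-8)        # normalize magnitude to [0, 1]
    return (bin_colors[bins] * strength[..., None]).astype(np.uint8)
```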

The following figures show some of the edges extracted from some of the images previously used as inputs and color-mapped using the Sobel filter:

edges.png

edges_sobel.png

The markdown file can be found here: https://github.com/sandipan/Blogs/blob/master/comp_photo.md
