# Some Image Processing and Computational Photography: Convolution, Filtering and Edge Detection with Python

The following problems appeared as an assignment in the Coursera course Computational Photography (by the Georgia Institute of Technology). The problem descriptions are taken directly from the assignment.

## Introduction

In this article, we shall be playing around with images, filters, and convolution. We will begin by building a function that performs convolution. We will then experiment with constructing and applying a variety of filters. Finally, we will see one example of how these ideas can be used to create interesting effects in our photos, by finding and coloring edges in our images.

## Filtering

In this section, let’s apply a few filters to some images. Filtering basically means replacing each pixel of an image by a linear combination of its neighbors. We need to understand the following concepts in this context:

1. Kernel (mask) of a filter: defines which neighbors are considered and what weight each one is given.
2. Cross-correlation vs. convolution: determines how the kernel is applied to the neighboring pixels to compute the linear combination.

### Convolution

The following figure describes the basic concepts of cross-correlation and convolution. The only difference is that convolution flips the kernel (in both axes) before applying it to the image.
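Since convolution is just cross-correlation with a flipped kernel, both can be written in a few lines of NumPy. The sketch below is illustrative, assuming an odd-sized kernel and zero padding at the borders:

```python
import numpy as np

def cross_correlate(image, kernel):
    """Slide the kernel over the image and take weighted sums (no flip)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def convolve(image, kernel):
    """Convolution = cross-correlation with the kernel flipped in both axes."""
    return cross_correlate(image, kernel[::-1, ::-1])
```

On an impulse image, convolution reproduces the kernel itself while cross-correlation reproduces its flipped version, which is exactly the difference the figures in this section illustrate.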

## Cross-Correlation vs. Convolution

The following figures show how a custom kernel applied to an impulse response image changes the image, under both cross-correlation and convolution.

The impulse response image is shown below:

The figure below shows a 3×3 custom kernel to be applied to the above impulse response image.

The following figure shows the output images after applying cross-correlation and convolution with the above kernel on the above impulse response image. As can be seen, convolution produces the desired output.

The following figures show the application of the same kernel on some grayscale images and the output images after convolution.

Now let’s apply the 5×5 flat box kernel shown below.

The following figures show the application of the above box kernel on some grayscale images and the output images after convolution. Notice how the output images are blurred.

The following figures show the application of box kernels of different sizes on the grayscale image lena and the output images after convolution. As expected, the blur effect increases as the size of the box kernel increases.
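A flat box kernel of size n simply averages an n×n neighborhood, so its blur strengthens with n. A small sketch of this effect (using `scipy.ndimage.convolve` on an arbitrary random test image, not one of the course images):

```python
import numpy as np
from scipy.ndimage import convolve

def box_kernel(size):
    """A flat box kernel: every neighbor gets equal weight, summing to 1."""
    return np.ones((size, size)) / (size * size)

# Larger boxes average over larger neighborhoods, so the blur gets stronger
# (the pixel values vary less after filtering).
image = np.random.default_rng(0).random((64, 64))
blurred_5 = convolve(image, box_kernel(5), mode="reflect")
blurred_11 = convolve(image, box_kernel(11), mode="reflect")
```

The variance of the filtered image drops as the kernel grows, which is the numerical counterpart of the increasing blur seen in the figures.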

## Gaussian Filter

The following figure shows an 11×11 Gaussian kernel generated by taking the outer product of the densities of two i.i.d. 1D Gaussians with mean 0 and s.d. 3.
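Such a kernel can be generated exactly as described: sample a 1D Gaussian density, take the outer product with itself, and normalize so the weights sum to 1.

```python
import numpy as np

def gaussian_kernel(size=11, sigma=3.0):
    """Outer product of two sampled 1D Gaussian densities, normalized to sum to 1."""
    ax = np.arange(size) - size // 2          # e.g. -5..5 for size 11
    g = np.exp(-ax**2 / (2 * sigma**2))       # unnormalized 1D density
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

G = gaussian_kernel(11, 3.0)
```

The normalization constant of the density cancels in the final division, which is why it can be dropped when sampling.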

Here is what the impulse response image (enlarged) looks like after the application of the above Gaussian filter.

The next figure shows the effect of Gaussian filtering / smoothing (blur) on some images.

The following figure shows 11×11 Gaussian kernels generated from 1D i.i.d. Gaussians with different bandwidths (s.d. values).

The following figures show the application of Gaussian kernels of different bandwidths on the following grayscale image and the output images after filtering. As expected, the blur effect increases as the bandwidth of the Gaussian kernel increases.

## Sharpen Filter

The following figure shows an 11×11 sharpen kernel generated by subtracting a Gaussian kernel (with s.d. 3) from a scaled impulse kernel with 2 at the center.
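The same construction in code: a normalized Gaussian kernel subtracted from an impulse kernel scaled to 2 at the center (an illustrative sketch).

```python
import numpy as np

def sharpen_kernel(size=11, sigma=3.0):
    """2x impulse minus a Gaussian: boosts the center, subtracts the blur."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    gauss = np.outer(g, g)
    gauss /= gauss.sum()                      # Gaussian weights sum to 1
    impulse = np.zeros((size, size))
    impulse[size // 2, size // 2] = 2.0       # scaled impulse, 2 at the center
    return impulse - gauss

S = sharpen_kernel()
```

Note that the weights still sum to 1 (2 minus 1), so flat regions are unchanged, while the negative ring around the center exaggerates local contrast.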

The next figure shows the effect of sharpen filtering on some images.

The following figure shows 11×11 sharpen kernels with different bandwidths (s.d. values).

The following figures show the application of sharpen kernels of different bandwidths on the following grayscale image and the output images after filtering. As expected, the sharpening effect increases as the bandwidth of the underlying Gaussian kernel increases.

## Median filter

One last thing we shall get a feel for is nonlinear filtering. So far, everything has been done by multiplying the input image pixels by various coefficients and summing the results together. A median filter works in a very different way: it simply chooses a single value (the median) from the surrounding patch in the image.
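A quick illustration with `scipy.ndimage.median_filter` on an assumed toy image with salt speckles (not one of the course images): because outliers never enter a weighted sum, they are simply not chosen, and the speckles vanish.

```python
import numpy as np
from scipy.ndimage import median_filter

# A flat gray image with sparse bright "salt" speckles.
image = np.full((32, 32), 100.0)
image[::7, ::7] = 255.0

# Each 3x3 neighborhood contains at most a couple of speckles,
# so the median is always the background value.
denoised = median_filter(image, size=3)
```

A linear filter would instead smear each speckle into its neighborhood; the median removes it entirely.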

The next figure shows the effect of median filtering on some images. As expected, with an 11×11 mask, some of the images get quite blurred.

## Drawing with edges

At a high level, we will be finding the intensity and orientation of the edges present in the image using convolutional filters, and then visualizing this information in an aesthetically pleasing way.

The first step is to find the gradient of the image. A gradient is a fancy way of saying “rate of change”. The following figure shows the basic concepts behind image gradients.

### Edge Detection

In order to find the edges in our image, we are going to look for places where pixels are rapidly changing in intensity. The following figure shows the concept:

### The Sobel Filter

There are a variety of ways for doing this, and one of the most standard is through the use of Sobel filters, which have the following kernels:

If we think about the x Sobel filter being placed on a strong vertical edge:

Considering the yellow location, the values on the right side of the kernel get mapped to brighter, and thus larger, values. The values on the left side of the kernel get mapped to darker pixels which are close to zero. The response in this position will be large and positive.

Compare this with the application of the kernel to a relatively flat area, like the blue location. The values on both sides are about equal, so we end up with a response that is close to zero. Thus, the x-direction Sobel filter gives a strong response for vertical edges. Similarly, the y-direction Sobel filter gives a strong response for horizontal edges.

The steps for edge detection:

1. Convert the image to grayscale.
2. Blur the image with a Gaussian kernel. The purpose of this is to remove noise from the image, so that we find responses only to significant edges, instead of small local changes that might be caused by our sensor or other factors.
3. Apply the two Sobel filters to the image.
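The three steps can be sketched as follows (using `scipy.ndimage`; the blur’s σ is an arbitrary choice here, and the test image is an assumed synthetic vertical step edge):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

# Sobel kernels: respond to horizontal / vertical intensity change.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

def detect_edges(gray, sigma=1.0):
    """Steps 2 and 3 above: blur first, then take both Sobel responses."""
    smoothed = gaussian_filter(gray, sigma)
    gx = convolve(smoothed, sobel_x)
    gy = convolve(smoothed, sobel_y)
    return gx, gy

# A vertical step edge: strong x response along the step, zero y response.
gray = np.zeros((16, 16))
gray[:, 8:] = 1.0
gx, gy = detect_edges(gray)
```

On this synthetic edge, `gx` peaks along the step while `gy` is identically zero, matching the yellow/blue discussion above.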

### Edge orientations and magnitudes

Now we have the rate at which the image is changing in the x and y directions, but it makes more sense to talk about images in terms of edges, and their orientations and intensities. As we will see, we can use some trigonometry to transform between these two representations.
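The transformation itself is one line each way: the magnitude is the Euclidean norm of the two responses, and the orientation comes from `arctan2`. A toy example with a single pixel (note that `arctan2` gives the gradient direction; the edge itself runs perpendicular to it):

```python
import numpy as np

# gx, gy stand for the two Sobel responses at one pixel (toy values).
gx = np.array([[3.0]])
gy = np.array([[4.0]])

magnitude = np.hypot(gx, gy)                          # sqrt(gx^2 + gy^2)
orientation = np.degrees(np.arctan2(gy, gx)) % 180.0  # fold into [0, 180)
```

Folding the angle into [0, 180) reflects the fact that an edge has no preferred "forward" direction.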

First, let’s see how our sobel filters respond to edges at different orientations from the following figures:

The red arrow on the right shows the relative intensities of the response from the x and y sobel filter. We see that as we rotate the edge, the intensity of the response slowly shifts from the x to the y direction. We can also consider what would happen if the edge got less intense:

Let’s apply the sobel filters on the following butterfly image (taken from the slides of the same computational photography course).

The following figures show the result of applying the Sobel filters on the above butterfly image. The direction (in red) is shown for a few edge vectors whose magnitude was high (F is the image).

Again, let’s apply the sobel filters on the following tiger image (taken from the slides of the same computational photography course).

The following figures show the result of applying the Sobel filters on the above tiger image. The direction (in red) is shown for a few edge vectors whose magnitude was high.

Next, let’s apply the sobel filters on the following zebra image (taken from the slides of the same computational photography course).

The following figures show the result of applying the Sobel filters on the above zebra image. The direction (in red) is shown for a few edge vectors whose magnitude was high.

Finally, let’s apply the sobel filters on the following image of mine.

The following figures show the result of applying the Sobel filters on my image. The direction (in red) is shown for a few edge vectors whose magnitude was high.

The following figure shows the horizontal (H_x) and vertical (H_y) kernels for a few more filters, such as Prewitt and Roberts, along with Sobel.
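For reference, one common way to write these kernels in NumPy (sign conventions, especially for Roberts, vary between sources):

```python
import numpy as np

# x-direction (H_x) kernels; the y-direction (H_y) versions are transposes
# for Sobel and Prewitt. Roberts uses a 2x2 diagonal pair instead.
sobel_x   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
roberts_x = np.array([[1, 0], [0, -1]], dtype=float)

sobel_y, prewitt_y = sobel_x.T, prewitt_x.T
roberts_y = np.array([[0, 1], [-1, 0]], dtype=float)
```

All of these kernels sum to zero, so they give no response on flat regions.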

### The Prewitt Filter

The following figures show the result of applying the Prewitt filters on the butterfly image. The direction (in red) is shown for a few edge vectors whose magnitude was high (F is the image).

### The Roberts Filter

The following figures show the result of applying the Roberts filters on the butterfly image. The direction (in red) is shown for a few edge vectors whose magnitude was high (F is the image).

### The LOG Filter

The following figure shows the Gaussian kernels with different bandwidths and a sharpen kernel, along with the LOG (Laplacian of Gaussian) kernel, which is very useful for edge detection in images, and the DOG (Difference of Gaussian) kernel.
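A LOG kernel can also be sampled directly from its closed form; the sketch below uses one common sign convention (negative at the center) and an arbitrary σ:

```python
import numpy as np

def log_kernel(size=11, sigma=1.4):
    """Laplacian of Gaussian, sampled from the closed form
    (x^2 + y^2 - 2*sigma^2) / sigma^4 * exp(-(x^2 + y^2) / (2*sigma^2))."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # shift so a flat region gives exactly zero response

K = log_kernel()
```

The mean subtraction compensates for truncating the infinite Gaussian tails to a finite grid.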

The following figures show the results of applying the LOG filter on the above images.

### The DOG Filter

The Difference of Gaussian (DOG) kernel can also be used to find edges. The following figure shows the results of applying DOG (with s.d. 2 and 5) on the above images:
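In code, DOG is just the difference of two Gaussian-blurred copies of the image (sketched here with `scipy.ndimage.gaussian_filter` on assumed synthetic inputs):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, sigma1=2.0, sigma2=5.0):
    """Difference of Gaussian: subtract a wide blur from a narrow one.
    Flat regions cancel out; only intensity changes (edges) survive."""
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

# A flat image cancels completely; a step edge leaves a ripple around it.
flat = np.full((32, 32), 7.0)
step = np.zeros((32, 32))
step[:, 16:] = 1.0
```

This band-pass behavior is why DOG is often used as a cheap approximation to the LOG kernel.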

### The Canny Edge Detector

The following figure shows the Canny Edge Detection algorithm:

The following figures show the results of applying the Canny Edge Detection algorithm on the above images (with the low and high intensity thresholds set to 50 and 100, respectively).

### Mapping to color

We now have the magnitude and orientation of the edge present at every pixel. However, this is still not really in a form that we can examine visually. The angle varies from 0 to 180 degrees, and we don’t know the scale of the magnitude. What we have to do is figure out a way to transform this information into color values to place at each pixel.

The edge orientations can be pooled into four bins: edges that are roughly horizontal, edges that are roughly vertical, and edges at 45 degrees to the left and to the right. Each of these bins is assigned a color (vertical is yellow, etc.). Then the magnitude is used to dictate the intensity of the edge. For example, a roughly vertical edge of moderate intensity would be set to a medium yellow color, or the value (0, 100, 100).
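A possible sketch of this binning; apart from “vertical is yellow” and the (0, 100, 100) example, which come from the text, the bin colors and the (B, G, R) channel order are assumptions:

```python
import numpy as np

def edge_color(angle_deg, magnitude, max_mag):
    """Pool an orientation in [0, 180) into one of four bins and scale the
    bin's (B, G, R) color by the relative magnitude. Only 'vertical ->
    yellow' follows the text; the other colors are illustrative choices."""
    colors = np.array([
        (0, 0, 255),    # roughly horizontal -> red
        (0, 255, 0),    # ~45 degrees        -> green
        (0, 255, 255),  # roughly vertical   -> yellow
        (255, 0, 0),    # ~135 degrees       -> blue
    ], dtype=float)
    bin_idx = int((angle_deg + 22.5) // 45) % 4   # 45-degree-wide bins
    return tuple(np.rint(colors[bin_idx] * (magnitude / max_mag)).astype(int))
```

With this scheme, a vertical edge (90 degrees) at moderate magnitude maps to a medium yellow such as (0, 100, 100), matching the example above.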

The following figures show some of the edges extracted from some of the images previously used as inputs, color-mapped using the Sobel filter:

The markdown file can be found here: https://github.com/sandipan/Blogs/blob/master/comp_photo.md