Some Computational Photography: Image Quilting (Texture Synthesis) with Dynamic Programming and Texture Transfer (Drawing with Textures) in Python

 

The following problems appeared as a programming assignment in the Computational Photography course (CS445) at UIUC. The description of the problems is taken from the assignment itself. In this article, a Python implementation of the problems is described instead of the MATLAB one expected in the course.

 

The Problems

  • The goal of this assignment is to implement the image quilting algorithm for
    texture synthesis and transfer, described in this SIGGRAPH 2001 paper by Efros
    and Freeman.
  • Texture synthesis is the creation of a larger texture image from a small sample.
  • Texture transfer is giving an object the appearance of having the
    same texture as a sample while preserving its basic shape.
  • For texture synthesis, the main idea is to sample patches and lay them down in overlapping patterns, such that the overlapping regions are similar.
  • The overlapping regions may not match exactly, which results in noticeable
    edge artifacts. To fix this, we compute a path through the overlapping region along pixels with similar intensities, and use it to select from which overlapping patch to draw each pixel.
  • Texture transfer is achieved by encouraging sampled patches to have similar appearance to a given target image, as well as matching overlapping regions of already sampled patches.
  • In this project, we need to apply important techniques such as template matching, finding seams, and masking. These techniques are also useful for image stitching, image completion, image retargeting and blending.

 

Randomly Sampled Texture

First, let’s randomly sample square patches of size patchsize from the sample in order to create an output image of size outsize. Start from the upper-left corner and tile samples until the image is full. If the patches don’t fit evenly into the output image, we can leave black borders at the edges. This is the simplest but least effective method. Save a result from a sample image to compare to the next two methods.
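Here is a minimal NumPy sketch of this tiling step; the function name quilt_random and the argument conventions are mine (not from the assignment starter code), and the sample is assumed to be a NumPy array (grayscale or RGB).

```python
import numpy as np

def quilt_random(sample, out_size, patch_size):
    """Tile an out_size x out_size output with randomly sampled square patches."""
    h, w = sample.shape[:2]
    out = np.zeros((out_size, out_size) + sample.shape[2:], dtype=sample.dtype)
    for i in range(0, out_size - patch_size + 1, patch_size):
        for j in range(0, out_size - patch_size + 1, patch_size):
            # pick a random top-left corner inside the sample
            r = np.random.randint(h - patch_size + 1)
            c = np.random.randint(w - patch_size + 1)
            out[i:i + patch_size, j:j + patch_size] = \
                sample[r:r + patch_size, c:c + patch_size]
    return out  # any leftover border stays black
```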

 

Overlapping Patches

Let’s start by sampling a random patch for the upper-left corner. Then sample new patches to overlap with existing ones. For example, the second patch along the top row will overlap by patchsize pixels in the vertical direction and overlap pixels in the horizontal direction. Patches in the first column will overlap by patchsize pixels in the horizontal direction and overlap pixels in the vertical direction. Other patches will have two overlapping regions (on the top and left), which should both be taken into account. Once the cost of each candidate patch has been computed, randomly choose one patch whose cost is less than a threshold determined by some tolerance value (a code sketch of this selection step is given below).

As described in the paper, the size of the block is the only parameter controlled by the user and it depends on the properties of a given texture; the block must be big enough to capture the relevant structures in the texture, but small enough so that the interaction between these structures is left up to the algorithm. The overlap size is taken to be one-sixth of the block size (B/6) in general.
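The patch selection could be sketched as follows: the SSD cost of each candidate patch over the already-filled (overlap) pixels is computed by brute force, and one patch is picked uniformly at random from those whose cost is within the tolerance of the best cost. Function and parameter names are mine; a real implementation would typically replace the explicit double loop with template matching (e.g., cv2.matchTemplate or an FFT-based SSD).

```python
import numpy as np

def ssd_overlap(patch, template, mask):
    """Sum of squared differences over the overlap region (where mask == 1)."""
    d = patch.astype(float) - template.astype(float)
    return np.sum((d ** 2) * mask)

def choose_sample(sample, template, mask, patch_size, tol=0.1):
    """Pick a random patch from `sample` whose overlap cost is within
    (1 + tol) of the minimum cost. `template` is the region of the output
    image to be filled and `mask` marks its already-filled (overlap) pixels;
    both have the same shape as a patch."""
    h, w = sample.shape[:2]
    costs = np.empty((h - patch_size + 1, w - patch_size + 1))
    for r in range(costs.shape[0]):
        for c in range(costs.shape[1]):
            patch = sample[r:r + patch_size, c:c + patch_size]
            costs[r, c] = ssd_overlap(patch, template, mask)
    # all candidates whose cost is below the tolerance threshold
    rs, cs = np.where(costs <= (1 + tol) * costs.min() + 1e-8)
    k = np.random.randint(len(rs))
    return sample[rs[k]:rs[k] + patch_size, cs[k]:cs[k] + patch_size]
```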

 

Seam Finding

Next we need to find the min-cost contiguous path from the left to the right side of the patch according to the cost. The cost of a path through each pixel is the squared difference (summed over RGB for color images) between the output image and the newly sampled patch. Use dynamic programming to find the min-cost path; a sketch is given below.
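Below is a minimal dynamic-programming sketch for a seam through a left/right (vertical) overlap; for a top overlap the same routine can be run on the transposed cost array. The per-pixel cost is the squared difference between the existing output and the new patch, summed over the color channels. Names are mine.

```python
import numpy as np

def min_cost_seam(costs):
    """costs: 2D array of per-pixel costs over the overlap region.
    Returns, for every row, the column of the minimum-cost contiguous seam
    (each step moves to an adjacent column), found with dynamic programming."""
    h, w = costs.shape
    cum = costs.astype(float).copy()      # cumulative minimum cost
    for i in range(1, h):
        left = np.r_[np.inf, cum[i - 1, :-1]]
        right = np.r_[cum[i - 1, 1:], np.inf]
        cum[i] += np.minimum(np.minimum(left, cum[i - 1]), right)
    # backtrack from the cheapest pixel in the last row
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cum[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cum[i, lo:hi]))
    return seam

# example per-pixel cost over the overlap of the output and the new patch:
# costs = ((out_overlap.astype(float) - patch_overlap.astype(float)) ** 2).sum(axis=2)
```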

The following figure describes the algorithm to be implemented for image quilting.

f20.png

Texture Transfer

The final task is to implement texture transfer, based on the quilting implementation, creating a texture sample that is guided by a pair of sample/target correspondence images (Section 3 of the paper). The main difference from the quilt function is that there is an additional cost term based on the difference between the sampled source patch and the target patch at the location to be filled, as sketched below.
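A sketch of the combined cost might look like the following; alpha trades off the synthesis (overlap) error against the correspondence error with the target, and here the raw intensities are used as the correspondence map for simplicity (the paper suggests, e.g., blurred luminance). Names and the default alpha are mine.

```python
import numpy as np

def transfer_cost(patch, overlap_template, overlap_mask, target_patch, alpha=0.8):
    """Combined cost for texture transfer: alpha * (overlap SSD) +
    (1 - alpha) * (SSD between the source patch and the target patch
    at the location being filled)."""
    d_overlap = patch.astype(float) - overlap_template.astype(float)
    d_target = patch.astype(float) - target_patch.astype(float)
    overlap_err = np.sum((d_overlap ** 2) * overlap_mask)
    corresp_err = np.sum(d_target ** 2)
    return alpha * overlap_err + (1 - alpha) * corresp_err
```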

Image quilting (texture synthesis) results

The following figures and animations show the outputs obtained with the quilting algorithm. The input texture images are mostly taken from the paper.

Input sample Texture
q2.png

100 sampled blocks of a fixed size (e.g. 50×50) from the input sample
samples_q2.png

The next animation shows how the large output texture gets created (100 times larger than the input sample texture) with the quilting algorithm.

quilt.gif

Output Texture (10×10 times larger than the input) created with texture synthesis (quilting)
quilt_q2.png

 

Input Texture

q6.png

Output Texture (25 times larger than the input) created with texture synthesis (quilting), with the minimum-cost seams (shown as red lines) computed with dynamic programming

quilt_boundary_q6

Output Texture (25 times larger than the input) created with quilting

quilt_q6

 

Input Texture

q1

Output Texture (25 times larger than the input) created with quilting

quilt_q1_.png

Input Texture

q3

Output Texture (25 times larger than the input) created with quilting

quilt_q3.png

Input Texture

q4

Output Texture (12 times larger than the input) created with quilting

quilt_q4.png

Input Texture

q5

Output Texture (25 times larger than the input) created with quilting

quilt_q5.png

Input Texture

q7

Output Texture (25 times larger than the input) created with quilting

quilt_q7_.png

Input Texture

q9

Output Texture (36 times larger than the input) created with quilting

quilt_q9.png

Input Texture

q11

Output Texture (9 times larger than the input) created with quilting

quilt_q11_.png

Input Texture

q12

Output Texture (25 times larger than the input) created with quilting

quilt_q12.png

Input Texture

q13

Output Texture (9 times larger than the input) created with quilting along with the min-cost seams (shown as red lines) computed with dynamic programming 

quilt_boundary_q13

Output Texture (9 times larger than the input) created with quilting

quilt_q13

 

Texture transfer results

The following figures and animations show how the texture from an input image can be transferred to the target image using the above-mentioned simple modification of the quilting algorithm. Again, some of the images are taken from the paper.

Input Texture (milk)

q14.png

Target Image

qt1.png

Output Image after Texture Transfer

text_tran_q14

 

Input Texture (milk)

q14.png

Target Image

mona

Output Image after Texture Transfer

text_tran_q14_

 

The following figures show the output image obtained when a few textures were transferred to my image.

Target Image (me)

me1.png

 

Input Texture (fire)

fire.png

Output Image after Texture Transfer  (with small block size)

text_tran_fire

 

 

Input Texture (cloud)

cloud1.png

Output Image after Texture Transfer  (with small block size)

text_tran_cloud1

 

Input Texture (toast)
toast.png

Output Image after Texture Transfer (with small block size)

text_tran_toast

Now, applying some gradient mixing (Poisson blending) on the original image and on the one obtained after texture transfer with the toast texture, we get the following two images respectively.

pe_toast1

pe_toast

Input Texture
q7.png

Output Image after Texture Transfer  (with small block size)

text_tran_q7

 

Input Texture

draw.png

Output Image after Texture Transfer (with small block size)

text_tran_draw

 

 

Drawing with Textures

 

The following figures show the output image obtained when a few textures were transferred to another image of mine.

 

Target Image (me)

me6.png

 

Some textures from a few famous painting-masterpieces (obtained from here, here, here, here and here)



Input Texture (Van Gogh’s The Starry Night)

van.png

Output Images after Texture Transfer  (with 3 different patch sizes)

text_tran_012_van

text_tran_024_van
text_tran_050_van


Input Texture (Van Gogh’s Wheatfield with Cypresses)

van1.png

Output Image after Texture Transfer

text_tran_012_van1.png


Input Texture (Van Gogh’s The Mulberry Tree)

van2

Output Image after Texture Transfer

text_tran_050_van2.png


Input Texture (Van Gogh’s Wheatfield with Crows)

van3

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_024_van3text_tran_050_van3


Input Texture (Van Gogh’s Vase with fifteen Sunflowers)

van4

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_024_van4text_tran_050_van4


Input Texture (Van Gogh’s Sunflowers (F452))

van5

Output Image after Texture Transfer

text_tran_024_van5


Input Texture (Van Gogh’s Irises)

van6

Output Image after Texture Transfer

text_tran_024_van6.png


Input Texture (Van Gogh’s Almond Blossom)

van7

Output Image after Texture Transfer

text_tran_024_van7



Input Texture (Van Gogh’s Starry Night Over the Rhone)

van8

Output Image after Texture Transfer

text_tran_024_van8.png


Input Texture (Pablo Picasso’s Mediterranean Landscape)

picaso

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_024_picasotext_tran_050_picaso


Input Texture (Pablo Picasso’s Minotauromachy)

picaso1

Output Image after Texture Transfer

text_tran_024_picaso1


Input Texture (Pablo Picasso’s Mother and Child 1921)

picaso2

Output Image after Texture Transfer

text_tran_024_picaso2.png


Input Texture (Pablo Picasso’s Factory at Horto de Ebro)

picaso3

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_024_picaso3text_tran_050_picaso3.png


Input Texture (Pablo Picasso’s Carafe and Three Bowls)

picaso4

Output Image after Texture Transfer
text_tran_024_picaso4



Input Texture (Pablo Picasso’s Bullfight 3) 

picaso6
Output Image after Texture Transfer
text_tran_024_picaso6


Input Texture (Pablo Picasso’s Accordionist) 

picaso7

Output Image after Texture Transfer

text_tran_024_picaso7


Input Texture (Pablo Picasso’s Las Meninas)

picaso5

Output Image after Texture Transfer

text_tran_024_picaso5.png


Input Texture (Claude Monet’s The Artist’s Garden at Giverny)

monet

Output Image after Texture Transfer

text_tran_024_monet


Input Texture (Claude Monet’s The Poppy Field near Argenteuil)

monet1

Output Image after Texture Transfer

text_tran_024_monet1


Input Texture (Claude Monet’s The Magpie)

monet2

Output Image after Texture Transfer

text_tran_024_monet2


Input Texture (Claude Monet’s The Garden of Monet at Argenteuil)

monet3

Output Images after Texture Transfer (with 2 different patch sizes)

text_tran_050_monet3 text_tran_024_monet3


Input Texture (Claude Monet’s Sunset in Venice)

monet5

Output Image after Texture Transfer

text_tran_024_monet5


Input Texture (Claude Monet’s Waterloo Bridge)

monet4

Output Image after Texture Transfer

text_tran_024_monet4.png


Input Texture (Claude Monet’s Water Lilies)

monet6

Output Image after Texture Transfer

text_tran_024_monet6


Input Texture (Claude Monet’s Impression Sunrise)

monet7

Output Image after Texture Transfer

text_tran_024_monet7


Input Texture (Claude Monet’s Sunflowers)

monet8

Output Image after Texture Transfer

text_tran_024_monet8



Input Texture (Abanindranath Tagore’s Canal in Shahjadpur)

aban

Output Image after Texture Transfer

text_tran_024_aban


Input Texture (Abanindranath Tagore’s The Victory of Buddha)

aban1

Output Image after Texture Transfer

text_tran_024_aban1

 



Input Texture (Abanindranath Tagore’s Rabindranath in the role of  blind singer)

aban2

Output Image after Texture Transfer

text_tran_024_aban2


Input Texture (Abanindranath Tagore’s Slaying the Tornado Demon)

aban3

Output Image after Texture Transfer

text_tran_024_aban3


Input Texture (Nandalal Bose’s Village Huts)

nandalal

Output Image after Texture Transfer

text_tran_024_nandalal


Some more Textures

Input Texture

epii

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_012_epiitext_tran_024_epii

 

Input Texture

ds

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_012_dstext_tran_024_ds

 

Input Texture

alphabet.png

Output Image after Texture Transfer

text_tran_024_alphabet.png

 

Input Texture

rabi

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_012_rabitext_tran_024_rabi

 

Input Texture

notes

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_012_notestext_tran_024_notes

 

Input Texture

india

Output Images after Texture Transfer (with 2 different patch sizes)

text_tran_012_india.png

text_tran_024_india.png

 

 

Input Texture

bangla2.png

Output Image after Texture Transfer

text_tran_024_bangla2.png

 

 

Input Texture

r1

Output Image after Texture Transfer

text_tran_012_r1

 

 

Input Texture

ens

Output Image after Texture Transfer

text_tran_012_ens

 

Input Texture

rabi2

Output Image after Texture Transfer

text_tran_012_rabi2

 

 

Input Texture

toast

Output Image after Texture Transfer

text_tran_012_toast_12.png

 

 

Input Texture

q14

Output Images after Texture Transfer  (with 2 different patch sizes)

text_tran_012_q14_12text_tran_024_q14

 

Input Texture
 stars.png

Output Image after Texture Transfer
star.gif

stars2.gif

 

 

Target Image (from The Mummy 1999)
mm.png

Input Texture (sand)

msky.png

Output Image after Texture Transfer

text_tran_012_msky_1.png

 

 

The following animation shows how the milk texture is transferred to the target image of mine with the modified quilting algorithm.

Input Texture

q14

Target Image (me)

me4

Output Image after Texture Transfer  (with small block size)

ttmea

 

The next figures and animations show the output images obtained after the milk texture is transferred to the target image of mine, for different sample block sizes (shown in red). As can be seen from the following outputs, the target image gets more and more prominent as the sampling block size gets smaller.

text_tran_036_q14

text_tran_804_q14

ttme


Feature Detection with Harris Corner Detector and Matching images with Feature Descriptors in Python

The following problem appeared in a project in this Computer Vision course (CS4670/5670, Spring 2015) at Cornell. In this article, a Python implementation is described. The description of the problem is taken (with some modifications) from the project description. The same problem appeared in this assignment as well. The images used for testing the implemented algorithms are mostly taken from these assignments / projects.

The Problem

In this project, we need to detect discriminating features in an image and find the best matching features in other images. The features should be reasonably invariant to translation, rotation, and illumination. The slides presented in class can be used as reference.

Description

The project has three parts:  feature detection, feature description, and feature matching.

1. Feature detection

In this step, we need to identify points of interest in the image using the Harris corner detection method.  The steps are as follows: 

  • For each point in the image, consider a window of pixels around that point.
  • Compute the Harris matrix H for (the window around) that point, defined as
    harriseq-structuretensor.png
    i.e., H = Σ_p w_p [ I_x(p)²  I_x(p)I_y(p) ; I_x(p)I_y(p)  I_y(p)² ], where the summation is over all pixels p in the window, I_x(p) is the x derivative of the image at point p, and the notation is similar for the y derivative.
  • The weights w_p are chosen to be circularly symmetric; a 9×9 Gaussian kernel with sigma 0.5 is used to achieve this.
  • Note that H is a 2×2 matrix. To find interest points, first we need to compute the corner strength function c = det(H) − κ·(trace H)² (a code sketch of the whole detection step is given after this list).

  • Once c is computed for every point in the image, we need to choose points where c is above a threshold.
  • We also want c to be a local maximum in a 9×9 neighborhood (with non-maximum suppression).
  • In addition to computing the feature locations, we need to compute a canonical orientation for each feature, and then store this orientation (in degrees) in each feature element.
  • To compute the canonical orientation at each pixel, we need to compute the gradient of the blurred image and use the angle of the gradient as orientation.
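A compact sketch of the detection step with SciPy is shown below; the Sobel derivatives, the default threshold, and the blur used for the orientation are my assumptions (the 9×9 Gaussian with sigma 0.5 is approximated by scipy.ndimage.gaussian_filter), not necessarily what the project starter code prescribes.

```python
import numpy as np
from scipy import ndimage

def harris_corners(img, sigma=0.5, kappa=0.04, thresh=1e-4, nms_size=9):
    """Harris corner strength c = det(H) - kappa * trace(H)^2, followed by
    thresholding and 9x9 non-maximum suppression. img: 2D float array."""
    Ix = ndimage.sobel(img, axis=1, mode='reflect')
    Iy = ndimage.sobel(img, axis=0, mode='reflect')
    # Gaussian-weighted sums of products of derivatives (the Harris matrix entries)
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    c = (Sxx * Syy - Sxy ** 2) - kappa * (Sxx + Syy) ** 2
    # keep points above the threshold that are also local maxima
    local_max = (c == ndimage.maximum_filter(c, size=nms_size))
    ys, xs = np.where((c > thresh) & local_max)
    # canonical orientation (degrees) from the gradient of a blurred image
    blur = ndimage.gaussian_filter(img, 2.0)
    gx = ndimage.sobel(blur, axis=1)
    gy = ndimage.sobel(blur, axis=0)
    orientation = np.degrees(np.arctan2(gy[ys, xs], gx[ys, xs]))
    return xs, ys, c[ys, xs], orientation
```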

2. Feature description

  • Now that the points of interest are identified,  the next step is to come up with a descriptor for the feature centered at each interest point.  This descriptor will be the representation to be used to compare features in different images to see if they match.
  • In this article, we shall implement a simple descriptor: an 8×8 square window without orientation, as sketched below. This is very easy to implement and works well when the images we’re comparing are related by a translation. We also normalize the window to have zero mean and unit variance, in order to obtain illumination invariance.
  • A rotation-invariant descriptor such as MOPS, which accounts for the orientation, is not discussed in this article for the time being.
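A minimal sketch of this simple descriptor (names mine; the window is assumed to lie fully inside the image):

```python
import numpy as np

def simple_descriptor(img, x, y, size=8):
    """Flattened size x size window around (x, y), normalized to zero mean and
    unit variance for illumination invariance. img: 2D grayscale array."""
    half = size // 2
    win = img[y - half:y + half, x - half:x + half].astype(float).ravel()
    win -= win.mean()
    std = win.std()
    return win / std if std > 1e-8 else win
```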

3. Feature matching

  • Now that the features in the image are detected and described, the next step is to write code to match them, i.e., given a feature in one image, find the best matching feature in one or more other images.
  • The simplest approach is the following: write a procedure that compares two features and outputs a distance between them. For example, we simply sum the absolute values of the differences between the descriptor elements.
  • We then use this distance to compute the best match between a feature in one image and the set of features in another image by finding the one with the smallest distance. The distance used here is the Manhattan distance (a sketch follows this list).
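A brute-force sketch of this matching step (the descriptor matrices are assumed to be n×d NumPy arrays; names mine):

```python
import numpy as np

def match_features(desc1, desc2):
    """For every descriptor in desc1, find the index of the descriptor in desc2
    with the smallest Manhattan (sum of absolute differences) distance."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.abs(desc2 - d).sum(axis=1)
        j = int(np.argmin(dists))
        matches.append((i, j, float(dists[j])))
    return matches
```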

 

The following theory and math for the Harris corner detection, taken from this YouTube video, will be used.

f19.png

The following figure shows the structure of the python code to implement the algorithm.

hcode.png

Feature Detection (with Harris Corner Detection): Results on a few images

  • The threshold used for the Harris corner detection is varied, as shown in red in the following animations; the value displayed is the common logarithm (log10) of the threshold, i.e., the threshold is 10^x for the displayed x.
  • The corner strength function with kappa = 0.04 is used instead of the minimum eigenvalue (since it’s faster to compute).
  • As can be seen from the following animations, fewer and fewer corner features are detected as the threshold is increased.
  • The direction and magnitude of each feature are shown respectively by the bounding (green) square’s angle with the horizontal and by its size, both computed from the gradient matrices.

Input Imageflower.jpg

Harris Corner Features Detected for different threshold values (log10)
hcflw

Input Image
liberty3

The following figure shows the result of thresholding on

  1. the Harris corner strength R values and
  2. the minimum eigenvalue for the Harris matrix respectively,

for each pixel, before applying non-maximum suppression (computing the local maximum).

hc1_liberty1.jpg

The next animation shows the features detected after applying non-maximum suppression, with different threshold values.

Harris Corner Features Detected for different threshold values (log10)
hclib

Input Image
features_small.png

Harris Corner Features Detected for different threshold values (log10)hcfea

Input Image
chess

Harris Corner Features Detected for different threshold values (log10)

hcchs

Input Image
yosemite1.jpg

Harris Corner Features Detected for different threshold values (log10)
hcyos

Computing Feature descriptors

  • In this article, we shall implement a very simple descriptor: an 8×8 square window without orientation. This is expected to work well when the images being compared are related by a translation.
  • We also normalize the window to have zero mean and unit variance, in order to obtain illumination invariance.
  • A rotation-invariant descriptor such as MOPS, which accounts for the orientation, is not discussed in this article for the time being.

 

Matching Images with Detected Features: Results on a few images

  • First the Harris corner features and the simple descriptors are computed for each of the images to be compared.
  • Next, the distance between each pair of corner feature descriptors is computed, by simply taking the sum of the absolute values of the differences between the descriptor elements.
  • This distance is then used to compute the best match between a feature in one image and the set of features in another image by finding the one with the smallest distance.
  • The following examples show how the matching works with simple feature descriptors around the Harris corners for images obtained using mutual translations.

Input images (one is a translation of the other) 

liberty2           liberty1
Harris Corner Features Detected for the images
hc_0.000500liberty1hc_0.000500liberty2

Matched Features with minimum sum of absolute distancemhclib.gif

 

Input images
me        me3

Harris Corner Features Detected for the images

hc_0.000010me        hc_0.000010me3

Matched Features with minimum sum of absolute distance
mhcme.gif

The following example shows input images related by more complex transformations (not only translation); as expected, the simple feature descriptor does not work well in this case, as shown. We need feature descriptors like SIFT to obtain robustness against rotation and scaling too.

Input images
trees_002  trees_003

Harris Corner Features Detected for the images
hc_0.000050trees_002   hc_0.000050trees_003

Matched Features with minimum sum of absolute distance
match_hc_403_trees_002.jpg

Seam Carving: Using Dynamic Programming to implement Content-Aware Image Resizing in Python

The following problem appeared as an assignment in the Algorithms course (COS 226) at Princeton University taught by Prof. Sedgewick. The following description of the problem is taken from the assignment itself.

The Seam Carving Problem

  • Seam-carving is a content-aware image resizing technique where the image is reduced in size by one pixel of height (or width) at a time.
  • A vertical seam in an image is a path of pixels connected from the top to the bottom with one pixel in each row.
  • A horizontal seam is a path of pixels connected from the left to the right with one pixel in each column.
  • Unlike standard content-agnostic resizing techniques (such as cropping and scaling), seam carving preserves the most interesting features (aspect ratio, set of objects present, etc.) of the image.
  • In this assignment, a data type needs to be created that resizes a W-by-H image using the seam-carving technique.
  • Finding and removing a seam involves three parts:
    1. Energy calculation. The first step is to calculate the energy of a pixel, which is a measure of its importance: the higher the energy, the less likely that the pixel will be included as part of a seam (as you will see in the next step). In this assignment, we shall use the dual-gradient energy function, described below (a code sketch of both the energy computation and the seam search follows this list). Computing the energy of a pixel: with the dual-gradient energy function, the energy of pixel (x, y) is given by the following: f17.png
    2. Seam identification. The next step is to find a vertical seam of minimum total energy. (Finding a horizontal seam is analogous.) This is similar to the classic shortest-path problem in an edge-weighted digraph, but there are three important differences:
      • The weights are on the vertices instead of the edges.
      • The goal is to find the shortest path from any of the W pixels in the top row to any of the W pixels in the bottom row.
      • The digraph is acyclic, with a downward edge from pixel (x, y) to pixels (x − 1, y + 1), (x, y + 1), and (x + 1, y + 1), assuming that the coordinates are in the prescribed ranges.
      • Also, seams cannot wrap around the image (e.g., a vertical seam cannot cross from the leftmost column of the image to the rightmost column).
      • As described in the paper, the optimal seam can be found using dynamic programming. The first step is to traverse the image from the second row to the last row and compute the cumulative minimum energy M for all possible connected seams for each pixel (i, j): f18
    3. Seam removal. The final step is to remove from the image all of the pixels along the vertical or horizontal seam.
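A minimal NumPy sketch of the energy computation and the DP seam search is shown below. Function names are mine; the wrap-around handling of border pixels in the energy is one common convention and may differ from the exact definition in the assignment's figure.

```python
import numpy as np

def dual_gradient_energy(img):
    """Dual-gradient energy: squared x- and y-gradients summed over RGB.
    img: H x W x 3 float array; wrap-around differences at the borders."""
    dx = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)
    dy = np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)
    return (dx ** 2).sum(axis=2) + (dy ** 2).sum(axis=2)

def find_vertical_seam(energy):
    """Minimum-energy vertical seam via dynamic programming:
    M(i, j) = e(i, j) + min(M(i-1, j-1), M(i-1, j), M(i-1, j+1))."""
    h, w = energy.shape
    M = energy.astype(float).copy()
    for i in range(1, h):
        left = np.r_[np.inf, M[i - 1, :-1]]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(M[i, lo:hi]))
    return seam

def remove_vertical_seam(img, seam):
    """Delete one pixel per row along the seam."""
    h, w = img.shape[:2]
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1, -1)
```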

Results with a few images

The following image is the original 507×284 pixel input image taken from the same assignment. The next few images and animations show the outputs of the seam-carving implementation. The shape (the transpose of the image shape is reported) and the size of the image (in terms of the memory taken by the Python np array, as float, to store the image) are also reported for each iteration of the algorithm. As can be seen, the algorithm resizes the image by iteratively removing the minimum-energy vertical (and horizontal) seams one at a time, while keeping the main objects intact.

Input Original Image

HJoceanSmall.png

Removing the Vertical Seams

energy_000HJoceanSmall
vseam

After Removing 200 Vertical Seams

seam_199_vHJoceanSmall

energy

Removing the Vertical and the Horizontal Seams in alternating manner

vhseam
After Removing 100 Vertical Seams and 100 Horizontal Seams

seam_099_hHJoceanSmall

seam_099_vHJoceanSmall

The following is the original 1024×576 image of the Dakshineswar Temple, Kolkata along with the removed vertical seams with content-aware resizing.

temple

energy__temple
vseamtemple (2).gif

Output image after removing 500 Vertical Seams

seam_499_vtemple
The next animation again shows how the content-aware resizing preserves the objects in the original image.

vseam_sr

energy_000vr

The next image is the original dolphin 239×200 image taken from the paper, along with the removed vertical seams with content-aware resizing.

dolphin

energy_000dolphin.jpg

vseamdolphin

           After removing 112 Vertical Seams

seam_113_vdolphin

The next animation shows how the seams are deleted from the image in the reverse order of deletion.

vseamidolphin

 

The next image is the original 750×498 bird image taken from the paper, along with the removed vertical seams with content-aware resizing.

bird2.png

energy_000bird2.png

vseambird
         After Removing 297 Vertical Seams

seam_299_vbird2

 

The next image is the original 450×299 sea image taken from the paper, along with the removed vertical seams with content-aware resizing.

sea2.png

vseamsea2

 

The next image is the original 628×413 cycle image taken from the paper, along with the removed vertical seams with content-aware resizing.

cycle.png

energy_000cycle

vseamcycle

After Removing 99 Vertical Seams

seam_199_vcycle

 

The next image is the original 697×706 Fuji image again taken from the paper, along with the removed vertical seams with content-aware resizing.

Fuji

 

vseamfuji

               After Removing 282 Vertical Seams

seam_280_vFuji

 

Object Removal

The same technique can be applied with a mask to remove objects from an image. For example, consider the following image of shoes taken from the same paper.

shoes

Let’s use a black mask to remove a shoe that we don’t want, as shown in the next figure.

masked_shoes.jpg
Finally, the vertical seams can be forced to go through the masked object, as shown in the next animation, in order to remove the masked object completely just by using content-aware resizing.

or_shoes.gif

Output after removing the shoe with content-aware image resize algorithm

seam_115_vshoes.jpg

Measuring Semantic Relatedness using the Distance and the Shortest Common Ancestor and Outcast Detection with Wordnet Digraph in Python

The following problem appeared as an assignment in the Algorithms course (COS 226) at Princeton University taught by Prof. Sedgewick. The description of the problem is taken from the assignment itself. However, while the assignment expects an implementation in Java, this article describes a Python implementation instead. Rather than using nltk, this implementation is going to be from scratch.

 

The Problem

 

  • WordNet is a semantic lexicon for the English language that computational linguists and cognitive scientists use extensively. For example, WordNet was a key component in IBM’s Jeopardy-playing Watson computer system.
  • WordNet groups words into sets of synonyms called synsets. For example, { AND circuit, AND gate } is a synset that represents a logical gate that fires only when all of its inputs fire.
  • WordNet also describes semantic relationships between synsets. One such relationship is the is-a relationship, which connects a hyponym (more specific synset) to a hypernym (more general synset). For example, the synset { gate, logic gate } is a hypernym of { AND circuit, AND gate } because an AND gate is a kind of logic gate.
  • The WordNet digraph. The first task is to build the WordNet digraph: each vertex v is an integer that represents a synset, and each directed edge v→w represents that w is a hypernym of v.
  • The WordNet digraph is a rooted DAG: it is acyclic and has one vertex—the root— that is an ancestor of every other vertex.
  • However, it is not necessarily a tree because a synset can have more than one hypernym. A small subgraph of the WordNet digraph appears below.

 

wordnet-event.png

 

The WordNet input file formats

 

The following two data files will be used to create the WordNet digraph. The files are in comma-separated values (CSV) format: each line contains a sequence of fields, separated by commas.

  • List of synsets: The file synsets.txt contains all noun synsets in WordNet, one per line. Line i of the file (counting from 0) contains the information for synset i.
    • The first field is the synset id, which is always the integer i;
    • the second field is the synonym set (or synset); and
    • the third field is its dictionary definition (or gloss), which is not relevant to this assignment.

      wordnet-synsets.png

      For example, line 36 means that the synset { AND_circuit, AND_gate } has an id number of 36 and its gloss is a circuit in a computer that fires only when all of its inputs fire. The individual nouns that constitute a synset are separated by spaces. If a noun contains more than one word, the underscore character connects the words (and not the space character).
  • List of hypernyms: The file hypernyms.txt contains the hypernym relationships. Line i of the file (counting from 0) contains the hypernyms of synset i.
    • The first field is the synset id, which is always the integer i;
    • subsequent fields are the id numbers of the synset’s hypernyms.

      wordnet-hypernyms.png

      For example, line 36 means that synset 36 (AND_circuit AND_Gate) has 42338 (gate logic_gate) as its only hypernym. Line 34 means that synset 34 (AIDS acquired_immune_deficiency_syndrome) has two hypernyms: 47569 (immunodeficiency) and 56099 (infectious_disease).

 

 

The WordNet data type 

 

Implement an immutable data type WordNet with the following API:

wn.png

 

  • The Wordnet Digraph contains 76066 nodes and 84087 edges, it’s very difficult to visualize the entire graph at once, hence small subgraphs will be displayed as and when required relevant to the context of the examples later.

 

  • The sca() and the distance() between two nodes v and w are implemented using BFS (breadth-first search) starting from the two nodes separately and combining the computed distances, as sketched below.
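A minimal sketch of this two-sided BFS is given below; the digraph is assumed to be stored as a dict mapping each synset id to the list of its hypernym ids, and the function names are mine.

```python
from collections import deque

def bfs_distances(adj, sources):
    """BFS over the hypernym digraph from a set of source synset ids;
    returns {vertex: distance} for every reachable vertex."""
    dist = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        v = q.popleft()
        for w in adj.get(v, []):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

def sca_and_length(adj, v_set, w_set):
    """Shortest common ancestor and ancestral-path length between two subsets
    of vertices: run BFS from each side and combine the distances."""
    dv, dw = bfs_distances(adj, v_set), bfs_distances(adj, w_set)
    best = (float('inf'), None)
    for x in dv:
        if x in dw:
            best = min(best, (dv[x] + dw[x], x))
    length, ancestor = best
    return ancestor, length
```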

 

Performance requirements 

  • The data type must use space linear in the input size (size of synsets and hypernyms files).
  • The constructor must take time linearithmic (or better) in the input size.
  • The method isNoun() must run in time logarithmic (or better) in the number of nouns.
  • The methods distance() and sca() must make exactly one call to the length() and ancestor() methods in ShortestCommonAncestor, respectively.


The Shortest Common Ancestor
 

 

  • An ancestral path between two vertices v and w in a rooted DAG is a directed path from v to a common ancestor x, together with a directed path from w to the same ancestor x.
  • A shortest ancestral path is an ancestral path of minimum total length.
  • We refer to the common ancestor in a shortest ancestral path as a shortest common ancestor.
  • Note that a shortest common ancestor always exists because the root is an ancestor of every vertex. Note also that an ancestral path is a path, but not a directed path.

wordnet-sca.png

  • The following animation shows how the shortest common ancestor node 1 for the nodes 3 and 10 in the following rooted DAG is found at distance 4 with BFS, along with the ancestral path 3-1-5-9-10.sca2.gif
  • The following animation shows how the shortest common ancestor node 5 for the nodes 8 and 11 in the following rooted DAG is found at distance 3 with BFS, along with the ancestral path 8-5-9-11.sca1.gif
  • The following animation shows how the shortest common ancestor node 0 for the nodes 2 and 6 in the following rooted DAG is found at distance 4 with BFS, along with the ancestral path 6-3-1-0-2.sca3.gif
      
  • We generalize the notion of shortest common ancestor to subsets of vertices. A shortest ancestral path of two subsets of vertices A and B is a shortest ancestral path over all pairs of vertices v and w, with v in A and w in B.
  • The figure (digraph25.txt) below shows an example in which, for two subsets, red and blue, we have computed several (but not all) ancestral paths, including the shortest one.wordnet-sca-set.png

     

  • The following animation shows how the shortest common ancestor node 3 for the sets of nodes {13, 23, 24} and {6, 16, 17} in the following rooted DAG is found, along with the associated length (distance) computed with BFS and the ancestral path 13-7-3-9-16.sca4

Shortest common ancestor data type

     

    Implement an immutable data type ShortestCommonAncestor with the following API:

    sca.png

 

Basic performance requirements 

The data type must use space proportional to E + V, where E and V are the number of edges and vertices in the digraph, respectively. All methods and the constructor must take time proportional to E + V (or better).

 

Measuring the semantic relatedness of two nouns

Semantic relatedness refers to the degree to which two concepts are related. Measuring semantic relatedness is a challenging problem. For example, let’s consider George W. Bush and John F. Kennedy (two U.S. presidents) to be more closely related than George W. Bush and chimpanzee (two primates). It might not be clear whether George W. Bush and Eric Arthur Blair are more related than two arbitrary people. However, both George W. Bush and Eric Arthur Blair (a.k.a. George Orwell) are famous communicators and, therefore, closely related.

Let’s define the semantic relatedness of two WordNet nouns x and y as follows:

  • A = set of synsets in which x appears
  • B = set of synsets in which y appears
  • distance(x, y) = length of shortest ancestral path of subsets A and B
  • sca(x, y) = a shortest common ancestor of subsets A and B

This is the notion of distance that we need to use to implement the distance() and sca() methods in the WordNet data type.

wordnet-distance.png

 

Finding semantic relatedness for some example nouns with the shortest common ancestor and the distance method implemented

 

apple and potato (distance 5 in the Wordnet Digraph, as shown below)

dag_apple_potato.png

As can be seen, the noun entity is the root of the Wordnet DAG.

beer and diaper (distance 13 in the Wordnet Digraph)

dag_beer_diaper.png

 

beer and milk (distance 4 in the Wordnet Digraph, with SCA as drink synset), as expected since they are more semantically closer to each other.

dag_beer_milk.png

bread and butter (distance 3 in the Wordnet Digraph, as shown below)

dag_bread_butter.png

cancer and AIDS (distance 6 in the Wordnet Digraph, with SCA as disease as shown below, bfs computed distances and the target distance between the nouns are also shown)

dag_cancer_AIDS.png

 

car and vehicle (distance 2 in the Wordnet Digraph, with SCA as vehicle as shown below)

dag_car_vehicle.png
cat and dog (distance 4 in the Wordnet Digraph, with SCA as carnivore as shown below)

dag_cat_dog.png

cat and milk (distance 7 in the Wordnet Digraph, with SCA as substance as shown below, here cat is identified as Arabian tea)

dag_cat_milk.png

 

Einstein and Newton (distance 2 in the Wordnet Digraph, with SCA as physicist as shown below)

dag_Einstein_Newton

Leibnitz and Newton (distance 2 in the Wordnet Digraph, with SCA as mathematician)

dag_Leibnitz_Newton

Gandhi and Mandela (distance 2 in the Wordnet Digraph, with SCA as national_leader synset)dag_Gandhi_Mandela

laptop and internet (distance 11 in the Wordnet Digraph, with SCA as instrumentation synset)dag_laptop_internet

school and office (distance 5 in the Wordnet Digraph, with SCA as construction synset as shown below)

dag_school_office

bed and table (distance 3 in the Wordnet Digraph, with SCA as furniture synset as shown below)
dag_table_bed

Tagore and Einstein (distance 4 in the Wordnet Digraph, with SCA as intellectual synset as shown below)

dag_Tagore_Einstein

Tagore and Gandhi (distance 8 in the Wordnet Digraph, with SCA as person synset as shown below)

dag_Tagore_Gandhi

Tagore and Shelley (distance 2 in the Wordnet Digraph, with SCA as author as shown below)
dag_Tagore_Shelley

text and mining (distance 12 in the Wordnet Digraph, with SCA as abstraction synset as shown below)
dag_text_mining

milk and water (distance 3 in the Wordnet Digraph, with SCA as food, as shown below)dag_water_milk

Outcast detection

 

Given a list of WordNet nouns x1, x2, …, xn, which noun is the least related to the others? To identify an outcast, compute the sum of the distances between each noun and every other one:

di = distance(xi, x1) + distance(xi, x2) + … + distance(xi, xn)

and return a noun xt for which dt is maximum. Note that distance(xi, xi) = 0, so it will not contribute to the sum.

Implement an immutable data type Outcast with the following API:

outc.png
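A sketch of the outcast computation, assuming a WordNet object exposing the distance(noun1, noun2) method described above (the rest of the names are mine):

```python
def outcast(wordnet, nouns):
    """Return the noun with the maximum total distance d_i to all the others."""
    totals = [sum(wordnet.distance(x, y) for y in nouns) for x in nouns]
    return nouns[totals.index(max(totals))]
```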

 

Examples

As expected, potato is the outcast in the list of nouns shown below (the noun with maximum total distance from the rest of the nouns, all of which except potato are fruits). This can be seen from the WordNet distance heatmap in the next plot, as well as from the sum-of-distances plot following it.
outcast_apple_pear_peach_banana_lime_lemon_blueberry_strawberry_mango_watermelon_potato

dag_apple_bananadag_strawberry_bananadag_strawberry_blueberry

dag_apple_potatodag_strawberry_potato

Again, as expected, table is the outcast in the list of nouns shown below (the noun with maximum total distance from the rest of the nouns, all of which except table are mammals). This can be seen from the WordNet distance heatmap in the next plot, as well as from the sum-of-distances plot following it.

outcast_horse_zebra_cat_bear_table

dag_cat_beardag_cat_tabledag_horse_zebra

Finally, as expected, bed is the outcast in the list of nouns shown below (the noun with maximum total distance from the rest of the nouns, all of which except bed are drinks). This can be seen from the WordNet distance heatmap in the next plot, as well as from the sum-of-distances plot following it.

outcast_water_soda_bed_orange_juice_milk_apple_juice_tea_coffee

dag_bed_tea

dag_orange_juice_tea

Some more Variational Image Processing: Diffusion, TV Denoising, TV Image Inpainting in Python

In this article, a few variational image processing techniques will be described, along with their application to some images. Most of the problems are taken from the assignments of this course.

 

Some preliminaries: The Calculus of Images – Computing Curvature and TV

 

  • Let’s first compute the Euclidean curvature of a few images. The curvature measures the rate at which the unit gradient vector is changing and is given by f11.png
  • The following couple of images are used to compute the curvature. As can be seen from the figures below, the curvature is zero in flat regions and along straight edges, and non-zero along the rounded edges of the circles, as expected. cur_test cur_source
  • Now, let’s compute the total variation (TV), which is given by the following: f12.png
  • First we need to approximate the partial derivatives using a forward difference (a sketch is given after this list).
  • Let’s compute the TV for the grayscale Cameraman image. Now let’s add more and more salt-and-pepper noise (by increasing the probability threshold p) to the image and see from the following figure how the norm of the gradient matrix and the TV value change. cam
    tv_Cameraman256.png
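A forward-difference sketch of the discrete TV (names mine; the small epsilon only guards the square root):

```python
import numpy as np

def total_variation(img, eps=1e-12):
    """Discrete (isotropic) total variation with forward differences:
    TV(u) = sum over pixels of sqrt(u_x^2 + u_y^2)."""
    ux = np.diff(img, axis=1, append=img[:, -1:])  # forward difference in x
    uy = np.diff(img, axis=0, append=img[-1:, :])  # forward difference in y
    return float(np.sqrt(ux ** 2 + uy ** 2 + eps).sum())
```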

The Heat Equation and Diffusion

Let’s implement the isotropic and anisotropic diffusion by solving PDEs numerically.
The following figure shows the math.

f13.png

The following shows the isotropic diffusion output with Δt = 0.1, obtained with gradient descent. As can be seen, the results are the same as applying a Gaussian blur kernel to the image.

iso_cam.gif

The following shows the anisotropic diffusion output with Δt = 0.1, obtained with gradient descent, for a = 5, 20, 100 respectively. As can be seen, unlike isotropic diffusion, anisotropic diffusion preserves the edges. A sketch of the update step follows the outputs below.

aniso_cam_5

aniso_cam_20

aniso_cam_100
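A minimal explicit-update sketch of the diffusion is shown below, with a Perona-Malik-style conductance; the exact conductance function used in the figure above may differ, and the parameter names are mine. Setting g = 1 everywhere recovers the isotropic case (the heat equation).

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=100, dt=0.1, a=20.0):
    """Explicit update u <- u + dt * sum_d g(|d|) * d over the four neighbour
    differences d, with conductance g(s) = 1 / (1 + (s/a)^2), so diffusion is
    suppressed across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / a) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u    # difference to the north neighbour
        ds = np.roll(u, -1, axis=0) - u   # south
        de = np.roll(u, -1, axis=1) - u   # east
        dw = np.roll(u, 1, axis=1) - u    # west
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```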

Creating Cartoon / flat-texture Images with Anisotropic Diffusion

As can be seen from the following figures and animations, we can create cartoons from real images with anisotropic diffusion by applying the diffusion on each channel, this time on color images (the Barbara image and my image).

Original image

barbara

Cartoon Image with anisotropic diffusion (a=5)

aniso_barbara_020.0_5

aniso_bar_col_10

Original Image

me2

Cartoon Image with anisotropic diffusion (a=5)

aniso_me2_020.0_5

 

 

me_col_7.5

Total Variation Denoising

The following math is going to be used for TV denoising; the Euler-Lagrange equation is used to find the minimum of the functional, as shown in the following figures along with the derivation.

f14.pngf9

f10.png

  • First a noisy grayscale image is prepared by adding Gaussian noise to the cameraman image.

 

Original Cameraman
Cameraman256

Noisy Cameraman

noisy_cam

 

  • Let’s first denoise this image with linear TV denoising. The next animation shows the results obtained using the fidelity weight λ=1. As can be seen, even with the fidelity term, this model blurs the edges. tvld_cam.gif
  • Now let’s denoise this image with nonlinear TV denoising (a gradient-descent sketch follows this list). The next animations show the results obtained using the fidelity weights λ=0.01 and λ=1 respectively. tvn_cam_0.01.gif
    tvn_cam_1
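A gradient-descent sketch of the nonlinear TV denoising update is shown below; the descent direction is the curvature term div(grad u / |grad u|) minus the fidelity term λ(u − f). The wrap-around differences and the parameter defaults are my assumptions, and the exact discretization in my implementation may differ.

```python
import numpy as np

def tv_denoise(f, lam=1.0, dt=0.1, n_iter=200, eps=1e-6):
    """Gradient descent on the TV functional  int |grad u| + (lam/2)(u - f)^2."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u           # forward differences
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / norm, uy / norm
        # divergence of the normalised gradient (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += dt * (div - lam * (u - f))
    return u
```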

 

Image Inpainting

Inpainting is the process of restoring damaged or missing parts of an image. Suppose we have a binary mask D that specifies the location of the damaged pixels in the input image f as shown below:

f15.png

The following theory is going to be used for TV inpainting.

f16.png

Damaged image  

tampered_cmask1_Cameraman256

cam_in0

Damaged image  

tampered_cmask2_Cameraman256cam_in1

Damaged image  

tampered_text_Cameraman256cam_in2cam_in2

Damaged image
tampered_lena2

lena_in.gif

tv_inpaint_gd_tampered_lena2.png

Some Variational Image Processing: Poisson Image Editing and its applications in Python

Poisson Image Editing

The goal of Poisson image editing is to perform seamless blending of an object or a texture from a source image (captured by a mask image) into a target image. We want to create a photomontage by pasting an image region onto a new background using Poisson image editing. This idea is from Pérez et al.’s SIGGRAPH 2003 paper Poisson Image Editing.

The following figures describe the basic theory. As can be seen, the problem is first expressed in the continuous domain as a constrained variational optimization problem (Euler-Lagrange equation is used to find a solution) and then can be solved using a discrete Poisson solver.

f7.png

 

f8.png

 

As described in the paper, and also in this assignment from the MIT course on Computational Photography, the main task of Poisson image editing is to solve a huge sparse linear system Ax = b, where the unknowns are the pixel values of the new image I inside the mask, and the right-hand side is built from the known source image S and target image T. A sketch of this step follows.
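Below is a minimal sketch of setting up and solving this system for one channel with a sparse conjugate-gradient solver; it assumes the mask does not touch the image border, and the function name is mine.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

def poisson_blend_channel(src, tgt, mask):
    """Discrete Poisson solve on the masked pixels with conjugate gradient:
    Laplacian(x) = Laplacian(src) inside the mask, x = tgt on the boundary.
    src, tgt: 2D float arrays of the same shape; mask: boolean array that
    does not touch the image border."""
    ys, xs = np.where(mask)
    index = {(y, x): k for k, (y, x) in enumerate(zip(ys, xs))}
    n = len(ys)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4
        # right-hand side: Laplacian of the source at this pixel
        b[k] = 4 * src[y, x] - src[y - 1, x] - src[y + 1, x] \
                             - src[y, x - 1] - src[y, x + 1]
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (ny, nx) in index:
                A[k, index[(ny, nx)]] = -1
            else:                      # boundary neighbour: known target value
                b[k] += tgt[ny, nx]
    x_sol, _ = cg(A.tocsr(), b)
    out = tgt.astype(float).copy()
    out[ys, xs] = x_sol
    return out
```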

 

Seamless Cloning

The following images are taken from an assignment  from the same MIT course, where the Poisson image editing had to be used to blend the source inside the mask inside the target image. The next few figures show the result obtained.

Source Imagebear

Mask Imagemask

Target Imagewaterpool

Output Gray-Scale Image with Poisson Image Editing
pe_waterpool

The next animation shows the solution gray-scale images obtained at different iterations using Conjugate Gradient method when solving the linear system of equations.

peditmit1

Output Color Image with Poisson Image Editingpe_waterpool_color1

The next animation shows the solution color images obtained at different iterations using Conjugate Gradient method to solve the linear system of equations, applying the discrete Poisson solver on each channel.

pbear.gif

 

 

The following images are taken from this UNC course on computational photography. Again, the same technique is used to blend the octopus from the source image to the target image.

Source Imagesource

Mask Imagemask

Target Imagetarget

Output Imagepe_target_color

The next animation shows the solution color images obtained at different iterations using Conjugate Gradient method to solve the linear system of equations, applying the discrete Poisson solver on each channel.

poct.gif

 

Again, Poisson gradient-domain seamless cloning was used to blend the penguins’ image inside the following target image with an appropriate mask.

Source Image
peng1

Target Image                                                                                                            trekking

Output Imagepe_trekking.jpg

The next animation again shows the solution color images obtained at different iterations using Conjugate Gradient method to solve the linear system of equations, applying the discrete Poisson solver on each channel.

pe-trekk

 

The next figures show how a source bird image is blended into the target cloud image with seamless cloning.

Source Image
bird1

Target Imagecloud

Output gray-scale imagepe_cloud

The next animation shows the solution gray-scale images obtained at the first few iterations using Conjugate Gradient method when solving the linear system of equations.

pedit2

 

Finally, the next figures show how the idol of the Goddess Durga is blended into the target image of the city of Kolkata with seamless cloning. As can be seen, since the source mask had its own texture and there is a lot of variation in the background texture, the seamless cloning does not work that well.

Source Image
madurga

Target Image
kol

Output Image
pe_kol_color

The next animation shows the solution gray-scale images obtained at the first few iterations using Conjugate Gradient method when solving the linear system of equations.

pedit

The next animation again shows the solution color images obtained at different iterations using Conjugate Gradient method while solving the linear system of equations, applying the discrete Poisson solver on each channel.

pe-madurga

 

Feature Cloning: Inserting objects

The next figures are taken from the same paper; here the eyes, nose and lips from the source face image are going to be inserted into the target Mona Lisa face.

  Source image          Target image         Output image
face mona pe_mona

The next animation again shows the solution color images obtained at different iterations using Conjugate Gradient method while solving the linear system of equations, applying the discrete Poisson solver on each channel.

pe-mona

 

Texture Swapping: Feature exchange with Seamless Cloning

As discussed in the paper, seamless cloning allows the user to easily replace certain features of one object with alternative features. The next figure shows how the texture of the other fruit was transferred to the orange; the images are taken from the same paper.

Source Image        Target Image        Mask Image
sfruit   dfruit    fmask

Output Image
pe_dfruit

The next animation again shows the solution color images obtained at different iterations using Conjugate Gradient method while solving the linear system of equations, applying the discrete Poisson solver on each channel.

pfruit

 

Gradient Mixing: Inserting objects with holes

Again, the next figures are taken from the same paper; this time the source objects contain holes. As can be seen from the following results, seamless cloning does not work well for inserting an object with holes into the target: the target texture is lost inside the mask after blending.

Source Image                                            Target Image
srch            dsth

Output Image with Poisson Seamless Cloning
pe_tran1

The next animation again shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for seamless cloning, applying the discrete Poisson solver on each channel.

pe-tran1

Using the mixing-gradients method, the blending result obtained is far better, as shown below; it preserves the target texture.

Output Image with Poisson Mixing Gradients
pe_tran

The next animation again shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for mixing gradients, applying the discrete Poisson solver on each channel.

pe-tran

 

Mixing Gradients: Inserting transparent objects

The next example shows the insertion of a rainbow into a target image; the images are taken from the paper again. As can be seen, seamless cloning wrongly places the rainbow in front of the coconut tree in the target image. Using gradient mixing, the stronger of the source and target gradients is used as the guidance gradient, and this solves the issue (see the sketch after the outputs below).

 

    Source Image                              Target Image                         Mask Imagerainbow    sky1      rmask

Output Image with Seamless Cloning
pe_sky
Output Image with Mixing Gradients
pe_rainbow
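A sketch of how the guidance field can be built for mixing gradients: per pixel and per axis, keep whichever of the source or target gradient has the larger magnitude. Forward differences and the function name are my assumptions.

```python
import numpy as np

def mixed_guidance(src, tgt):
    """Mixing gradients: at every pixel and for each axis, keep whichever of the
    source or target gradient is stronger; the result is used as the guidance
    field v in the Poisson solve."""
    gx_s = np.roll(src, -1, axis=1) - src
    gy_s = np.roll(src, -1, axis=0) - src
    gx_t = np.roll(tgt, -1, axis=1) - tgt
    gy_t = np.roll(tgt, -1, axis=0) - tgt
    gx = np.where(np.abs(gx_s) >= np.abs(gx_t), gx_s, gx_t)
    gy = np.where(np.abs(gy_s) >= np.abs(gy_t), gy_s, gy_t)
    return gx, gy
```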

The next animation again shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for mixing gradients, applying the discrete Poisson solver on each channel.

pe-rainbow

 

The next few figures show the results obtained using mixing gradients on another set of images; seamless cloning does not work well in this case, but mixing gradients works just fine.

 

Source Image
liberty1

Target Image
vic

Output Image with mixing gradients
pe_vic

The next animation again shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for mixing gradients, applying the discrete Poisson solver on each channel.

pe-vic

Texture Flattening: Creating Flat-texture Cartoon-like images

As illustrated in the paper, by retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect.

The following figures show the output cartoon-like image obtained using texture flattening, using the Canny edge detector to generate the mask.

Source Image
pe_face1

Mask Image created with Canny edge detector
mface1

Output cartoon obtained with texture flattening from the source with the mask
pe_cart1

The next animation shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for texture flattening, applying the discrete Poisson solver on each channel.

pe-cart1

 

Again, the next figures show the output cartoon-like image obtained using texture flattening on my image, using the Canny edge detector to generate the mask.

Source Image
me2

Output image obtained with texture flattening
pe-cart-me

The next animation again shows the solution color images obtained at the first few iterations using Conjugate Gradient method while solving the linear system of equations for texture flattening, applying the discrete Poisson solver on each channel.

pe-cart-me