The following problems appeared in a project in the edX course **236862.1x: Introduction to the Fundamentals of Sparse Representations** by **Prof. Michael Elad** from **The Technion – Israel Institute of Technology**. The descriptions of the problems are taken directly from the project.

## Pursuit Algorithms

In this article we demonstrate the **Orthogonal Matching Pursuit** (**OMP**) and **Basis Pursuit** (**BP**) algorithms by running them on a set of test signals and checking whether they provide the desired outcome for the **P0** problem.

**Some Theory**

Our goal is to solve the following problem:

$$(P_0):\quad \min_{x} \|x\|_0 \;\;\text{subject to}\;\; Ax = b,$$

which is also known as the **P0** problem (since the objective function to be minimized is the *L0 norm* of the vector *x*): a problem that seeks the sparsest solution to a linear system. The concept of seeking the sparsest representation for data is of central importance to data processing in a universal way (e.g., in machine learning, image processing, computer vision, remote sensing), although the problem is *NP-hard* in general. To solve it in practice, an **approximation algorithm** must be used, based on either a **greedy** or a **relaxation** approach, as shown in the next figure.

In this article we describe the implementation of the above two algorithms, along with experimental results on approximately solving the **P0** problem on a synthetic signal dataset. The following figure describes how the experiments are to be done:

**Data Construction**

In this part we create a dictionary and synthetic signals that will serve us in the experiments. Here are the ingredients involved:

- The matrix **A** is the "**dictionary**" (of size 50-by-100) under which the *ideal signal* is known to be *sparse*. We first construct this dictionary by drawing at *random* a matrix with *i.i.d. Gaussian entries*, and then normalize each atom (column) to a *unit L2 norm*.

- Then we generate a sparse vector *x0* of **cardinality** *s*. We draw at random the locations of the non-zero entries, then draw the value of each entry from a uniform distribution on the range [1, 3] and multiply each by a random sign.
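The two steps above can be sketched in NumPy as follows (a minimal sketch; the seed and the cardinality `s = 4` are arbitrary choices for illustration, not values from the project):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 50, 100, 4  # signal dimension, number of atoms, cardinality (s is illustrative)

# Dictionary with i.i.d. Gaussian entries; each atom (column) normalized to unit L2 norm
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# Sparse vector x0: random support, magnitudes uniform in [1, 3], random signs
x0 = np.zeros(m)
support = rng.choice(m, size=s, replace=False)
x0[support] = rng.uniform(1, 3, size=s) * rng.choice([-1.0, 1.0], size=s)

b = A @ x0  # the synthetic test signal
```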

The following figure shows the *heatmap* of the dictionary **A** randomly generated.

**Greedy Pursuit**

The greedy algorithm **OMP** is described in the following figure:
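The procedure in the figure can be sketched as follows (a minimal NumPy sketch, assuming unit-norm atoms and a fixed target cardinality `k` as the stopping criterion):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the current residual, then re-fit the coefficients by least squares."""
    m = A.shape[1]
    x = np.zeros(m)
    support = []
    residual = b.copy()
    for _ in range(k):
        # Correlation of each (unit-norm) atom with the residual
        corr = A.T @ residual
        support.append(int(np.argmax(np.abs(corr))))
        # Orthogonal projection: least-squares fit on the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x
```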

**Basis Pursuit**

Even though the **P0** problem is **relaxed** to **P1**, it is still not ready to be solved by an LP solver, since the objective function is still *not linear*. As explained in this thesis, the **P1** problem can be converted to an LP by introducing a set of new variables and using the trick shown in the following figure.
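The trick is the standard variable split (sketched here; presumably what the figure shows): write *x* as the difference of two non-negative vectors,

$$x = u - v,\qquad u, v \ge 0,\qquad \|x\|_1 = \mathbf{1}^T(u + v),$$

so that **P1** becomes the linear program

$$\min_{u,\,v \ge 0}\; \mathbf{1}^T(u + v) \;\;\text{subject to}\;\; A(u - v) = b.$$

At the optimum, at most one of $u_i, v_i$ is non-zero for each $i$, so $u_i + v_i = |x_i|$ and the LP objective equals the L1 norm.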

A MATLAB/Octave function implementing the **BP** algorithm using **LP** is shown below:
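For readers without Octave, here is a rough Python equivalent of the same LP formulation (a sketch, not the original listing), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve (P1): min ||x||_1 s.t. Ax = b, as a linear program.
    Split x = u - v with u, v >= 0, so that ||x||_1 = 1^T (u + v)."""
    n, m = A.shape
    c = np.ones(2 * m)                # objective: sum of u entries plus v entries
    A_eq = np.hstack([A, -A])         # equality constraint A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v
```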

**Analyzing the Results**

In this part we compare the results obtained via **OMP** and **BP**, by executing the following steps.

- Plot the average relative ℓ2 error obtained by OMP and BP versus the cardinality.
- Plot the average support recovery error obtained by OMP and BP versus the cardinality.
- Discuss the presented graphs and explain the results.
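The two error measures used in these plots can be sketched as follows (assuming the support error is the normalized set difference used in the course project; the `tol` threshold is an assumption for deciding which entries count as non-zero):

```python
import numpy as np

def relative_l2_error(x_hat, x0):
    # ||x_hat - x0||_2 / ||x0||_2
    return np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)

def support_error(x_hat, x0, tol=1e-6):
    """Distance between estimated and true supports:
    1 - |S_hat ∩ S0| / max(|S_hat|, |S0|); 0 means perfect recovery."""
    s_hat = set(np.flatnonzero(np.abs(x_hat) > tol))
    s0 = set(np.flatnonzero(np.abs(x0) > tol))
    return 1.0 - len(s_hat & s0) / max(len(s_hat), len(s0))
```

Averaging these quantities over many random realizations, for each cardinality, gives the curves discussed below.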

**Discussion**

As can be seen from the above results,

- The **Linear Programming**-based **relaxed** implementation of **Basis Pursuit** has higher accuracy (in terms of both the **L2** and **support-probability errors**) than the **greedy** **Orthogonal Matching Pursuit**, particularly once the cardinality of the true solution increases beyond 6.

- Both **OMP** and **BP** (**LP**) perform equally well (with almost *zero* **L2 error**) up to cardinality 6, after which OMP clearly performs worse than BP (LP).

- The average **L2 error** for **OMP** increases up to **0.2** as the cardinality of the true solution grows to **15**, whereas that of BP increases only very *slightly*.

- A similar pattern can be seen for the **probability of error in support**.

## Mutual Coherence of A and theoretical guarantees for OMP and BP

Both **OMP** and **BP** are guaranteed to recover the solution of **P0** whenever the cardinality of the true solution satisfies

$$\|x\|_0 \;<\; \frac{1}{2}\left(1 + \frac{1}{\mu(A)}\right),$$

where the mutual coherence of matrix A (with unit-norm atoms $a_1, \dots, a_m$) is defined as follows:

$$\mu(A) \;=\; \max_{1 \le i < j \le m} \left| a_i^T a_j \right|.$$

For our experiment, the **mutual coherence** is **μ(A) = 0.45697**. Hence, as per the *theoretical guarantee*, **OMP** and **BP** are guaranteed to find the solution of **P0** for **s < 1.594164**. In practice, however, we observe that the solution found is quite accurate up to *s = 6*, since the theoretical bound shown above holds for the **worst case** only.
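The coherence and the resulting bound can be computed directly from the dictionary (a small sketch, assuming the atoms are the columns of `A`):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns of A."""
    A_norm = A / np.linalg.norm(A, axis=0)
    G = np.abs(A_norm.T @ A_norm)     # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)          # ignore self-correlations
    return G.max()

# Toy example: atoms (1,0), (0,1), (1,1)/sqrt(2); coherence is 1/sqrt(2)
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
mu = mutual_coherence(A)
bound = 0.5 * (1.0 + 1.0 / mu)        # guaranteed-recovery cardinality bound
```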