I majored in Communications Engineering during my undergraduate years and took Signals and Systems and Digital Signal Processing, so I thought I was quite familiar with things like convolution and the Fourier transform. However, I recently realized that what I knew was only superficial. I'm attending Prof. Eero Simoncelli's "Representation and Analysis of Visual Images" course this semester, and it has broadened my understanding of these topics.

Convolution is a mathematical operation on two signals whose output is a third signal: one input signal is translated while the other stays fixed, and the output at each position is computed from the overlap between the two signals. It is a toolbox rather than a single mathematical formula, a toolbox that makes lots of ideas work.

### ABOUT BOUNDARIES

What always bothers people is what values to use at the boundaries of a convolution. See this example:

This is a 1-D convolution; say a has 5 elements and b has 3. If we use SHAPE="full" in Matlab/Octave, we get a 7-element vector, and with SHAPE="same" we get a 5-element vector. For the "full" case, the same result can be obtained by doing this:

The length of output vector is 7, which is :

#### Length (A) + Length (B) – 1
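As a quick sanity check of these sizes, here is the same experiment in NumPy (a hypothetical 5-element a and 3-element b, not the ones from the figure):

```python
import numpy as np

a = np.array([1., 2., 3., 4., 5.])   # hypothetical 5-element signal
b = np.array([1., 1., 1.])           # hypothetical 3-element kernel

full = np.convolve(a, b, mode='full')
same = np.convolve(a, b, mode='same')

print(len(full))   # 7 = len(a) + len(b) - 1
print(len(same))   # 5 = len(a)
```

The "same" output is just the centered slice of the "full" output.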

In some cases, changing the size of the vector (or matrix, in 2-D or higher dimensions) is the last thing we want to do, so SHAPE="same" seems to be a good choice. For the "same" case, we can get the result by:

The output vector's length is 5, so far so good. However, as we can see, in the first and fifth rows (counting from 1) we lose part of the convolution kernel, which means we are assuming that a is 0 before index 1 and after index 5. In the digital world this seems fine, but in the real world such an abrupt change in the signal is strange. Imagine processing an image: after this kind of operation, the output image will have weird edge artifacts.

So, what can we do?

One way to improve this is to make an assumption about what the signal looks like outside the observed interval. Here are several simple hypotheses:

- Figure_1 is the original signal, whose values we know from t1 to t2.
- Figure_2 is what the regular algorithm does: assume the signal is 0 beyond t1 and t2.
- Figure_3 assumes the function continues with the values it has at t1 and t2; we also call this the "constant" solution.
- Figure_4 is called the "circular" solution: we assume the signal is periodic with period t2 – t1.
- Figure_5 is also periodic, but the period becomes 2 * (t2 – t1); it is like placing mirrors at t1 and t2 and reflecting the signal (for example, we assume f(t2 + 1) equals f(t2 – 1), f(t2 + 2) equals f(t2 – 2), and so on).
- Figure_6 is the advanced version of Figure_5: we mirror both vertically and horizontally, which means we assume f(t2 + 1) – f(t2) = f(t2) – f(t2 – 1).
- Figure_7 computes the derivative of the signal at t1 and t2 and extends the signal outward with those slopes.
- Figure_8 is a little trickier: we let the function decay smoothly to some constant.

Each of these solutions has its own advantages: some preserve the derivative of the signal at the boundary, and some preserve its value at t1 and t2. Several of them work better than plain zero-padding does, but in practice none is entirely satisfactory, and many people are still working on this problem.
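Several of these hypotheses are available off the shelf as padding modes. Here is a sketch in NumPy with a hypothetical 5-sample signal (the mode names are NumPy's, not the figure names):

```python
import numpy as np

x = np.array([1., 4., 2., 5., 3.])  # hypothetical signal, known from t1 to t2

zeros    = np.pad(x, 2, mode='constant')                     # Figure_2: zero outside
constant = np.pad(x, 2, mode='edge')                         # Figure_3: hold the edge value
circular = np.pad(x, 2, mode='wrap')                         # Figure_4: periodic
mirror   = np.pad(x, 2, mode='reflect')                      # Figure_5: mirror about the edge sample
odd      = np.pad(x, 2, mode='reflect', reflect_type='odd')  # Figure_6: mirror both axes
```

Convolving the padded signal with mode='valid' then yields a "same"-sized output that uses the chosen boundary hypothesis instead of zeros.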

To implement the above methods, altering the signal itself is cumbersome, but altering the kernel near the boundary is easy. I'll use Figure_5 above as an example:

If the original kernel is something like:

Using kernel2 instead of kernel1, the convolution behaves as if it were applied to the mirrored signal.
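Since the kernel1/kernel2 figures aren't reproduced here, here is a minimal 1-D sketch of the same trick in NumPy/SciPy, with a made-up signal and 3-tap kernel: at the two boundary rows, the tap that falls off the edge is folded back onto its mirror-image position, which matches SciPy's 'mirror' boundary mode (the Figure_5 assumption f(t2 + 1) = f(t2 – 1)):

```python
import numpy as np
from scipy.ndimage import convolve1d

x = np.array([4., 1., 3., 2., 5.])   # hypothetical signal
k = np.array([1., 2., 3.])           # hypothetical 3-tap kernel

# Reference: scipy's 'mirror' mode reflects about the edge sample
# (x[-1] == x[1], x[5] == x[3]), i.e. the Figure_5 hypothesis.
ref = convolve1d(x, k, mode='mirror')

# Same result with plain zero padding plus a modified kernel at the
# two boundary rows: the out-of-range tap is folded back onto its
# mirror-image position.
out = np.convolve(x, k, mode='same')              # zero-padded everywhere
out[0]  = k[1] * x[0] + (k[0] + k[2]) * x[1]      # folded boundary row
out[-1] = k[1] * x[-1] + (k[0] + k[2]) * x[-2]    # folded boundary row

assert np.allclose(out, ref)
```

Only the boundary rows of the (implicit) convolution matrix change; the interior rows keep the original kernel.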

### ABOUT SVD

Before talking about SVD, I want to talk about Separable Convolution.

The first question is: what does separable mean? A 2-D kernel is separable if it can be expressed as the outer product of two vectors. For example, if we have two vectors **a** and **b**,

The Sobel kernel is the outer product of vector a and vector b, so we can say the Sobel kernel is separable.
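For instance, in NumPy, taking **a** as the smoothing column [1, 2, 1] and **b** as the differencing row [-1, 0, 1] (the usual Sobel factorization):

```python
import numpy as np

a = np.array([1., 2., 1.])   # smoothing vector
b = np.array([-1., 0., 1.])  # differencing vector

sobel = np.outer(a, b)       # outer product a * b'
# sobel is the 3x3 Sobel kernel:
# [[-1.  0.  1.]
#  [-2.  0.  2.]
#  [-1.  0.  1.]]
```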

The second question is: why do we want separable kernels? The answer is that with a separable kernel, convolution is cheaper. See this example:

```matlab
x1 = randn(3, 1);
x2 = randn(1, 3);
kernel = x1 * x2;
signal = randn(5, 5);
res1 = conv2(signal, kernel, 'same')
temp = conv2(signal, x1, 'same');
res2 = conv2(temp, x2, 'same')
```

I used two random vectors and generated a separable kernel from them. res1 is the 2-D convolution of the signal with the kernel; res2 is the 2-D convolution of the signal with the first vector, followed by the second. As the results show, the two methods produce exactly the same output:

But computing the output the second way is cheaper than the first. Instead of a 3×3 kernel and a 5×5 matrix, let's use a 100×100 kernel and a 1,000×1,000 matrix and compare the running times of the two approaches.

See the difference? Really cool, right?
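A rough back-of-the-envelope count explains the gap (sizes taken from the text above; the constant factors are approximate):

```python
# "same" 2-D convolution of an N x N image with a k x k kernel:
N, k = 1000, 100

direct    = N * N * k * k   # one k*k dot product per output pixel
separable = N * N * 2 * k   # one k-tap row pass plus one k-tap column pass

speedup = direct / separable   # = k / 2
print(speedup)                 # 50.0
```

So the larger the kernel, the bigger the win from separability.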

Now let's move on to SVD. SVD stands for **Singular Value Decomposition**, a very useful tool in many fields. Briefly, it decomposes any matrix into three parts: a rotation matrix, a scaling matrix, and another rotation matrix (more precisely, two orthogonal matrices and a diagonal matrix of singular values).

As the paragraphs above show, we can speed up 2-D convolution using a separable kernel, and SVD is exactly the way to find, or to approximate, the two vectors of a matrix: a kernel is separable precisely when it has only one nonzero singular value. Let's look at the following example, using the Sobel kernel again.

```matlab
sobel = [-1 0 1; -2 0 2; -1 0 1];
[U, S, V] = svd(sobel);
a = U(:, 1) * sqrt(S(1, 1));
b = V(:, 1)' * sqrt(S(1, 1));
a * b
```

And the result is:

So if we want to convolve with the Sobel kernel, we can instead convolve with a and then with b, and for large kernels this is faster than using the full kernel!
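The same decomposition can be sketched in NumPy (note that `np.linalg.svd` returns Vᵀ rather than V, so the row factor is read off directly):

```python
import numpy as np

sobel = np.array([[-1., 0., 1.],
                  [-2., 0., 2.],
                  [-1., 0., 1.]])

U, s, Vt = np.linalg.svd(sobel)

# A separable (rank-1) kernel has a single nonzero singular value.
assert np.allclose(s[1:], 0.)

a = U[:, 0] * np.sqrt(s[0])    # column-vector factor
b = Vt[0, :] * np.sqrt(s[0])   # row-vector factor

# The outer product a * b reconstructs the kernel exactly.
assert np.allclose(np.outer(a, b), sobel)
```

For a kernel that is only approximately separable, keeping just the largest singular value gives the best rank-1 (separable) approximation.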

Writing this article deepened my own understanding of these topics; I hope it can help others learn them as well. Happy Presidents' Day!

🙂