Poisson Blending II

I rewrote the Poisson Blending code in C++ with OpenCV.

For details of the algorithm, see my previous Poisson Blending post.

This time I just used the most straightforward way: solving the Poisson equation directly. You can improve it with more advanced methods; for solving the discrete Poisson equation with Jacobi, SOR, Conjugate Gradients, or FFT, read THIS

In this code, I used two different ways to compute the guidance vector (they are actually equivalent). The first computes the gradient by convolution with a Laplacian kernel; the second computes the gradient directly. Use whichever one you like. Continue reading “Poisson Blending II” »

Posted in Algorithm, Maths, OpenCV | Tagged , , , , | 16 Responses

Softmax Regression (with OpenCV)

This is the same algorithm as in the previous SOFTMAX REGRESSION post. Because I’m going to try to build deeper neural networks for images, I rewrote the Softmax regression code using OpenCV’s Mat instead of Armadillo, as a review of OpenCV programming.

I’ve been using Matlab, Octave, and Armadillo a lot these days, so it’s kind of hard to adapt to the OpenCV style. For example:

// in Armadillo, doing this is super easy.
temp = repmat(sum(M, 0), nclasses, 1);

// but in OpenCV...
temp = Mat::zeros(1, M.cols, CV_64FC1);
reduce(M, temp, 0, CV_REDUCE_SUM);
temp2 = repeat(temp, nclasses, 1);

Continue reading “Softmax Regression (with OpenCV)” »

Posted in Algorithm, Machine Learning, OpenCV | Tagged , , , , | 3 Responses

[UFLDL Exercise] Implement deep networks for digit classification

I’m learning Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is the 6th exercise, which combines the sparse autoencoder, Softmax regression, and fine-tuning algorithms. It builds a network with two sparse-autoencoder hidden layers and a one-layer Softmax regression output. We first train this network layer by layer, from left to right, then use fine-tuning to back-propagate through the weights and improve the whole network. I won’t go through the details of the material; more about this exercise can be found HERE.

BTW, I got a cool idea after finishing this exercise; maybe I’ll implement it in the next few days. Continue reading “[UFLDL Exercise] Implement deep networks for digit classification” »

Posted in Algorithm, Machine Learning | Tagged , , , , , , , | 3 Responses

C++ code for reading MNIST dataset

Here’s some code for reading the MNIST dataset in C++. The dataset can be found HERE, along with a description of the file format.

Using this code, you can read the MNIST dataset into a double vector, an OpenCV Mat, or an Armadillo mat.

Feel free to use it for any purpose. (Part of this code is borrowed from HERE.) Continue reading “C++ code for reading MNIST dataset” »

Posted in Machine Learning, OpenCV | Tagged , , , , | 5 Responses

[UFLDL Exercise] Self-Taught Learning

I’m learning Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is the 5th exercise, which combines the sparse autoencoder and Softmax regression algorithms. It uses the features trained by a sparse autoencoder as the training input of Softmax regression, and builds a classifier with higher accuracy than regular Softmax regression. I won’t go through the details of the material; more about this exercise can be found HERE.

When debugging the code, you can decrease the size of the training set with:

mnistData = mnistData(:, 1:50);
mnistLabels = mnistLabels(1:50);
maxIter = 10;

Continue reading “[UFLDL Exercise] Self-Taught Learning” »

Posted in Algorithm, Machine Learning | Tagged , , , , | 2 Responses

Softmax Regression

WHAT IS SOFTMAX

Softmax regression usually appears as one module in a deep learning network, most likely the last one: the output module. What is it? It is a generalized version of logistic regression. Just like logistic regression, it belongs to supervised learning; its advantage is that the class label y can take more than two values. In other words, compared to logistic regression, the number of classes is not restricted to two. Someone may recall that we can also use the “1 VS others” method to deal with multi-class problems; let’s hold that question for a while and look at the formulae of softmax regression. Continue reading “Softmax Regression” »

Posted in Algorithm, Machine Learning | Tagged , , , , | 1 Response

Seminar this week

I attended a seminar this Monday. The first part, by Prof. Michael Jordan from UC Berkeley, was about statistics and big data; to be honest, I had no idea what he was talking about and barely understood it. However, the second part, by Prof. Rob Fergus on deep learning for computer vision, was closer to my scope of knowledge. It inspired me, and I’m thinking maybe I can try to implement a deep learning network by myself. I know that most deep learning systems use GPUs instead of CPUs for large-scale computation, but that is beyond my scope for now. I can start by implementing the simple parts, using regular methods and languages, and after my system works on some simple datasets (MNIST, for example), I’ll learn GPU programming and try to use the GPU instead.

That will be a long story; for now, let’s focus on every small piece of knowledge.

I can do it!

Do your best! (頑張れ!)


Posted in Twaddle | Tagged , | Leave a comment

[UFLDL Exercise] Softmax Regression

I’m learning Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is the 4th exercise, which uses Softmax regression to build a classifier for MNIST handwritten digits. Just like my other UFLDL exercise posts, I won’t go through the details of the material; more about this exercise can be found HERE.

I’ll re-implement the Softmax regression algorithm in C++ this weekend (if I can finish my homework), and I’ll write a post about Softmax. Continue reading “[UFLDL Exercise] Softmax Regression” »

Posted in Algorithm, Machine Learning | Tagged , , , , | 4 Responses

[UFLDL Exercise] PCA and Whitening

I’m learning Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is part 2 of the 3rd exercise, which applies the PCA algorithm to a natural image dataset. Just like my other UFLDL exercise posts, I won’t go through the details of the material; more about this exercise can be found HERE. Continue reading “[UFLDL Exercise] PCA and Whitening” »

Posted in Algorithm, Machine Learning | Tagged , , , , , | 2 Responses

[UFLDL Exercise] PCA in 2D

I’m learning Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is part 1 of the 3rd exercise, which applies the PCA algorithm to a simple 2D dataset. Just like my other UFLDL exercise posts, I won’t go through the details of the material; more about this exercise can be found HERE. Continue reading “[UFLDL Exercise] PCA in 2D” »

Posted in Algorithm, Machine Learning, Uncategorized | Tagged , , , , | Leave a comment