A Fake Convolutional Neural Network

This is an early version of my CNN. At the time, I incorrectly thought I could just use some randomly chosen Gabor filters to do the convolution, so I wrote this. The test results are actually not bad on simple datasets such as MNIST. I think of it as a fake CNN, but still a nice deep network: it convolves the images with randomly chosen Gabor filters, applies pooling, and then trains a regular deep network on the result. The convolution and pooling stages can be seen as a kind of pre-processing.

ARCHITECTURE

  • Generate a Gabor filter bank.
  • Randomly choose 8 filters and convolve them with the training data.
  • 2 × 2 pooling (sketched below).
  • Randomly choose 4 filters and convolve again.
  • 2 × 2 pooling.
  • Feed the result into a regular deep network.
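
The pooling step is not included in the source excerpt further down, so here is a minimal sketch of what the 2 × 2 pooling pass might look like (mean pooling is my assumption; max pooling would work the same way, and the images are assumed to be CV_64FC1):

#include <opencv2/opencv.hpp>
using namespace cv;

// A minimal 2x2 mean-pooling sketch (assumption: mean pooling;
// swap the inner reduction for max pooling if preferred).
Mat
pool2x2(const Mat &img)
{
    // Output is half the size in each dimension (odd rows/cols are truncated).
    Mat pooled(img.rows / 2, img.cols / 2, CV_64FC1);
    for (int y = 0; y < pooled.rows; y++) {
        for (int x = 0; x < pooled.cols; x++) {
            // Average the 2x2 block whose top-left corner is (2y, 2x).
            pooled.at<double>(y, x) = (img.at<double>(2 * y, 2 * x) +
                                       img.at<double>(2 * y, 2 * x + 1) +
                                       img.at<double>(2 * y + 1, 2 * x) +
                                       img.at<double>(2 * y + 1, 2 * x + 1)) / 4.0;
        }
    }
    return pooled;
}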

The filter bank I generated is something like this:

[Figure: samples from the generated Gabor filter bank]

For the regular deep network, I tried two different types:

  1. 2-layer Sparse Autoencoder + Softmax
  2. 2-layer fully connected + Softmax

and they both work fine.
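
Before either network sees the data, the pooled feature maps from the convolution stages have to be flattened into one feature vector per image. A hypothetical helper for that step (the name and column layout are my assumptions, not from the original code):

// Flatten a vector of pooled feature maps into a single column vector
// so it can be fed to the fully connected / autoencoder layers.
Mat
flattenMaps(const vector<Mat> &maps)
{
    Mat features;
    for (size_t i = 0; i < maps.size(); i++) {
        // reshape(1, rows*cols) turns each map into a (rows*cols) x 1 column.
        Mat col = maps[i].clone().reshape(1, maps[i].rows * maps[i].cols);
        features.push_back(col);
    }
    return features; // (total number of pixels) x 1, CV_64FC1
}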

PART OF THE SOURCE CODE

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

// Shorthand for element access on a CV_64FC1 matrix.
#define ATD at<double>

// A Gabor kernel maker
// from https://github.com/dominiklessel/opencv-gabor-filter
// ksize: kernel size
// sigma: standard deviation of the Gaussian envelope
// lambda: wavelength of the sinusoidal factor
// theta: orientation of the normal to the parallel stripes of a Gabor function
// psi: phase offset
Mat
mkGaborKernel(int ksize, double sig, double theta, double lm, double ps)
{
    int hks = (ksize - 1) / 2;
    double omega = theta * CV_PI / 180;
    double psi = ps * CV_PI / 180;
    double del = 2.0 / (ksize - 1);
    double lambda = lm;
    double sigma = sig / ksize;
    double x_omega, y_omega;
    cv::Mat kernel(ksize, ksize, CV_64FC1);
    for (int y= -hks; y<=hks; y++){
        for (int x= -hks; x<=hks; x++){
            // Rotation 
            x_omega = x * del * cos(omega) + y * del * sin(omega);
            y_omega = -x * del * sin(omega) + y * del * cos(omega);
            //only real part of gabor filter
            kernel.ATD(hks + y, hks + x) = (double)exp(-0.5 * (pow(x_omega, 2) + 
                                                 pow(y_omega, 2)) / pow(sigma, 2)) * 
                                                 cos(2 * CV_PI * x_omega / lambda + psi);
        }
    }
    return kernel;
}
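
As a quick sanity check, a single kernel can be rendered on screen like this (the parameter values here are arbitrary examples, not the ones used in the experiments):

int main()
{
    // Build one 21x21 Gabor kernel and display it.
    Mat kernel = mkGaborKernel(21, 2.0, 45.0, 1.3, 90.0);
    Mat vis;
    normalize(kernel, vis, 0.0, 1.0, NORM_MINMAX); // rescale to [0, 1] for imshow
    imshow("gabor kernel", vis);
    waitKey(0);
    return 0;
}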

// Build a bank of Gabor kernels with random orientation and phase.
// (Call srand() once beforehand if a different bank per run is wanted.)
void
mkFilterBank(vector<Mat> &filterBank, int ksize, int bankSize){

    double sigma = 2;
    double lambda = 0.8 + 50/100.0; // fixed wavelength of 1.3
    for(int i=0; i<bankSize; i++){
        int ranTheta = rand() % 180; // random orientation in [0, 180) degrees
        int ranPsi = rand() % 180;   // random phase offset in [0, 180) degrees
        Mat kernel = mkGaborKernel(ksize, sigma, ranTheta, lambda, ranPsi);
        filterBank.push_back(kernel);
    }
}

// A Matlab/Octave style 2-d convolution function.
// from http://blog.timmlinder.com/2011/07/opencv-equivalent-to-matlabs-conv2-function/
Mat 
conv2(Mat &img, Mat& kernel, int convtype) {
    Mat dest;
    Mat source = img;
    if(CONV_FULL == convtype) {
        source = Mat();
        int additionalRows = kernel.rows-1, additionalCols = kernel.cols-1;
        copyMakeBorder(img, source, (additionalRows+1)/2, additionalRows/2, (additionalCols+1)/2, additionalCols/2, BORDER_CONSTANT, Scalar(0));
    }

    Point anchor(kernel.cols - kernel.cols/2 - 1, kernel.rows - kernel.rows/2 - 1);
    int borderMode = BORDER_CONSTANT;
    Mat fkernel;
    flip(kernel, fkernel, -1); // flip both axes so filter2D performs true convolution
    filter2D(source, dest, img.depth(), fkernel, anchor, 0, borderMode);

    if(CONV_VALID == convtype) {
        dest = dest.colRange((kernel.cols-1)/2, dest.cols - kernel.cols/2)
                   .rowRange((kernel.rows-1)/2, dest.rows - kernel.rows/2);
    }
    return dest;
}
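
The CONV_FULL / CONV_SAME / CONV_VALID constants are defined elsewhere in the source, so the values below are guesses. With them in place, the whole preprocessing pipeline from the architecture list can be sketched end to end. preprocess() is hypothetical and reuses the pool2x2() sketch from earlier; stage1 and stage2 hold the 8 and 4 randomly chosen kernels:

// Assumed definitions for the Matlab-style convolution modes used by conv2().
#define CONV_FULL  0   // output larger than the input
#define CONV_SAME  1   // output the same size as the input
#define CONV_VALID 2   // only positions where the kernel fully overlaps the image

// Hypothetical end-to-end preprocessing pass following the architecture list.
// Choosing the random filters once and reusing them for every image keeps
// the resulting features consistent across the dataset. Images are CV_64FC1.
vector<Mat>
preprocess(Mat &img, vector<Mat> &stage1, vector<Mat> &stage2)
{
    vector<Mat> layer1;
    for (size_t i = 0; i < stage1.size(); i++) {
        // First stage: convolve with each chosen filter, then 2x2 pooling.
        Mat conv = conv2(img, stage1[i], CONV_VALID);
        layer1.push_back(pool2x2(conv));
    }
    vector<Mat> layer2;
    for (size_t j = 0; j < layer1.size(); j++) {
        for (size_t i = 0; i < stage2.size(); i++) {
            // Second stage: 4 more filters per map, then 2x2 pooling again.
            Mat conv = conv2(layer1[j], stage2[i], CONV_VALID);
            layer2.push_back(pool2x2(conv));
        }
    }
    return layer2; // 8 * 4 = 32 small feature maps per image
}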

TEST RESULTS

[Figure 1: test results on MNIST]

For simple datasets like MNIST it performs well, and it trains faster than a real ConvNet, since the filters themselves are never learned.

The REAL CONVNET is HERE

Enjoy it 🙂


3 Comments

  1. rahul
    Posted June 26, 2015 at 10:42 am

    Can't we just use hard-coded Gabor filters to do the convolution? This video suggests that it is okay to use Gabor filters instead of training your own filters.

    What is wrong with using Gabor filters?

  2. rahul
    Posted June 26, 2015 at 10:49 am
  3. Jim
    Posted December 4, 2015 at 7:48 pm

    Thank you for your great work! It is very impressive.
    But I was wondering why your accuracy is lower than the 99% reported by Yann LeCun. Is it just because you did not think it was necessary to keep fine-tuning your code? Or is your algorithm essentially different? (If so, there must be some benefit, e.g. lower cost or shorter runtime.)

One Trackback

  • […] Gabor filters (you can see it in the following content). I mis-understood this point last week, so my first version of CNN generates a Gabor filter bank (about 200 random Gabor filters), and randomly choose several to […]
