This is a quick guide for people who have newly joined the Machine Reading Comprehension army. Here I’ll give you some advice about which dataset to start with. ABOUT MRC Teaching machines to read is a non-negligible part of ‘True AI’. People have been making progress since the renaissance of deep learning; however, we’re not even close, and the state-of-the-art models […]
NAMED ENTITY RECOGNITION In Natural Language Processing, named-entity recognition is an information extraction task that seeks to locate and classify elements in text into pre-defined categories. The following figure is stolen from the Maluuba website; it perfectly demonstrates what NER does.
INTRODUCTION Sparse coding is one of the most famous unsupervised methods of this decade. It is a dictionary learning process, whose goal is to find a dictionary such that we can use a linear combination of vectors in this dictionary to represent any training input vector. To better capture the structures and patterns inherent in the input vectors, […]
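As a sketch of the dictionary-learning goal described above (using standard notation, which may differ from what the full post uses: $x^{(i)}$ are the training vectors, $D$ the dictionary, $\alpha^{(i)}$ the sparse codes, and $\lambda$ a sparsity weight), the sparse coding objective is commonly written as:

```latex
\min_{D,\;\alpha^{(i)}} \;\sum_{i=1}^{m} \left\| x^{(i)} - D\alpha^{(i)} \right\|_2^2 \;+\; \lambda \sum_{i=1}^{m} \left\| \alpha^{(i)} \right\|_1
```

The first term asks the reconstruction $D\alpha^{(i)}$ to match each input; the $\ell_1$ penalty pushes most entries of each code to zero, which is what makes the representation sparse.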
I chose “Dropped-out auto-encoder” as my final project topic in last semester’s deep learning course. The idea was simply to drop out units in a regular sparse auto-encoder, and furthermore in a stacked sparse auto-encoder, in both the visible layer and the hidden layer. It does not work well on auto-encoders, except that it can be used in the fine-tuning process of stacked sparse […]
Since the last CNN post, I have been working on a new version of CNN that supports multi-layer Conv and Pooling processes, and I’d like to share some experience here. VECTOR VS HASH TABLE As you can see in the last post, I used a vector of Mat in the convolution steps; it works well when we only have one […]
WHAT IS CNN A Convolutional Neural Network (CNN) is composed of one or more convolutional layers and pooling layers, followed by one or more fully connected layers as in a standard neural network. The architecture of a CNN is designed to take advantage of the 2D structure of an input image (or other 2D input […]
This is an early version of my CNN. At that time, I incorrectly thought that I could just use some randomly chosen Gabor filters to do the convolution, so I wrote this. Actually, the test results are not bad for simple datasets such as MNIST. I think it’s just a fake CNN, but a nice […]
During this spring break, I worked on building a simple deep network, which has two parts: a sparse autoencoder and softmax regression. The method is exactly the same as the “Building Deep Networks for Classification” part of the UFLDL tutorial. To understand it better, I re-implemented it using C++ and OpenCV. GENERAL OUTLINE Read the dataset (including training data […]
I’m working through Prof. Andrew Ng’s Unsupervised Feature Learning and Deep Learning tutorial. This is the 8th exercise, a simple ConvNet with a pooling process. I’ll not go through the details of the material; more details about this exercise can be found HERE. I’ll try to implement it using C++ and OpenCV if I have time next week.
This is the same algorithm as in the previous SOFTMAX REGRESSION post. Because I’m going to try to build deeper neural networks for images, and as a review of OpenCV programming, I rewrote the softmax regression code using OpenCV Mat instead of Armadillo. I have used Matlab, Octave, and Armadillo a lot these days; it is kind of […]