Comparing Popular Machine Reading Comprehension Datasets

This is a quick guide for people who have newly joined the Machine Reading Comprehension army. Here I’ll give you some advice about which dataset to start with.

ABOUT MRC

Teaching machines to read is a non-negligible part of ‘True AI’. People have been making progress since the renaissance of deep learning; however, we’re not even close: state-of-the-art models still struggle to beat a human kid. Our brain is so sophisticated that most of the people who claim their models are inspired by the human brain don’t actually know exactly how the human brain works.


Deep Neural Network Framework in CUDA

Hey guys, long time no see! I’m happy to show you the project I’ve been working on recently. I ported my convolutional neural network implementation to the GPU and built a deep neural network framework in NVIDIA CUDA. Although it is not completely finished, most of the functions are available for use. You are more than welcome to try it and play with it, and I’d appreciate it if you could contribute or report any bugs you find.



Recurrent Neural Networks II — LSTM

In my previous post, I introduced the basic ideas of Recurrent Neural Networks. As the second post in the RNN series, this one focuses on the long short-term memory method.

LONG SHORT-TERM MEMORY

One of the best-known problems with RNNs is the vanishing gradient: the influence of a given input on the hidden layer, and therefore on the network output, either decays or blows up exponentially as it cycles around the network’s recurrent connections.
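To see why, here is a tiny numerical sketch (my own toy example with assumed matrices, not from the post): backpropagating through a linear recurrence h_t = W * h_{t-1} multiplies the gradient by W' once per timestep, so its norm scales like the spectral radius of W raised to the number of steps.

% Toy demonstration of vanishing/exploding gradients (assumed setup):
% for h_t = W * h_{t-1}, the gradient w.r.t. h_0 is W' applied T times to g.
T = 50;                       % number of timesteps
g = [1; 0];                   % gradient arriving at the last step
W_small = 0.5 * eye(2);       % spectral radius 0.5 -> gradient vanishes
W_large = 1.5 * eye(2);       % spectral radius 1.5 -> gradient explodes
g_vanish  = (W_small') ^ T * g;
g_explode = (W_large') ^ T * g;
fprintf('after %d steps: |g| = %g (vanishing), %g (exploding)\n', ...
        T, norm(g_vanish), norm(g_explode));

LSTM’s gated cell state is designed precisely to avoid this repeated shrinking or amplification of the gradient.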



Recurrent Neural Networks I

RNNs

Recurrent neural networks have become very popular recently; they play roles as important as those of convolutional neural networks. RNNs can use their internal memory to process arbitrary sequences of inputs, so beyond images, they work well on speech recognition and natural language processing tasks.

There are several types of RNNs. To begin, in this post we focus our attention only on Elman-type RNNs (similar to Jordan-type; both are among the simplest types of RNNs), and I’ll introduce and implement other, more advanced types of RNNs in future parts of this series.
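To make the structure concrete, here is a minimal forward pass for an Elman network (my own sketch with assumed toy dimensions and random weights, not the implementation from this post): the hidden state is fed back into itself at every timestep, which is what gives the network memory.

% Minimal Elman RNN forward pass (toy sketch, assumed dimensions):
%   h_t = tanh(Wxh * x_t + Whh * h_{t-1} + bh)
%   y_t = softmax(Why * h_t + by)
n_in = 4; n_hid = 8; n_out = 3; T = 5;
X   = randn(n_in, T);            % a random input sequence
Wxh = 0.1 * randn(n_hid, n_in);  % input-to-hidden weights
Whh = 0.1 * randn(n_hid, n_hid); % hidden-to-hidden (recurrent) weights
Why = 0.1 * randn(n_out, n_hid); % hidden-to-output weights
bh  = zeros(n_hid, 1);
by  = zeros(n_out, 1);
h   = zeros(n_hid, 1);           % initial hidden state
Y   = zeros(n_out, T);
for t = 1:T
    h = tanh(Wxh * X(:, t) + Whh * h + bh);  % recurrence on h
    z = Why * h + by;
    Y(:, t) = exp(z - max(z)) / sum(exp(z - max(z)));  % softmax outputs
end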



Named-Entity Recognition using Deep Learning

NAMED ENTITY RECOGNITION

In Natural Language Processing, named-entity recognition is an information extraction task that seeks to locate and classify elements in text into pre-defined categories. The following figure is borrowed from the Maluuba website; it perfectly demonstrates what NER does.

[Figure: NER example from the Maluuba website]
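To make the task concrete, here is a tiny toy example of NER output (my own illustration, not from the post) using the common BIO tagging scheme, where B- marks the beginning of an entity, I- its continuation, and O a token outside any entity.

% Toy NER output in BIO format (assumed example sentence):
tokens = {'Barack', 'Obama', 'visited', 'Paris', 'in', '2015', '.'};
tags   = {'B-PER',  'I-PER', 'O',       'B-LOC', 'O',  'B-DATE', 'O'};
for i = 1:numel(tokens)
    fprintf('%-8s %s\n', tokens{i}, tags{i});
end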

 



Texture Synthesis

What’s in this post is actually part of my computational photography homework; since I’m currently preparing for interviews, I re-implemented this method to review it.

WHAT IS IT

Texture synthesis is another very interesting application of image processing. Given a texture sample, it generates a new texture that is similar to the given sample; similar here means that to a human observer, the newly generated texture appears to be the same kind of texture.

For example, suppose we start from a small texture sample image.

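As a concrete sketch of how such a method can work, here is a simplified, raster-order variant in the spirit of Efros–Leung non-parametric sampling (my own toy version with assumed parameters; the method in the full post may differ, and the original algorithm samples randomly among the best matches rather than taking the single best):

% Toy raster-order texture synthesis, Efros-Leung flavored (sketch):
%   sample: grayscale texture sample in [0, 1]
%   n:      side length of the square output image
%   w:      neighborhood radius (window is (2w+1) x (2w+1))
function out = synth_texture_sketch(sample, n, w)
    [sh, sw] = size(sample);
    out   = zeros(n);
    known = false(n);
    % seed the top-left corner with a random patch from the sample
    sy = randi(sh - 2 * w);  sx = randi(sw - 2 * w);
    out(1:2*w+1, 1:2*w+1)   = sample(sy:sy+2*w, sx:sx+2*w);
    known(1:2*w+1, 1:2*w+1) = true;
    for i = w+1 : n-w        % interior only; borders stay zero in this sketch
        for j = w+1 : n-w
            if known(i, j), continue; end
            win  = out(i-w:i+w, j-w:j+w);    % partially-known neighborhood
            mask = known(i-w:i+w, j-w:j+w);
            best = inf; besty = 0; bestx = 0;
            for y = w+1 : sh-w               % brute-force scan of the sample
                for x = w+1 : sw-w
                    cand = sample(y-w:y+w, x-w:x+w);
                    d    = (win - cand) .* mask;  % compare known pixels only
                    err  = sum(sum(d .^ 2));
                    if err < best, best = err; besty = y; bestx = x; end
                end
            end
            out(i, j)   = sample(besty, bestx);  % copy best match's center
            known(i, j) = true;
        end
    end
end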


Morlet Wavelet

This is a simple Morlet wavelet (Gabor wavelet) generator, which can be used for edge detection.

CODE

% 2d morlet kernel generator
% input:
%       hori:   horizontal grid point amount
%       vert:   vertical grid point amount
%       theta:  orientation angle of the wavelet, in radians
%       sigma:  controls the size of the kernel
%       npeaks: number of significant peaks appearing in the kernel     
% output:
%       psi: a 2d Morlet wavelet kernel (psi is complex)
%
function [psi] = morlet_2d(hori, vert, theta, sigma, npeaks)

    xi = 1 / npeaks * 4 * sigma;
    % kernel support is hard-coded here; nominally 3 * sigma on each side
    % width = 6 * sigma;
    width = 18;
    height = width;
    x = linspace(- width / 2, width / 2, hori);
    y = linspace(- height / 2, height / 2, vert);
    [X, Y] = meshgrid(x, y);
    
    ue0 = X * cos(theta) + Y * sin(theta);
    u2 = X .^ 2 + Y .^ 2;
    k1 = exp(1i * 2 * pi / xi * ue0);
    k2 = exp(-0.5 * u2 / (sigma ^ 2));
    
    % C2: subtracting it makes the kernel zero-mean
    C2 = sum(sum(k1 .* k2)) ./ sum(k2(:));
    % C1: scaling by it gives the kernel unit L2 norm
    tmp = (k1 - C2) .* k2;
    product = tmp .* conj(tmp);
    C1 = 1 ./ sum(product(:)) ^ 0.5;
    % result
    psi = C1 * (k1 - C2) .* k2;
end
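A quick usage sketch (my own example with assumed parameters; imfilter and the cameraman test image come from the Image Processing Toolbox): filter an image with the real part of the kernel to get responses to edges at the chosen orientation.

% Example usage of morlet_2d (assumed parameters):
img  = im2double(imread('cameraman.tif'));     % any grayscale test image
psi  = morlet_2d(31, 31, pi / 4, 2, 3);        % 31x31 kernel, 45-degree orientation
resp = imfilter(img, real(psi), 'symmetric');  % real part responds to edges
imshow(mat2gray(abs(resp)));                   % visualize the response magnitude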



Hanafuda

I watched an anime called Summer Wars (サマーウォーズ / 夏日大作战) last month, and I was impressed by the Hanafuda card game in it. Yesterday I happened to find Hanafuda in Kinokuniya, the Japanese book store, so I bought a deck.

I’m learning to play Hanafuda and thinking about whether I can make a simple Hanafuda AI. That would be great while I’m preparing for the upcoming career fair; it would be far more interesting than grinding LeetCode…

https://github.com/xingdi-eric-yuan/hanafuda

 

Maybe it’s a good idea to use this post as a log of sorts, to keep a record of the algorithms and methods used in this little game.



Convolutional Neural Networks III

Hey, I’ve recently been working on my new version of CNN; the updates are as follows:

  1. Support for 3-channel images;
  2. Dropout (a quick sketch of the idea follows this list);
  3. In conv layers, one can use either 3-channel conv kernels or single-channel conv kernels (that is to say, choose whether to share weights across channels).
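Since dropout is new in this version, here is a minimal sketch of the idea (my own illustration with assumed names and rate, not the framework’s actual code): during training, each activation is zeroed with probability p and the survivors are rescaled, so the test-time forward pass needs no change (inverted dropout).

% Inverted dropout on a layer's activations (toy sketch, assumed rate):
p = 0.5;                               % dropout probability
a = rand(4, 4);                        % pretend these are layer activations
mask = (rand(size(a)) > p) / (1 - p);  % keep with prob 1-p, rescale survivors
a_train = a .* mask;                   % training-time activations
a_test  = a;                           % test time: activations used unchanged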

Now I’ve finished most of the work, and I’m debugging the code; I hope I can release it in a few days.

Here’s an early adopters’ edition, which is still buggy. I’ll post the formal version in a few days.

https://github.com/xingdi-eric-yuan/conv-net-version-3



Sparse Coding

INTRODUCTION

Sparse coding is one of the most famous unsupervised methods of this decade. It is a dictionary learning process whose goal is to find a dictionary such that any training input vector can be represented as a linear combination of the vectors in the dictionary. To better capture the structures and patterns inherent in the input vectors, we use an over-complete dictionary; moreover, we want the linear combination to be sparse, which means that only a very small portion of its coefficients are non-zero. This leads to the sparse coding cost function.
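For reference, a standard form of this objective (the UFLDL-style formulation with an L1 sparsity penalty; the exact notation in the full post may differ) over m training examples x^{(i)}, k basis vectors \phi_j, and coefficients a_j^{(i)} is:

\min_{a, \phi} \; \sum_{i=1}^{m} \Bigg\| x^{(i)} - \sum_{j=1}^{k} a_j^{(i)} \phi_j \Bigg\|_2^2 \; + \; \lambda \sum_{i=1}^{m} \sum_{j=1}^{k} \Big| a_j^{(i)} \Big|

The first term is the reconstruction error, and the second rewards sparse coefficient vectors.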

