  • YangPan

    Dear friends!
    Thanks for sharing this. I have tried to run your program, but there may be some bugs in it: when I change the value of MOMENTUM, or the number of hidden layers from 2 to 3, or the number of hidden units from 512 to 128, the program ends with a cost that is far too big. I also see some notes in your code, so it seems you have foreseen these bugs. What is the problem? Thanks very much!

  • Mr T

    This is, mostly, copied and pasted from Alex Graves' PhD thesis. See here: http://www.cs.toronto.edu/~graves/phd.pdf, pages 33, 34, 38, and 39.

    No reference. And you don’t even mention him.

    • Eric

      Hi Mr T,
      I didn't realize how serious this issue was when I was writing these posts, but yes, as you said, I'll add references to all the posts. Thanks for the comment.
      Eric

  • jamila

    Hi all,
    I need LSTM code in MATLAB to learn a sequence of values and predict the next 10 values. If anybody has an idea, please let me know.
    Thanks,

  • hbyte

    Hi there, I am working on my own LSTM implementation in C++ using std::vector.

    Anyway, I have some stumbling blocks with training on a sequence of numbers. I am using the following:

    Forward Pass

    Act_FGate = ft = sigmoid_(Wgt[0]*(Hin+In)+Bias[0],1,0); //forget gate
    Act_IGate = It = sigmoid_(Wgt[1]*(Hin+In)+Bias[1],1,0); //Include Gate
    Ct_= tanh_(Wgt[2]*(Hin+In)+Bias[2],1,0);
    Act_CGate = Ct = ft*Ctin+It*Ct_;
    //Out gate
    Act_OGate = Ot = sigmoid_(Wgt[3]*(Hin+In)+Bias[3],1,0);
    Hout = Ot * tanh(Ct); //Outputs
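
    For reference, here is a minimal, self-contained sketch of one LSTM forward step in C++ with std::vector. It is not the implementation above and the names are illustrative; in particular it concatenates [h_prev ; x] into one input vector, which is the usual formulation, rather than summing Hin+In.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;              // row-major: Mat[row][col]

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // y = W*z + b, where z = [h_prev ; x] (concatenation, not Hin + In)
    static Vec affine(const Mat& W, const Vec& z, const Vec& b) {
        Vec y(b);
        for (std::size_t r = 0; r < W.size(); ++r)
            for (std::size_t c = 0; c < z.size(); ++c)
                y[r] += W[r][c] * z[c];
        return y;
    }

    struct LstmStep { Vec f, i, g, o, c, h; };  // gates, cell state, hidden output

    LstmStep lstm_forward(const Mat& Wf, const Mat& Wi, const Mat& Wc, const Mat& Wo,
                          const Vec& bf, const Vec& bi, const Vec& bc, const Vec& bo,
                          const Vec& h_prev, const Vec& c_prev, const Vec& x) {
        Vec z(h_prev);                          // z = [h_prev ; x]
        z.insert(z.end(), x.begin(), x.end());

        LstmStep s;
        s.f = affine(Wf, z, bf);                // forget gate pre-activation
        s.i = affine(Wi, z, bi);                // input ("include") gate pre-activation
        s.g = affine(Wc, z, bc);                // candidate cell pre-activation
        s.o = affine(Wo, z, bo);                // output gate pre-activation
        s.c.resize(h_prev.size());
        s.h.resize(h_prev.size());
        for (std::size_t k = 0; k < h_prev.size(); ++k) {
            s.f[k] = sigmoid(s.f[k]);
            s.i[k] = sigmoid(s.i[k]);
            s.g[k] = std::tanh(s.g[k]);
            s.o[k] = sigmoid(s.o[k]);
            s.c[k] = s.f[k] * c_prev[k] + s.i[k] * s.g[k];  // Ct = ft*Ct-1 + it*Ct_
            s.h[k] = s.o[k] * std::tanh(s.c[k]);            // Hout = Ot*tanh(Ct)
        }
        return s;
    }

    A whole sequence is then just a loop that feeds each x_t together with the h and c returned by the previous step.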

    Backward Pass

    ***Backprop error:

    Hout_Err = Out - Hout

    Ctin_Err = Inv_Tanh(Wgt_O * Hout_Err)

    Err_FGate = Inv_Sigmoid(Wgt_F * Hout_Err)

    Err_IGate = Inv_Sigmoid(Wgt_I * Hout_Err)

    Err_CGate = Inv_Tanh(Wgt_C * Hout_Err)

    Hin_Err = Err_CGate + Err_IGate + Err_FGate

    Next layer down Hout_Err = Hin_Err
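
    For comparison, the standard per-timestep LSTM gradients (roughly in the form used in Graves' thesis; this is a sketch, with \odot denoting element-wise products) evaluate each gate's derivative at its stored activation and carry the cell-state error through the forget gate, which is where the lines above differ:

    \begin{aligned}
      \delta o_t &= \delta h_t \odot \tanh(c_t) \odot o_t (1 - o_t) \\
      \delta c_t &= \delta h_t \odot o_t \odot \bigl(1 - \tanh^2(c_t)\bigr) + \delta c_{t+1} \odot f_{t+1} \\
      \delta f_t &= \delta c_t \odot c_{t-1} \odot f_t (1 - f_t) \\
      \delta i_t &= \delta c_t \odot \tilde{c}_t \odot i_t (1 - i_t) \\
      \delta \tilde{c}_t &= \delta c_t \odot i_t \odot \bigl(1 - \tilde{c}_t^{\,2}\bigr) \\
      \delta z_t &= W_f^{\top} \delta f_t + W_i^{\top} \delta i_t + W_c^{\top} \delta \tilde{c}_t + W_o^{\top} \delta o_t,
      \qquad z_t = [\,h_{t-1};\, x_t\,]
    \end{aligned}

    The h_{t-1} slice of \delta z_t is what flows to the previous timestep, and the x_t slice is what flows to the layer below.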

    ***Update Wgts (For each Wgt_F,Wgt_I,Wgt_C,Wgt_O):

    WgtDelta = (Hin+In)*Err_Gate*Act_Gate*Lrt + Momentum*PreDelta - Decay*PreWgt

    PreWgt = Wgt
    PreDelta = WgtDelta
    Wgt += WgtDelta
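
    For comparison, a conventional momentum-plus-weight-decay SGD step for one gate's weight matrix is sketched below. Here the gradient entry grad[r][c] would be the gate's delta times the corresponding input component (delta_gate[r] * z[c] with z = [h_prev ; x]), accumulated over the sequence; lr, momentum, decay and velocity are illustrative names, not variables from the post.

    #include <cstddef>
    #include <vector>

    using Vec = std::vector<double>;
    using Mat = std::vector<Vec>;

    // One SGD step with momentum and L2 weight decay for a single gate's
    // weight matrix: velocity = momentum*velocity - lr*(grad + decay*W),
    // then W += velocity. The velocity buffer plays the role of PreDelta.
    void sgd_momentum_step(Mat& W, Mat& velocity, const Mat& grad,
                           double lr, double momentum, double decay) {
        for (std::size_t r = 0; r < W.size(); ++r)
            for (std::size_t c = 0; c < W[r].size(); ++c) {
                double g = grad[r][c] + decay * W[r][c];           // add L2 decay term
                velocity[r][c] = momentum * velocity[r][c] - lr * g;
                W[r][c] += velocity[r][c];                         // apply update
            }
    }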

    For sequential learning, is it necessary to connect all inputs to all modules, or, as I have done, to connect each input to each module? I have used two layers and it merely outputs a similar sequence with no learning.

    Hope you can help thanks.

    • hbyte

      Thanks for the translation/explanation of Graves' equations, it all makes sense now.

      Cheers.
      Hbyte