• sebastian schwank

    Is it possible to reverse the process in an RNN so that you get the inverted trained process?
    So instead of analyzing words, the computer could write literature?

    • sebastian schwank

      … or paint pictures?

      • You can look at deep belief networks for that. They can generate data points like the ones that trained them; for example, after training a network to recognise handwritten digits, you can also build a deep belief network that writes out a handwritten digit.
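
      As a rough illustration of that generative idea (not code from the article or its repository), here is a minimal sketch of a single restricted Boltzmann machine, the building block of a deep belief network. It is trained with one-step contrastive divergence and then sampled from with Gibbs steps, so the visible units drift toward patterns resembling the training data. Random binary vectors stand in for the data here; real use would train on e.g. binarized MNIST digits.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      n_visible, n_hidden = 64, 32        # e.g. 8x8 binarized image patches
      W = 0.01 * rng.standard_normal((n_visible, n_hidden))
      b_v = np.zeros(n_visible)           # visible bias
      b_h = np.zeros(n_hidden)            # hidden bias

      # Placeholder training set; replace with real binarized images.
      data = (rng.random((500, n_visible)) > 0.5).astype(float)

      lr = 0.1
      for epoch in range(10):
          for v0 in data:
              # Positive phase: hidden activations given a training vector.
              p_h0 = sigmoid(v0 @ W + b_h)
              h0 = (rng.random(n_hidden) < p_h0).astype(float)
              # Negative phase: one Gibbs step back to a reconstruction.
              p_v1 = sigmoid(h0 @ W.T + b_v)
              v1 = (rng.random(n_visible) < p_v1).astype(float)
              p_h1 = sigmoid(v1 @ W + b_h)
              # CD-1 parameter updates.
              W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
              b_v += lr * (v0 - v1)
              b_h += lr * (p_h0 - p_h1)

      # Generation: start from noise and run Gibbs sampling; the visible
      # state moves toward configurations the model learned from its data.
      v = (rng.random(n_visible) > 0.5).astype(float)
      for _ in range(1000):
          h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
          v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
      print(v.reshape(8, 8))
      ```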

  • sebastian schwank

    We imagine an RNN for spell-checking purposes, an RNN for sense-checking purposes, and an RNN with several “random” parameters like popularity.

    Could we combine the RNNs so that there is a minimal error with respect to the given parameters when generating specific sentences?

  • Ahmed Ramzy

    How well would an RNN work for word detection in image processing?
    For example, given 100 names with 10 samples each, could it recognize which word a given picture shows?

  • Yan Zhao

    Best description of the hidden layers of an RNN so far. Thank you.
    So is the width of the hidden layers the same as the number of inputs?

  • kanhavishva

    Great article, but could you make the function ‘getNetworkCost’ in src/cost_gradient.cc a little clearer with comments? It is hard for beginners to understand. Thank you.