  • Ted

    The MITIE model file (352 MB) contains the feature extractor, which alone accounts for more than 300 MB of it.

    So normally the trained model itself is only around 10–20 MB.

  • Ali Mollahosseini

    How did you measure the accuracy? And also, may I ask how you implemented your CNN?

  • Bruce

    There is something unclear about your method. You are feeding a sequence of 3-grams into the NN. Since there is no mention of RNN, I assume you are getting one prediction per 3 x 300 input frame? So for the first frame “Jim bought 300”, which word is the label predicted for? Is it the middle one? If so, how do you ever get a label for the first word Jim?
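    A possible sketch of what Bruce is describing, assuming the post predicts the center word's label per window and pads the sentence so the first and last words get frames of their own. The padding scheme and the tiny 4-dim embeddings below are illustrative assumptions, not the post's actual setup (which uses 300-dim vectors):

```python
import numpy as np

# Hypothetical sketch: one 3-gram window per word, predicting the CENTER
# word's label. Padding with a zero "sentinel" embedding lets the first
# word ("Jim") and the last word get their own windows.
EMB_DIM = 4  # the post uses 300-dim embeddings; 4 keeps the demo small

sentence = ["Jim", "bought", "300", "shares"]
# stand-in embeddings (random here; the post uses pretrained vectors)
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=EMB_DIM) for w in sentence}

pad = np.zeros(EMB_DIM)  # sentinel before and after the sentence
vecs = [pad] + [embed[w] for w in sentence] + [pad]

# one (3, EMB_DIM) frame per word, centered on that word
frames = [np.stack(vecs[i:i + 3]) for i in range(len(sentence))]

print(len(frames), frames[0].shape)  # 4 (3, 4)
```

    With this padding there are exactly as many frames as words, so every word, including the first, gets a prediction.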

  • Vivek Mangipudi

    Please update the post to reflect the state of NER in 2017.

  • Prakshal Jain

    Hi Eric! You’ve genuinely done some magnificent work here.

    I’d really like to share a blog post I came across, which is definitely a good read. It touches on the milestone models and also points you to some really cool NLP APIs. I hope it helps keep the discussion alive…

    You can find the blog here:
    http://blog.paralleldots.com/technology/nlp/named-entity-recognition-milestone-models-papers-and-technologies/

  • Bhaskar Arun

    Hi @Eric,
    Can you explain the output vector you used for training on a 3-gram input? I only understand the uni-gram case, where one word gets one label out of 9; are you saying that for 3 words you combined labels from the 9 classes into one unique combined label?
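    For what it's worth, a minimal sketch of the usual alternative to combined labels, assuming the target is a single one-of-9 vector for the window's center word. The BIO label set below is a common convention (CoNLL-style), not necessarily the post's exact labels:

```python
import numpy as np

# Hypothetical sketch: keep one label PER WORD rather than one combined
# label per 3-gram. The 9 classes here follow a typical BIO scheme for
# 4 entity types plus "O"; the post's exact label set may differ.
LABELS = ["O",
          "B-PER", "I-PER", "B-LOC", "I-LOC",
          "B-ORG", "I-ORG", "B-MISC", "I-MISC"]

def one_hot(label):
    """One-of-9 target vector for a single word's label."""
    vec = np.zeros(len(LABELS))
    vec[LABELS.index(label)] = 1.0
    return vec

# "Jim bought 300" -> the 3-gram is the *input*; the target is just the
# center word's label ("bought" -> "O"), a single 9-dim one-hot vector.
target = one_hot("O")
print(target.shape, int(target.argmax()))  # (9,) 0
```

    Under this reading there is no need for a combined label space: the window only supplies context, and the output layer still has 9 units.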