
WordEmbeddings: ELMo, fastText (FAIR), FastText (Gensim) and Word2Vec

This implementation gives you the flexibility to choose which word embeddings to use on your corpus. One option is ELMo, recently introduced by AllenNLP: these word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. fastText embeddings, published at LREC by Tomas Mikolov and team, are also available. On a simple IMDb sentiment classification task (Keras dataset), ELMo embeddings outperformed fastText, GloVe and Word2Vec by 2~2.5% on average.


To run it on the IMDb dataset,

run: python

To run it on your own data: comment out lines 32-40 and uncomment lines 41-53.


  • – contains all the functions for building embeddings and for choosing which word embedding model to use.
  • config.json – specify all your embedding parameters here (embedding dimension, maxlen for padding, etc.)
  • model_params.json – specify all your model parameters here (epochs, batch size, etc.)
  • – the main file; run this file from the terminal.
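To make the `maxlen` parameter in config.json concrete, here is a hypothetical padding helper (assumed behaviour, similar in spirit to Keras's `pad_sequences`; the function name and left-padding convention are illustrative, not taken from the repo):

```python
# Illustrative only: truncate or left-pad a token-id sequence to a
# fixed maxlen, so every input to the model has the same length.
def pad(seq, maxlen, value=0):
    seq = seq[:maxlen]                          # truncate long sequences
    return [value] * (maxlen - len(seq)) + seq  # left-pad short ones

print(pad([4, 7, 9], 5))            # [0, 0, 4, 7, 9]
print(pad([1, 2, 3, 4, 5, 6], 5))   # [1, 2, 3, 4, 5]
```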

You have the option of choosing the word vector model

In config.json, set "option" to 0 for Word2Vec, 1 for Gensim FastText, 2 for fastText (FAIR), or 3 for ELMo.
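One way this flag might be read and dispatched (a hypothetical sketch; the dict name and backend labels are illustrative, not the repo's actual code):

```python
# Illustrative dispatch of the "option" flag from config.json to an
# embedding backend. Backend names here are placeholders.
import json

EMBEDDING_BACKENDS = {
    0: "word2vec",
    1: "fasttext_gensim",
    2: "fasttext_fair",
    3: "elmo",
}

config = json.loads('{"option": 2, "embedding_dim": 300, "maxlen": 100}')
backend = EMBEDDING_BACKENDS[config["option"]]
print(backend)  # fasttext_fair
```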

The model is very generic. You can change your model as per your requirements.

Feel free to reach out in case you need any help.

Special thanks to Jacob Zweig for the write-up: it's a good 2-minute read.
