Open Source Neural Machine Translation in PyTorch
OpenNMT-py is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework. It is designed to be research-friendly for trying out new ideas in translation, summarization, morphology, and many other domains. Some companies have proven the code to be production-ready.
We love contributions! Please look at issues marked with the contributions welcome tag.
Unless there is a bug, please use the forum or Gitter to ask questions.
We're happy to announce the upcoming release v2.0 of OpenNMT-py.
The major idea behind this release is an almost complete makeover of the data loading pipeline. A new 'dynamic' paradigm is introduced, allowing transforms to be applied to the data on the fly.
This has a few advantages: it removes or drastically reduces the preprocessing required to train a model, increases the flexibility of data configuration, and reduces the memory footprint of data loading.
These transforms can be specific tokenization methods, filters, noising, or any custom transform users may want to implement. Implementing a custom transform is quite straightforward thanks to the existing base class and example implementations, as the sketch below illustrates.
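To give an idea of what such a transform looks like, here is a minimal sketch of a lowercasing transform. The import paths, decorator, and `apply` signature follow our reading of the 2.0 base class and the bundled examples; treat those example implementations as the authoritative reference.

```python
# A minimal, illustrative custom transform: lowercase all tokens on the fly.
# The API below (register_transform, Transform, apply) is assumed from the
# 2.0 codebase; check onmt/transforms/base.py for the exact contract.
from onmt.transforms import register_transform
from onmt.transforms.base import Transform


@register_transform(name='lowercase_demo')
class LowercaseDemoTransform(Transform):
    """Lowercase source and target tokens as examples are loaded."""

    def __init__(self, opts):
        super().__init__(opts)

    def apply(self, example, is_train=False, stats=None, **kwargs):
        # `example` is a dict holding the tokenized 'src' and 'tgt' sequences.
        example['src'] = [tok.lower() for tok in example['src']]
        if example.get('tgt') is not None:
            example['tgt'] = [tok.lower() for tok in example['tgt']]
        return example
```

Once registered, such a transform is enabled like any built-in one, through the `transforms` key of the YAML configuration.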
You can check out how to use this new data loading pipeline in the updated docs.
All the readily available transforms are described here.
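For instance, transforms are attached per corpus (or globally) through a `transforms` key in the YAML configuration. A minimal sketch, using the built-in `filtertoolong` filter and placeholder data paths:

```yaml
# Sketch: enable an on-the-fly transform for one training corpus.
# "filtertoolong" is one of the built-in transforms; the data paths
# below are placeholders for your own files.
data:
    corpus_1:
        path_src: data/src-train.txt
        path_tgt: data/tgt-train.txt
        transforms: [filtertoolong]

# Options consumed by the filtertoolong transform (assumed names):
src_seq_length: 200
tgt_seq_length: 200
```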
Given sufficient CPU resources relative to GPU computing power, most of the transforms should not slow training down. (Note: for now, one producer process is spawned per GPU, meaning you would ideally need 2N CPU threads for N GPUs.)
For now, the new data loading paradigm does not support Audio, Video and Image inputs.
A few features are also dropped, at least for now. For any user who still needs these features, the previous codebase will be retained as `legacy` in a separate branch. It will no longer receive extensive development from the core team, but PRs may still be accepted.
Feel free to check it out and let us know what you think of the new paradigm!
OpenNMT-py requires:

- Python >= 3.6
- PyTorch == 1.6.0
Install `OpenNMT-py` from `pip`:

```bash
pip install OpenNMT-py
```
or from the sources:
```bash
git clone https://github.com/OpenNMT/OpenNMT-py.git
cd OpenNMT-py
pip install -e .
```
Note: if you encounter a `MemoryError` during installation, try to use `pip` with `--no-cache-dir`.
(Optional) Some advanced features (e.g. working pretrained models or specific transforms) require extra packages; you can install them with:

```bash
pip install -r requirements.opt.txt
```
OpenNMT-py offers, among other features:

- :warning: New in OpenNMT-py 2.0: On the fly data processing
- Encoder-decoder models with multiple RNN cells (LSTM, GRU) and attention types (Luong, Bahdanau)
- Inference-time loss functions
- SRU "RNNs faster than CNN" paper
- Mixed-precision training with APEX, optimized on Tensor Cores
- Model export to CTranslate2, a fast and efficient inference engine
To get started, we recommend downloading a toy English-German dataset for machine translation, containing 10k tokenized sentences:
```bash
wget https://s3.amazonaws.com/opennmt-trainingdata/toy-ende.tar.gz
tar xf toy-ende.tar.gz
cd toy-ende
```
The data consists of parallel source (`src`) and target (`tgt`) data containing one sentence per line with tokens separated by a space:

- `src-train.txt`
- `tgt-train.txt`
- `src-val.txt`
- `tgt-val.txt`
Validation files are used to evaluate the convergence of the training. They usually contain no more than 5,000 sentences.
```bash
$ head -n 3 toy-ende/src-train.txt
It is not acceptable that , with the help of the national bureaucracies , Parliament 's legislative prerogative should be made null and void by means of implementing provisions whose content , purpose and extent are not laid down in advance .
Federal Master Trainer and Senior Instructor of the Italian Federation of Aerobic Fitness , Group Fitness , Postural Gym , Stretching and Pilates; from 2004 , he has been collaborating with Antiche Terme as personal Trainer and Instructor of Stretching , Pilates and Postural Gym .
" Two soldiers came up to me and told me that if I refuse to sleep with them , they will kill me . They beat me and ripped my clothes .
```
We need to build a YAML configuration file to specify the data that will be used:
```yaml
# toy_en_de.yaml

## Where the samples will be written
save_data: toy-ende/run/example
## Where the vocab(s) will be written
src_vocab: toy-ende/run/example.vocab.src
tgt_vocab: toy-ende/run/example.vocab.tgt
## Prevent overwriting existing files in the folder
overwrite: False

## Corpus opts:
data:
    corpus_1:
        path_src: toy-ende/src-train.txt
        path_tgt: toy-ende/tgt-train.txt
    valid:
        path_src: toy-ende/src-val.txt
        path_tgt: toy-ende/tgt-val.txt
...
```
From this configuration, we can build the vocab(s) that will be necessary to train the model:
```bash
onmt_build_vocab -config toy_en_de.yaml -n_sample 10000
```
Notes:

- `-n_sample` is required here: it represents the number of lines sampled from each corpus to build the vocab.
- This configuration is the simplest possible, without any tokenization or other transforms. See other example configurations for more complex pipelines.
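As a quick sanity check, you can peek at the generated vocabulary files. A sketch, assuming the vocab is written as plain text (one token per line) at the paths configured above:

```bash
# Inspect the most frequent source tokens and both vocab sizes
# (paths follow the src_vocab/tgt_vocab entries in toy_en_de.yaml).
head -n 5 toy-ende/run/example.vocab.src
wc -l toy-ende/run/example.vocab.src toy-ende/run/example.vocab.tgt
```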
To train a model, we need to add the following to the YAML configuration file:

- the vocabulary path(s) that will be used: these can be the ones generated by `onmt_build_vocab`;
- training-specific parameters.
```yaml
# toy_en_de.yaml

...

# Vocabulary files that were just created
src_vocab: toy-ende/run/example.vocab.src
tgt_vocab: toy-ende/run/example.vocab.tgt

# Train on a single GPU
world_size: 1
gpu_ranks: [0]

# Where to save the checkpoints
save_model: toy-ende/run/model
save_checkpoint_steps: 500
train_steps: 1000
valid_steps: 500
```
Then you can simply run:
```bash
onmt_train -config toy_en_de.yaml
```
This configuration will run the default model, which consists of a 2-layer LSTM with 500 hidden units on both the encoder and decoder. It will run on a single GPU (`world_size 1` & `gpu_ranks [0]`).
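Scaling to more GPUs is a matter of adjusting those two options. A minimal sketch for a single machine with two GPUs:

```yaml
# Sketch: single-machine training on two GPUs.
world_size: 2
gpu_ranks: [0, 1]
```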
Before the training process actually starts, the `*.vocab.pt` and `*.transforms.pt` files will be dumped to the `-save_data` location, using the configuration specified in the `-config` YAML file. Transformed samples will also be generated to simplify any potentially required visual inspection. The number of sample lines to dump per corpus is set with the `-n_sample` flag.
For more advanced models and parameters, see other example configurations or the FAQ.
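As an illustration of what such configurations look like, here is a sketch of base Transformer settings in the same YAML file. Option names follow the training options as we understand them, and the values are illustrative rather than a tuned recipe; see the FAQ for a complete, vetted configuration:

```yaml
# Sketch: a base Transformer instead of the default 2-layer LSTM.
# Values are illustrative; consult the FAQ for a full recipe.
encoder_type: transformer
decoder_type: transformer
enc_layers: 6
dec_layers: 6
heads: 8
rnn_size: 512
word_vec_size: 512
transformer_ff: 2048
position_encoding: true
optim: adam
adam_beta2: 0.998
decay_method: noam
warmup_steps: 8000
learning_rate: 2
label_smoothing: 0.1
batch_type: tokens
batch_size: 4096
```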
```bash
onmt_translate -model toy-ende/run/model_step_1000.pt -src toy-ende/src-test.txt -output toy-ende/pred_1000.txt -gpu 0 -verbose
```
Now you have a model which you can use to predict on new data. We do this by running beam search. This will output predictions into `toy-ende/pred_1000.txt`.
Note: the predictions are going to be quite terrible, as the demo dataset is small. Try running on some larger datasets! For example, you can download millions of parallel sentences for translation or summarization.
When you are satisfied with your trained model, you can release it for inference. The release process will remove training-only parameters from the checkpoint:
```bash
onmt_release_model -model toy-ende/run/model_step_1000.pt -output toy-ende/run/model_step_1000_release.pt
```
The release script can also export checkpoints to CTranslate2, a fast inference engine for Transformer models. See the `-format` command line option.
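For example, exporting the toy checkpoint to a CTranslate2 model could look like this (a sketch; check `onmt_release_model --help` for the exact values accepted by `-format`):

```bash
# Sketch: convert the released checkpoint into a CTranslate2 model directory.
onmt_release_model -model toy-ende/run/model_step_1000.pt \
    -output toy-ende/run/model_step_1000_ct2 -format ctranslate2
```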
Please see the FAQ: How to use GloVe pre-trained embeddings in OpenNMT-py
Several pretrained models can be downloaded and used with `onmt_translate`: http://opennmt.net/Models-py/
OpenNMT-py is run as a collaborative open-source project. The original code was written by Adam Lerer (NYC) to reproduce OpenNMT-Lua using PyTorch.
Major contributors are:

- Sasha Rush (Cambridge, MA)
- Vincent Nguyen (Ubiqus)
- Ben Peters (Lisbon)
- Sebastian Gehrmann (Harvard NLP)
- Yuntian Deng (Harvard NLP)
- Guillaume Klein (Systran)
- Paul Tardy (Ubiqus / Lium)
- François Hernandez (Ubiqus)
- Linxiao Zeng (Ubiqus)
- Jianyu Zhan (Shanghai)
- Dylan Flaute (University of Dayton)
- ... and more!
OpenNMT-py is part of the OpenNMT project.
If you are using OpenNMT-py for academic work, please cite the initial system demonstration paper published in ACL 2017:
```
@inproceedings{klein-etal-2017-opennmt,
    title = "{O}pen{NMT}: Open-Source Toolkit for Neural Machine Translation",
    author = "Klein, Guillaume and Kim, Yoon and Deng, Yuntian and Senellart, Jean and Rush, Alexander",
    booktitle = "Proceedings of {ACL} 2017, System Demonstrations",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P17-4012",
    pages = "67--72",
}
```