OpenSeq2Seq: toolkit for distributed and mixed precision training of sequence-to-sequence models
OpenSeq2Seq's main goal is to allow researchers to explore various
sequence-to-sequence models as effectively as possible. This efficiency is achieved
through full support for distributed and mixed-precision training.
OpenSeq2Seq is built using TensorFlow and provides all the necessary
building blocks for training encoder-decoder models for neural machine translation, automatic speech recognition, speech synthesis, and language modeling.
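In practice, models are described by Python configuration files and trained through the toolkit's run.py entry point; mixed-precision training is turned on from the config (the accompanying paper describes switching the model's dtype setting to "mixed"). Below is a minimal sketch of how a run is typically launched. The config path is a placeholder, and the exact example configs and Horovod launch options shipped with the repository may differ.

```sh
# Single-node training + evaluation (config path is illustrative):
python run.py --config_file=example_configs/<task>/<config>.py --mode=train_eval

# Multi-GPU / multi-node training goes through Horovod + MPI,
# assuming the chosen config enables Horovod:
mpiexec -np 4 python run.py --config_file=example_configs/<task>/<config>.py --mode=train_eval
```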
If you use OpenSeq2Seq in your research, please cite the following paper:
@misc{openseq2seq,
title={Mixed-Precision Training for NLP and Speech Recognition with OpenSeq2Seq},
author={Oleksii Kuchaiev and Boris Ginsburg and Igor Gitman and Vitaly Lavrukhin and Jason Li and Huyen Nguyen and Carl Case and Paulius Micikevicius},
year={2018},
eprint={1805.10387},
archivePrefix={arXiv},
primaryClass={cs.CL}
}