This repository provides the re-implemented code for pre-training character n-gram embeddings presented in our Joint Many-Task (JMT) paper [1].
Compared with the original single-threaded code used in the paper, this new version is intended to achieve a substantial speedup (not yet, sorry!).
Some pre-trained character n-gram embeddings are also available at my project page.
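For background, the paper represents each word using its character n-grams, whose embeddings are pre-trained with a skip-gram-style objective. The sketch below illustrates only the n-gram extraction step; the `charNgrams` helper, the `#` boundary markers, and the n-gram range 1-3 are illustrative assumptions, not necessarily the exact settings used in this repository.

```cpp
// Minimal sketch of character n-gram extraction (illustrative only).
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Collect all character n-grams of lengths minN..maxN from a word,
// with '#' prepended and appended as boundary markers (an assumption
// made for this example).
std::vector<std::string> charNgrams(const std::string& word,
                                    std::size_t minN, std::size_t maxN) {
  const std::string padded = "#" + word + "#";
  std::vector<std::string> ngrams;
  for (std::size_t n = minN; n <= maxN; ++n) {
    if (padded.size() < n) break;
    for (std::size_t i = 0; i + n <= padded.size(); ++i) {
      ngrams.push_back(padded.substr(i, n));
    }
  }
  return ngrams;
}

int main() {
  // For "cat" this prints "#", "c", ..., "#c", "ca", ..., "#ca", "cat", "at#".
  for (const std::string& g : charNgrams("cat", 1, 3)) {
    std::cout << g << '\n';
  }
  return 0;
}
```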
To use Eigen, please modify the following line in the Makefile so that it points to your Eigen installation:
EIGEN_LOCATION=$$HOME/local/eigen_3.3-beta1 # Modify here to use Eigen
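Since Eigen is header-only, a quick way to check that the path is correct is to compile a standalone file like the sketch below (illustrative, not part of this repository) with the same include path, e.g. `g++ -I$HOME/local/eigen_3.3-beta1 check.cpp`:

```cpp
// Sanity check that the Eigen headers are found; illustrative only.
#include <Eigen/Core>
#include <iostream>

int main() {
  // Build a small random matrix and print the sum of its entries.
  Eigen::MatrixXd m = Eigen::MatrixXd::Random(3, 3);
  std::cout << "Eigen OK, sum = " << m.sum() << std::endl;
  return 0;
}
```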
More details will come soon!
Reference
[1] Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017). arXiv:1611.01587.
@InProceedings{hashimoto-jmt:2017:EMNLP2017,
  author    = {Hashimoto, Kazuma and Xiong, Caiming and Tsuruoka, Yoshimasa and Socher, Richard},
  title     = {{A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks}},
  booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  month     = {September},
  year      = {2017},
  address   = {Copenhagen, Denmark},
  publisher = {Association for Computational Linguistics},
  note      = {To appear},
  url       = {https://arxiv.org/abs/1611.01587}
}