For the non-translation tasks, we follow DiffuSeq for the dataset settings.
For IWSLT14 and WMT14, we follow the data preprocessing from fairseq; we also provide the processed datasets at this link. (Update 04/13/2023: Sorry for missing the WMT14 data, it has just been uploaded. Download from here.)
Training
To run the code, we use IWSLT14 En-De as an illustrative example:
Prepare the IWSLT14 data under the ./data/iwslt14/ directory (a quick sanity-check sketch is given below);
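The following is a minimal sanity-check sketch, not part of the repo. It assumes the processed data is split into train/valid/test source and target files named like `train.en` / `train.de`; adjust the file names to match the contents of the downloaded archive:

```python
import os

# Assumed layout: the exact file names depend on the processed archive
# you downloaded; change `expected` if your files are named differently.
data_dir = "./data/iwslt14"
expected = [f"{split}.{lang}" for split in ("train", "valid", "test")
            for lang in ("en", "de")]

missing = [f for f in expected if not os.path.exists(os.path.join(data_dir, f))]
if missing:
    print("Missing files (check the archive contents):", missing)
else:
    print("All expected IWSLT14 En-De files found under", data_dir)
```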
The ema_0.9999_280000.pt file contains the model weights, and alpha_cumprod_step_260000.npy is the saved noise schedule. You have to use the most recent .npy schedule file saved before the .pt model weight file.
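For instance, here is a minimal sketch of that pairing rule. It assumes checkpoints and schedules sit in the same directory with the naming shown above; the `find_schedule_for_checkpoint` helper is hypothetical and not provided by this repo:

```python
import os
import re

def find_schedule_for_checkpoint(ckpt_dir, ckpt_name):
    """Pick the alpha_cumprod_step_<step>.npy file with the largest step
    that does not exceed the checkpoint's training step."""
    ckpt_step = int(re.search(r"_(\d+)\.pt$", ckpt_name).group(1))
    schedule_steps = []
    for fname in os.listdir(ckpt_dir):
        m = re.match(r"alpha_cumprod_step_(\d+)\.npy$", fname)
        if m:
            schedule_steps.append(int(m.group(1)))
    valid = [s for s in schedule_steps if s <= ckpt_step]
    if not valid:
        raise FileNotFoundError("No noise schedule saved before this checkpoint.")
    return os.path.join(ckpt_dir, f"alpha_cumprod_step_{max(valid)}.npy")

# For ema_0.9999_280000.pt this would return .../alpha_cumprod_step_260000.npy,
# assuming 260000 is the latest schedule step saved at or before step 280000.
print(find_schedule_for_checkpoint("./checkpoints/iwslt14", "ema_0.9999_280000.pt"))
```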
Other Comments
Note that for all training experiments, we set the maximum training steps to 1,000,000 and the warmup steps to 10,000. For most datasets, there is no need to train until the maximum number of steps: IWSLT14 uses checkpoints around 300,000 training steps, WMT14 around 500,000 steps, and the non-translation tasks around 100,000 steps.
You can change the hyperparameter settings for your own experiments; increasing the training batch size or modifying the training schedule may bring some improvements.
Citation
If you find our work and code interesting and useful, please cite:
@article{Yuan2022SeqDiffuSeqTD,
title={SeqDiffuSeq: Text Diffusion with Encoder-Decoder Transformers},
author={Hongyi Yuan and Zheng Yuan and Chuanqi Tan and Fei Huang and Songfang Huang},
journal={ArXiv},
year={2022},
volume={abs/2212.10325}
}
About
Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024]