Run pip install -r requirements.txt to install python dependencies.
Run download.sh to download the dataset.
Run python preproc.py to build tensors from the raw dataset.
Run python main.py --mode train to train the model. After training, log/model.pt will be generated.
Run python main.py --mode test to test a pretrained model. The default model file is log/model.pt.
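The --mode flag above can be handled with a small argparse dispatcher. A minimal sketch of how the entry point might look; the actual flag names beyond --mode are assumptions and may differ from this repo's config.py:

```python
import argparse

def build_parser():
    # Hypothetical sketch of main.py's command-line interface.
    parser = argparse.ArgumentParser(description="QANet train/test entry point")
    parser.add_argument("--mode", choices=["train", "test"], default="train",
                        help="train a new model or evaluate a saved one")
    parser.add_argument("--model_file", default="log/model.pt",
                        help="where to save (train) or load (test) the model")
    return parser

# Example: simulate `python main.py --mode test`
args = build_parser().parse_args(["--mode", "test"])
```

With no --model_file given, testing falls back to the default log/model.pt produced by training.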
Structure
preproc.py: builds input tensors from the raw dataset (downloaded by download.sh).
main.py: program entry; functions about training and testing.
models.py: QANet structure.
config.py: configurations.
Differences from the paper
The paper doesn't mention which activation function is used; I use ReLU.
I don't make the embedding of <UNK> trainable.
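Keeping <UNK> frozen means its row of the embedding matrix receives no update. A toy NumPy sketch of the idea (the vocabulary size, dimensions, and index here are made up; in PyTorch this is typically done by masking the gradient of that row or splitting the embedding into trainable and frozen parts):

```python
import numpy as np

rng = np.random.default_rng(0)

UNK_IDX = 1                              # hypothetical index of <UNK>
emb = rng.standard_normal((10, 4))       # toy 10-word vocab, 4-d embeddings
grad = rng.standard_normal((10, 4))      # stand-in gradient from backprop

grad[UNK_IDX] = 0.0                      # freeze <UNK>: zero its gradient row
before = emb[UNK_IDX].copy()
emb -= 0.1 * grad                        # plain SGD update
# The <UNK> row is unchanged; all other rows moved.
assert np.allclose(emb[UNK_IDX], before)
```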
The connector between the embedding layer and the embedding encoder may differ from Google's implementation: the paper's description is inconsistent (a residual block can't be used there because the input and output dimensions differ), and it doesn't say how the connection was actually implemented.
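To see the dimension mismatch concretely, here is a NumPy sketch using dimensions typical for QANet-style models (300-d word vectors plus 200-d character vectors feeding a 128-d encoder; these numbers are assumptions, not taken from this repo). A residual connection x + f(x) cannot bridge 500 and 128 dimensions, so one workaround is a learned linear projection (a 1x1 convolution in the convolutional view) applied first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 300-d word + 200-d char embeddings -> 500-d input,
# while the encoder blocks operate on a 128-d model dimension.
d_emb, d_model, seq_len = 500, 128, 20
x = rng.standard_normal((seq_len, d_emb))

# x + f(x) is ill-formed here (500-d vs. 128-d), so project x down first
# with a learned map W; this stands in for a 1x1 convolution.
W = rng.standard_normal((d_emb, d_model)) * 0.01
projected = x @ W                       # (seq_len, d_model)
```

After the projection, the encoder's own residual blocks work as described, since input and output dimensions then agree.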
TODO
Reduce memory usage
Improve converging speed (to reach 60 F1 scores in 1000 iterations)
Reach the state-of-the-art scores of the original paper
Performance analysis
Test on SQuAD 2.0
Contributors
InitialBug: found two bugs: (1) the positional encodings wrongly required gradients; (2) weights were incorrectly shared among encoders.
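On bug (1): the sinusoidal positional encodings QANet borrows from the Transformer are a fixed lookup table, not learned parameters, so they should be stored as a constant (e.g. a non-trainable buffer in PyTorch) rather than as a tensor requiring gradients. A NumPy sketch of the standard encoding, assuming an even model dimension:

```python
import numpy as np

def positional_encoding(length, d_model):
    # Fixed sinusoidal positional encoding (Transformer-style); it is a
    # precomputed constant and must NOT be treated as a trainable parameter.
    # Assumes d_model is even.
    pos = np.arange(length)[:, None]          # (length, 1)
    i = np.arange(d_model // 2)[None, :]      # (1, d_model // 2)
    angles = pos / np.power(10000.0, 2.0 * i / d_model)
    pe = np.zeros((length, d_model))
    pe[:, 0::2] = np.sin(angles)              # even channels: sine
    pe[:, 1::2] = np.cos(angles)              # odd channels: cosine
    return pe
```

In PyTorch, registering the table with register_buffer (or wrapping it in a tensor with requires_grad=False) keeps it out of the optimizer, which is the essence of the fix.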
linthieda: fixed an issue with dependencies and provided computing resources.