@article{jin2019bert,
title={Is BERT Really Robust? Natural Language Attack on Text Classification and Entailment},
author={Jin, Di and Jin, Zhijing and Zhou, Joey Tianyi and Szolovits, Peter},
journal={arXiv preprint arXiv:1907.11932},
year={2019}
}
--counter_fitting_embeddings_path: The path to the counter-fitting word embeddings.
--counter_fitting_cos_sim_path: Optional. If given, the pre-computed cosine similarity scores based on the counter-fitting word embeddings are loaded to save time; otherwise they are computed from scratch.
--USE_cache_path: The path to save the USE model file (it is downloaded automatically if this path is empty).
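As a rough illustration of what the pre-computed file for --counter_fitting_cos_sim_path contains, the sketch below builds a cosine similarity matrix from the counter-fitting embeddings and saves it for reuse. It is a minimal sketch, not the repo's exact script; the input filename, output filename, and the assumption that the embeddings file stores one "word v1 v2 ... vd" entry per line are ours. Note the full vocabulary-by-vocabulary matrix is large, so expect significant memory and disk usage.

```python
# Minimal sketch: precompute the cosine similarity matrix that
# --counter_fitting_cos_sim_path can later load, assuming the embeddings
# file ("counter-fitted-vectors.txt" here, a hypothetical name) stores
# one "word v1 v2 ... vd" entry per line.
import numpy as np

def load_embeddings(path):
    """Load counter-fitting word vectors into a vocab list and a matrix."""
    words, vectors = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            vectors.append(np.asarray(parts[1:], dtype=np.float32))
    return words, np.stack(vectors)

words, emb = load_embeddings("counter-fitted-vectors.txt")

# L2-normalize each vector so that a plain dot product equals cosine similarity.
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
cos_sim = emb @ emb.T  # (vocab_size, vocab_size) similarity matrix

# Save once; pass this file via --counter_fitting_cos_sim_path on later
# runs to skip recomputation.
np.save("cos_sim_counter_fitting.npy", cos_sim)
```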
Two more things to share with you:
In case you want to replicate our experiments by training the target models yourself, we have shared the seven processed datasets we used!
In case you want to use our generated adversarial results on the benchmark data directly, here they are.