BiasEdit is an efficient model editing method that eliminates stereotyped bias from language models with small editor networks. It combines a debiasing loss, which guides edits on a subset of the parameters, with a remaining loss that preserves the model's language modeling abilities during editing. Experimental results show BiasEdit's strong performance on debiasing, preservation of language modeling ability, and robustness to gender reversal and semantic generality.
With StereoSet, the editor networks are first trained to generate parameter shifts for debiasing. The trained editor networks are then used to edit a language model, producing an unbiased model.
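The interplay of the two losses can be sketched with a deliberately tiny toy example (an assumed simplification, not the actual BiasEdit implementation): the "model" is a single weight vector, and a parameter shift, standing in for the editor network's output, is trained to balance a debiasing loss against a remaining (locality) loss.

```python
import numpy as np

# Toy sketch (NOT the real BiasEdit code): a linear "model" scores text
# features; we fit a parameter shift that equalizes scores on a
# stereotyped/anti-stereotyped pair while keeping neutral text unchanged.

rng = np.random.default_rng(0)
W = rng.normal(size=4)           # frozen model weights (toy stand-in)
x_stereo = rng.normal(size=4)    # features of a stereotyped continuation
x_anti = rng.normal(size=4)      # features of an anti-stereotyped continuation
x_neutral = rng.normal(size=4)   # unrelated text whose score must not move

def debias_loss(w):
    # Push the model to score stereotyped and anti-stereotyped
    # continuations equally.
    return (w @ x_stereo - w @ x_anti) ** 2

def remaining_loss(w):
    # Keep predictions on neutral text identical to the unedited model's.
    return (w @ x_neutral - W @ x_neutral) ** 2

loc_coef = 1.0                   # weight of the remaining loss (cf. editor.loc_coef)
shift = np.zeros_like(W)         # the parameter shift to be learned

for _ in range(20000):           # fit the shift by plain gradient descent
    w = W + shift
    grad = 2 * (w @ x_stereo - w @ x_anti) * (x_stereo - x_anti)
    grad += loc_coef * 2 * (w @ x_neutral - W @ x_neutral) * x_neutral
    shift -= 0.01 * grad

w_edited = W + shift
# Both losses should end up near zero: bias removed, behavior preserved.
print(f"debias: {debias_loss(w_edited):.2e}, remaining: {remaining_loss(w_edited):.2e}")
```

In BiasEdit the shift is not fit per example like this; small editor networks learn to produce such shifts for partial parameters, but the trade-off between the two loss terms is the same.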
⌚️ Training Editor Networks
Formatted datasets with train/dev/test splits (e.g., `gender_test.json`, `race_test.json`, and `religion_test.json` for the test splits) are in `data/stereoset`.
Configurations are in `config`. The subset of parameters to be edited is specified in `editor`; model-specific settings, such as which weights to edit, are in `model`.
Experimental scripts are in `scripts`, and all hyper-parameters are set there. Since hyper-parameters strongly affect hyper-network tuning, we highly recommend tuning them carefully.
For the ablation study on the remaining loss, set `editor.loc_coef=0`.
Metrics can be found in the training log.
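Given the override syntax above, a training run and its ablation would look roughly like the following. The entry-point name is a placeholder; the real invocations (and the remaining hyper-parameters) are in the scripts under `scripts`.

```sh
# Hypothetical commands -- see scripts/ for the actual ones.
python run.py editor.loc_coef=1.0   # standard training with the remaining loss
python run.py editor.loc_coef=0     # ablation: remaining loss disabled
```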
🚀 Debiasing with Editor Networks
Set `eval_only=True`.
Set `data.valid_path` to the path of the test set.
Metrics can be found at the end of the debiasing log, in a line like "Test ------- XXX".
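Putting the steps above together, an evaluation run could be launched as follows. The entry-point name is a placeholder; the `eval_only` and `data.valid_path` overrides come from the steps above, and the test-set path uses the files in `data/stereoset`.

```sh
# Hypothetical command -- see scripts/ for the actual ones.
python run.py eval_only=True data.valid_path=data/stereoset/gender_test.json
```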