Recent advances in LLMs have enhanced AI capabilities, but also increased the risk posed by malicious requests, highlighting the need for effective LLM safeguards to detect such queries. Existing approaches largely rely on classifier-based methods that lack interpretability and perform poorly on low-resource languages. To address these limitations, we propose ConsistentGuard, a novel reasoning-based multilingual safeguard, which enhances explainability via reasoning and boosts knowledge transfer between languages through alignment. Our training process comprises three stages: cold start, reasoning training, and cross-lingual alignment.
With only 1,000 training samples, our method achieves superior performance on three datasets across six languages, outperforming larger models trained on significantly more data, while exhibiting strong interpretability and generalization. We also contribute a multilingual benchmark extension and release our code to support future research.
Citation
@misc{chen2025unlockingllmsafeguardslowresource,
      title={Unlocking LLM Safeguards for Low-Resource Languages via Reasoning and Alignment with Minimal Training Data},
      author={Zhuowei Chen and Bowei Zhang and Nankai Lin and Tian Hou and Lianxi Wang},
      year={2025},
      eprint={2510.10677},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.10677},
}