RAE is a framework, presented at CIKM 2024, for editing knowledge in large language models (LLMs) on multi-hop question answering tasks. It uses mutual information maximization to retrieve relevant fact chains from an edited knowledge graph and a self-optimizing pruning technique to remove redundant retrieved knowledge.
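A minimal sketch of the retrieval idea, assuming a HuggingFace causal LM: a candidate fact chain is scored by the log-likelihood the model assigns to it given the question, a tractable proxy for maximizing the mutual information between the question and the retrieved facts. The model name, prompt format, and `chain_score` helper below are illustrative assumptions, not the repository's actual implementation.

```python
# Sketch of mutual-information-style retrieval scoring. The model,
# prompt format, and helper are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def chain_score(question: str, fact_chain: list[str]) -> float:
    """Score a candidate fact chain by the log-likelihood the LM assigns
    to it given the question -- a tractable proxy for maximizing the
    mutual information between question and retrieved facts."""
    prompt = f"Question: {question}\nFacts:"
    target = " " + " ".join(fact_chain)
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of the fact-chain tokens only, conditioned on the question.
    n_prompt = prompt_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, n_prompt - 1 : -1], dim=-1)
    fact_ids = full_ids[0, n_prompt:]
    return log_probs.gather(1, fact_ids.unsqueeze(1)).sum().item()

# Candidate chains extracted from the edited KG would be ranked by this
# score and the highest-scoring chain kept for answering.
```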
Data
MQUAKE-CF-3k and MQUAKE-T
Edited Knowledge Graph (KG)
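For intuition, an edit in the KG can be pictured as replacing the object of a (subject, relation, object) triple with its counterfactual value; the schema and values below are a hypothetical illustration, not the repository's on-disk format.

```python
# Hypothetical illustration of a counterfactual edit in the KG:
# the original triple's object is replaced by the edited value.
original_triple = ("Misery", "author", "Stephen King")
edited_triple = ("Misery", "author", "Richard Dawkins")  # counterfactual edit
```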
NatureL: When enabled, transforms each KG triple into a human-readable natural language statement, which makes the facts easier for the LLM to process and improves retrieval success (see the verbalization sketch after this list). Enabled by default.
Template: When enabled, builds "question + fact chain" pairs as in-context examples that help the LLM understand the task. The examples are drawn from MQUAKE-CF, which contains 9k examples distinct from the test cases.
Template_number: Number of templates used to extract relevant facts for fact chain retrieval. Default is 3.
entropy_template_number: Number of templates used for the entropy-based knowledge pruning step (see the pruning sketch after this list). Default is 6.
correctConflict: A design specific to the MQUAKE-CF-3k dataset that handles editing conflicts, where both the unedited and the edited version of a fact are needed to answer different questions. You can learn more about this issue from DeepEdit. Enabled by default; not necessary for other datasets.
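A minimal sketch of what the NatureL verbalization might look like, assuming per-relation templates; the relation names and templates below are hypothetical examples, not the repository's actual mapping.

```python
# Sketch of NatureL-style verbalization: turn a KG triple into a natural
# language statement via per-relation templates. The templates and relation
# names are hypothetical examples, not the repository's actual mapping.
RELATION_TEMPLATES = {
    "author": "{s} was written by {o}.",
    "capital": "The capital of {s} is {o}.",
    "head_of_state": "The head of state of {s} is {o}.",
}

def verbalize(subject: str, relation: str, obj: str) -> str:
    # Fall back to a generic "subject relation object" sentence for
    # relations without a hand-written template.
    template = RELATION_TEMPLATES.get(relation, "{s} {r} {o}.")
    return template.format(s=subject, r=relation.replace("_", " "), o=obj)

print(verbalize("France", "capital", "Lyon"))  # The capital of France is Lyon.
```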
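And a sketch of the entropy-based pruning signal, assuming uncertainty is measured as the entropy of the LM's next-token distribution over the answer (with `model` and `tokenizer` loaded as in the retrieval sketch above): a retrieved fact is kept only if prepending it lowers this entropy, i.e. makes the model more confident; otherwise it is pruned as redundant. This is an illustrative reading of the self-optimizing pruning step, not the repository's exact procedure.

```python
import torch

def answer_entropy(model, tokenizer, prompt: str) -> float:
    """Entropy of the LM's next-token distribution after the prompt.
    Lower entropy means the model is more certain about the answer."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum().item()

# Keep a retrieved fact only if adding it to the prompt lowers the answer
# entropy (makes the model more confident); otherwise prune it as redundant.
```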
Citation
If you find this work helpful, please cite our paper:
@article{shi2024retrieval,
  title={Retrieval-enhanced knowledge editing for multi-hop question answering in language models},
  author={Shi, Yucheng and Tan, Qiaoyu and Wu, Xuansheng and Zhong, Shaochen and Zhou, Kaixiong and Liu, Ninghao},
  journal={arXiv preprint arXiv:2403.19631},
  year={2024}
}