Download the data for training and evaluation on LongBench. For InfiniteBench, you can download the data from the InfiniteBench repository. We will also release the checkpoint of FocusLLM.
This project builds upon the codebase of Activation Beacon, and we sincerely thank the authors for their valuable contribution. Please note that the "beacon token" in our code actually corresponds to the "candidate token" described in our paper: although we reuse the term, its function is fundamentally different from the beacon token in the Activation Beacon paper. For details on how the candidate token works, please refer to our paper.
Due to memory constraints, at each training step we randomly select either the repetition loss or the continuation loss for optimization. If sufficient memory is available, you can modify the forward function in src/activation_beacon_llama/modeling_llama.py to optimize both losses simultaneously.
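The sketch below illustrates this training choice under stated assumptions: it is not the repository's actual code, and the helper names (repetition_loss, continuation_loss, training_loss) are hypothetical stand-ins for the losses computed inside the model's forward pass.

import random

import torch
import torch.nn.functional as F


# Hypothetical helpers: in the real repository these losses are computed inside
# the model's forward pass; the names and signatures here are illustrative only.
def repetition_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy for reconstructing (repeating) the local context."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))


def continuation_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy for predicting the true continuation."""
    return F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))


def training_loss(rep_logits, rep_labels, cont_logits, cont_labels, both: bool = False):
    if not both:
        # Memory-constrained setting: pick one objective at random per step.
        if random.random() < 0.5:
            return repetition_loss(rep_logits, rep_labels)
        return continuation_loss(cont_logits, cont_labels)
    # With sufficient memory: sum the two losses and backpropagate once.
    return repetition_loss(rep_logits, rep_labels) + continuation_loss(cont_logits, cont_labels)

Optimizing both losses jointly corresponds to passing both=True, which is what the suggested modification of the forward function would achieve.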
Citation
If you find this repository useful, please give us a star ⭐.
To cite our work:
@misc{li2024focusllmscalingllmscontext,
      title={FocusLLM: Scaling LLM's Context by Parallel Decoding},
      author={Zhenyu Li and Yike Zhang and Tengyu Pan and Yutao Sun and Zhichao Duan and Junjie Fang and Rong Han and Zixuan Wang and Jianyong Wang},
      year={2024},
      eprint={2408.11745},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.11745},
}