My research interests lie in multimodal large language models, video/audio generation, unified understanding and generation, and related areas. Here is my Google Scholar. I am seeking a job opportunity in the 2026 job market. If you are interested in chatting with me, feel free to drop me an email.
Listed below are papers accepted at top conferences and journals on which I am the first author. The full publication list is available here, and the code repositories will be released soon. I look forward to continuing to make valuable contributions to the multimodal community.
JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation
Kai Liu, Jungang Li, Yuchong Sun, Shengqiong Wu, Jianzhang Gao, Daoan Zhang, Wei Zhang, Sheng Jin, Sicheng Yu, Geng Zhan, Jiayi Ji, Fan Zhou, Liang Zheng, Shuicheng Yan, Hao Fei, and Tat-Seng Chua
In Conference on Neural Information Processing Systems (Spotlight), Nov 2025
@inproceedings{liu2025javisgpt,title={JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation},author={Liu, Kai and Li, Jungang and Sun, Yuchong and Wu, Shengqiong and Gao, Jianzhang and Zhang, Daoan and Zhang, Wei and Jin, Sheng and Yu, Sicheng and Zhan, Geng and Ji, Jiayi and Zhou, Fan and Zheng, Liang and Yan, Shuicheng and Fei, Hao and Chua, Tat-Seng},booktitle={Conference on Neural Information Processing Systems [Spotlight]},month=nov,year={2025},}
JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization
Kai Liu, Wei Li, Lai Chen, Shengqiong Wu, Yanhao Zheng, Jiayi Ji, Fan Zhou, Rongxin Jiang, Jiebo Luo, Hao Fei, and Tat-Seng Chua
arXiv preprint arXiv:2503.23377, Mar 2025
@article{liu2025javisdit,title={JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization},author={Liu, Kai and Li, Wei and Chen, Lai and Wu, Shengqiong and Zheng, Yanhao and Ji, Jiayi and Zhou, Fan and Jiang, Rongxin and Luo, Jiebo and Fei, Hao and Chua, Tat-Seng},journal={arXiv preprint arXiv:2503.23377},month=mar,year={2025},}
Structure-aware Domain Knowledge Injection for Large Language Models
Kai Liu, Ze Chen, Zhihang Fu, Rongxin Jiang, Fan Zhou, Yaowu Chen, Yue Wu, and Jieping Ye
In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, Jul 2025
@inproceedings{liu2025structure,title={Structure-aware Domain Knowledge Injection for Large Language Models},author={Liu, Kai and Chen, Ze and Fu, Zhihang and Jiang, Rongxin and Zhou, Fan and Chen, Yaowu and Wu, Yue and Ye, Jieping},booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},month=jul,year={2025},}
Enhancing LLM’s Cognition via Structurization
Kai Liu, Zhihang Fu, Chao Chen, Wei Zhang, Rongxin Jiang, Fan Zhou, Yaowu Chen, Yue Wu, and Jieping Ye
In Conference on Neural Information Processing Systems, Nov 2024
@inproceedings{liu2024enhancing,title={Enhancing LLM's Cognition via Structurization},author={Liu, Kai and Fu, Zhihang and Chen, Chao and Zhang, Wei and Jiang, Rongxin and Zhou, Fan and Chen, Yaowu and Wu, Yue and Ye, Jieping},booktitle={Conference on Neural Information Processing Systems},month=nov,year={2024},}
INSIDE: LLMs’ Internal States Retain the Power of Hallucination Detection
Chao Chen, Kai Liu, Ze Chen, Yi Gu, Mingyuan Tao, Zhihang Fu, and Jieping Ye
In International Conference on Learning Representations, May 2024
@inproceedings{chen2024inside,title={INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection},author={Chen, Chao and Liu, Kai and Chen, Ze and Gu, Yi and Tao, Mingyuan and Fu, Zhihang and Ye, Jieping},booktitle={International Conference on Learning Representations},month=may,year={2024},}
Email is generally the fastest way to reach me.