My research interests include reinforcement learning and its applications to robot learning and neuro-symbolic artificial intelligence. In particular, I am currently working on using RL to control legged robots and on leveraging foundation models in policy learning.
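SYNERGAI: Perception Alignment for Human-Robot Collaboration
Yixin Chen, Guoxi Zhang, Yaowei Zhang, Hongming Xu, Peiyuan Zhi, Qing Li, and Siyuan Huang
In Proceedings of the 2025 IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 2025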
Recently, large language models (LLMs) have shown strong potential in facilitating human-robot interaction and collaboration. However, existing LLM-based systems often overlook the misalignment between human and robot perceptions, which hinders effective communication and real-world robot deployment. To address this issue, we introduce SYNERGAI, a unified system designed to achieve both perceptual alignment and human-robot collaboration. At its core, SYNERGAI employs a 3D Scene Graph (3DSG) as its explicit and innate representation. This enables the system to leverage an LLM to break down complex tasks and, in intermediate steps, invoke appropriate tools that extract relevant information from the 3DSG, modify its structure, or generate responses. Importantly, SYNERGAI incorporates an automatic mechanism that corrects perceptual misalignment with users by updating its 3DSG through online interaction. SYNERGAI achieves performance comparable to data-driven models on ScanQA in a zero-shot manner. Through comprehensive experiments across 10 real-world scenes, SYNERGAI demonstrates its effectiveness in establishing common ground with humans, achieving a success rate of 61.9% on alignment tasks. It also significantly improves the success rate on novel tasks from 3.7% to 45.68% by transferring the knowledge acquired during alignment.
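As a rough illustration of the 3DSG-plus-tools design described above, here is a minimal, hypothetical Python sketch: a scene graph holds labeled object nodes, and a decomposed plan dispatches tool calls that either query the graph or correct a mislabeled node, mirroring how alignment with a user updates the representation. All names here (SceneGraph, query, relabel, the hard-coded plan) are illustrative assumptions, not SYNERGAI's actual API; in the paper, an LLM produces the plan step by step.

```python
# Hypothetical sketch of a 3D scene graph (3DSG) with a tool-dispatch loop.
# Not the SYNERGAI implementation; names and structures are assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One object in the scene: a semantic label plus a 3D position estimate."""
    label: str
    position: tuple  # (x, y, z) in meters

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)  # node id -> Node
    edges: set = field(default_factory=set)    # (id_a, relation, id_b)

    def query(self, label: str):
        """Tool: retrieve nodes matching a label (information extraction)."""
        return {i: n for i, n in self.nodes.items() if n.label == label}

    def relabel(self, node_id: str, new_label: str):
        """Tool: correct a perceptual misalignment reported by the user."""
        self.nodes[node_id].label = new_label

graph = SceneGraph()
graph.nodes["n1"] = Node("mug", (0.4, 0.1, 0.9))
graph.nodes["n2"] = Node("table", (0.4, 0.0, 0.7))
graph.edges.add(("n1", "on", "n2"))

# A complex instruction decomposed into tool calls; hard-coded here,
# whereas in the paper an LLM allocates the tools in intermediate steps.
plan = [("query", "mug"), ("relabel", ("n1", "cup"))]
for tool, args in plan:
    if tool == "query":
        print(graph.query(args))
    elif tool == "relabel":
        graph.relabel(*args)

print(graph.nodes["n1"].label)  # -> "cup": the 3DSG updated from user feedback
```

Because correction is an ordinary graph edit, knowledge gained during alignment persists in the 3DSG and is available to later tasks, which is the mechanism behind the transfer result reported above.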
@inproceedings{chen2025synergai,
  title     = {SYNERGAI: Perception Alignment for Human-Robot Collaboration},
  author    = {Chen, Yixin and Zhang, Guoxi and Zhang, Yaowei and Xu, Hongming and Zhi, Peiyuan and Li, Qing and Huang, Siyuan},
  booktitle = {Proceedings of the 2025 IEEE International Conference on Robotics and Automation},
  year      = {2025},
  publisher = {IEEE},
  location  = {Atlanta, GA, USA},
}
End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations
Spotlight (top 3.5%)
Lirui Luo, Guoxi Zhang, Hongming Xu, Yaodong Yang, Cong Fang, and Qing Li
In Proceedings of the Forty-First International Conference on Machine Learning, Vienna, Austria, 2024
Neuro-symbolic reinforcement learning (NS-RL) has emerged as a promising paradigm for explainable decision-making, characterized by the interpretability of symbolic policies. NS-RL entails structured state representations for tasks with visual observations, but previous methods cannot refine the structured states with rewards due to a lack of efficiency. Accessibility is also an issue, as extensive domain knowledge is required to interpret symbolic policies. In this paper, we present a neuro-symbolic framework for jointly learning structured states and symbolic policies, whose key idea is to distill a vision foundation model into an efficient perception module and refine it during policy learning. Moreover, we design a pipeline that prompts GPT-4 to generate textual explanations for the learned policies and decisions, significantly reducing the cognitive load required of users to understand the symbolic policies. We verify the efficacy of our approach on nine Atari tasks and present GPT-generated explanations for policies and decisions.
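To make the joint-learning idea concrete, below is a minimal, hypothetical PyTorch sketch, not the paper's code: a small CNN student is distilled to reproduce a frozen teacher's object coordinates, and those same coordinates feed a policy head, so a reward-driven loss (here a placeholder policy-gradient surrogate) also backpropagates into perception. The shapes, the teacher signal, and the returns are all stand-ins.

```python
# Hypothetical sketch: distill a frozen "foundation model" teacher into a
# lightweight perception module producing structured states (object
# coordinates), then refine it jointly with the policy. Assumed shapes only.
import torch
import torch.nn as nn

K = 4  # number of tracked objects; the structured state is K (x, y) pairs

class Perception(nn.Module):
    """Lightweight student: 84x84 RGB image -> 2*K object coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),   # 84 -> 40
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),  # 40 -> 18
            nn.Flatten(),
            nn.Linear(32 * 18 * 18, 2 * K),
        )

    def forward(self, img):
        return self.net(img)

student = Perception()
policy_head = nn.Linear(2 * K, 6)  # structured state -> logits for 6 actions
opt = torch.optim.Adam(
    list(student.parameters()) + list(policy_head.parameters()), lr=1e-3
)

imgs = torch.rand(32, 3, 84, 84)    # a batch of visual observations
teacher_coords = torch.rand(32, 8)  # frozen teacher's coordinates (stand-in)

# Distillation: regress the teacher's structured states.
coords = student(imgs)
distill_loss = nn.functional.mse_loss(coords, teacher_coords)

# Joint refinement: the same coordinates feed the policy, so the
# reward-driven loss also updates the perception module.
logits = policy_head(coords)
logp = torch.log_softmax(logits, dim=-1)
actions = torch.randint(0, 6, (32,))  # actions taken in the environment
returns = torch.rand(32)              # placeholder returns
pg_loss = -(returns * logp[torch.arange(32), actions]).mean()

opt.zero_grad()
(distill_loss + pg_loss).backward()
opt.step()
```

The design choice this illustrates is that distillation makes perception cheap enough to sit inside the RL loop, which is what allows rewards to refine the structured states rather than freezing them after pretraining.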
@inproceedings{pmlr-v235-luo24j,
  title     = {End-to-End Neuro-Symbolic Reinforcement Learning with Textual Explanations},
  author    = {Luo, Lirui and Zhang, Guoxi and Xu, Hongming and Yang, Yaodong and Fang, Cong and Li, Qing},
  booktitle = {Proceedings of the Forty-First International Conference on Machine Learning},
  pages     = {33533--33557},
  year      = {2024},
  publisher = {PMLR},
  location  = {Vienna, Austria},
  url       = {https://proceedings.mlr.press/v235/luo24j.html},
  badge     = {Spotlight (top 3.5%)},
}