Accepted Papers
Congratulations to Paper #13 (WoMAP: World Models For Embodied Open-Vocabulary Object Localization) for winning the Best Paper Award, and to Paper #1 (Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models) for winning the Best Paper Runner-up Award!
- (Paper ID #1) Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models (spotlight) (best paper runner-up)
- (Paper ID #2) VERDI: VLM-Embedded Reasoning for Autonomous Driving
- (Paper ID #3) STRIVE: Structured Representation Integrating VLM Reasoning for Efficient Object Navigation
- (Paper ID #4) Flexible Multitask Learning with Factorized Diffusion Policy (spotlight)
- (Paper ID #5) RayFronts: Open-Set Semantic Ray Frontiers for Online Scene Understanding and Exploration
- (Paper ID #6) EgoZero: Robot Learning from Smart Glasses
- (Paper ID #7) Touch begins where vision ends: Generalizable policies for contact-rich manipulation
- (Paper ID #8) Feel the Force: Contact-Driven Learning from Humans
- (Paper ID #9) IMPACT: Intelligent Motion Planning with Acceptable Contact Trajectories via Vision-Language Models
- (Paper ID #10) Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation
- (Paper ID #11) GRIM: Task-Oriented Grasping with Conditioning on Generative Examples
- (Paper ID #12) Hybrid Diffusion for Simultaneous Symbolic and Continuous Planning
- (Paper ID #13) WoMAP: World Models For Embodied Open-Vocabulary Object Localization (spotlight) (best paper award)
- (Paper ID #14) Scene Graph-Guided Proactive Replanning for Failure-Resilient Embodied Agents
- (Paper ID #15) CASPER: Inferring Diverse Intents for Assistive Teleoperation with Vision Language Models
- (Paper ID #16) Grounding Language Models with Semantic Digital Twins for Robotic Planning
- (Paper ID #17) MotIF: Motion Instruction Fine-tuning (spotlight)
- (Paper ID #18) Mixed Initiative Dialog for Human-Robot Collaborative Mobile Manipulation
- (Paper ID #19) GRAPPA: Generalizing and Adapting Robot Policies via Online Agentic Guidance
- (Paper ID #20) Points2Reward: Robotic Manipulation Rewards from Just One Video
- (Paper ID #21) Human2LocoMan: Learning Versatile Quadrupedal Manipulation with Human Pretraining
- (Paper ID #22) GraphEQA: Using 3D Semantic Scene Graphs for Real-time Embodied Question Answering
Organizers
Andrew Melnik*
Bremen University
Jonathan Francis*
Bosch Center for AI; Carnegie Mellon University
Michelle Zhao
Carnegie Mellon University
Ishika Singh
University of Southern California
Siddhant Haldar
New York University
Mehreen Naeem
Bremen University
Krishan Rana
QUT Centre for Robotics
* — co-leads
Acknowledgement
This workshop is supported by the Research Initiative FAME (Future-oriented cognitive Action Modelling Engine) and the European Network of Excellence Centers in Robotics euROBIN.
Contact and Information
Direct questions to semrob.workshop+general@gmail.com. Subscribe to our mailing list to stay updated.