Published in IEEE Robotics and Automation Letters, 2025
Language-guided robotic grasping in cluttered environments presents significant challenges due to severe occlusions and complex scene structures, which often hinder accurate target localization. … Read more
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2025
Robotic grasping, serving as the cornerstone of robot manipulation, is fundamental for embodied intelligence. Manipulation in challenging scenarios demands grasp detection algorithms with higher efficiency and generalizability. … Read more
Published in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2025
Visual object navigation, requiring agents to locate target objects in novel environments through egocentric visual observation, remains a critical challenge in Embodied AI. … Read more
Published in IEEE Transactions on Instrumentation and Measurement, 2025
The line-structured-light system has been widely applied in intelligent welding robots for weld seam reconstruction and tracking. However, it is challenging to extract the projected laser stripes from captured images due to strong noise and the high dynamic range of welding environments. … Read more
Recommended citation:
Yixiang Dai, Siang Chen, Tianyu Sun, Zimo Fan, Chun Zhang, Xiaobing Feng, Guijin Wang. (2024). Uncertainty-Aware Laser Stripe Segmentation with Non-Local Mechanisms for Welding Robots. [pdf][bib]
Published in 2025 IEEE International Symposium on Circuits and Systems, 2025
6-DoF grasp detection is critically important for the advancement of intelligent embodied systems, as it provides feasible robot poses for object grasping. … Read more
Recommended citation:
Kaiqin Yang, Yixiang Dai, Guijin Wang, Siang Chen. (2024). Efficient End-to-End 6-DoF Grasp Detection Framework for Edge Devices with Hierarchical Heatmaps and Feature Propagation. [pdf]
Published in IEEE Robotics and Automation Letters, 2025
Traditional affordance segmentation on 3D point cloud objects requires massive amounts of annotated training data and can only make predictions within predefined classes and affordance tasks. … Read more
Published in IEEE Robotics and Automation Letters, 2024
Dynamic grasping of moving objects in complex, continuous motion scenarios remains challenging. Reinforcement Learning (RL) has been applied in various robotic manipulation tasks, benefiting from its closed-loop property. … Read more
Recommended citation:
Pengwei Xie, Siang Chen, Qianrun Chen, Wei Tang, Dingchang Hu, Yixiang Dai, Rui Chen, Guijin Wang. (2024). GAP-RL: Grasps As Points for RL Towards Dynamic Object Grasping. [pdf]
A series of region-based methods succeed in extracting regional features and enhancing grasp detection quality. However, in cluttered scenes with potential collisions, the definition of the grasp-relevant region remains inconsistent, and the relationship between grasps and regional spaces has not been fully investigated. … Read more
Recommended citation:
Siang Chen, Pengwei Xie, Wei Tang, Dingchang Hu, Yixiang Dai, Guijin Wang. (2024). Region-aware Grasp Framework with Normalized Grasp Space for Efficient 6-DoF Grasping. [pdf]
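The "normalized grasp space" mentioned in this entry can be pictured with a small, illustrative sketch: grasps predicted inside a local region are expressed relative to that region's center and scale, so a network only has to regress offsets in a unit-scale frame. This is a toy sketch under assumed conventions (region center `center`, radius `radius`, grasp translation `t_world`); it is not the paper's implementation.

```python
import numpy as np

def normalize_grasp(t_world, R_world, center, radius):
    """Express a grasp pose relative to a local region (illustrative only).

    t_world : (3,) grasp translation in the scene frame
    R_world : (3, 3) grasp rotation in the scene frame
    center  : (3,) region center in the scene frame
    radius  : scalar region radius used as the normalization scale
    """
    t_local = (t_world - center) / radius   # translation mapped into a unit-scale region
    return t_local, R_world                 # rotation left unchanged in this sketch

def denormalize_grasp(t_local, R_local, center, radius):
    """Map a region-relative grasp back to the scene frame."""
    return t_local * radius + center, R_local

# toy usage: a grasp 5 cm from the region center, with a 10 cm region radius
center = np.array([0.3, 0.2, 0.1])
t, R = center + np.array([0.05, 0.0, 0.0]), np.eye(3)
t_n, _ = normalize_grasp(t, R, center, 0.1)
print(t_n)  # -> [0.5 0.  0. ]
```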
Published in ECCV 2024 Workshop on Assistive Computer Vision and Robotics (ACVR 2024), 2024
In the context of human-robot interaction and collaboration scenarios, robotic grasping still encounters numerous challenges. Traditional grasp detection methods generally analyze the entire scene to predict grasps, leading to redundancy and inefficiency. … Read more
Published in IEEE International Conference on Image Processing 2024 (ICIP 2024), 2024
The goal of object pose estimation is to visually determine the pose of a specific object from RGB-D input. However, both instance-based and category-based methods fail to handle unseen objects from unseen categories, which remains a challenge for pose estimation. … Read more
Recommended citation:
Bowen Liu, Wei Liu, Siang Chen, Pengwei Xie, Guijin Wang. (2024). Category-Agnostic Pose Estimation for Point Clouds. [pdf]
Robotic grasping is a primitive skill for complex tasks and is fundamental to intelligence. For general 6-DoF grasping, most previous methods directly extract scene-level semantic or geometric information, while few of them consider suitability for various downstream applications, such as target-oriented grasping. … Read more
Published in IEEE Transactions on Circuits and Systems for Video Technology, 2023
Few-shot 3D point cloud segmentation aims to segment novel categories in point cloud scenes with only limited annotations. However, most current methods do not consider query content when constructing support prototypes, and thus suffer from intra-class variations between objects and incomplete representation of category information from annotated support samples. … Read more
Recommended citation:
Hu, D., Chen, S., Yang, H., & Wang, G. (2023). Query-guided Support Prototypes for Few-shot 3D Indoor Segmentation. IEEE Transactions on Circuits and Systems for Video Technology. [pdf][bib]
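The idea of making support prototypes depend on the query, as contrasted with plain masked average pooling in the abstract above, can be illustrated with a short sketch. This is a generic, assumed formulation (cosine similarity to a mean query descriptor, softmax weights with temperature `tau`), not the paper's actual module.

```python
import numpy as np

def masked_average_prototype(support_feats, support_mask):
    """Conventional prototype: mean of support features inside the foreground mask."""
    fg = support_feats[support_mask.astype(bool)]            # (Nf, C)
    return fg.mean(axis=0)                                    # (C,)

def query_guided_prototype(query_feats, support_feats, support_mask, tau=1.0):
    """Prototype re-weighted by similarity to the query (illustrative sketch).

    query_feats   : (Nq, C) per-point features of the query scene
    support_feats : (Ns, C) per-point features of the support sample
    support_mask  : (Ns,)   binary foreground mask of the support sample
    """
    fg = support_feats[support_mask.astype(bool)]             # (Nf, C) foreground support points
    q = query_feats.mean(axis=0)                               # (C,) crude query summary
    sim = fg @ q / (np.linalg.norm(fg, axis=1) * np.linalg.norm(q) + 1e-8)
    w = np.exp(sim / tau)
    w = w / w.sum()                                            # attention weights over support points
    return (w[:, None] * fg).sum(axis=0)                       # query-conditioned prototype

# toy usage with random features
rng = np.random.default_rng(0)
proto = query_guided_prototype(rng.normal(size=(2048, 32)),
                               rng.normal(size=(1024, 32)),
                               rng.integers(0, 2, size=1024))
```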
Published in IEEE Robotics and Automation Letters, 2023
Manipulating unseen articulated objects through visual feedback is a critical but challenging task for real robots. Existing learning-based solutions mainly focus on visual affordance learning or other pre-trained visual models to guide manipulation policies, which face challenges for novel instances in real-world scenarios. … Read more
Recommended citation:
Xie, P., Chen, R., Chen, S., Qin, Y., Xiang, F., Sun, T., ... & Su, H. (2023). Part-Guided 3D RL for Sim2Real Articulated Object Manipulation. IEEE Robotics and Automation Letters. [pdf][bib]
Published in IEEE Robotics and Automation Letters, 2023
Fast and robust object grasping in clutter is a crucial component of robotics. Most current works resort to the whole observed point cloud for 6-DoF grasp generation, ignoring the guidance that can be mined from global semantics, thus limiting both grasp quality and real-time performance. … Read more
Recommended citation:
Chen, S., Tang, W., Xie, P., Yang, W., & Wang, G. (2023). Efficient heatmap-guided 6-DoF grasp detection in cluttered scenes. IEEE Robotics and Automation Letters. [pdf][bib]
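To make the guidance idea in the abstract above concrete, the sketch below shows one generic way a scene-level heatmap could steer local 6-DoF grasp prediction: score points with the heatmap, keep the top-k peaks, and only process small neighborhoods around them. The function name, inputs (`points`, `heatmap`, `pixel_xy`), and parameters (`k`, `radius`) are assumptions for illustration, not the pipeline of the cited paper.

```python
import numpy as np

def select_local_regions(points, heatmap, pixel_xy, k=8, radius=0.05):
    """Use a scene-level heatmap to pick local point subsets for grasp prediction.

    points   : (N, 3) observed point cloud
    heatmap  : (H, W) predicted per-pixel "graspness" confidence
    pixel_xy : (N, 2) integer pixel coordinates of each 3D point
    """
    scores = heatmap[pixel_xy[:, 1], pixel_xy[:, 0]]           # per-point graspness score
    centers = points[np.argsort(-scores)[:k]]                   # top-k candidate centers
    regions = []
    for c in centers:
        mask = np.linalg.norm(points - c, axis=1) < radius      # local ball query around the peak
        regions.append(points[mask])
    return centers, regions

# toy usage: 1000 random points projected into a 64x64 heatmap
pts = np.random.rand(1000, 3)
px = np.random.randint(0, 64, size=(1000, 2))
hm = np.random.rand(64, 64)
centers, regions = select_local_regions(pts, hm, px, k=4, radius=0.1)
```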
Published in 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP), 2022
Various low-bit quantization methods have been widely exploited and have shown decent performance on 2D vision tasks in recent years. As a complement to 2D images, 3D point clouds offer an opportunity to better understand the surrounding environment. However, low-bit quantization methods designed for 2D vision tasks are not readily transferable to 3D point clouds due to the higher dimension of 3D data and the increased proportion of activations. … Read more
Recommended citation:
Hu, D., Chen, S., Yang, H., & Wang, G. (2022, December). Distribution-aware Low-bit Quantization for 3D Point Cloud Networks. In 2022 IEEE International Conference on Visual Communications and Image Processing (VCIP) (pp. 1-5). IEEE. [pdf][bib]
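As a rough illustration of why the activation distribution matters when quantizing point cloud networks, the sketch below simulates uniform low-bit quantization with a clipping range derived from per-channel percentiles rather than raw min/max, which limits the damage from heavy-tailed activations. This is a generic textbook-style technique shown under assumed names (`quantize_activations`, `bits`, `pct`); it is not the method proposed in the cited paper.

```python
import numpy as np

def quantize_activations(x, bits=4, pct=99.9):
    """Simulated uniform low-bit quantization with a distribution-derived clipping range.

    x    : (N, C) activations from a point cloud network layer
    bits : target bit width
    pct  : percentile used to clip outliers before computing the scale
    """
    qmax = 2 ** bits - 1
    lo = np.percentile(x, 100 - pct, axis=0)                   # per-channel lower clip
    hi = np.percentile(x, pct, axis=0)                         # per-channel upper clip
    scale = np.maximum(hi - lo, 1e-8) / qmax
    q = np.clip(np.round((x - lo) / scale), 0, qmax)           # integer codes
    return q * scale + lo                                       # dequantized (simulated) values

# toy check: heavy-tailed activations are handled better with percentile clipping
x = np.random.randn(4096, 8) ** 3
print(np.abs(x - quantize_activations(x)).mean())
```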