Welcome!

About Me

I am a researcher at NEC Laboratories America. Previously, I was a Ph.D. student at Texas A&M University, supervised by Prof. Zhangyang Wang. My research interests lie in Self-Supervised (SS) Learning, Efficient Training, Indoor Scene Occlusion Reasoning, etc. My work has been published at top conferences such as NeurIPS, ICML, ICLR, CVPR, and ECCV.
Selected Publications

Ziyu Jiang, Yinpeng Chen, Mengchen Liu, Dongdong Chen, Xiyang Dai, Lu Yuan, Zicheng Liu, Zhangyang Wang [Paper] [Code]

Ziyu Jiang*, Xuxi Chen*, Xueqin Huang, Xianzhi Du, Denny Zhou, Zhangyang Wang [Paper] [Code]

Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang [Paper] [Code]

Ziyu Jiang, Tianlong Chen, Xuxi Chen, Yu Cheng, Luowei Zhou, Lu Yuan, Ahmed Awadallah, Zhangyang Wang [Paper] [Code]

Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang [Paper] [Code]

Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang [Paper] [Code]

Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang [Paper] [Code]

Yue Wang*, Ziyu Jiang*, Xiaohan Chen*, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang [Paper] [Code]

Wuyang Chen*, Ziyu Jiang*, Zhangyang Wang, Kexin Cui, Xiaoning Qian [Paper] [Code]

Professional Experience

May - August, 2022: Research Intern, Microsoft, Redmond, WA. Mentor: Yinpeng Chen. Research on self-supervised pre-training: explore self-supervised methods that combine the benefits of both Masked Image Modeling (MIM) and Contrastive Learning (CL).

May - August, 2021: Research Intern, Microsoft, Redmond, WA. Mentors: Luowei Zhou, Yu Cheng. Research on self-supervised transfer learning: explore self-supervised methods with better few-shot performance by improving both pre-training and fine-tuning.

June - November, 2020: Research Intern, ByteDance AI Lab, Mountain View, CA. Mentor: Linjie Yang. Research on video segmentation: explore efficient approaches for video segmentation.

June - August, 2019: Research Intern, NEC Laboratories America, Inc., San Jose, CA. Mentor: Buyu Liu. Research on indoor scene understanding: explore better algorithms for semantic segmentation of indoor scenes.