Yong (Norris) Zhang
Beijing, China
I joined Meituan as a researcher.
My research focuses on AIGC (AI-Generated Content), e.g., image and video generation.
Before joining Meituan, I was a senior researcher at Tencent AI Lab (2018-2024).
I received my Ph.D. degree from the Institute of Automation,
Chinese Academy of Sciences (CASIA) in 2018, where I was supervised by Prof.
Bao-Gang Hu and Prof. Weiming Dong at the National Laboratory of Pattern Recognition (NLPR).
Prior to CASIA, I received my B.Eng. in Automation from Hunan University in 2012.
From Sep. 2015 to Sep. 2017, I was a joint Ph.D. student in the Intelligent System Lab (ISL)
at Rensselaer Polytechnic Institute (RPI), advised by Prof. Qiang Ji.
Representative works on AIGC image and video generation:
-- Foundation Video Models: VideoCrafter series, DynamiCrafter, LVDM, CV-VAE
-- Human Video Models: MeiGen-InfiniteTalk, MeiGen-MultiTalk
-- Controllable Generation: AnchorCrafter, StereoCrafter, DepthCrafter, Make-Your-Video, Make-Your-Anchor, StyleCrafter, AnimateZero
-- ID Consistency: VideoMaker, CustomCrafter, CustomTTT, OMG, CeleBias, TaleCrafter, Animate-A-Story
-- Long Video Generation: DiTCtrl, FreeNoise
-- High-resolution: DAM-VSR, Noise Calibration, ScaleCrafter, Make-a-Cheap-Scaling
-- Interpolation: ZeroSmooth, ToonCrafter
-- Video Editing: FateZero, MOFA-Video
-- Benchmark: EvalCrafter
News
Sep 19, 2025
MultiTalk has been accepted to NeurIPS 2025.
Mar 29, 2025
Two papers (DAM-VSR and Mobius) conditionally accepted to SIGGRAPH 2025.
Feb 27, 2025
Two papers accepted to CVPR 2025. Congratulations to the co-authors.
Dec 10, 2024
Two papers (CustomCrafter and CustomTTT) accepted to AAAI 2025. Congratulations to the co-authors.
Sep 26, 2024
Two papers accepted to NeurIPS 2024. Congratulations to Sijie and Liang Chen.
July 30, 2024
Two papers conditionally accepted to SIGGRAPH Asia 2024. Congratulations to the co-authors.
July 1, 2024
Five papers accepted to ECCV 2024. Congratulations to the co-authors.
June 25, 2024
Congratulations to Ziyao! Our "face swapping" paper has been accepted to TOG.