Yuzhen Chen
I am a master's student in the Computational Science and Engineering program at Harvard.
I am a member of the MIT Media Lab (Multisensory Intelligence), supervised by Prof. Paul Liang.
I completed my undergraduate studies in Computer Engineering at the University of Michigan,
where I was a member of the ROAHM Lab under the supervision of Prof. Ram Vasudevan.
We incorporate both visual and tactile information to improve semantic maps for robots.
We develop the DEFORM model to predict the real-time position and pose of single-branch elastic rods.
We leverage multiple sensing modalities to improve the prediction of semantic classifications and their physical properties.
Learning Physical Object Properties from Synthetic Images Using Diffusion Model
Real-Time Modeling for Elastic Rods