LLaVi Lab - Home
Welcome to the LLaVi Lab. The Learning · Language · Vision (LLaVi) Lab pursues fundamental research in artificial intelligence across machine learning (ML), natural language processing (NLP), computer vision (CV), and their intersections. Our lab has broad interests in Bayesian deep learning, active learning, trustworthy and privacy-preserving learning, large language models, visual understanding, robotic vision, vision-language learning, and medical image analysis. Our overarching goal is to create systems endowed with comprehensive intelligence that can interact with the real world in a human-like fashion.
To prospective students: Our lab is looking for highly self-motivated PhD students. If you are interested, please read here.