Han Bao
Associate Professor at the Institute of Statistical Mathematics (ISM)
(Affil.) Associate Professor at the Graduate University for Advanced Studies (SOKENDAI)
(Affil.) Specially Appointed Associate Professor at Tohoku University (under JST BOOST)
(Affil.) Visiting Scientist at RIKEN AIP Sequential Decision Making Team
(Affil.) Visiting Researcher at OIST (MLDS Unit)
bao.han (#) ism.ac.jp
Office: D410B (ISM)
CV, Google Scholar, DBLP, ORCID, researchmap, GitHub
About me
🔎 For students
I'm open to accepting PhD students and interns at ISM. Feel free to reach out to me. Here are some of my favorite recent papers, but I'm more broadly interested in learning theory. More info can be found here.
Research interests
- Learning theory
  - classification-calibrated losses, proper scoring rules, property elicitation
  - class probability estimation, probability calibration
  - learning dynamics, gradient descent
  - convex analysis, information geometry
- Representation learning
  - contrastive learning
  - robust learning
- Online convex optimization
News
- Nov 14, 2025: Our work on any-stepsize gradient descent convergence received the IBIS2025 Excellent Presentation Award!
- Oct 1, 2025: The Japanese translation of Kevin Murphy's textbook "Probabilistic Machine Learning: An Introduction" will soon be published by Asakura Publishing [link (vol 1)][link (vol 2)].
- Sep 19, 2025: Three of our papers were accepted to NeurIPS2025: (1) an O(n ln T) regret bound for online inverse linear optimization, (2) gradient descent convergence for Fenchel-Young losses beyond the stable regime (spotlight!), and (3) linear surrogate regret bounds by convex smooth losses (spotlight!).
- Mar 28, 2025: My grant proposal to JST-BOOST (a Japanese governmental five-year research fund for early-career researchers in the AI field) has been accepted (official info).
- Feb 25, 2025: I moved to the Institute of Statistical Mathematics as an associate professor.
- Jan 23, 2025: Three of our new papers were accepted to AISTATS2025: (1) a unified understanding of online inverse optimization via the Fenchel-Young loss, (2) a non-principal-centric model of inverse optimization via prediction markets, and (3) a new loss class extending proper losses to incorporate the focal loss. Additionally, two of our papers were accepted to ICLR2025: (1) hippocampus-inspired self-supervised learning and (2) scheduled knowledge distillation for language modeling.
- (archived)
Upcoming travels
- Feb 20-24: Hangzhou (private)
- (archived)
