Tyler A. Chang
Hello!
I am a research scientist at Google DeepMind working on multilinguality for [Gemini].
I'm also interested in model interpretability, pretraining dynamics, and the science of behavior in general.
For details, see my [publications] or [CV].
I did my undergrad in math and cognitive science at Carleton College in Northfield, Minnesota, and my PhD in cognitive science at UC San Diego.
Outside of research, I enjoy playing piano, running, and taking blurry photos in the ocean. For questions about my research, contact me at tachang@ucsd.edu!
Recent Highlights
- Co-led (with Catherine Arnett) [Global PIQA], a physical commonsense reasoning benchmark for 100+ languages, in collaboration with over 300 researchers from 65 countries!
- We published a [blog post], [preprint], and [demo] for our work at Google DeepMind on scaling training data attribution methods to LLM pretraining! I'll be presenting this work at ICLR 2025.
- We released [Goldfish], a suite of small, comparable monolingual language models for 350 languages!