Jannik Kossen
👨‍💻 AI Research Scientist @ FAIR
Hi! 👋
I am an AI research scientist at FAIR training LLMs for code generation and reasoning.
I did my PhD at the University of Oxford, where I worked on data-efficiency and uncertainties in language and vision models.
My supervisors were Yarin Gal in OATML and Tom Rainforth in RainML@OxCSML.
I’ve worked on a range of topics, including detecting hallucinations in LLMs, better understanding in-context learning in LLMs, predicting hallucinations from LLM hidden states, contrastive vision-language models, active model evaluation (twice), non-parametric transformers, multimodal active feature acquisition, and object-structured world models.
Our research on detecting hallucinations in LLMs was published in Nature and discussed in Time Magazine, The Economist, Science, The Independent, and The Washington Post.