UniReps Workshop
Unifying Representations in Neural Models
Location: San Diego Convention Center
Date: December 6, 2025
When, how, and why do different neural models learn the same representations?
New findings in neuroscience and artificial intelligence reveal a shared pattern: whether in biological brains or artificial models, different learning systems tend to develop similar representations when exposed to similar stimuli.
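One common way to quantify this kind of similarity is linear Centered Kernel Alignment (CKA). The sketch below is illustrative only, not material from the workshop: it implements linear CKA in NumPy and compares activations of two hypothetical models on the same stimuli. All variable names are assumptions.

```python
# Minimal sketch of linear CKA (Kornblith et al., 2019) for comparing
# representations from two models evaluated on the same inputs.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) activations from model A
    Y: (n_samples, d2) activations from model B
    Returns a scalar in [0, 1]; higher means more similar representations.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear-kernel HSIC numerator and normalization terms.
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Hypothetical example: 100 shared stimuli, two models.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))               # model A activations
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
Y_rotated = X @ Q                            # model B: a rotated copy of A
Y_random = rng.normal(size=(100, 32))        # unrelated representations
print(f"rotated:   {linear_cka(X, Y_rotated):.3f}")  # 1.0: CKA ignores rotations
print(f"unrelated: {linear_cka(X, Y_random):.3f}")   # much lower
```

CKA is convenient here because it is invariant to orthogonal transformations and isotropic scaling of either representation, so two models can score as identical even when their neurons or units are permuted or rotated relative to each other.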
The emergence of these similar representations has sparked growing interest in both neuroscience and artificial intelligence, and each field offers promising directions for a theoretical understanding: analyzing learning dynamics in neuroscience, and studying the problem of identifiability in function and parameter space in artificial intelligence.
While the theoretical questions already demand investigation, the practical applications are equally compelling: aligning representations enables model merging, stitching, and reuse, and plays a crucial role in multi-modal scenarios (a sketch of stitching follows below). Furthermore, studying the features that different learning processes universally highlight brings us closer to pinpointing the invariances that naturally emerge during learning, possibly suggesting ways to enforce them.
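As an illustration of stitching, the sketch below (a hypothetical PyTorch setup, not code from the workshop) freezes an encoder taken from one model and a classifier head taken from another, and trains only a small linear layer that translates between their representation spaces. If the stitched model performs well, the two representations are compatible up to a linear map. The names `encoder_a` and `classifier_b` are assumed stand-ins for pretrained components.

```python
# Minimal model-stitching sketch: only the linear "stitch" layer is trained.
import torch
import torch.nn as nn

class Stitch(nn.Module):
    """Frozen encoder from model A, frozen head from model B,
    connected by a single trainable linear layer."""
    def __init__(self, encoder_a, classifier_b, dim_a, dim_b):
        super().__init__()
        self.encoder = encoder_a.eval()        # frozen: model A's features
        self.head = classifier_b.eval()        # frozen: model B's classifier
        self.stitch = nn.Linear(dim_a, dim_b)  # the only trainable part
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.head.parameters():
            p.requires_grad = False

    def forward(self, x):
        with torch.no_grad():
            z = self.encoder(x)                # representation in A's space
        return self.head(self.stitch(z))       # translated into B's space

# Only the stitching layer would be optimized, e.g.:
# optimizer = torch.optim.Adam(model.stitch.parameters(), lr=1e-3)
```

Restricting the trainable parameters to a linear map is the point of the design: stitched task performance then measures how much of one model's representation is linearly recoverable from the other's.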
The objective of the workshop is to discuss theoretical findings, empirical evidence, and practical applications of this phenomenon, drawing on cross-pollination between fields (ML, Neuroscience, Cognitive Science) to foster the exchange of ideas and encourage collaboration.
In short, our primary focus is to understand the underlying reasons, mechanisms, and extent of similarity in internal representations across distinct neural models, with the ultimate goal of unifying them into a single cohesive whole.