Trustworthy and Reliable Large-Scale Machine Learning Models | Workshop at ICLR 2023
Call for Papers
| Submission deadline | |
| Author notification | April 14, 2023, Anywhere on Earth (AoE) |
| Camera-ready deadline | April 21, 2023, Anywhere on Earth (AoE) |
| Submission server | https://openreview.net/group?id=ICLR.cc/2023/Workshop/RTML |
| Submission format | Submissions must be anonymized and use the LaTeX template here, which is modified from the ICLR conference paper template. Submissions should be at most 4 pages long, excluding references and appendices. |
We invite submissions on any aspect of trustworthy and reliable ML, especially for large-scale models. Topics include, but are not limited to:
- Novel methods for building more trustworthy large-scale machine learning models that prevent or alleviate negative societal impacts of existing ML methods
- New applications and settings where the robustness and trustworthiness of machine learning play an important role and how well existing techniques work under these settings
- Machine learning models with verifiable guarantees (such as robustness, fairness, and privacy guarantees) to build trustworthiness
- Privacy-preserving machine learning approaches for large-scale machine learning models
- Theoretical understanding of trustworthy machine learning
- Explainable and interpretable methods for large-scale AI
- Pre-training techniques to build more robust and trustworthy large-scale machine learning models
- Efficient fine-tuning methods to alleviate the trustworthiness gap for large-scale pre-trained models
- Machine unlearning to mitigate the privacy, toxicity, and bias issues within large-scale AI models
- Robust decision-making under uncertainty
- Futuristic concerns about trustworthy machine learning for foundation models
- Game-theoretic analysis for socially responsible machine learning systems
- Case studies and field research of the societal impacts of applying machine learning in mission-critical and human-centric tasks
We only consider submissions that have not been published in any peer-reviewed venue, including the ICLR 2023 main conference. The workshop is non-archival and will have no official proceedings. Based on the program committee's recommendations, each accepted paper will be allocated either a contributed talk or a poster presentation.
We offer a Best Paper Award ($1,000), a Best Paper Honorable Mention Award ($500), and several travel grants (complimentary ICLR conference registrations).