arXiv:2505.24261 (cs)
[Submitted on 30 May 2025 (v1), last revised 23 Oct 2025 (this version, v2)]
Title: Taming Hyperparameter Sensitivity in Data Attribution: Practical Selection Without Costly Retraining
Authors: Weiyi Wang, Junwei Deng, Yuzheng Hu, Shiyuan Zhang, Xirui Jiang, Runting Zhang, Han Zhao, Jiaqi W. Ma
Abstract: Data attribution methods, which quantify the influence of individual training data points on a machine learning model, have gained increasing popularity in data-centric applications in modern AI. Despite a recent surge of new methods developed in this space, the impact of hyperparameter tuning in these methods remains under-explored. In this work, we present the first large-scale empirical study of the hyperparameter sensitivity of common data attribution methods. Our results show that most methods are indeed sensitive to certain key hyperparameters. However, unlike typical machine learning algorithms, whose hyperparameters can be tuned using computationally cheap validation metrics, evaluating data attribution performance often requires retraining models on subsets of the training data, making such metrics prohibitively costly for hyperparameter tuning. This poses a critical open challenge for the practical application of data attribution methods. To address this challenge, we advocate for a better theoretical understanding of hyperparameter behavior to inform efficient tuning strategies. As a case study, we provide a theoretical analysis of the regularization term that is critical in many variants of influence function methods. Building on this analysis, we propose a lightweight procedure for selecting the regularization value without model retraining, and validate its effectiveness across a range of standard data attribution benchmarks. Overall, our study identifies a fundamental yet overlooked challenge in the practical application of data attribution, and highlights the importance of careful discussion of hyperparameter selection in future method development.
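The abstract does not spell out the regularization term; for context, influence-function methods typically compute a damped influence score of the standard form

\[ \mathcal{I}_\lambda(z_{\text{test}}, z_{\text{train}}) = -\nabla_\theta L(z_{\text{test}}, \hat{\theta})^\top \big(H_{\hat{\theta}} + \lambda I\big)^{-1} \nabla_\theta L(z_{\text{train}}, \hat{\theta}), \]

where \(\hat{\theta}\) denotes the trained model parameters, \(H_{\hat{\theta}}\) is the Hessian of the training loss at \(\hat{\theta}\), and \(\lambda > 0\) is the damping (regularization) hyperparameter. This is the common damped form, not necessarily the exact formulation analyzed in the paper; the paper's contribution is a retraining-free procedure for choosing \(\lambda\), whose details are given in the full text.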
| Subjects: | Machine Learning (cs.LG); Machine Learning (stat.ML) |
| Cite as: | arXiv:2505.24261 [cs.LG] (or arXiv:2505.24261v2 [cs.LG] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2505.24261 (arXiv-issued DOI via DataCite) |
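For citation, a BibTeX entry consistent with the metadata above (the entry key and field layout are illustrative; arXiv's exported entry may differ):

@misc{wang2025taming,
  title         = {Taming Hyperparameter Sensitivity in Data Attribution: Practical Selection Without Costly Retraining},
  author        = {Wang, Weiyi and Deng, Junwei and Hu, Yuzheng and Zhang, Shiyuan and Jiang, Xirui and Zhang, Runting and Zhao, Han and Ma, Jiaqi W.},
  year          = {2025},
  eprint        = {2505.24261},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  doi           = {10.48550/arXiv.2505.24261}
}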
Submission history
From: Weiyi Wang
[v1] Fri, 30 May 2025 06:33:56 UTC (113 KB)
[v2] Thu, 23 Oct 2025 07:07:24 UTC (124 KB)