Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation
Abstract

Partial domain adaptation, which assumes that the unknown target label space is a subset of the source label space, has attracted much attention in computer vision. Despite recent progress, existing methods often suffer from three key problems: negative transfer, and a lack of discriminability and domain invariance in the latent space. To alleviate these issues, we develop a novel 'Select, Label, and Mix' (SLM) framework that aims to learn discriminative invariant feature representations for partial domain adaptation. First, we present an efficient "select" module that automatically filters out outlier source samples to avoid negative transfer while aligning distributions across the two domains. Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space. Finally, the "mix" module applies domain mixup jointly with the other two modules to explore more intrinsic structures across domains, leading to a domain-invariant latent space for partial domain adaptation. Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed framework over state-of-the-art methods.
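To make the "mix" step concrete: domain mixup forms convex combinations of source and target samples (and of their labels, using pseudo-labels on the target side). The snippet below is only a minimal illustrative sketch of such a mixup step, not the authors' released code; the function name `domain_mixup`, the Beta parameter `alpha`, and the one-hot label handling are assumptions made for illustration.

```python
import numpy as np
import torch

def domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=2.0):
    """Generic domain mixup: convex combination of a source batch and a target batch.

    x_src, x_tgt: input tensors of identical shape (B, ...).
    y_src: one-hot ground-truth source labels, shape (B, C).
    y_tgt_pseudo: one-hot pseudo-labels for the target batch, shape (B, C).
    Returns the mixed inputs and the corresponding soft labels.
    """
    lam = float(np.random.beta(alpha, alpha))         # mixing ratio sampled from Beta(alpha, alpha)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt         # interpolate inputs (or features)
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo  # interpolate the label distributions
    return x_mix, y_mix

# Toy usage with random tensors (shapes chosen only for illustration).
x_s, x_t = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
y_s = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
y_t = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
x_mix, y_mix = domain_mixup(x_s, y_s, x_t, y_t)
```

In the full framework this mixup would be trained jointly with the select and label modules; the exact losses and scheduling should be taken from the paper and the released code linked below.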
Experimental Results Overview
Paper & Code
Aadarsh Sahoo, Rameswar Panda, Rogerio Feris, Kate Saenko, Abir Das. Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation. NeurIPS DistShift Workshop (NeurIPS-W), 2021. [Extended Draft Under Review] [PDF] [Supp] [Poster] [Presentation] [Slides] [Code]




