Wasserstein Adversarial Regularization for learning with label noise

Kilian Fatras*, Bharath Bhushan Damodaran, Sylvain Lobry, Remi Flamary, Devis Tuia, Nicolas Courty

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

4 Citations (Scopus)


Noisy labels often occur in vision datasets, especially when they are obtained from crowdsourcing or Web scraping. We propose a new regularization method, which enables learning robust classifiers in the presence of noisy data. To achieve this goal, we propose a new adversarial regularization scheme based on the Wasserstein distance. Using this distance allows taking into account specific relations between classes by leveraging the geometric properties of the label space. Our Wasserstein Adversarial Regularization (WAR) encodes a selective regularization, which promotes smoothness of the classifier between some classes, while preserving sufficient complexity of the decision boundary between others. We first discuss how and why adversarial regularization can be used in the context of label noise, and then show the effectiveness of our method on five datasets corrupted with noisy labels: in both benchmarks and real datasets, WAR outperforms the state-of-the-art competitors.
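The selective regularization described in the abstract can be illustrated with a small numerical sketch (this is not the authors' implementation; the class names, the ground cost, and the plain NumPy Sinkhorn solver below are all illustrative assumptions): the Wasserstein distance between a classifier's prediction on a clean input and on its adversarial perturbation is small when probability mass drifts between semantically close classes, and large when it drifts between dissimilar ones, so a Wasserstein-based penalty smooths the classifier selectively across class pairs.

```python
import numpy as np

def sinkhorn_ot_cost(p, q, C, reg=0.05, n_iter=500):
    """Entropic-regularized optimal transport cost between histograms p and q."""
    K = np.exp(-C / reg)              # Gibbs kernel derived from the ground cost
    u = np.ones_like(p)
    for _ in range(n_iter):           # Sinkhorn fixed-point iterations
        v = q / (K.T @ u)
        u = p / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate optimal transport plan
    return float(np.sum(P * C))

# Hypothetical 3-class problem: cat, dog, car.
# Ground cost on the label space: confusing cat with dog is cheap,
# confusing either with car is expensive.
C = np.array([[0.0, 0.2, 1.0],
              [0.2, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

p     = np.array([0.90, 0.08, 0.02])  # prediction on a clean input
q_sim = np.array([0.10, 0.88, 0.02])  # adversarial prediction drifting toward a similar class
q_dis = np.array([0.10, 0.08, 0.82])  # adversarial prediction drifting toward a dissimilar class

w_sim = sinkhorn_ot_cost(p, q_sim, C)
w_dis = sinkhorn_ot_cost(p, q_dis, C)
# The cat -> car drift is penalized much more heavily than cat -> dog,
# so the regularization is strong between dissimilar classes and mild
# between classes that are genuinely easy to confuse.
```

In the paper, such a penalty is applied between predictions on clean and adversarially perturbed inputs; the geometry of the label space enters entirely through the ground cost C.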

Original language: English
Pages (from-to): 7296-7306
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 10
Early online date: 7 Jul 2021
Publication status: Published - 1 Oct 2022
Externally published: Yes


Keywords

  • Adversarial regularization
  • Entropy
  • Label noise
  • Neural networks
  • Noise measurement
  • Optimal transport
  • Semantics
  • Smoothing methods
  • Task analysis
  • Training
  • Wasserstein distance

