
CSE5519 Advances in Computer Vision (Topic F: 2025: Representation Learning)

Can Generative Models Improve Self-Supervised Representation Learning?

link to the paper

Novelty in SSL with Generative Models

  • Use generative models to synthesize training data for self-supervised representation learning (SSL) models.
  • Use generative augmentation: generate new views from the original data with a generative model (e.g., by perturbing it with Gaussian noise or other augmentation techniques).
  • Combining standard augmentations such as flipping, cropping, and color jittering with generative augmentation further improves the performance of self-supervised representation learning models (see the sketch after this list).
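The following is a minimal sketch of how generative augmentation could be combined with standard augmentation to produce two views for a contrastive SSL objective. It is not the paper's implementation: `TinyAutoencoder`, `generative_augment`, `make_two_views`, and the `noise_std` value are hypothetical stand-ins; in practice the generator would be a pretrained model (e.g., a VAE or diffusion model) and the views would be fed to a SimCLR-style encoder.

```python
# Sketch of generative augmentation for contrastive SSL (illustrative only).
import torch
import torch.nn as nn
import torchvision.transforms as T


class TinyAutoencoder(nn.Module):
    """Placeholder generative model; a real pipeline would use a pretrained
    generator with encode()/decode() (e.g., a VAE or diffusion autoencoder)."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Conv2d(channels, 16, kernel_size=3, padding=1)
        self.decoder = nn.Conv2d(16, channels, kernel_size=3, padding=1)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.decoder(z))


# Standard augmentations mentioned in the note: cropping, flipping, color jitter.
standard_aug = T.Compose([
    T.RandomResizedCrop(32, scale=(0.5, 1.0), antialias=True),
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
])


def generative_augment(x: torch.Tensor, gen: nn.Module, noise_std: float = 0.1) -> torch.Tensor:
    """Generate a new view of x by perturbing its latent code with Gaussian
    noise and decoding back to image space (noise_std is an assumed value)."""
    with torch.no_grad():
        z = gen.encode(x)
        z_noisy = z + noise_std * torch.randn_like(z)
        return gen.decode(z_noisy)


def make_two_views(x: torch.Tensor, gen: nn.Module) -> tuple[torch.Tensor, torch.Tensor]:
    """Combine standard and generative augmentation into the two views
    consumed by a contrastive objective such as SimCLR."""
    view_a = standard_aug(x)                           # standard augmentation only
    view_b = standard_aug(generative_augment(x, gen))  # generative + standard
    return view_a, view_b


if __name__ == "__main__":
    gen = TinyAutoencoder()
    batch = torch.rand(8, 3, 32, 32)  # dummy images in [0, 1]
    v1, v2 = make_two_views(batch, gen)
    print(v1.shape, v2.shape)  # both (8, 3, 32, 32); pass to the SSL encoder
```

The design choice illustrated here is that the generative model does not sample images from scratch; it perturbs the original image in latent space, so the generated view stays semantically anchored to the real data while adding diversity beyond what standard augmentations provide.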

Tip

This paper shows that synthetic data produced by generative models can improve the performance of self-supervised representation learning models. The key ingredient seems to be generative augmentation, i.e., generating new views from the original data with a generative model rather than sampling entirely new data.

However, both representation learning and generative modeling suffer from hallucinations. I wonder whether these hallucinations would be reinforced, and whether the biases of the generative model would propagate into the representation learning model through generative augmentation.