# CSE5519 Advances in Computer Vision (Topic H: 2024: Safety, Robustness, and Evaluation of CV Models)

## Efficient Bias Mitigation Without Privileged Information
[link to the paper](https://arxiv.org/pdf/2409.17691)

TAB: Targeted Augmentations for Bias mitigation, a four-stage pipeline (see the code sketch after the list):
1. Loss-history embedding construction (train a helper model and record each training sample's loss at every epoch to form its loss-history embedding)
2. Loss-aware partitioning (partition the training dataset into groups based on the loss histories, and reweight each group's loss to balance the dataset)
3. Group-balanced dataset generation (build a new dataset by sampling from the groups according to the reweighting)
4. Robust model training (train the final model on the group-balanced dataset)
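
A minimal sketch of the four stages, assuming a standard PyTorch classification setup. The helper model, the k-means partitioning rule, and the fixed `num_groups` are illustrative assumptions for this sketch, not the paper's exact (hyperparameter-free) procedure.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans
from torch.utils.data import DataLoader, WeightedRandomSampler


def build_loss_history(helper_model, dataset, num_epochs, device="cpu"):
    """Stage 1: train a helper model, recording every sample's loss at each
    epoch to form an (N, num_epochs) loss-history matrix."""
    criterion = torch.nn.CrossEntropyLoss(reduction="none")
    optimizer = torch.optim.SGD(helper_model.parameters(), lr=1e-2, momentum=0.9)
    # Materialize (index, x, y) triples so each batch carries sample indices.
    indexed = [(i, x, y) for i, (x, y) in enumerate(dataset)]
    history = np.zeros((len(indexed), num_epochs), dtype=np.float32)
    helper_model.train()
    for epoch in range(num_epochs):
        for idx, x, y in DataLoader(indexed, batch_size=128, shuffle=True):
            losses = criterion(helper_model(x.to(device)), y.to(device))
            losses.mean().backward()
            optimizer.step()
            optimizer.zero_grad()
            history[idx.numpy(), epoch] = losses.detach().cpu().numpy()
    return history


def partition_by_loss_history(history, num_groups=2):
    """Stage 2: group samples by their loss trajectories (k-means is an
    illustrative choice of partitioning rule, not the paper's)."""
    return KMeans(n_clusters=num_groups, n_init=10).fit_predict(history)


def group_balanced_sampler(group_ids):
    """Stage 3: weight each sample inversely to its group's size, so sampling
    with replacement yields a group-balanced dataset."""
    counts = np.bincount(group_ids)
    weights = torch.as_tensor(1.0 / counts[group_ids], dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(group_ids),
                                 replacement=True)


# Stage 4: train the final (robust) model on the group-balanced stream, e.g.
# DataLoader(dataset, batch_size=128, sampler=group_balanced_sampler(gids)).
```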

> [!TIP]
>
> This paper is a good example of how to mitigate bias in a dataset without using privileged information (e.g., group labels).
>
> However, the mitigation relies heavily on the loss history, which can differ across model architectures, so the produced dataset may not generalize to other models.
>
> How can the bias-mitigation effect be evaluated across different models and datasets? (One common protocol is sketched below.)
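
One common protocol in this literature is worst-group accuracy on a held-out split whose ground-truth group labels are used only for evaluation; training each candidate architecture on the TAB-generated dataset and comparing this metric is one way to approach the question. A minimal sketch, assuming a loader that yields `(input, label, group_id)` batches:

```python
import torch


@torch.no_grad()
def worst_group_accuracy(model, loader, device="cpu"):
    """Accuracy of the weakest group; group labels come from the evaluation
    split only and are never used during training."""
    correct, total = {}, {}
    model.eval()
    for x, y, g in loader:
        preds = model(x.to(device)).argmax(dim=1).cpu()
        for group in g.unique().tolist():
            mask = g == group
            correct[group] = correct.get(group, 0) + (preds[mask] == y[mask]).sum().item()
            total[group] = total.get(group, 0) + mask.sum().item()
    per_group = {grp: correct[grp] / total[grp] for grp in total}
    return min(per_group.values()), per_group


# Compare mitigation across architectures on the same evaluation split:
# for name, model in candidate_models.items():
#     wga, per_group = worst_group_accuracy(model, eval_loader)
```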