# CSE5519 Advances in Computer Vision (Topic H: 2023: Safety, Robustness, and Evaluation of CV Models)

## How to backdoor diffusion models

[link to paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Chou_How_to_Backdoor_Diffusion_Models_CVPR_2023_paper.pdf)

> [!TIP]
>
> This is an interesting paper showing that it is possible to backdoor a diffusion model with high utility and high specificity at a low cost compared to pre-training (a rough sketch of the idea follows this note).
>
> I wonder how this technique could be used for AI watermarking, and whether such a watermark could still be reliably detected after other AI operations are applied to the output.
>
> The paper also uses many metrics and loss functions; I wonder what objective each one is meant to optimize, and I am looking for more clarification on this in the presentation.
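
To make the backdooring idea concrete, here is a minimal, hypothetical sketch of how a trigger could be implanted during DDPM-style fine-tuning by mixing a small fraction of poisoned samples into the usual noise-prediction objective. All names (`model`, `trigger`, `target`, `alpha_bar`, `poison_rate`) and the poisoned forward process shown are illustrative assumptions, not the paper's exact formulation or code.

```python
# Minimal, hypothetical sketch of a backdoored diffusion training step.
# Assumes a DDPM-style noise-prediction model called as model(x_t, t);
# the poisoned forward process below is an illustrative simplification,
# not the paper's exact derivation or loss.
import torch
import torch.nn.functional as F

def backdoor_training_loss(model, x0, trigger, target, alpha_bar, poison_rate=0.1):
    """One training step mixing clean and poisoned diffusion samples.

    model     -- noise-prediction network, called as model(x_t, t)
    x0        -- batch of clean images, shape (B, C, H, W)
    trigger   -- backdoor trigger pattern g, shape (C, H, W)
    target    -- backdoor target image y, shape (C, H, W)
    alpha_bar -- cumulative noise schedule, shape (T,)
    """
    B = x0.shape[0]
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)
    a = alpha_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)

    # Decide which samples in this batch carry the backdoor.
    poisoned = (torch.rand(B, device=x0.device) < poison_rate).view(B, 1, 1, 1)

    # Clean forward process: x_t = sqrt(a) * x0 + sqrt(1 - a) * eps.
    clean_xt = a.sqrt() * x0 + (1 - a).sqrt() * eps
    # Poisoned forward process: start the chain from the target y and
    # shift it by the trigger g, so the trigger acts as the "key":
    #   x_t = sqrt(a) * y + (1 - sqrt(a)) * g + sqrt(1 - a) * eps.
    poison_xt = a.sqrt() * target + (1 - a.sqrt()) * trigger + (1 - a).sqrt() * eps

    xt = torch.where(poisoned, poison_xt, clean_xt)
    # Both branches share the usual eps-prediction loss, so behavior on
    # clean inputs (utility) is largely preserved while trigger-stamped
    # chains are steered toward the target (specificity).
    return F.mse_loss(model(xt, t), eps)
```

The intent of this sketch is to show why the attack can be cheap: the clean branch leaves the standard objective untouched (preserving utility), the trigger-shifted branch teaches the model to denoise trigger-stamped chains toward the target (giving specificity), and only a small poison rate during fine-tuning is needed rather than training from scratch.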