Compositional Learning Journal Club
Join us this week for an in-depth discussion on data unlearning in deep generative models. We will explore recent breakthroughs and open challenges, focusing on how cutting-edge generative models handle unlearning tasks and where improvements can be made.
This Week's Presentation:
Title: Data Unlearning in Diffusion Models
Presenter: Aryan Komaei
Abstract:
Diffusion models have been shown to memorize and reproduce training data, raising legal and ethical concerns regarding data privacy and copyright compliance. While retraining these models from scratch to remove specific data is computationally costly, existing unlearning methods often rely on strong assumptions or exhibit instability. To address these limitations, we introduce a new family of loss functions called Subtracted Importance Sampled Scores (SISS). SISS leverages importance sampling to provide the first method for data unlearning in diffusion models with theoretical guarantees.
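For intuition ahead of the session, here is a minimal PyTorch sketch of the general idea behind a subtracted, importance-sampled diffusion loss: a single noised sample is drawn from a mixture of the keep and forget noising distributions, and one denoising error is reweighted by each component's importance ratio. This is an illustrative assumption-laden sketch, not the paper's exact SISS formulation; the function name `siss_style_loss`, the 50/50 mixture, and the ε-prediction parameterization are all assumptions.

```python
import math
import torch

def gaussian_logpdf(x_t, mean, var):
    # Log density of N(mean, var * I), summed over the feature dimension.
    d = x_t.shape[-1]
    return -0.5 * (((x_t - mean) ** 2).sum(dim=-1) / var
                   + d * math.log(2 * math.pi * var))

def siss_style_loss(model, x_keep, x_forget, alpha_bar_t, t, lam=0.5):
    """Illustrative subtracted, importance-weighted diffusion loss.

    Hypothetical sketch of the idea, not the paper's SISS loss:
    sample x_t from a mixture of the keep/forget noising kernels,
    then estimate (keep-loss minus forget-loss) with one sample.
    """
    var = 1.0 - alpha_bar_t
    mu_keep = math.sqrt(alpha_bar_t) * x_keep
    mu_forget = math.sqrt(alpha_bar_t) * x_forget

    # Sample x_t from the mixture q_mix = (1 - lam) q_keep + lam q_forget.
    pick_forget = torch.rand(x_keep.shape[0], 1) < lam
    mu = torch.where(pick_forget, mu_forget, mu_keep)
    x_t = mu + math.sqrt(var) * torch.randn_like(x_keep)

    # Importance weights of each mixture component at x_t.
    log_q_keep = gaussian_logpdf(x_t, mu_keep, var)
    log_q_forget = gaussian_logpdf(x_t, mu_forget, var)
    log_q_mix = torch.logsumexp(
        torch.stack([log_q_keep + math.log(1 - lam),
                     log_q_forget + math.log(lam)]), dim=0)
    w_keep = (log_q_keep - log_q_mix).exp()
    w_forget = (log_q_forget - log_q_mix).exp()

    # The noise each component implies for this x_t (since x_t = mu + sqrt(var) * eps).
    eps_keep_target = (x_t - mu_keep) / math.sqrt(var)
    eps_forget_target = (x_t - mu_forget) / math.sqrt(var)
    eps_pred = model(x_t, t)
    err_keep = ((eps_pred - eps_keep_target) ** 2).sum(dim=-1)
    err_forget = ((eps_pred - eps_forget_target) ** 2).sum(dim=-1)

    # Keep-term minus forget-term: descend on kept data, ascend on deleted data.
    return (w_keep * err_keep - w_forget * err_forget).mean()

# Toy usage on 2-D data with an eps-predictor that ignores the timestep:
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
model = lambda x_t, t: net(x_t)
x_keep, x_forget = torch.randn(8, 2), torch.randn(8, 2) + 3.0
loss = siss_style_loss(model, x_keep, x_forget, alpha_bar_t=0.7, t=None)
loss.backward()
```

The subtraction mirrors the naive "train on kept data, ascend on deleted data" objective, while sampling from the mixture lets one noised point estimate both terms, which is where the importance weights come in.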
Session Details:
- Date: Tuesday
- Time: 4:45–5:45 PM
- Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation!
BY RIML Lab
