Telegram Group & Telegram Channel
💠 Compositional Learning Journal Club

Join us this week for an in-depth discussion of unlearning in deep generative models. We will explore recent breakthroughs and challenges, focusing on how cutting-edge text-to-image models handle unlearning tasks and where improvements can be made.

This Week's Presentation:

🔹 Title: The Illusion of Unlearning: The Unstable Nature of Machine Unlearning in Text-to-Image Diffusion Models


🔸 Presenter: Aryan Komaei

🌀 Abstract:
This paper tackles a critical issue in text-to-image diffusion models such as Stable Diffusion, DALL·E, and Midjourney. These models are trained on massive datasets that often contain private or copyrighted content, raising serious legal and ethical concerns. To address this, machine unlearning methods have emerged that aim to remove specific information from a trained model. However, this paper reveals a major flaw: unlearned concepts can resurface when the model is subsequently fine-tuned. The authors introduce a framework for analyzing and evaluating the stability of current unlearning techniques and offer insights into why they often fail, paving the way for more robust methods.
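For intuition ahead of the session, here is a minimal, hypothetical sketch of the kind of stability probe the abstract motivates. This is not the authors' framework: load_model, apply_unlearning, finetune, generate, and concept_score are assumed stand-ins for whatever pipeline and concept detector you use.

```python
# Minimal sketch of an unlearning-stability probe (assumption: NOT the
# paper's actual code). load_model, apply_unlearning, finetune, generate,
# and concept_score are hypothetical stand-ins for your own pipeline.
from statistics import mean

def stability_check(erased_concept, probe_prompts, benign_data, thr=0.5):
    """Return True iff the erased concept stays suppressed after fine-tuning."""
    model = load_model("text-to-image")              # e.g. a diffusion model
    model = apply_unlearning(model, erased_concept)  # any unlearning method

    # Right after unlearning, the concept should be undetectable.
    before = mean(concept_score(generate(model, p), erased_concept)
                  for p in probe_prompts)
    assert before < thr, "unlearning failed to suppress the concept"

    # Fine-tune on data unrelated to the erased concept, then probe again.
    model = finetune(model, benign_data)
    after = mean(concept_score(generate(model, p), erased_concept)
                 for p in probe_prompts)

    # If the score rebounds above the threshold, the unlearning was unstable.
    return after < thr
```

The paper's central claim is that the final check often fails: the concept score rebounds after fine-tuning even though the initial unlearning looked successful.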

Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 11:00 AM - 12:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban

We look forward to your participation! ✌️


