Forwarded from Deep RL (Sp25)
🚀 Join Mark Ho’s Talk at Sharif University of Technology
🎙 Title: Making Sense of Intelligence, both Natural and Artificial
👨🏫 Speaker: Mark Ho (Assistant Professor at the Department of Psychology, New York University)
📅 Date: Thursday (May 22, 2025)
🕗 Time: 6:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/DchJa94PLCnTmHm28
@DeepRLCourse
We are looking for teammates for combined research projects in machine learning security and large language models (Trustworthy + LLMs). Recommendation letters from the supervising professors will be considered for team members, contingent on mutual satisfaction with the collaboration. To see related research, you can view the papers accepted at top-tier conferences at this link and here. Interested applicants should upload their résumé via this form within the next two days. If you have any questions, send a message to @ReliableAdmn.
scholar.google.ch
Mohammad Hossein Rohban
Associate Professor in Computer Engineering, Sharif University of Technology - Cited by 4,439 - Machine Learning - Statistics - Computational Biology
Forwarded from Deep RL (Sp25)
🚀 Join Pascal Poupart’s Talk at Sharif University of Technology
🎙 Title: Reinforcement Learning for Large Language Model Alignment and Inference (Abstract)
👨🏫 Speaker: Pascal Poupart (Professor at University of Waterloo and Canada CIFAR AI Chair at the Vector Institute) (Biography)
📅 Date: Thursday (May 29, 2025)
🕗 Time: 4:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/MzdwseJdP3fhFPW78
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Karl Friston’s Talk at Sharif University of Technology
🎙 Title: The Physics of Sentience (Abstract)
👨🏫 Speaker: Karl J. Friston (Professor at UCL Queen Square Institute of Neurology and Honorary Consultant at the National Hospital for Neurology and Neurosurgery) (Biography)
📅 Date: Friday (May 30, 2025)
🕗 Time: 3:00 PM Iran Time
💡 Sign Up Here: https://forms.gle/LFAtRyuMMa9VtXJN8
@DeepRLCourse
💠 Compositional Learning Journal Club
Join us this week for an in-depth discussion of data unlearning in cutting-edge deep generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle unlearning tasks and where improvements can be made.
✅ This Week's Presentation:
🔹 Title: Data Unlearning in Diffusion Models
🔸 Presenter: Aryan Komaei
🌀 Abstract:
Diffusion models have been shown to memorize and reproduce training data, raising legal and ethical concerns regarding data privacy and copyright compliance. Retraining these models from scratch to remove specific data is computationally costly, and existing unlearning methods often rely on strong assumptions or exhibit instability. To address these limitations, the authors introduce a new family of loss functions called Subtracted Importance Sampled Scores (SISS), which leverages importance sampling to provide the first data-unlearning method for diffusion models with theoretical guarantees.
Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 4:45 - 5:45 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️
arXiv.org
Data Unlearning in Diffusion Models
Recent work has shown that diffusion models memorize and reproduce training data examples. At the same time, large copyright lawsuits and legislation such as GDPR have highlighted the need for...
Forwarded from Deep RL (Sp25)
🚀 Join Peter Norvig’s Talk at Sharif University of Technology
🎙 Title: The Future of AI and Programming
👨🏫 Speaker: Peter Norvig (Distinguished Education Fellow at Stanford's Human-Centered Artificial Intelligence Institute and Researcher at Google) (Biography)
📅 Date: Thursday (May 29, 2025)
🕗 Time: 6:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/EZ4pXXNGbbBYECaj7
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Amy Zhang’s Talk at Sharif University of Technology
🎙 Title: Representations for Hierarchical Reinforcement Learning
👨🏫 Speaker: Amy Zhang (Assistant Professor at UT Austin)
📅 Date: Friday (May 30, 2025)
🕗 Time: 6:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/NuDzpbZvoQjmViKc7
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Adam White’s Talk at Sharif University of Technology
🎙 Title: Empirical Practices in RL and How to Make Them Better?
👨🏫 Speaker: Adam White (Canada CIFAR AI Chair, Director of Amii, PI of the RLAI Lab, and Assistant Professor in Computing Science at the University of Alberta)
📅 Date: Friday (May 30, 2025)
🕗 Time: 8:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/aF4SMB7bd8KZm4nw7
@DeepRLCourse
Forwarded from Sharif Artificial Intelligence Community :: SAIC
Forwarded from Deep RL (Sp25)
🚀 Join Anne Collins’s Talk at Sharif University of Technology
🎙 Title: Insights from Cognitive Science into Fast and Flexible Learning
👨🏫 Speaker: Anne Collins (University of California, Berkeley; Department of Psychology; Helen Wills Neuroscience Institute)
📅 Date: Monday (June 2, 2025)
🕗 Time: 6:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/K74JT6zuCLzQSU15A
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Martha White’s Talk at Sharif University of Technology
🎙 Title: Better Actor-Critic Algorithms for RL
👨🏫 Speaker: Martha White (Associate Professor at the University of Alberta, Canada CIFAR AI Chair, and Fellow of the Alberta Machine Intelligence Institute)
📅 Date: Thursday (June 5, 2025)
🕗 Time: 7:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/eNRZSccp2eHEUdZz8
@DeepRLCourse
💠 Compositional Learning Journal Club
Join us this week for an in-depth discussion of unlearning in cutting-edge deep generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle unlearning tasks and where improvements can be made.
✅ This Week's Presentation:
🔹 Title: Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation
🔸 Presenter: Aryan Komaei
🌀 Abstract:
Diffusion models can unintentionally generate harmful content when trained on unfiltered data. Previous methods tried to address this by adding loss or regularization terms to minimize changes in the model, but balancing content erasure and model stability remains difficult. This paper proposes a novel approach: identifying and preserving "adversarial concepts" — the concepts most affected by parameter changes — to ensure that content erasure has minimal impact on other elements. Their method outperforms current state-of-the-art techniques in maintaining content quality while removing unwanted information.
Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 4:45 - 5:45 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️
arXiv.org
Erasing Undesirable Concepts in Diffusion Models with Adversarial...
Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution...
Forwarded from Deep RL (Sp25)
🚀 Join Luis Serrano’s Talk at Sharif University of Technology
🎙 Title: The Role of Reinforcement Learning in Training and Fine-Tuning Large Language Models
👨🏫 Speaker: Luis Serrano (Founder and CEO of Serrano.Academy)
📅 Date: Thursday (June 5, 2025)
🕗 Time: 5:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/Pswm8oLMBfGyN16E8
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Marlos C. Machado’s Talk at Sharif University of Technology
🎙 Title: Representation-Driven Option Discovery in RL
👨🏫 Speaker: Marlos C. Machado (Assistant Professor at the University of Alberta, Alberta Machine Intelligence Institute Fellow, Canada CIFAR AI Chair through Amii, and a principal investigator in the Reinforcement Learning and Artificial Intelligence Group)
📅 Date: Friday (June 6, 2025)
🕗 Time: 6:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/11E9YdyMYWEPw4hLA
@DeepRLCourse
Forwarded from Deep RL (Sp25)
🚀 Join Christopher Amato’s Talk at Sharif University of Technology
🎙 Title: A Short Introduction to Cooperative Multi-Agent Reinforcement Learning
👨🏫 Speaker: Christopher Amato (Associate Professor in the Khoury College of Computer Sciences at Northeastern University)
📅 Date: Friday (June 6, 2025)
🕗 Time: 4:30 PM Iran Time
💡 Sign Up Here: https://forms.gle/69VpNRoULDrjTPKz7
@DeepRLCourse
Forwarded from انجمن هوش مصنوعی شریف :: SAIC
Even if you didn't make it to the hackathon finals, you can still receive financial support!
Forwarded from Mehrab Moradzadeh
RIML Lab
Hi everyone,
We held the hackathon and awarded our own 155-million-toman prize.
Mobarakeh Steel and the other sponsors, however, say that their funding of up to 2 billion tomans has nothing to do with our hackathon rankings; they will do their own judging. The product doesn't have to be an LLM, either: they will evaluate anything related to AI.
In short, even if you didn't participate in the hackathon, you can still fill out this form.
Here is a link to several real problems from companies; if you solve one, they will buy it from you or invest in it:
https://drive.google.com/drive/folders/1-C_jYAaLn4Ij4ZpeqIeROTRT1DE9qhuZ?usp=sharing
💠 Compositional Learning Journal Club
Join us this week for an in-depth discussion on Compositional Learning in the context of cutting-edge text-to-image generative models. We will explore recent breakthroughs and challenges, focusing on how these models handle compositional tasks and where improvements can be made.
🌟 This Week's Presentation:
📌 Title:
A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization
🎙️ Presenter: Amir Kasaei
🧠 Abstract:
This work presents an in-depth analysis of the causal structure in the text encoder of text-to-image (T2I) diffusion models, highlighting its role in introducing information bias and loss. While prior research has mainly addressed these issues during the denoising stage, this study focuses on the underexplored contribution of text embeddings—particularly in multi-object generation scenarios. The authors investigate how text embeddings influence the final image output and why models often favor the first-mentioned object, leading to imbalanced representations. To mitigate this, they propose a training-free text embedding balance optimization method that improves information balance in Stable Diffusion by 125.42%. Additionally, a new automatic evaluation metric is introduced, offering a more accurate assessment of information loss with an 81% concordance rate with human evaluations. This metric better captures object presence and accuracy compared to existing measures like CLIP-based text-image similarity scores.
📄 Paper:
A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization
Session Details:
- 📅 Date: Tuesday
- 🕒 Time: 5:00 - 6:00 PM
- 🌐 Location: Online at vc.sharif.edu/ch/rohban
We look forward to your participation! ✌️
arXiv.org
A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in...
This paper analyzes the impact of causal manner in the text encoder of text-to-image (T2I) diffusion models, which can lead to information bias and loss. Previous works have focused on addressing...
We have a few open RA positions on Generalization in Reinforcement Learning; those hired will work directly with me during this summer. This topic deals with the generalization of an RL agent beyond its training environments (e.g., see this paper and also this one as two instances). I am looking for highly motivated researchers, including B.Sc. or above students/alumni, who meet these requirements:
1. A strong theoretical background in Probability and Statistics.
2. Proficiency in Deep Learning and Reinforcement Learning (must have taken/audited these two courses).
3. At least 3 months of prior research experience.
4. Being self-reliant, self-motivated, and a quick learner.
5. On-site presence in the lab, reporting directly to me on a weekly basis in the lab meeting.
If you are eligible, please fill out this form by Wednesday, June 11, 2025, 9:00 AM Tehran time: Application Form.
arXiv.org
On the Importance of Exploration for Generalization in...
Existing approaches for improving generalization in deep reinforcement learning (RL) have mostly focused on representation learning, neglecting RL-specific aspects such as exploration. We...