Forwarded from C’s Random Collection
https://ai-2027.com “We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.” Whatever you make of the prediction, the interaction design on this page is excellent. #ai
Found a really useful Obsidian plugin: https://github.com/RyotaUshio/obsidian-pdf-plus

It uses backlinks to let you annotate and take notes on PDFs without ever leaving Obsidian, and the notes can be spread across multiple files. The design feels very Obsidian-native.

#obsidian
A really good and concise deep dive into RLHF in LLM post-training, Proximal Policy Optimization (PPO), and Group Relative Policy Optimization (GRPO)
https://yugeten.github.io/posts/2025/01/ppogrpo/
#llm
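
The core trick GRPO adds on top of PPO, as the post walks through, is to drop the critic and compute advantages relative to a group of responses sampled for the same prompt. Below is a minimal sketch of that group-relative advantage; tensor names and shapes are illustrative, not any specific library's API:

```python
# Group-relative advantages, the critic-free core of GRPO (sketch).
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar reward per sampled response.

    Each response is scored against the other responses sampled for the
    same prompt, replacing PPO's learned value model as the baseline.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each
print(grpo_advantages(torch.tensor([[1.0, 0.5, 0.2, 0.9],
                                    [0.1, 0.7, 0.3, 0.4]])))
```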
https://arxiv.org/abs/2305.18290 #llm #ai

Spent today doing a deep dive into DPO, and was once again reminded how much AI/ML research depends on solid mathematical foundations...

The original RLHF recipe trains a reward model on pairwise human preference data (which of A and B is better), then uses RL to train the main policy model, with an objective that maximizes the learned reward subject to a regularization term (PPO, for example, regularizes via the KL divergence between the new and old policy). The downsides: RL is notoriously finicky, and you also need a critic model to estimate expected reward, which makes the whole system quite complex.
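
As a concrete reference point, here is a minimal sketch of that KL-regularized policy objective (maximize the reward model's score minus a KL penalty to a reference policy). Names and shapes are illustrative; real PPO adds clipping, a value model, and advantage estimation on top of this:

```python
# KL-regularized RLHF policy objective (sketch, illustrative names only).
import torch

def rlhf_objective(policy_logprobs: torch.Tensor,
                   ref_logprobs: torch.Tensor,
                   rewards: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """policy_logprobs / ref_logprobs: (batch,) summed log-probs of each
    sampled response under the policy and the frozen reference model.
    rewards: (batch,) scalar scores from the reward model."""
    kl_estimate = policy_logprobs - ref_logprobs   # per-sample KL estimate
    objective = rewards - beta * kl_estimate       # KL-penalized reward
    return objective.mean()                        # maximize this with RL
```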

DPO's idea: observing that the RLHF objective is essentially minimizing a loss over a (latent) reward function, it uses a reparameterization and some further derivation to rewrite this as an objective that minimizes a loss over the policy directly. This bypasses the intermediate reward model, so the gradient update directly increases the policy's probability of generating the winner response and decreases its probability of generating the loser response, greatly simplifying the pipeline.
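
The resulting loss from the paper is simple enough to write in a few lines. A sketch with illustrative inputs; in practice the log-probs come from forward passes of the policy and the frozen reference model over the winner/loser responses:

```python
# DPO loss (sketch). beta plays the role of the RLHF KL coefficient.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Each input: (batch,) summed log-probs of the winner (chosen) or
    loser (rejected) response under the policy / reference model."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Implicit reward margin; raising it increases p(winner), lowers p(loser)
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()
```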

Further reading:
- KTO: goes a step further: no pairwise comparisons needed; preferences can be learned from simple upvotes/downvotes on individual examples.
- IPO: addresses DPO's tendency to overfit.
Boosting a friend! A very interesting person! 👇
Forwarded from C’s Random Collection
The new landing page design is live at https://deeptime.now 🎉 Deeptime is now in beta and all features are free. Sign up today! #DeeptimeNow