
"subliminal learning" does not even work for use cases like distilling o1 to R1 because they do not share a base model


Who's talking about that?

[Edit] My bad, I thought I was commenting on Anthropic's article


I replied to a comment by the Hacker News user pyman, which incorrectly claimed that distillation had been repackaged as "subliminal learning". So if you are asking who is talking about subliminal learning, which is unrelated to the topic of this article, the answer is: the Hacker News user pyman.


Ah, you are right. I was commenting on this article:

https://alignment.anthropic.com/2025/subliminal-learning/



