Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

My point was about attack angles. The original comment said, that for example, you could exfiltrate data with the right prompt attack.

To which the reply was "they'll just make the LLM able to better defend itself".

And my point was "the attackers will learn to build better prompts, too".



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: