
[flagged]


Not sure if this comment was written by a language model, or by a human pretending to be a language model.


I'm certain the account operator's parents think he is smart.


Yours don't?


See, texts like this, mostly on the subject but veering slightly off context, are exactly why it's irritating: arguing something not entirely relevant yet short of a complete strawman, debating tangents that were never brought up, and never adding anything of substance.


[flagged]


Grasping the nuance and implications of human replies is also something LLMs struggle with.


You're claiming there's a detectable "mental signature" but dodging the fact that this claim is inherently testable.

Either the signature is recognizable enough to put you in that "offended 22%," or it isn't. You can't invoke pattern recognition to justify your irritation, then hide behind "nuance" when the logical implication—that you should be able to spot it blind—gets pointed out.

Turns out humans are just as evasive as LLMs when pressed to back up what they actually said.


I am not arguing with bots, sweetie. Sorry, little model that almost could; I'm flagging your replies here.



