Hacker News: da_grift_shift's comments

The README clocks immediately as slop (the exact "AI tells" are left as an exercise for the reader), so clearly this Claude skill didn't do a good job.

What specifically is slop therein?

Hacker News: Where the hackers don't want to think about the code required to build the webshit nor the command-line nonsense used to write it, because the agents will take care of all that.

UWP, one of the 10+ frameworks used to run Windows 11's system components. Wonderful! Exemplary!

https://old.reddit.com/r/Windows11/comments/o2a0kp/there_are...


Those are design languages/styles, not frameworks. There are fewer frameworks, but it's still a handful: Win32, WPF, UWP, MAUI, etc. At least they're fairly consistent about using UWP for system UI, with older bits still using Win32.

Yeah no, it's against the guidelines. This kind of behavior should be called out as HN is a place for curious conversation between humans, not LLMs.

https://news.ycombinator.com/item?id=45077654


>I wanted to see if I could get 100% mathematical accuracy using local LLMs

So did you?

>[Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]

Generated comments are against the guidelines. In any case, I suggest you give your comments and expense filings a quick manual review before submitting! :-)

https://news.ycombinator.com/item?id=45077654


Right on the money. This should be the top comment IMO, and the fact that it isn't says a lot about modern HN...

>I hope


It makes sense when you consider that every part of this gimmick is rationalist brained.

The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)

As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:

https://x.com/simonw/status/2004649024830517344

https://x.com/simonw/status/2004764454266036453


Am I losing my mind, or are these people going out of their way to tarnish the very nice concept of altruism?

From way out here, it really appears like maybe the formula is:

Effective Altruism = guilt * (contrarianism ^ online)

I have only been paying slight attention, but is there anything redeemable going on over there? Genuine question.

You mentioned "rationalist" - can anyone clue me in to any of this?

edit: oh, https://en.wikipedia.org/wiki/Rationalist_community. Wow, my formula intuition seems almost dead on?


Apologies for replying to myself; I am fumbling in my ignorance here, and I'm genuinely curious whether anyone could share any other valuable/interesting things from this "movement." In every other case of people calling themselves "rationalists," it has been a huge yellow flag for me, as a fallibilist. :~]

I guess Dwarkesh Patel is part of that community? Well, his interviews are quite interesting, at least in the sense of giving a view into the world of AI researchers that I otherwise don't see, and his questions are often quite good. Also, after interviewing many leading researchers and riding the hype train, he did eventually say a few months ago, roughly, "yeah, the 'fast take-off' is not upon us," after trying to use the leading tools to make his own podcast. That's the kind of intellectual honesty that is greatly missing in this world. So, there is that? I am also a huge fan of his Sarah Paine pieces, at least her side of them.

Is there anything else intellectually honest and interesting coming out of that group?


I was mildly interested in the movement but found it weird as well. Some causes seem good (e.g. fighting malaria); others, like super AIs, just seem like geeks doing mental gymnastics over sci-fi topics.


I have had a similar experience. I think one big problem is that EA often uses a very low discount rate, meaning it treats theoretical people who won't be born for a century as having similar value to people who are alive today. In theory that's defensible, but in practice it means you can hand-wave at any large-scale issue and come up with massive numbers of lives saved.
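
To make that concrete (my own toy numbers, not anything EA actually publishes): under exponential discounting a life y years out is worth 1/(1+r)^y of a life today, so the entire conclusion turns on the choice of r:

    # Toy sketch, assumed rates: present value of a life y years out
    # under exponential discounting is 1 / (1 + r)**y.
    years = 100
    for r in (0.00, 0.01, 0.05):  # hypothetical annual discount rates
        value = 1 / (1 + r) ** years
        print(f"r = {r:.0%}: a life {years} years out counts as {value:.3f} of one today")

At r = 0% a speculative intervention that might touch trillions of future people swamps anything you can actually count today; at 5% the far future rounds to nothing. The whole argument hinges on that one parameter.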

My church has a shower ministry, where we open up our showers to people without homes so they can clean up. We also provide clothes and personal supplies. That's just about the opposite of what EA would say we should do, but we can count exactly how many showers we provide and supplies we distribute and how those numbers are trending. Shouting "AI and asteroids!" is more EA, but it eventually devolves into the behavior you describe.


And even if it's "small stuff", I do believe acts of kindness are contagious, and lead to other people doing good deeds.

If we want to rationalize this EA-style, we could say these small acts have an exponential effect: 1 person can inspire 2 to be more selfless. So it's better to start propagating this as soon as possible, to reach maximum selflessness ASAP :)
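
A back-of-the-envelope version of the joke (assuming clean doubling and no saturation, which is of course the hand-wavy part):

    # Joke math, assumed doubling: each newly selfless person inspires 2 more,
    # so the count of newly inspired people doubles every round.
    inspired = 1
    for round_number in range(1, 11):
        inspired *= 2
        print(f"round {round_number}: {inspired} newly selfless people")
    # round 10: 1024 -- big numbers from small acts, QED (EA-style)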


So you haven't seen the models (at the direction of the Effective Altruists at AI Digest/Sage) slopping out poverty-elimination proposals and spamming childcare groups, charities, and NGOs with them, then? Bullshit asymmetry principle and all that.


The LLMs are FAANG PMs.

