The README clocks immediately as slop (the exact "AI tells" are left as an exercise for the reader), so clearly this Claude skill didn't do a good job.
Hacker News: Where the hackers don't want to think about the code required to build the webshit nor the command-line nonsense used to write it, because the agents will take care of all that.
Those are design languages/styles, not frameworks. There are fewer frameworks, but it's still a handful: Win32, WPF, UWP, MAUI, etc. At least they're fairly consistent about using UWP for system UI, with older bits still on Win32.
>I wanted to see if I could get 100% mathematical accuracy using local LLMs
So did you?
>[Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]
Generated comments are against the guidelines. In any case, I suggest you give your comments and expense filings a quick manual review before submitting! :-)
It makes sense when you consider that every part of this gimmick is rationalist brained.
The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)
As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:
Apologies for replying to myself; I am fumbling in my ignorance here, and genuinely curious whether anyone could share any other valuable/interesting things from this "movement." In every other case of people calling themselves "rationalists," it has been a huge yellow flag for me, as a fallibilist. :~]
I guess Dwarkesh Patel is part of that community? Well, his interviews are quite interesting, at least as a window into a world of AI researchers that I otherwise don't see, and his questions are often quite good. And after interviewing many leading researchers and riding the hype train, he did eventually say a few months ago, ~"yeah, the 'fast take-off' is not upon us," after trying to use the leading tools to make his own podcast. That's intellectual honesty that is greatly missing in this world. So, there is that? I am also a huge fan of his Sarah Paine pieces, at least her side of them.
Is there anything else intellectually honest and interesting coming out of that group?
I was mildly interested in the movement but found it weird as well. Some causes seem good (e.g. fighting malaria); others, like super-AIs, just seem like geeks doing mental gymnastics over sci-fi topics.
I have had a similar experience. I think one big problem is that EA often uses a low discount rate, meaning they assign theoretical people who won't be born for a century nearly the same value as people who are alive today. In theory that's defensible, but in practice it means you can hand-wave at any large-scale issue and come up with massive numbers of lives saved.
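To make the discount-rate point concrete, here's a toy calculation (the figures and the helper function are invented for illustration, not taken from any actual EA model): with a near-zero discount rate, lives saved a century from now count almost at face value, which is how far-future causes end up with enormous headline numbers.

```python
def present_value(lives, years, rate):
    """Standard exponential discounting: value today of `lives`
    saved `years` in the future, at annual discount `rate`."""
    return lives / (1 + rate) ** years

# 1 million hypothetical lives saved 100 years from now:
print(present_value(1_000_000, 100, 0.05))   # 5% rate: ~7,600 life-equivalents today
print(present_value(1_000_000, 100, 0.001))  # 0.1% rate: ~905,000 life-equivalents today
```

Under a 5% rate the far-future intervention barely registers; drop the rate toward zero and it dwarfs almost any present-day program, which is the hand-waving lever described above.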
My church has a shower ministry, where we open up our showers to people without homes so they can clean up. We also provide clothes and personal supplies. That's just about the opposite of what EA would say we should do, but we can count exactly how many showers we provide and supplies we distribute and how those numbers are trending. Shouting "AI and asteroids!" is more EA, but it eventually devolves into the behavior you describe.
And even if it's "small stuff", I do believe acts of kindness are contagious, and lead to other people doing good deeds.
If we want to rationalize this EA-style, we could say these small acts have an exponential effect: 1 person can inspire 2 to be more selfless. So it's better to start propagating this as soon as possible, to reach maximum selflessness ASAP :)
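Playing the joke out in code (the doubling assumption and the world-population figure are purely illustrative): if each inspired person really did inspire two more per generation, coverage of the whole planet would take only a few dozen doublings.

```python
def generations_to_reach(population, start=1, factor=2):
    """Count doublings until the inspired group covers `population`."""
    inspired, gens = start, 0
    while inspired < population:
        inspired *= factor
        gens += 1
    return gens

print(generations_to_reach(8_000_000_000))  # 33 doublings from one person
```

Which is, of course, exactly the kind of back-of-the-envelope exponential that makes any cause look maximally urgent.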
So you haven't seen the models (directed by the Effective Altruists at AI Digest/Sage) slopping out poverty-elimination proposals and spamming childcare groups, charities, and NGOs with them, then? Bullshit asymmetry principle and all that.