Since when has raising taxes actually solved any major problem? We have enough taxes; the issue is corrupt politicians siphoning it off to themselves and their cronies.
Power is not the problem, because power exists regardless of who owns it.
We the people actually have a relatively high amount of power in our states and communities. We just don't use it. The real solution is to convince the masses to pay attention, which is harder today than it ever was.
Considering the seemingly increasing frequency of high-severity bugs at FAANG companies in the last year, I think perhaps the great getting greater is not actually the case.
I happen to think that's largely a self-delusion which nobody is immune to, no matter how smart you are (or think you are).
I've heard this from a few smart people whom I know really well. They strongly believe this; they also believe that most people are deluding themselves, but not them - they're in the actually-great group. When I pointed out the sloppiness of their LLM-assisted work, they would have none of it.
I'm specifically talking about experienced programmers who now let LLMs write the majority of their code.
All on my own, I hand-craft pretty good code, and I do it pretty fast. But one person is finite, and the amount of software to write is large.
If you add a second, skilled programmer, just having two people communicating imperfectly drops quality to 90% of the base.
If I add an LLM instead, it drops to maybe 80% of my base quality. But it's still not bad. I'm reading the diffs. There are tests and fancy property tests and even more documentation explaining constraints that Claude would otherwise miss.
So the question is: if I can get 2x the features at 80% of the quality, how does that 80% compare to what the engineering problem requires?
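To make the "property tests" part concrete, here's a minimal sketch of the kind of test I mean, assuming a Python codebase with pytest and the hypothesis library; normalize_whitespace is a hypothetical stand-in function, not something from my actual project:

    # Property-test sketch (pytest + hypothesis); normalize_whitespace is a
    # hypothetical example, not code from the project discussed above.
    from hypothesis import given, strategies as st

    def normalize_whitespace(s: str) -> str:
        # Collapse runs of whitespace into single spaces and trim the ends.
        return " ".join(s.split())

    @given(st.text())
    def test_idempotent(s):
        # Normalizing twice should give the same result as normalizing once.
        once = normalize_whitespace(s)
        assert normalize_whitespace(once) == once

    @given(st.text())
    def test_no_double_spaces(s):
        # The output should never contain two consecutive spaces.
        assert "  " not in normalize_whitespace(s)

The value is that hypothesis generates the tab/newline/odd-whitespace edge cases an LLM-written example test usually skips, which is exactly where that 20% quality drop tends to hide.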
I was somewhat surprised to find that the differentiator isn't being smart or not, but the ability to accurately assess when they know something.
From my own observations, the people I previously found to be sloppy in their thinking and in their work correlate almost perfectly with those who seem most eager to praise LLMs.
It's almost as if the ability to identify bullshit makes you critical of the ultimate bullshit generator.
This is very true. My biggest frustration is people who use LLMs to generate code and then don't use LLMs to refine that code. That is how you end up with slop. I would estimate that as an SDE I spend about 30% of my time reviewing and refining my own code, and I would encourage anyone operating a coding agent to still spend 30% of theirs figuring out how to improve the code before shipping.
Yeah, Mithril got this right over 10 years ago. Still good to see at least one big player finally catching up. React's state model has always been a pain to work with.
Same here. I tried Codex a few days ago for a very simple task (remove any references to X within this long text string) and it fumbled it pretty hard. Very strange.
Yeah, I'm in the same boat. Codex can't do this one task and constantly forgets what I've told it, and I'm reading these comments saying how great it is, to the point that I'm wondering if I'm the one taking the crazy pills. Maybe we're being A/B tested and don't know about it?
No, no one that's super-boosting the LLMs ever tells you what they are working on or gives any reasonable specifics about how and why it's beneficial. When someone does, it's a fairly narrow scope and typically in line with my experience.
They can save you some time by handling fairly complex but routine tasks that you can describe in plain language instead of coding. To get good results you really need a lot of underlying knowledge yourself; essentially, I think of it as a translator. I can describe a program in very good detail using normal language, and then the LLM can convert it to code with reasonable accuracy.
I haven't been able to depend on it to do anything remotely advanced. They all make up API endpoints or methods or fill in data with things that simply don't exist, but that's the nature of the model.
You misread me. I'm one of the people you're complaining about. Claude Code has been great in my experience, and no, I don't have a GitHub repo of generated code for you to look at and tell me it's trivial and unadvanced and that a child could do it.
What I was saying was to compare my experience with Claude Code vs. Codex with GPT-5. CC is better than Codex in my experience, contrary to GP's comment.
Your framing of "sex icky" is a common reductionist approach to remove all humanity from the topic and try to make it purely logical. But that's always been a ridiculous way to argue.
The human experience has never been pure reason. A picture of a naked person will have wildly different effects than a picture of a dog, even though you could technically say they're both "just pixels on a screen". Reductionism doesn't get an argument anywhere; it's too commonly an intellectually lazy defense of the vulgar.
Of course you prefer reductionism, because that fits your interest in doing nothing, rather than seeking a solution to the very real destructive consequences of the genre in question. That's what I mean by intellectually lazy.
Porn is way easier to define than obscenity, so I don't see that being a problem.
Not being 100% effective isn't backfiring. No law is ever absolutely effective. But making something illegal objectively makes it more difficult to obtain, and is certainly effective at reducing access, even if it's not 100%.
In many cases, bans can have unintended side effects which might make the means of acquiring/distributing/producing "banned X" far worse (aka the cure is worse than the disease).
At least in the case of South Korea, all porn is treated as equally illegal, so the country has a really high incidence of hidden cameras spying on places like women's bathrooms, because that footage is just as illegal as a scripted porn film.
You're the only one who asserted a percentage. So allow me to clarify: when I wrote that comment, I had no belief that a law needs to be 100% effective for it to be a useful law. I also believe there's a lot of room between 100% effective and "backfiring". I don't believe this is a binary situation; there's a spectrum (and it isn't one-dimensional).
I hope with this added context that my previous comment will make much more sense and you can interpret it closer to what I intended.
I'll just add: I don't think most people work in those absolutes, so I'd be wary of jumping to the extreme interpretation. People might interpret you as being disingenuous and using the "logical extreme" or "reductio ad absurdum" fallacy. But I'm pretty sure you're not doing that, because then I'd be grossly misinterpreting you, right?
I misread your "It always backfires" comment as saying that making something illegal always backfires, rather than that the desire to make things simple always backfires (note that "always" implies 100%). So now I see that all you're saying is "be careful", which is fine.
This is just how people speak. Sometimes qualifiers are critical; sometimes they are a bit of exaggeration. But "always" doesn't mean always, because only a Sith deals in absolutes.
> if one were to operate at that level then Facebook would be illegal.
Sounds great, where do I sign?
Sure, ban porn, but IMO ban social media first. Or at the very least, mandate educational materials on it. Kids grow up thinking it's important, and it ruins their lives. Brainrot content deadens their sensory inputs. The same thing needs to happen with AI; we seriously need some required education in these spaces.