Right, for small scripting, not for the majority of the app. All the backend interaction is in C++.
Like, Electron is fine, but it's orders of magnitude slower than it needs to be for the functionality it brings, which is just not ideal for many desktop applications or, especially, the shell itself.
Ultimately, people use Electron because they know HTML, CSS, and JS/TS. And, I guess, companies think engineers are too stupid to learn anything else, even though that's not the case. There is a strong argument for Electron, but not for Linux userland dev, where many developers already know Qt like the back of their hand.
As a longtime musician, I fervently believe in doing the best you can with the tools you have.
As a programmer with a philosophical bent, I have thought a lot about the implications and ethics of toolmaking.
I concluded long before genAI was available that it is absolutely possible to build tools that dehumanize the users and damage the world around them.
It seems to me that LLMs do that to an unprecedented degree.
Is it possible to use them to help you make worthwhile, human-focused output?
Sure, I'd accept that's possible.
Are the tools inherently inclined in the opposite direction?
It sure looks that way to me.
Should every tool be embraced and accepted?
I don't think so. In the limit, I'm relieved governments keep a monopoly on nuclear weapons.
The people saying "All AI is bad" may not be nuanced or careful in what they say, but in my experience, they've understood rightly that you can't get any of genAI's upsides without the overwhelming flood of horrific downsides, and they think that's a very bad tradeoff.
If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.
I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.
Regardless of when it became clear, we have a right and a duty to push back against this kind of pathological deployment of dangerous, poorly understood tools.
Ah, this was the comment to split hairs over the timeline, instead of discussing how AI safety should be regulated.
I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow, this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"
If he or his employees are actually exploring genuinely new, promising approaches to AGI, keeping them secret helps avoid a breakneck arms race like the one LLM vendors are currently engaged in.
Situations like that do not increase all participants' level of caution.
> Anyone who says or pretends to know it is or isn’t a dead end doesn’t know what they are talking about and are acting on a belief akin to religion.
> It’s clearly not a stochastic parrot now that we know it introspects. That is now for sure.
Your second claim here is kind of falling into that same religion-esque certitude.
From what I gathered, it seems like "introspection" as described in the paper may not be the same thing most humans mean when they describe our ability to introspect. They might be the same, but they might not.
I wouldn't even say the researchers have demonstrated that this "introspection" is definitely happening in the limited sense they've described.
They've given decent evidence, and it's shifted upwards my estimate that LLMs may be capable of something more than comprehensionless token prediction.
> Your second claim here is kind of falling into that same religion-esque certitude.
Nope, it’s not. We have a logical, causal test of introspection. By definition, introspection is not stochastic parroting. If you disagree, then it is a linguistic terminology issue in which you disagree about the general definition of a stochastic parrot.
> From what I gathered, it seems like "introspection" as described in the paper may not be the same thing most humans mean when they describe our ability to introspect. They might be the same, but they might not.
Doesn’t need to be the same as what humans do. What it did show is self-awareness of its own internal thought process, and that breaks it out of the definition of a stochastic parrot. The criterion is not human-level intelligence but introspection, which is a much lower bar.
> They've given decent evidence, and it's shifted upwards my estimate that LLMs may be capable of something more than comprehensionless token prediction.
This is causal evidence, and it is already beyond all statistical thresholds, since they can trigger it at will. The evidence is stronger than the double-blind medical experiments used to verify our entire medical industry. By that logic, this result is more reliable than modern medicine.
The result doesn’t say that LLMs can reliably introspect on demand, but it does say, with utmost reliability, that LLMs can introspect, and the evidence is extremely reproducible.
> This is causal evidence and already beyond all statistical thresholds as they can trigger this at will.
Their post says:
> Even using our best injection protocol, Claude Opus 4.1 only demonstrated this kind of awareness about 20% of the time.
That's not remotely close to "at will".
As I already said, this does incline me towards believing LLMs can be in some sense aware of their own mental state. It's certainly evidence.
Your certitude that it's what's happening, when the researchers' best efforts only yielded a twenty percent success rate, seems overconfident to me.
If they could in fact produce this at will, then my confidence would be much higher that they've shown LLMs can be self-aware.
...though we still wouldn't have a way to tell when they actually are aware of their internal state, because certainly sometimes they appear not to be.
>>Even using our best injection protocol, Claude Opus 4.1 only demonstrated this kind of awareness about 20% of the time.
>That’s not remotely close to “at will”.
You are misunderstanding what “at will” means in this context. The researchers can cause the phenomenon through a specific class of prompts. The fact that it does not occur on every invocation does not mean it is random; it means the system is not deterministic in activation, not that the mechanism is absent. When you can deliberately trigger a result through controlled input, you have causation. If you can do so repeatedly with significant frequency, you have reliability. Those are the two pillars of causal inference. You are confusing reliability with constancy. No biological process operates with one hundred percent constancy either, yet we do not doubt their causal structure.
>Your certitude that it’s what’s happening, when the researchers’ best efforts only yielded a twenty percent success rate, seems overconfident to me.
That is not certitude without reason, it is certitude grounded in reproducibility. The bar for causal evidence in psychology, medicine, and even particle physics is nowhere near one hundred percent. The Higgs boson was announced at five sigma, roughly one in three and a half million odds of coincidence, not because it appeared every time, but because the pattern was statistically irrefutable. The same logic applies here. A stochastic parrot cannot self report internal reasoning chains contingent on its own cognitive state under a controlled injection protocol. Yet this was observed. The difference is categorical, not probabilistic.
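For what it's worth, the five-sigma figure quoted above does check out numerically. A quick illustrative calculation (Python with scipy, just a sanity check of the one-tailed normal tail, not anything from the paper):

    from scipy.stats import norm

    # One-tailed probability of a fluctuation beyond 5 standard deviations
    p = norm.sf(5)
    print(p)             # ~2.87e-07
    print(round(1 / p))  # ~3,488,556 -- i.e. roughly 1 in 3.5 million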
>…though we still wouldn’t have a way to tell when they actually are aware of their internal state, because certainly sometimes they appear not to be.
That is a red herring. By that metric humans also fail the test of introspection since we are frequently unaware of our own biases, misattributions, and memory confabulations. Introspection has never meant omniscience of self; it means the presence of a self model that can be referenced internally. The data demonstrates precisely that: a model referring to its own hidden reasoning layer. That is introspection by every operational definition used in cognitive science.
The reason you think the conclusion sounds overconfident is because you are using “introspection” in a vague colloquial sense while the paper defines it operationally and tests it causally. Once you align definitions, the result follows deductively. What you are calling “caution” is really a refusal to update your priors when the evidence now directly contradicts the old narrative.