I feel this framing in general says more about our attitudes to nuclear weapons than it does about chatbots. The 'Peace Dividend' era, which is rapidly drawing to a close, has made people careless when they talk about the magnitude of effects a nuclear war would have.
AI can be misused, but it can't be misused to the point that an enormously depopulated humanity is forced back into subsistence agriculture to survive, spending centuries if not millennia getting back to where we are now.
These are not independent hypotheses. If (b) is true, it decreases the probability that (a) is true, and vice versa.
The dependency here is that if Sam Altman is indeed a con man, it is reasonable to assume that he has in fact conned many people, who then report an over-inflated metric of the usefulness of the stuff they just bought (people don't like to believe they were conned; cognitive dissonance).
In other words, if Sam Altman is indeed a con man, it is very likely that most metrics of the usefulness of his product are heavily biased.
Yes, that’s the point I’m making. In the scenario you’re describing, that would make Sam Altman a con man. Alternatively, he could simply be delusional and/or stupid. But given his history of deceit with Loopt and Worldcoin, there is precedent for the former.
It would make every marketing department and basically every startup founder conmen too.
While I don’t completely disagree with that framing it’s not really helpful.
Slogans are not promises, they are vague feelings. In the case of Coca-Cola, I know someone who might literally agree with the happiness part of it (though I certainly wouldn’t).
The promises of Theranos and LLMs are concrete measurable things we can evaluate and report where they succeed, fall short, or are lies.
Sure but equating Theranos and LLMs seems a bit disingenuous.
Theranos was an outright scam that never produced any results, whereas LLMs might not (yet?) have lived up to all the marketing promises (you might call them slogans?) made for them, but they have definitely provided some real, measurable value.
I quite like IndexTTS2 personally, it does voice cloning and also lets you modulate emotion manually through emotion vectors which I've found quite a powerful tool. It's not necessarily something everyone needs, but it's really cool technology in my opinion.
It's been particularly useful for a model orchestration project I've been working on. I have an external emotion classification model driving both the LLM's persona and the TTS output so it stays relatively consistent. The affect system also influences which memories are retrieved; it's more likely to retrieve 'memories' created in the current affect state. IndexTTS2 was pretty much the only TTS that gives the level of control I felt was necessary.
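The affect-gated retrieval described above can be sketched roughly as follows. This is a minimal Python illustration of the idea, not the author's actual (Scala/Cats Effect) implementation; the memory structure, the 2-d affect vectors, and the `affect_weight` parameter are all hypothetical. Each memory stores both a semantic embedding and the affect state it was created under, and the retrieval score blends semantic similarity with affect similarity:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors, 0.0 if either is zero.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(memories, query_emb, current_affect, affect_weight=0.3, k=2):
    """Rank memories by semantic similarity, biased toward memories
    that were stored under an affect state similar to the current one."""
    def score(m):
        semantic = cosine(m["embedding"], query_emb)
        affect = cosine(m["affect"], current_affect)
        return (1 - affect_weight) * semantic + affect_weight * affect
    return sorted(memories, key=score, reverse=True)[:k]

# Two memories that are semantically identical but were formed in
# different (valence, arousal) states: the one matching the current
# affect state should rank first.
memories = [
    {"id": "calm",  "embedding": [1.0, 0.0], "affect": [0.9, 0.1]},
    {"id": "tense", "embedding": [1.0, 0.0], "affect": [0.1, 0.9]},
]
top = retrieve(memories, query_emb=[1.0, 0.0], current_affect=[0.9, 0.1])
```

The weighting means a memory is never retrieved on affect alone; affect similarity only tips the balance between semantically comparable candidates.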
Personally I'm not a fan of terse writing; if something's worth saying at all, it's worth using suitably expressive language to describe it, and being short and cold with people isn't a good form of interpersonal communication in my view. Pleasantries are important for establishing mutual respect; if they're considered the baseline of politeness in a particular culture, then it's overtly disrespectful to forgo them with strangers. Terseness is efficient for the writer, certainly, but it's not necessarily so for the reader.
Written like you're on one side of the cultural barrier and think that you have to be somehow naturally correct because that's what's natural to you. To others, that attitude is just arrogant and self-centered. Why should one particular culture dictate the behavior of everyone, and especially why should it be your culture?
What you call "establishing mutual respect" is just "insincere and shallow" to others. I do not believe for a second that a grocery store cashier wants to know how my day has been.
That's not what I mean, I don't like corpo-speak either. I mean just treating people like they're human beings, neither with affected shortness nor affected warmth. I really don't like the common notion that you have to be cold and short with people to be a good engineer, it makes the culture considerably less pleasant and more abrasive than it needs to be in my view.
I could just as well turn that around and say why should we all adopt your preference of unpleasantly curt communication? Is that not also an imposition of someone else's culture?
What if short isn't "cold" at all? That's a value you're projecting onto it.
I understand there are cultures that value flowery speech more than mine. I'm asking you to stop using emotionally loaded words to describe how other people behave.
Nah I disagree, tool calling isn't that difficult. I've got my own Cats Effect based model orchestration project I'm working on, and while it's not 100% yet I can do web browse, web search, memory search (this one is cool), and others on my own hardware.
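To give a sense of why tool calling isn't that difficult: the core of it is just parsing the model's structured output and dispatching to a registry of functions. The sketch below is a minimal Python illustration of that pattern, not the author's Cats Effect code; the `TOOLS` registry and the toy tools are hypothetical stand-ins for things like web search or memory search:

```python
import json

# Hypothetical tool registry: name -> callable taking a dict of
# arguments. Real tools (web browse, web search, memory search)
# would wrap I/O here instead of pure functions.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def run_tool_call(raw):
    """Parse a model-emitted tool call (a JSON object with 'name'
    and 'arguments') and dispatch it to the matching function."""
    call = json.loads(raw)
    fn = TOOLS.get(call["name"])
    if fn is None:
        return {"error": f"unknown tool: {call['name']}"}
    return {"result": fn(call["arguments"])}
```

In a full loop you'd feed the returned dict back to the model as a tool-result message and let it continue; the dispatch itself stays this simple.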
Yeah I do think if your trust in state institutions is gone for whatever reason (such as living in a dictatorship), it'd be absolute madness to carry around an electronic snitch with you. I'm not sure what I would rely on in those circumstances, but it certainly wouldn't be smartphones. Personally I'd want to rely on in-person communication as much as possible.
I'd go even further. Even if you trust it now, can you trust it in 5 years? How much of your data do apps, companies, and mobile providers hold onto? The real answer is that you don't know. So if your phone is a super-precise GPS that you can't turn off (e.g. Android): were you near a crime scene by chance? How about at a big protest two years before the political winds shifted? Who knows you were there? You can't know for sure.
I like sailing partly for a similar reason: beyond a certain range, the only people who can bother you electronically are other people at sea (and you actually want to listen to them).
Swearing is still a good heuristic, I think. The American corporate world remains rather prissy about swearing, so if the post sounds like a hairy docker after 12 pints, then it's probably not an LLM.
I know the very roundabout you mean without having to look it up, I used to cycle in Oxford very often and while I’m sure there’s a tendency on the internet to underrate locals’ stories as hyperbolic, it really can’t be stressed enough how hazardous this particular feature of civil engineering is.