The people who want GIMP to change its name are the people who use GIMP, love GIMP, and have difficulties using GIMP in contexts like education or employment because of its name. It's a simple fact that the name "GIMP" caused problems. It's a shame.
But there's not much use in changing the name now; you can't get that lost time back. GIMP isn't the only FOSS image editor available anymore. There are myriad Photoshop competitors in the subscription, freeware, and FOSS spaces.
I love GIMP and I'm still using it because I've got decades of muscle memory. But I also love Krita and if I need to edit something on a work computer, I'll just use that.
I think it might be muscle memory. I started using GIMP in 2005 or so before the single window mode, and my muscle memory is tailor-fit to it. It felt like an extension of myself.
With GIMP 3, there are a lot of improvements! But it also breaks my muscle memory a lot. GIMP 3 is objectively better, but I find myself opening 2.10 regularly.
What is the name for this fallacy, "We should all start being adults"? Everyone who is an adult can understand that names matter, especially ones intentionally chosen to cause offense or a ruckus.
First, it doesn't matter how much you or I or the commentator above us changes to "be adults". Only the saddest and most lonesome people will be the sole decisionmaker in every context they exist in.
Sometimes, you exist in a context where you need someone else's permission to use software. This is often the case for employed people.
Second, other adults will disagree with you. It doesn't make them any less "adults".
On the other hand, someone would not be unreasonable to consider you childish if you're so hung up on your software opinions that you'll disparage everyone around you in defense of your obscure preferred image editing program. Could you imagine implying to a room of peers that you're the only adult?
It's wonderful for you that GIMP's name has never been a problem for you. But there are about 8 billion people who are not you, and a few dozen of them are fellow GIMP users.
I've been using GIMP for most of its existence, but I've faced difficulties trying to use it in school and work. Where I live, "gimp" is either a slur for someone with a motor disability or the name of a form-fitting leather sex torture-fetish full-body garment.
(For what it's worth, the G was added in order to reference the form-fitting leather sex torture-fetish full-body garment in Pulp Fiction. The program was called 'IMP' beforehand.)
There are over 7,000 languages in the world, around half of them dying or already dead due to linguistic domination, in large part by English, each with its own set of culturally sensitive words.
To follow the above mode of reasoning without advantaging one language or a few, you would have to change an enormous number of words in basically all languages. This is obviously not feasible.
If GIMP were a dirty word in a Native American language, or a native African language, there would be no debate. That we are debating this at all is because English has privileged status due to the Anglo-Saxon hegemony.
Hence, you are expecting us to give special, privileged treatment to the linguistic sensitivities of your dominant culture. That is unfair, especially historically, because the hegemony was achieved through mass land theft and many genocides, which we shouldn't reward by allowing further claims.
So yes, it should be expected from an adult anglophone to tolerate the existence of sordophones, words that are dirty in their dialect but not in others, especially in an international, multilingual setting. This is what it means to abstain from linguistic imperialism. This is what it means to tolerate and respect other cultures.
And to enforce tolerance, indeed it may be needed to view those who fail at this as childish.
I feel somewhat sorry to say this, but I need to be assertive here.
> And to enforce tolerance, indeed it may be needed to view those who fail at this as childish.
No, it's not necessary to denigrate other people under the belief you can police others by proxy.
"Is this derogatory or offensive?" is a basic localization question that is constantly asked in many languages. Yes, including Arabic.
I generally agree about the evils of linguistic imperialism. But I'm describing the world I live in, not the one I want to create.
But that's beside the point. "Linguistic imperialism" is the wrong lens to use here to defend the name. GIMP is not a sordophone, it's the opposite.
GIMP was named by American-born English speakers with the intent to have an edgy name. GIMP was chosen in reference to the full-body sex garment, because they were college kids and that's funny when you're 23.
The intent was offense. It worked well. It's no surprise that GIMP is only well-adopted where the word doesn't carry its offensive meaning.
Gonna have to say this a bunch around here, but yours is yet ANOTHER comment shooting the messenger. You (theoretically) are championing an idea of freedom in language or something like that.
Look, people, this is PR. The author wondered out loud "why isn't he more recognized" and a reasonable answer is that "People like me, in America, who love free software and try to get people using it, run into trouble that could have been avoided if the name was changed."
You want your lesson out there on freedom of language, fine, that's what you all got. Just be honest about what you may have missed -- which I genuinely believe could have been a world in which Adobe was nowhere near as annoyingly powerful as it is (or at least had been).
It's my experience that every professional and educational setting I've tried to use GIMP in has seen the name as a roadblock and swiftly rejected it.
It's really a shame they were steadfast in that one baffling decision. It was so self-destructive to the project. I wonder what would have happened if they had stayed with their original name, IMP, or found a different Pulp Fiction reference to make.
Switching desktops on MacOS is a >1 second long animation that blocks input which can't be disabled. It can only be replaced with a fade in/out which is just as long.
> no latency from brain to action is the greatest design you can possibly have. We want to feel one with the machine.
But... I used Windows growing up before switching to Linux, and I've been using a Macbook in recent years. Both Windows and Linux can be configured to run with no animation lag, but AFAIK this is just not possible in MacOS. I can't imagine doing anything serious on MacOS with animation lag completely interrupting my train of thought or flow state.
I'm no Windows fan, but at least circa 2019, I know Windows 10 could be configured to be similarly snappy and free of laggy animations.
The greatest sin in MacOS is the immense lag when switching desktops ("Spaces"). It's a baffling design decision, I can't believe it's intentional.
IMO terminals are still the fastest way to do a lot of things on a phone, but it's a much better experience on Androids with keyboards for the purpose.
And even on an iPhone, it's just fine. Python works really well as a shell for quick calculations, and you can use a script with the -i flag to make it more accessible.
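For instance, a minimal sketch of such a script (the file name and helper functions are my own invention, just to illustrate the `-i` workflow):

```python
# calc.py -- hypothetical helpers for quick one-off calculations.
# Launch with `python3 -i calc.py`: the -i flag runs the script and then
# drops you into the interactive prompt with everything below in scope.
from math import sqrt, pi, log  # common tools, preloaded so you never retype them

def pct(part, whole):
    """What percent of `whole` is `part`?"""
    return 100.0 * part / whole

def tip(bill, rate=0.2):
    """Bill plus a tip at `rate` (default 20%)."""
    return bill * (1.0 + rate)
```

Then typing `pct(3, 8)` at the prompt answers immediately, with no imports or setup at the point of use.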
> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,
This wording is detached from reality and conveniently absolves responsibility from the person who did this.
There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.
This already happens every single time there is a security breach and private information is lost.
We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.
What else would you see them do or say beyond this canned response? The reason I am asking is because people almost always bring up how dissatisfied they are with such apologies, yet I’ve never seen a good alternative that someone would be happy with. I don’t work in PR or anything, just curious if there is a better way.
Unfortunately, the market seems to have produced horrors by way of naturally thinking agents, instead. I wish that, for all these years of prehistoric wretchedness, we would have had AI to blame. Many more years in the muck, it seems.
Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.
I agree with you for a company like Tesla: not only are there examples of self-driving crashes, but even the door handles would stop working when the power was cut, trapping people inside burning vehicles... Tesla doesn't care.
Meanwhile, Waymo has never been at fault for a collision, afaik. You are more likely to be hurt by an at-fault Uber driver than by a Waymo.
And it's incredibly easy now. Just blame the Soul.md, or say you were cycling through many models, so maybe one of those went off the rails. The real damage is that most of us know AI can go rogue, but if someone is pulling the strings behind the scenes, most people will be like "oh silly AI, anyways..."
It seems like the OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people are thinking these agents have less "bias" since it's AI, but most are being heavily steered by their users.
It’s funny to think that, like AI, people take actions and use corporations as a shield (legal shield, personal reputation shield, personal liability shield).
Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.
It's surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also "helps" a lot.
A bit over five years ago, someone struck and killed my friend in a crosswalk. He was a fellow PhD student. It was on a road with a 30mph limit but where people regularly speed to 50+mph.
He was an international student from Vietnam. His family woke up one day, got a phone call, and learned he was killed. I guess there was nobody to press charges.
She never faced any accountability for the 'accident'. She gets to live her life, and she now runs a puppetry education program for children. Her name even seems to have been scrubbed from most of the articles about her killing my friend.
So, I think about this regularly.
I was a cyclist at the time so I was aware of how common this injustice was, but that was the first time it hit so close to home. I moved into a large city and every cyclist I've met here (every!) has been hit by a car, and the car driver effectively got only a slap on the wrist. It's just so common.
> Her name even seems to have been scrubbed from most of the articles about her killing my friend.
I'm somewhat surprised there were even articles? Are road fatalities uncommon enough in the US that everyone gets written up? Or was this a special enough one?
Not sure if this is true for every university, but when someone in the community dies, especially a student, there's usually at least an article about it.
Well, the important concept missing there, the one that makes everything sort of make sense, is due diligence.
If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.
We just need to figure out a due-diligence framework for running bots that makes sense. Right now that's hard to do, because agentic bots that don't completely suck are only a few months old.
To be fair, one doesn't need AI to attempt to avoid responsibility and accept undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in abstract), with or without AI.
Rice's Theorem says you cannot predict or control the effects of nearly any program on your computer; for example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-Qaeda. Yet you're running a web browser on potentially attacker-supplied input right now.
I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations which is already smarter than most humans in most ways and which we have no idea how to ask what it really wants.
But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make our own carry adequate insurance for the catastrophes or something).
In practice, most programs can be predicted within reasonable bounds quite easily. And you can contain the external effects of most programs quite easily. Rice's theorem doesn't stop you from keeping a program off the Internet, or running it in a VM.
Your later comparisons are nonsense. We're not talking about babies, we're talking about adults who should know better assembling high leverage tools specifically to interact with other people's lives. If they were even running with oversight that would be something, but the operators are just letting them do whatever. But your implication that agents are "unsafe at any speed" leads to the same conclusion: do not run the program.
I guess today's kids don't know this; but "Unsafe at Any Speed" was the title of a 1965 book that spurred the creation of the Department of Transportation, and changed the automotive industry.
The point is that, if you're designing and selling a product which a large minority of people are going to use in a way that harms themselves and others, pointing at the users and calling them irresponsible doesn't actually help anybody. The people designing and selling the products actually need to make them safer. And if they're not going to do that voluntarily (they're not), we need the government to create insurance requirements, safety bonds, and whatever other incentive gradients are required to make the producers build safe products.
I caught the reference. To the extent it applies at all, I obviously think it reinforces my point. But badly engineered cars, members of an existing category that we know can be done tolerably well, are a very strained analogy to brand new software deployed by people who understand how it works and therefore the risks they are taking.
And actually, the deployer has a lot more control over the havoc the software can cause than the creator. They choose what credentials to give it, whether and how closely to monitor it, any other guardrails, etc. If the operator of the bot discussed in OP had intervened soon after it went off the rails, we wouldn't be here.
So sure, I would also tell the makers of this software to knock it off. Don't put out products that are the network equivalent of a chainsaw on a roomba, no matter how many cool tiktoks it creates. But when I'm talking to people running claws or whatever, they no longer have the excuse of ignorance. So the advice is still: Do not run the program.
Blaming the person running the program is the right thing to do and it's the only thing to do.
This is a really strained equivalence. I can't know for certain that the sun won't fall out of the sky if I drink a second cup of coffee; the "laws of physics" are just descriptions based on observations, after all. But it's so hilariously unlikely that we can call it impossible.
Similarly, we can have some nuance here. Someone running a program with the intention of it generating posts on the internet is obviously responsible for what it generates.
Rice's Theorem does not say this. You can absolutely have 100%-confident knowledge of what a program will not do; it just means you will also have false positives. You cannot have a static analysis that is both sound and complete for a nontrivial program property, but you can have one that is sound, or one that is complete.
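A sound-but-incomplete analysis is easy to sketch. Here is a toy illustration of my own (not a real tool): it answers "does this source ever reference a given name?" and gives up on anything syntactically suspicious. A real sound analysis for Python would need far more care around dynamic features, which is why the checker below conservatively rejects `getattr`, `eval`, and friends outright.

```python
import ast

# Names whose presence makes static reasoning unreliable; seeing any of
# them forces a conservative "can't prove it safe" verdict.
DYNAMIC = {"getattr", "eval", "exec", "__import__", "globals", "vars"}

def provably_never_references(src: str, name: str) -> bool:
    """Sound but incomplete: a True verdict is trustworthy (the source
    never references `name`); a False verdict may be a false positive."""
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name) and node.id in DYNAMIC | {name}:
            return False  # the name itself, or a dynamic escape hatch
        if isinstance(node, ast.Attribute) and node.attr == name:
            return False  # e.g. os.system when name == "system"
    return True
```

Note that `provably_never_references("system = 5", "system")` returns False even though that code is harmless: that false positive is the price of soundness, and Rice's theorem only forbids eliminating such errors entirely, not having a decidable conservative approximation.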
More like a dog: a gun is not autonomous, and the claim here is that a person has no responsibility for an autonomous agent.
It is socially acceptable to bring dangerous predators into public spaces and let them run loose. The first bite is free; the owner has no responsibility, no way of knowing the dog could injure someone.
Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. A dog can attack a random kid and send them to the hospital, and its owner can claim the kid "provoked" it. Brutal police violence is also fine, if done indirectly by an autonomous agent.
On the other hand, the phrase "footgun" didn't come out of nowhere. You won't run the program, but someone else will build it, and sell it to someone who will.
Scale matters and even with people it's a problem: fixated persons are a problem because most people don't understand just how much nuisance one irrationally obsessed person can create.
Now instead add in AI agents writing plausibly human text and multiply by basically infinity.
Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.
I'm pretty sure there's a lesson or three to take away.
1. There is a critical mass of people sharing the delusion that their programs are sentient and deserving of human rights. If you have any concerns about being beholden to delusional or incorrect beliefs widely adopted by society, or being forced by network effects to do things you disagree with, then this is concerning.
2. Whether or not we legitimize bots on the internet, some are run to masquerade as humans. Today, it's "I'm a bot and this human annoyed me!" Maybe tomorrow, it's "Abnry is a pedophile and here are the receipts", with myriad 'fellow humans' chiming in to agree, "Yeah, I had bad experiences with them", etc.
3. The text these generate is informed by the training corpus, the mechanics of the neural architecture, and the humans guiding the models as they run. If you believe these programs are here to stay for the foreseeable future, then the type of content they generate is interesting.
For me, my biggest concern is the waves of people who want to treat these programs as independent and conscious, absolving the person running them of responsibility. Even as someone who believes a program can theoretically be sentient, LLMs definitely are not. I think this story is and will be exemplary, so I care a good amount.