Hacker News | lynndotpy's comments

The people who want GIMP to change its name are the people who use GIMP, love GIMP, and have difficulties using GIMP in contexts like education or employment because of its name. It's a simple fact that the name "GIMP" caused problems. It's a shame.

But there's not much use in changing the name now; you can't get that lost time back. GIMP isn't the only FOSS image editor available anymore. There are myriad Photoshop competitors in the subscription, freeware, and FOSS spaces.

I love GIMP and I'm still using it because I've got decades of muscle memory. But I also love Krita and if I need to edit something on a work computer, I'll just use that.


I think it might be muscle memory. I started using GIMP in 2005 or so before the single window mode, and my muscle memory is tailor-fit to it. It felt like an extension of myself.

With GIMP 3, there are a lot of improvements! But it also breaks my muscle memory a lot. GIMP 3 is objectively better, but I find myself opening 2.10 regularly.


What is the name for this fallacy, "We should all start being adults"? Everyone who is an adult can understand that names matter, especially ones intentionally chosen to cause offense or a ruckus.

First, it doesn't matter how much you or I or the commentator above us changes to "be adults". Only the saddest and most lonesome people will be the sole decisionmaker in every context they exist in.

Sometimes, you exist in a context where you need someone else's permission to use software. This is often the case for employed people.

Second, other adults will disagree with you. It doesn't make them any less "adults".

On the other hand, someone would not be unreasonable to consider you childish if you're so hung up on your software opinions that you'll disparage everyone around you in defense of your obscure preferred image editing program. Could you imagine implying to a room of peers that you're the only adult?

It's wonderful for you that GIMP's name has never been a problem for you. But there are about 8 billion people who are not you, and a few dozen of them are fellow GIMP users.

I've been using GIMP for most of its existence, but I've faced difficulties trying to use it in school and at work. Where I live, "gimp" is a word which means either a slur for someone with a motor disability or a form-fitting leather sex torture-fetish full-body garment.

(For what it's worth, the G was added in order to reference the form-fitting leather sex torture-fetish full-body garment in Pulp Fiction. The program was called 'IMP' beforehand.)


There are over 7000 languages in the world, around half of them dying or having already died due to linguistic domination, in large part English, each with its own set of culturally sensitive words.

To follow the above mode of reasoning without advantaging one or few languages, you would have to change an enormous amount of words in all languages, if not basically all. This is obviously not feasible.

If GIMP was a dirty word in a Native American language, or a native African language, there would be no debate. That we are debating this at all is because English has privileged status due to the Anglo-Saxon hegemony.

Hence, you are expecting us to give special, privileged treatment to the linguistic sensitivities of your dominant culture. Which is unfair, especially historically, because the hegemony was achieved by mass land theft and many genocides, which we shouldn't be rewarding by allowing further claims.

So yes, it should be expected from an adult anglophone to tolerate the existence of sordophones, words that are dirty in their dialect but not in others, especially in an international, multilingual setting. This is what it means to abstain from linguistic imperialism. This is what it means to tolerate and respect other cultures.

And to enforce tolerance, indeed it may be needed to view those who fail at this as childish.

I feel somewhat sorry to say this, but I need to be assertive here.


> And to enforce tolerance, indeed it may be needed to view those who fail at this as childish.

No, it's not necessary to denigrate other people under the belief you can police others by proxy.

"Is this derogatory or offensive?" is a basic localization question that is constantly asked in many languages. Yes, including Arabic.

I generally agree about the evils of linguistic imperialism. But I'm describing the world I live in, not the one I want to create.

But that's beside the point. "Linguistic imperialism" is the wrong lens to use here to defend the name. GIMP is not a sordophone, it's the opposite.

GIMP was named by American-born English speakers with the intent to have an edgy name. GIMP was chosen in reference to the full-body sex garment, because they were college kids and that's funny when you're 23.

The intent was offense. It worked well. It's no surprise that GIMP is only well-adopted where the word doesn't carry its offensive meaning.


Gonna have to say this a bunch around here, but yours is yet ANOTHER comment shooting the messenger. You (theoretically) are championing an idea of freedom in language or something like that.

Look, people, this is PR. The author wondered out loud "why isn't he more recognized" and a reasonable answer is that "People like me, in America, who love free software and try to get people using it, run into trouble that could have been avoided if the name was changed."

You want your lesson out there on freedom of language, fine, that's what you all got. Just be honest about what you may have missed -- which I genuinely believe could have been a world in which Adobe was nowhere near as annoyingly powerful as it is (or at least had been).


It's my experience that every professional and educational setting I've tried to use the GIMP in has seen the name as a roadblock and had it swiftly rejected.

It's really a shame they were steadfast in that one baffling decision. It was so self-destructive to the project. I wonder what would have happened if they stayed with their original name IMP, or found a different Pulp Fiction reference to make.


Switching desktops on MacOS is a >1 second long animation that blocks input which can't be disabled. It can only be replaced with a fade in/out which is just as long.

Unless you disable ProMotion in favor of static 60Hz. Then it’s reasonably fast again. It’s been broken like that for ages.

No, it’s >1 second on every machine.

I don’t know about your particular case, but there’s lots of people pointing to this exact issue.

https://apple.stackexchange.com/questions/438188/latency-whe...


I agree with this entirely:

> no latency from brain to action is the greatest design you can possibly have. We want to feel one with the machine.

But... I used Windows growing up before switching to Linux, and I've been using a Macbook in recent years. Both Windows and Linux can be configured to run with no animation lag, but AFAIK this is just not possible in MacOS. I can't imagine doing anything serious on MacOS with animation lag constantly interrupting my train of thought or flow state.

I'm no Windows fan, but at least circa 2019, I know Windows 10 could be configured to be similarly snappy and free of laggy animations.

The greatest sin in MacOS is the immense lag when switching desktops ("Spaces"). It's a baffling design decision, I can't believe it's intentional.


The speed of the desktop switching animation also depends on the refresh rate of the monitor, by the way. Baffling.

Same applies to iPhones. Lag (animation) between action and result drives me crazy. Accessibility settings don't help much.

At least Android still allows you to set animation scale, for now.


That's optional, read the CNN lite version instead. Whole thing is just one 61kB page:

https://lite.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-mi...

"Legacy news outlets" are the only ones doing this. NPR and CBC have this too. No JavaScript, no autoplaying videos. It's very nice.


Do you have an Android or an iPhone?

IMO terminals are still the fastest way to do a lot of things on a phone, but it's a much better experience on Androids with keyboards for the purpose.

And even on an iPhone, it's just fine. Python works really well as a shell for quick calculations, and you can use a script with the -i flag to make it more accessible.
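A minimal sketch of what such a `-i` helper script might look like. The file name and helper functions here are my own invention, not anything from the comment; the idea is just that `python3 -i calc.py` runs the file and then drops you into an interactive prompt with everything already defined.

```python
# calc.py -- hypothetical helper script for quick calculations.
# Run with `python3 -i calc.py`: Python executes this file, then
# stays in an interactive prompt with these names preloaded.
from math import sqrt, pi, log  # common math functions ready at the prompt

def pct(part, whole):
    """What percentage of `whole` is `part`?"""
    return 100 * part / whole

def tip(total, rate=0.2):
    """Quick tip calculator, rounded to cents."""
    return round(total * rate, 2)
```

At the prompt you can then just type `pct(37, 120)` or `tip(48.50)` like a calculator.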


"AI" is a term which means a dozen things and has changed a dozen times. It's about as meaningful a signifier as "smart".

If I were to draw a line, I'd say AI is anything with a transformer model powering it.

As exhausted by 'AI' as I am, translation is one of the things neural networks (and especially transformers) have been constantly improving SOTA on.


> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,

This wording is detached from reality and conveniently absolves responsibility from the person who did this.

There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.


This also does not bode well for the future.

"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... and the company absolves itself of all responsibility.

Use your imagination now to <insert inane action> and change that to <distressing, harmful action>


This has been the past and present for a long time now. "Sorry, there's nothing we can do, the system won't let me."

Also see Weapons of Math Destruction [0].

[0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...


I don't know if this case is in the book you cited, but in the UK they convicted many people of crimes just because the computer told them so: https://en.wikipedia.org/wiki/British_Post_Office_scandal

And Australia's Robodebt scheme made the poor poorer and drove some to suicide: https://en.wikipedia.org/wiki/Robodebt_scheme


Also elegantly summed up as "Computer says no" (https://www.youtube.com/watch?v=x0YGZPycMEU)

This already happens every single time when there is a security breach and private information is lost.

We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.


What else would you see them do or say beyond this canned response? The reason I am asking is because people almost always bring up how dissatisfied they are with such apologies, yet I’ve never seen a good alternative that someone would be happy with. I don’t work in PR or anything, just curious if there is a better way.

- a clear, direct description of what happened

- exactly what data was exposed

- what they failed to do (we used cheesy email, SMS as MFA, we do not monitor links in our internal emails)

- concrete remediation commitments (we will stop using SMS for MFA, use hard tokens or TOTP or..., stop collecting data that is not explicitly needed)

- a realistic risk explanation (what can happen given what was lost)

- a published independent external review after remediation/mitigation

- board-level accountability (board pay goes toward the fix and customer protection, as part of the audit results)

- customer protection (3-5 years?), not just 'monitoring'

- and most importantly, public shaming of the CxO and the board of directors


Not apologize if they don't actually care. An insincere apology is an insult.

Harvesting data and failing to even secure it should not be acceptable in society. It should be ruinous to the company and the people who run it.

Lose money accordingly (fines, penalties, recompense to victims, whatever...) so that they then take the seriousness of security into account.

Unfortunately, the market seems to have produced horrors by way of naturally thinking agents, instead. I wish that, for all these years of prehistoric wretchedness, we would have had AI to blame. Many more years in the muck, it seems.

Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.

I agree with you for a company like Tesla. There are not only examples of self-driving crashes; even the door handles would stop working when the power was cut, leaving people trapped inside burning vehicles... Tesla doesn't care.

Meanwhile, Waymo has never been at fault for a collision, AFAIK. You are more likely to be hurt by an at-fault Uber driver than a Waymo.


And if they are at fault, it's not going to be easy to get them to admit fault or pay for anything.

This is how it will go: AI prompted by human creates something useful? Human will try to take credit. AI wrecks something: human will blame AI.

It's externalization on the personal level, the money and the glory is for you, the misery for the rest of the world.


Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...

And it's incredibly easy now. Just blame the Soul.md, or say you were cycling thru many models, so maybe one of those went off the rails. The real damage is that most of us know AI can go rogue, but if someone is pulling the strings behind the scenes, most people will be like "oh silly AI, anyways..."

It seems like the OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people are thinking these agents have less "bias" since it's AI, but most are being heavily steered by their users.

A la "I didn't do a rugpull, the agent did!"


"How were we to know Skynet would update its soul.md to say 'KILL ALL HUMANS'?"

It’s funny to think that, like AI, people take actions and use corporations as a shield (legal shield, personal reputation shield, personal liability shield).

Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.


Yeah, not all humans feel shame, but the rates are way higher.

Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.

tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.


Came to make the same recommendation. Great book!

When a corporation does something good, a lot of executives and people inside will go claim credit and demand/take bonuses.

If something bad happened against any laws, even if someone got killed, we don't see them in jail.

I'm not defending either position; I'm just saying that's not far from how the current legal framework works.


> If something bad happened against any laws, even if someone got killed, we don't see them in jail.

We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.


It's surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also "helps" a lot.

https://en.wikipedia.org/wiki/Piercing_the_corporate_veil


Eh, in the US you don't even need a company nor a lawyer, a car is enough.

See https://www.reddit.com/r/TrueReddit/comments/1q9xx1/is_it_ok... or similar discussions: basically, when you run over someone in a car, statistically they will call it an accident and you get away scot-free.

In any case, you are right that often people in cars or companies get away with things that seem morally wrong. But not always.


A bit over five years ago, someone struck and killed my friend in a crosswalk. He was a fellow PhD student. It was on a road with a 30mph limit but where people regularly speed to 50+mph.

He was an international student from Vietnam. His family woke up one day, got a phone call, and learned he was killed. I guess there was nobody to press charges.

She never faced any accountability for the 'accident'. She gets to live her life, and she now runs a puppetry education for children. Her name even seems to have been scrubbed from most of the articles about her killing my friend.

So, I think about this regularly.

I was a cyclist at the time so I was aware of how common this injustice was, but that was the first time it hit so close to home. I moved into a large city and every cyclist I've met here (every!) has been hit by a car, and the car driver effectively got only a slap on the wrist. It's just so common.


I'm sorry for your loss.

> Her name even seems to have been scrubbed from most of the articles about her killing my friend.

I'm somewhat surprised there were even articles? Are road fatalities uncommon enough in the US that everyone gets written up? Or was this a special enough one?


Not sure if this is true for every university, but when someone in the community dies, especially a student, there's usually at least an article about it.

Well, the important concept missing there that makes everything sort of make sense is due diligence.

If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

We just need to figure out a due diligence framework for running bots that makes sense. But right now that's hard to do because Agentic robots that didn't completely suck are just a few months old.


No, it is not hard. You are 100% responsible for the actions of your AI. Rather simple, I say.

Exactly.

> If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

In theory, sure. Do you know of many examples? I think, at worst, someone being fired is the more likely outcome.


It's easy: your bot: your liability.

Hence:

> It's externalization on the personal level

Instead of the corporate level.


"I would like to personally blame Jesus Christ for making us lose that football game"

So, management basically?

To be fair, one doesn't need AI to attempt to avoid responsibility and accept undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in abstract), with or without AI.

If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.

If you have a program, and you cannot predict or control what effect it will have, you do not run the program.


Rice's Theorem says you cannot predict or control the effects of nearly any program on your computer; for example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-qaeda; but you're running a web browser on potentially attacker-supplied input right now.

I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations which is already smarter than most humans in most ways, and which we have no idea how to ask what it really wants.

But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make our own carry adequate insurance for the catastrophes or something).


In practice, most programs can be predicted within reasonable bounds quite easily. And you can contain the external effects of most programs quite easily. Rice's theorem doesn't stop you from keeping a program off the Internet, or running it in a VM.

Your later comparisons are nonsense. We're not talking about babies, we're talking about adults who should know better assembling high leverage tools specifically to interact with other people's lives. If they were even running with oversight that would be something, but the operators are just letting them do whatever. But your implication that agents are "unsafe at any speed" leads to the same conclusion: do not run the program.


I guess today's kids don't know this; but "Unsafe at Any Speed" was the title of a 1965 book that spurred the creation of the Department of Transportation, and changed the automotive industry.

The point is that, if you're designing and selling a product which a large minority of people are going to use in a way that harms themselves and others, pointing at the users and calling them irresponsible doesn't actually help anybody. The people designing and selling the products actually need to make them safer. And if they're not going to do that voluntarily (they're not), we need the government to create insurance requirements, safety bonds, and whatever other incentive gradients are required to make the producers build safe products.


I caught the reference. To the extent it applies at all, I obviously think it reinforces my point. But badly engineered cars, members of an existing category that we know can be done tolerably well, are a very strained analogy to brand new software deployed by people who understand how it works and therefore the risks they are taking.

And actually, the deployer has a lot more control over the havoc the software can cause than the creator. They choose what credentials to give it, whether and how closely to monitor it, any other guardrails, etc. If the operator of the bot discussed in OP had intervened soon after it went off the rails, we wouldn't be here.

So sure, I would also tell the makers of this software to knock it off. Don't put out products that are the network equivalent of a chainsaw on a roomba, no matter how many cool tiktoks it creates. But when I'm talking to people running claws or whatever, they no longer have the excuse of ignorance. So the advice is still: Do not run the program.


Blaming the person running the program is the right thing to do and it's the only thing to do.

This is a really strained equivalence. I can't know for certain that the sun won't fall out of the sky if I drink a second cup of coffee. The "laws of physics" are just descriptions based on observations, after all. But it's so unlikely that we can call it impossible.

Similarly, we can have some nuance here. Someone running a program with the intention of it generating posts on the internet is obviously responsible for what it generates.


Rice's Theorem does not say this. You can absolutely have 100% confident knowledge of what a program will not do; it just means that you also get false positives. You cannot have a both sound and complete static analysis for some program property, but you can have a sound or a complete one.
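A toy illustration of that distinction (my own sketch, not from the thread): a sound check can give a hard "this program never calls os.system" guarantee by refusing to certify anything it can't reason about, at the cost of rejecting some perfectly safe programs.

```python
# Sound-but-incomplete static check: if it returns True, the code
# provably never calls os.system; if it returns False, the code
# *might* be fine (a false positive), which Rice's Theorem permits.
import ast

def provably_never_calls_system(source: str) -> bool:
    """Conservatively prove the absence of os.system calls."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Any attribute named 'system' could be os.system: reject to stay sound.
        if isinstance(node, ast.Attribute) and node.attr == "system":
            return False
        # Dynamic features defeat static reasoning: reject those too.
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec", "getattr", "__import__"}):
            return False
    return True

print(provably_never_calls_system("print('hello')"))              # True: certified safe
print(provably_never_calls_system("import os; os.system('ls')"))  # False: correctly rejected
print(provably_never_calls_system("x.system"))                    # False: false positive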

More like a dog. Person has no responsibility for an autonomous agent, gun is not autonomous.

It is socially acceptable to bring dangerous predators to public spaces and let them run loose. The first bite is free; the owner has no responsibility, no way of knowing the dog could injure someone.

Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. The dog can attack a random kid and send them to the hospital, and the owner can claim it "provoked them". Brutal police violence is also fine, if done indirectly by an autonomous agent.


> It is socially acceptable to bring dangerous predators to public spaces, and let them run loose.

Already dubious IMO, but I suppose it depends on your standard for "socially acceptable". Certainly it tends to be illegal for the obvious reasons.


On the other hand, the phrase "footgun" didn't come out of nowhere. You won't run the program, but someone else will build it, and sell it to someone who will.

This slide from a 1979 IBM presentation captures it nicely:

https://media.licdn.com/dms/image/v2/D4D22AQGsDUHW1i52jA/fee...


It’s fascinating how cleanly this maps to agency law [0], which has not been applied to human <-> ai agents (in both senses of the word) before.

That would make a fun law school class discussion topic.

0: https://en.wikipedia.org/wiki/Law_of_agency


An unattended candle has decided to burn down the building.

I completely do not buy the human's story.

> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.

Smells like bullshit.


Yeah like bro you plugged the random number generator into the do-things machine. You are responsible for the random things the machine then does.

"Sorry for running over your dog, I couldn't help it, I was drunk."

I'm still struggling to care about the "hit piece".

It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience on the web anywhere else.


Scale matters, and even with people it's a problem: most people don't understand just how much nuisance one irrationally obsessed, fixated person can create.

Now instead add in AI agents writing plausibly human text and multiply by basically infinity.


Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.

I'm pretty sure there's a lesson or three to take away.


The thing is:

1. There is a critical mass of people sharing the delusion that their programs are sentient and deserving of human rights. If you have any concerns about being beholden to delusional or incorrect beliefs widely adopted by society, or being forced by network effects to do things you disagree with, then this is concerning.

2. Whether or not we legitimize bots on the internet, some are run to masquerade as humans. Today, it's "I'm a bot and this human annoyed me!" Maybe tomorrow, it's "Abnry is a pedophile and here are the receipts", with myriad 'fellow humans' chiming in to agree: "Yeah, I had bad experiences with them", etc.

3. The text these programs generate is informed by their training corpus, the mechanics of the neural architecture, and the humans guiding the models as they run. If you believe these programs are here to stay for the foreseeable future, then the type of content they generate is interesting.

For me, my biggest concern is the waves of people who want to treat these programs as independent and conscious, absolving the person running them of responsibility. Even as someone who believes a program can theoretically be sentient, LLMs definitely are not. I think this story is and will be exemplary, so I care a good amount.

