Hacker News | blamestross's comments

mirror of: http://www.antipope.org/charlie/blog-static/fiction/accelera... (ctross, give your poor website more power)

Seemed relevant to recent AI lobster fads


I suppose brittle code is fine if you have Cursor to update and fix it. Ideal, really: it keeps you dependent.


To be fair, that was always the case when working with external contractors. And if agentic AI companies can capture that market, then that's still a pretty massive opportunity.


At least AI (unlike many contract dev shops) is keen to write unit tests…


A feature I have been looking for:

Let me indicate a location and point an arrow at it!


TBH, that’s a great idea! It’s actually on my roadmap for MBCompass, something like waypoint tracking, where you can mark a location and get a directional arrow to it. Appreciate the suggestion!


Like a waypoint on a proper “pure” GPS handset?


Exactly, you got it


My journey with stoicism has been useful and powerful at every phase, but for future and fellow walkers of this path I leave this advice:

Are you a mindful stoic or a dissociated one?

I'd argue dissociation, at least in the short term, is a critical part of the process: not letting the gut reactions carry you away. You do need to realize that those reactions are still happening, though. Your body does its own thing, and you need to be mindful when it does. Fear, shock, anxiety, elation: they all happen even if you keep a clear conscious mind. In the situation, the work is in correcting for the biases they introduce.

In the medium term, if you aren't going back and holding the emotions you set aside, you are doing it wrong. Stoicism sells as "magical no emotion land", but you are flesh, and flesh has emotions, both reasonable and unreasonable. Your job is to manage and integrate them effectively.

Stoicism is a good toolkit for managing and analyzing emotions, but if you don't add going back and feeling those emotions to that toolkit, you are just a timebomb running up an emotional debt and dissociating from it. I've done that, and watched others do the same. Odds are this message won't actually change things if you are there right now, but maybe it will nudge you in the right direction.


> In the medium term, if you aren't going back and holding the emotions you set aside, you are doing it wrong. Stoicism sells as "magical no emotion land", but you are flesh, and flesh has emotions, both reasonable and unreasonable. Your job is to manage and integrate them effectively.

I think it's helpful not to identify with your emotions. You may experience emotions, but you are not your emotions. That's the difference between saying "I'm angry" and "I feel anger arising within me."


That is a dissociating mode, a more mindful one, but still intentionally distancing yourself from your experiences. It works great for improving your perception of yourself and being mindful. It's a meditation.

It also isn't really available in a crisis, in the moment. All our long-term work is really to train the anxious idiot part of ourselves, who runs the show most of the time, how to cope with what the world and body are doing right now. That person is very much connected to their emotions, no matter what story we make up about it. You need practice being that person feeling those emotions as well as practice analyzing them.


I guess what I don't get about this is: couldn't you apply the same mode to other internal states? "I understand this," vs "I feel understanding arising in me?"

Maybe that is good, now that I write it out. I think "understanding" is actually a pretty dumb mental state to invest a lot in.


> but if you don't add going back and feeling those emotions to that toolkit, you are just a timebomb running up an emotional debt and dissociating from it

What would that entail? I can't imagine e.g. taking some time on Sunday afternoon to feel that panic I suppressed from the crisis on Monday.


> I can't imagine e.g. taking some time on Sunday afternoon to feel that panic I suppressed from the crisis on Monday.

Almost literally that. Revisit the moments that made you "suppress" things. Think of it as a post-mortem. It won't be the same, just an echo distorted by time and distance, but pay attention to what you set aside. Suppressing emotions is the short-term hack. The ideal is to be able to have them and still be centered. The only way to get better at that is practice.


Unexpected but intriguing. Thanks for the guidance!


Am I the only person in whom moiré patterns induce instant and intense nausea? Or is it just a personal "crucifix glitch"?


I've called things shaped like this "polyentendre".

In my head I think of it as just really high linguistic compression. Minus intent, it is just superimposing multiple true statements into a small set of glyphs/phonemes.

It's always really context-sensitive. Context is the shared dictionary of linguistic compression, and you need to hijack it to get more meanings out of words.

Places to get more compression in:

- Ambiguity of subject/object with vague pronouns (and membership in plural pronouns)

- Ambiguity of English word-meaning collisions

- Lack of specificity in word choice.

- Ambiguity of emphasis in written language or delivery. They can come out a bit flat verbally.

A group of people in a situation:

- A is ill

- B poisoned A

- C is horrified about the situation but too afraid to say anything

- D thinks A is faking it.

- E is just really cool

"They really are sick" is uttered by an observer and we don't know how much of the above they have insight into.

I just get a kick out of finding statements like this for fun in my life. Doing it with intent is more complicated.

What the author describes seems more like strategic ambiguity, but slightly more specific. I don't think the label they're trying to coin here is useful.


For an arbitrary block, nothing.

It doesn't have to be arbitrary. You know when a block was "lucky": you found it ahead of the average time by a given percentile. You leverage those blocks.
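A minimal sketch of what "lucky" could mean here, assuming block discovery behaves like a Poisson process so solve times are exponentially distributed; the function name and the numbers are illustrative, not taken from any real mining code:

    import math

    def luck_percentile(solve_time_s, expected_time_s):
        # CDF of Exponential(mean=expected_time_s) at solve_time_s.
        # A small value means the block arrived unusually early ("lucky").
        return 1.0 - math.exp(-solve_time_s / expected_time_s)

    # Hypothetical numbers: a miner whose expected solve time is 600 s
    # finds a block after only 60 s -- roughly the 10th percentile.
    print(f"{luck_percentile(60, 600):.2%}")  # ~9.52%

Holding back unusually early blocks like this is roughly the intuition behind selfish-mining strategies.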


Intentionally or not, you are presenting a false equivalency.

I trust in your ability to actually differentiate between the machine learning tools that are generally useful and the current crop of unethically sourced "AI" tools being pushed on us.


One person's unethical AI product is another's accessibility tool. Where the line is drawn isn't as obvious as you're implying.


It is unethical to me to provide an accessibility tool that lies.


LLMs do not lie. That implies agency and intentionality that they do not have.

LLMs are approximately right. That means they're sometimes wrong, which sucks. But they can do things for which no 100%-accurate tool exists, and maybe none could. So take it or leave it.


There's no way to know in advance when "somewhat accurate" is going to be good enough, and no way to know how accurate the thing is before engaging with it, so you have to babysit it... "Can do things" is carrying a lot of load in your statement. It builds you a car with no brakes, and when you tell it not to do that, it builds you one without an accelerator either.


>That implies agency and intentionality that they do not have.

No, but the companies have agency. LLMs lie, and they only get fixed when the companies are sued. Close enough.


So provide one that "makes a mistake" instead.


Sure https://www.nbcnews.com/tech/tech-news/man-asked-chatgpt-cut...

Not going to go back and forth on this as you inevitably try to nitpick "oh but the chatbot didn't say to do that"


If it was actually being given away as an accessibility tool, then I would agree with you.

It kind of is that clear. It's IP laundering and oligarchic leveraging of communal resources.


1. Intellectual property is a fiction that should not exist.

2. Open source models exist.


Well yes on both counts.

The only thing worse than intellectual property is a special exception for people rich enough to use it.

I have hope for open source models; I use them.


Based.


How am I supposed to know what specific niche of AI the author is talking about when they don't elaborate? For all I know, they woke up one day in 2023 and that was the first time they realized machine learning existed. Consider my comment a reminder that ethical use of AI has been around for quite some time, will continue to be, and that much of it will be with LLMs.


You have reasonably available context here. "This year" seems more than enough on its own.

I think there are ethical use cases for LLMs. I have no problem leveraging a "common" corpus to support the commons. If they weren't over-hyped and almost entirely used as extensions of the wealth-concentration machine, they could be really cool. Locally hosted LLMs are kinda awesome. As it is, they are basically just theft from the public and IP laundering.


>Consider my comment a reminder that ethical use of AI has been around for quite some

You can be in a swamp and say "but my corner is clean". This is the exact opposite of the rotten-barrel metaphor: you're trying to claim your one apple is somehow not rotten compared to the fermenting barrel it came from.


Putting aside the "useful" comment, because many find LLMs useful: let me guess, you're the one deciding whether it's ethical or not?


I feel obligated to point out that basically no commercial service that relies on a big tech company has better than 99.99% uptime anymore. Your example isn't just hyperbolic, it avoids the actual problem: these services aren't "a bit more reliable", they're nontrivially less reliable than they were 5 years ago.
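For scale, the back-of-the-envelope arithmetic behind those uptime figures (plain Python, nothing assumed beyond minutes per year):

    # Downtime budget implied by an uptime percentage.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    for nines in (99.9, 99.99, 99.999):
        budget = MINUTES_PER_YEAR * (1 - nines / 100)
        print(f"{nines}% uptime allows ~{budget:.0f} min of downtime per year")

    # 99.9%   -> ~526 min/year
    # 99.99%  -> ~53 min/year
    # 99.999% -> ~5 min/year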


I worked on DHTs in grad school. I still do a double take at the fact that Google's and other companies' "computers dedicated to a task" numbers are two digits short of what I expected. We have a lot of room left for expansion; we just have to relax centralized-management expectations.

