Perhaps it is because I am from a different generation, but I cannot understand what the post you replied to even meant.
It makes your response, that it cannot be explained meaningfully to most unsuspecting people, a little intriguing. Can you try to explain? Perhaps I won't understand, but I feel some of the problem is the absence of signal.
I'll give it a shot for ya. Parent, GP, GGP, GGGP, OP, or anyone else can feel free to correct mistakes, please.
Original Post (OP): A post linking to https://www.fubardaily.com/ which is "Curated slop to enjoy with your morning coffee. Updated daily." Apparently a sort of "Drudge Report" or "Dashboard of news/posts".
--> OP MEANING DECODED: There's a large volume of "crappy stories" that a) purport to show "how crappy/dismal things are", and b) probably come with a healthy amount of fake stories, performative artifice, etc.
Great Great Grandparent (GGGP) post said, "front page has racial slurs, a link to goatse, and something crass about trans women. fantastic work /s"
--> GGGP MEANING DECODED: Even the "slop" the OP indexes daily has old-school shenanigans that perennially (every year, for decades or forever) serve as "shock value" or "jarring" material, and in a sense this is refreshing, since those kinds of concerns/topics are worth caring about, relatable, or important. (But the GGGP admits they were being sarcastic. So really they are saying the OP's link is a pointless page and a waste to look at.)
Great Grandparent (GGP) post said, "This is almost hopeful in that it softens my lamentation that we're losing a whole generation of engineering labor to AI. It makes me realize that much of it was going to be wasted regardless."
--> GGP MEANING DECODED: The GGP points out that they are sometimes saddened that AI "takes the engineering out of engineering". For example, instead of designing, specifying, and making a thing, an engineer can now legitimately sit down and type (or merely speak) "Make a high-level design for _____. Ok, now spec it out. Ok, now make it." The GGP sees LLMs and agents as "taking the engineering out of engineering", since much of its rote work can be externalized. The GGP sees this as a concern, because they imagine that an ENTIRE GENERATION of engineers may learn to "ask a machine to do things for them" that once required knowledge of those things. (Note 1: you can trace this back to "learn C", then back to "learn assembly", then to "learn vacuum tubes", etc.; the lamentation of losing "necessary awareness of how systems work on a fundamental level" is not new. Note 2: it is not unique to software engineering, since as a ____ engineer you may now "draw a thing" but "someone in X country/company will actually make it for you" (outsourcing; again, externalizing the "actual" engineering/production).) In any case, even if "some people" still know how things work, and design and make those systems, the LABOR MARKET in which people "are paid to do things" could nearly evaporate, and this raises very real worries of the existential type (very much of the paying-for-food-and-shelter-in-the-near-future type, or the ever-having-a-prosperous-family type). Finally, the GGP comes around to their point: THE THINGS WE PAY PEOPLE TO ENGINEER ARE STUPID THINGS BY AND LARGE ANYWAY, SO IT IS OK TO WIPE OUT THIS LABOR; IT'S A WASTE; WHO CARES IF IT EVAPORATES.
While this may sound like nihilism, there is an unwritten portion, which could say: AND MAYBE AT LEAST THIS PASSING OF EVENTS/EVOLUTION IN TECHNOLOGY WILL CONTINUE TO FORCE US TO THINK ABOUT WHAT IT IS THAT IS VALUABLE, AND WHAT TO BUILD, AND HOW TO MAKE ACTUAL, FUNCTIONAL SYSTEMS, AND PUSH US TO FIGURE OUT HOW TO GET WHERE WE WANT TO BE.
Grandparent (GP) says, "This is 100% true. It's fucking brutal and depressing. But genie is out of the bottle now. Ill was born 1985. It's over."
--> GRANDPARENT MEANING DECODED: Oh yeah, the GGP is correct and nails it. And this realization is tough to process and handle. There's no going back; I know it because I have seen things evolve over my half-life. The way the world once was (what felt like plentiful work of at least some modicum of utility and meaningfulness, with truly pleasant human interactions and products and services rendered) is going the way of the dodo (it ain't what it used to be, and it ain't coming back).
Parent says,
--> PARENT MEANING DECODED: I also concur with the GGP. Furthermore, many people are too young to know how good it once was: what we had, saw, and experienced, and what embodied and encompassed all that. Young people also, for the most part, process media in a different way than us, do so with less context, and may never gain access to knowing what is going on right now and what has been, or is still in the process of being, lost. And there is an entirely different set of people, who are of our generation but are totally disconnected from either the white-collar working world or the semi-technical fields, and they too have nothing like a grasp on what is occurring and seems destined to continue down an inevitable path: removing meaning from labor, as well as removing the opportunity for much meaningful labor of the types we have known in the past or currently. It is possible to intellectualize and painstakingly describe some or all aspects of these concerns, but most such expressions will go unprocessed by any meaningful proportion of people, for they lack the attention span, interest, or context for understanding either these facts or their importance to some people.
Note: Cognitive biases appearing heavily above include "declinism", "good old days" thinking, "rosy retrospection", "conservatism", etc.
NOTE: These are NOT my views, I am just trying to "translate" the chain for the post immediately above.
I'm using a (fairly crappy) HP laptop with 16 gig, running Linux.
I find that the combination of Firefox and VS Code fills memory up to the point where things get killed (with swap filled as well).
The MATE system monitor hilariously reports code using 71 MB and firefox-bin using 1.1 GB, because it has a tree view that doesn't include the usage of collapsed nodes beneath a process.
Using smem shows each using multiple GB, and at my current level I've got 6 GB of cache to eat through before it kills code again. Ordering by size, Ghostty is the first thing that is not Firefox or Code, at 78 MB total (plus about 1 GB of non-cache kernel use). So essentially it's only those two apps that are the problem. Can Macs get by simply because Safari is better with RAM?
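The discrepancy above comes from the metric: a naive per-process RSS (or a tree view that hides children) misses shared pages and multi-process apps, while smem's PSS splits shared memory proportionally. A minimal sketch of the same accounting, reading Linux's `/proc/<pid>/smaps_rollup` directly (the function names here are mine, not smem's API):

```python
import os

def pss_kib(pid):
    """Return the proportional set size (PSS) in KiB for one process,
    read from /proc/<pid>/smaps_rollup (Linux 4.14+), or None if unreadable."""
    try:
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Pss:"):
                    return int(line.split()[1])
    except (FileNotFoundError, PermissionError, ProcessLookupError):
        return None
    return None

def pss_by_name():
    """Sum PSS across every readable process, grouped by command name,
    so multi-process apps (browsers, editors) are counted whole."""
    totals = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        kib = pss_kib(pid)
        if kib is None:
            continue
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue
        totals[name] = totals.get(name, 0) + kib
    return totals

# Largest consumers first, aggregated per name (roughly what smem shows
# once you total its per-process rows).
for name, kib in sorted(pss_by_name().items(), key=lambda kv: -kv[1])[:10]:
    print(f"{kib / 1024:8.1f} MiB  {name}")
```

Summing PSS per name is what makes a many-process Firefox or Code show its true multi-GB footprint rather than one small parent process.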
macOS is good at managing low-memory conditions; Linux is not. When I was on Linux, if I hit 16 GB of RAM used, the entire system would freeze for minutes. I would have to switch to TTY2 to kill something to get it responsive again.
I did a conference talk maybe three years ago on using AI for games.
Since the talk, code, audio, and imagery have progressed in leaps and bounds, yet almost all of the points on active AI use haven't changed much.
You have more capable smaller models now, which means you could at least run local models in games, but you still don't get to see what the model is going to do before the user does. Developers are acutely aware of the climate: if a game has a fault because the AI does something unexpected, it's not just a bug, it's a news story.
It still cannot be realistically used for multiplayer games. The models are prone to adversarial attacks, which can harm a multiplayer experience.
I did point out at the time that solo sandbox games could benefit. An active AI can be empowering for the player, and unchecked empowerment can be great in a solo game, but it destroys a multiplayer game.
Then there's just a lack of training data in some areas specific to games. You pretty much have to filter game state down to text for an LLM; I don't yet know of any gameplay embedding model.
I have played around with a little tower defence to see how well an AI can handle gameplay. It essentially has a REPL interface to the game logic, where it can take actions and advance the game a tick at a time. It does an OK job, but there's still not much understanding of the time and urgency needed to work in an active environment.
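The REPL interface described above can be sketched roughly like this: the agent only ever sees a text dump of the state, issues one text command, and the game advances one tick. Everything here (class name, commands, the numbers) is an illustrative assumption, not the author's actual game:

```python
class TowerDefence:
    """A tiny one-lane tower defence driven entirely through text."""

    def __init__(self):
        self.tick = 0
        self.gold = 20
        self.base_hp = 10
        self.towers = []   # tower positions along a 10-cell lane
        self.creeps = []   # each creep is its position on the lane

    def state_text(self):
        """Flatten game state to text, the only view an LLM gets."""
        return (f"tick={self.tick} gold={self.gold} base_hp={self.base_hp} "
                f"towers={self.towers} creeps={self.creeps}")

    def command(self, line):
        """Apply one text command: 'build <pos>' (costs 10 gold) or 'wait'."""
        parts = line.split()
        if parts and parts[0] == "build" and self.gold >= 10:
            self.gold -= 10
            self.towers.append(int(parts[1]))

    def step(self):
        """Advance one tick: spawn, shoot, move, leak."""
        self.tick += 1
        if self.tick % 3 == 1:
            self.creeps.append(0)                       # spawn at lane start
        for t in self.towers:                           # each tower kills one
            for c in sorted(self.creeps, reverse=True): # creep within range 2
                if abs(t - c) <= 2:
                    self.creeps.remove(c)
                    self.gold += 2
                    break
        self.creeps = [c + 1 for c in self.creeps]      # creeps advance
        self.base_hp -= len([c for c in self.creeps if c >= 10])
        self.creeps = [c for c in self.creeps if c < 10]

game = TowerDefence()
for _ in range(12):
    # An agent would read game.state_text() here and choose a command;
    # this stand-in policy just builds whenever it can afford to.
    game.command("build 5" if game.gold >= 10 else "wait")
    game.step()
print(game.state_text())
```

The "urgency" problem is visible even in this toy: a policy that deliberates too long between `state_text()` and `command()` is fine here because the world politely waits for `step()`, which a real-time game will not do.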
It’s still also not at all trivial to ship a capable local model with the game. If you’re hitting online APIs then that is slow and expensive and the models will get deprecated in X years.
This is my experience too. I can rehearse words to say or simulate the conversation of others in my head. I just don't use words when I'm not doing wordy things myself.
I didn't know the comic strip Partially Clips was a pun until I told someone about the strip; as soon as the words came out of my mouth I realised the joke.
On the other hand, I can play back non-verbal sounds I have heard in my head, which I think not everyone can do either. Not to the degree of my daughter, though. I mentioned how I had noticed an ad was using a singer (not super famous, but we knew who they were), and when I told her about it some days later her eyes went blank as she listened to it again, and then she said, "Oh yes, it's Nataly".
> This is my experience too. I can rehearse words to say or simulate the conversation of others in my head. I just don't use words when I'm not doing wordy things myself.
Yep, same here. Most curiously though, I think I had an internal monologue in my childhood and teenage years, but sometime around 16–18 y.o. it went away. Sadly, I don’t remember the exact moment, as I only learned about this topic around age 20.
> the Comic strip Partially Clips was a pun
Whoa, took me a while too, even though you’d explicitly told me it’s there. xD
> I can play back non verbal sounds I have heard in my head, which I think not everyone can do
I'm the opposite of you two. My brain won't stfu. I took Ritalin from grade 3 until I was in my 40s. That never got rid of it, but it did make it easier to focus in spite of all the chatter and other mental distractions.
Now I'm old and lazy, and that seems to have a similar effect. The racing thoughts are still there, but they don't get in my way now that I have far fewer responsibilities to take care of.
It depends on your classification of effective. If it is to gather accurate information, it is ineffective. If it is to gather the justification for what you were going to do anyway, it can be most effective.
For a fairly recent example: the US's post-9/11 War on Terror, when they were waterboarding people. This definitely didn't get them any real info, and they found out in the worst way that innocent people will confess when they think they are actively dying.
Prior to this, it was already known to produce false feedback and confessions. The US military has a strange way of repeating history to see if it'll turn out differently "this time." It sadly never does.
It does make me wonder how advanced remote sensing devices are now. With more advanced hardware, can you remotely capture EEG level signals with any accuracy?
As an aside, I briefly misread that as the detection of cognitive dissonance, which I think would be a much more difficult topic.
It has that feel of an AI referring back to an earlier point in the conversation, too.
This site would be a particularly interesting example to see the conversation that generated it.
There was a case locally where a political party copped flak for using AI-generated images of minorities, implying that the people in the picture supported their policies. I didn't think the use of AI was terribly significant when it came to the output because, before AI, they would have just hired some actors for the photos and achieved much the same effect.
The thing that I did think was significant was that, in creating the images, they must have committed to writing exactly what it was that they wanted an image of. I really wish that journalists had asked for that text, and then challenged the inevitable refusal.
I believe that a lot of the problems with LLMs would be greatly ameliorated if every generated image had the prompt embedded in its metadata and, by default, the user credentials (which could be turned off for folks who want privacy).
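Mechanically, the metadata part of this is straightforward; PNG, for instance, already has a standard `tEXt` chunk for keyword/value pairs. A pure-stdlib sketch of stamping a prompt into a PNG (the helper names and the sample prompt are mine; real provenance schemes would also sign the data, which this does not):

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def embed_text(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt chunk (keyword, NUL, text; both Latin-1) right after IHDR."""
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    ihdr_end = 8 + 4 + 4 + ihdr_len + 4   # signature + length + type + data + CRC
    chunk = png_chunk(b"tEXt",
                      keyword.encode("latin-1") + b"\x00" + text.encode("latin-1"))
    return png[:ihdr_end] + chunk + png[ihdr_end:]

# Build a minimal 1x1 greyscale PNG to demonstrate on.
sig = b"\x89PNG\r\n\x1a\n"
ihdr = png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
idat = png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
iend = png_chunk(b"IEND", b"")
image = sig + ihdr + idat + iend

tagged = embed_text(image, "prompt", "a smiling crowd at a community event")
```

The obvious weakness, of course, is that unsigned metadata like this can be stripped or rewritten by anyone with a hex editor, which is why the policy question matters more than the mechanism.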
From my perspective, I have two projects that I have considered [Show HN] posts for. One of those I have not yet posted because I have not yet finished writing up the process I used to construct it (a non-trivial project in an artifact). Without that commentary it falls into a different class, which I agree shouldn't be outright excluded, but is of less general interest. The other project I think some people would be interested in just for what it is in itself; I just want to add a bit more to it.
Perhaps [Show HN] for things that have commentary or highlight a particular thing. It's a bit nebulous because it gets to be like Wikipedia's notability and is more of a judgement call.
But that could be backed up with a [Creations] category, simply for things that have been made that people might like, or because you are proud of your achievement.
So if you write a little chess engine, it goes under [Creations]. If it is a chess engine in 1k, or written in Brainfuck, or comes with a discussion of how you did it, it goes under [Show HN].
[Creations] would be much less likely to hit the front page, of course, but I think there might need to be a nudge to push the culture towards recognising that being on the front page should not be the goal.
For reference, here are the two things, coming to a [Show HN] near you (maybe).
I would say not, because it would lead some to think that what was said to the model represented the output that was desired. While there is quite a bit of correlation between describing what you want and the output you receive, the nature of models as they stand means you are not asking for what you want; you are crafting the text that elicits the response you want. That distinction is important, and it is model-specific. Without keeping an archive of the entire model used to generate the output, the conversation can be very misleading.
Conversations may also be very non-linear. You can take a path attempting something, roll back to a fork in the conversation, and take a different path using what you have learned from the model's output. I think trying to interpret someone else's branching flow would be more likely to create an inaccurate impression than understanding.
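The non-linearity is easier to see if you treat the turns as a tree rather than a transcript. A minimal sketch (all names and turn texts here are illustrative, not any real chat API):

```python
class Turn:
    """One node in a conversation tree; forking = adding a sibling child."""

    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def path(self):
        """The linear transcript the model actually saw on this branch."""
        turn, out = self, []
        while turn is not None:
            out.append(turn.text)
            turn = turn.parent
        return list(reversed(out))

root = Turn("user: first attempt at a request")
a = Turn("model: (attempt 1, abandoned)", root)
# The user rolls back to the root and rephrases using what they learned;
# the published output comes only from branch b.
b = Turn("user: rephrased request", root)
b2 = Turn("model: (attempt 2, shipped)", b)

print(b2.path())           # only this branch explains the final output
print(len(root.children))  # the dead branch still hangs off the root
```

A flat log of such a session interleaves both branches, so a reader reconstructing intent from it would naturally misattribute the abandoned attempt to the final result.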