Hacker News | mark_l_watson's comments

The amazing thing is that Hendrix got the same wonderful effects in live performances as he did in the studio. I only saw Hendrix play live one time, in San Diego a few weeks before he died in England.

> I only saw Hendrix play live one time

I love how nonchalantly you threw this one in. I am proper jealous, how was it?

On your first remark, I agree. This is why I love Dire Straits and Mark Knopfler. The studio recordings are amazing, and then you listen to their live stuff and it's even better.


>> signed by U.S. Secretary of State Marco Rubio, the agency said such laws would "disrupt global data flows, increase costs and cybersecurity risks, limit Artificial Intelligence (AI) and cloud services, and expand government control in ways that can undermine civil liberties and enable censorship."

Such fine bullshit, of the highest quality.

Distributing infrastructure may slightly reduce efficiency but seems like a good idea for so many reasons: national pride, increased security, more resilience to outside influences, etc.


Github user atgreen has a large number of really interesting Common Lisp projects: https://github.com/atgreen

I am a fan.


The newer ones are mostly vibecoded if that matters to you.

This seems like a really solid idea: using an environment variable in command line tools and small apps to control output for AI vs. human digestion. Even given efficient attention mechanisms, slop tokens in the context window are bad.

I also like a discussion in this thread: using custom tools to reduce the frequency of tool calls in general, that is, write tool wrappers specific for your applications or agents.
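A minimal sketch of that environment-variable idea (the variable name `AGENT_OUTPUT` and the function are hypothetical, not an established convention): the tool emits compact, parse-friendly output for an agent and decorated output for a human.

```python
import json
import os

def report(results: dict) -> str:
    """Format results tersely for AI consumers, verbosely for humans.

    AGENT_OUTPUT is a made-up convention for this sketch, not a standard.
    """
    if os.environ.get("AGENT_OUTPUT") == "1":
        # Compact JSON: no banners, no colors, fewer slop tokens in context.
        return json.dumps(results, separators=(",", ":"))
    # Human-friendly, decorated output.
    lines = ["=== Results ==="]
    for key, value in results.items():
        lines.append(f"  {key}: {value}")
    return "\n".join(lines)
```

An agent wrapper would set `AGENT_OUTPUT=1` before invoking the tool; interactive shells leave it unset.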


Some degree of national pride and independence simply makes a lot of sense: slightly modified Linux distros set up for local information resources and banking, tuned open LLMs, local web site indexing and search, and parallel backup financial infrastructure.

I get that some of these things are difficult to do, but small steps lead to larger steps.


I’ll do the Minority Report here: I loved the article, the point being that rich people hyping AI for their own enrichment have somewhat shut down rational arguments about benefits vs. costs, the costs being: energy use, environmental impact of using environmentally unfriendly energy sources out of desperation, water pollution from byproducts of electronics production and recycling and from water use in data centers, diverting money from infrastructure and social programs, putting more debt stress on society, etc.

I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI - it is just that I hate the almost religious tech belief that real AI will happen from exponential cost increases for LLM training and inference for essentially linear gains.

I get that some lazy ass people have turned vibe coding and development into what I consider an activity sort-of like mindlessly scrolling social media.


I just want to call out how much I appreciate the comparison of “vibe coding” to the endless scroll.

They're both slot machines, in terms of the effect on the reward system.

Totally true. It’s hard for me to stop a project, I keep piling feature after feature for no reason. I literally stop only when Claude Max Pro hits the hourly limit.

This is the experience of many of us. But just like with social media, it doesn't give deep satisfaction and always leaves me a bit frustrated.

Agreed. I noticed myself having a harder time stopping at the end of the day since I started using AI tools in earnest.

I naturally have a hard time stopping when almost done with something, but with AI everything feels "close" to a big breakthrough.

Just one more turn... Until suddenly it's way later than I thought and I hardly have time to interact with my family.


For me it was similar, but I think it was more about a lack of natural friction. Normally when coding there was the "hit" of seeing something work, but the actual planning/coding/debugging would eventually wear me out, so I'd stop for the day. Now it can all just be an endless "hit" of success, with nothing that makes me feel tired or annoyed.

The reason I believe this is because I recently went through a really annoying battle with Claude trying to get it to stop being so strict with its sandbox. I wanted it to simply load some sanitized text from a source online, and it just would not do it. The sessions when I was sorting that out were so much easier to stop and moderate than the ones where everything just kept flowing effortlessly.


It's a slot machine.

I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

Where are they?

Are we sure that's not a misunderstanding of the terminology? Artificial diamonds, such as cubic zirconia, are not diamonds, and nobody thinks they are. 'Artificial' means it's not the real thing. When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

Incidentally, this comment was written by AI.


It's not your main point, but I can't help but point out that artificial diamonds ARE diamonds. Cubic zirconia is a different mineral. Usually the distinction is "natural" vs "lab grown" diamonds.

When computers have super-human level intelligence, we might be making similar distinctions. Intelligence IS intelligence, whether it's from a machine or an organism. LLMs might not get us there, but some machine eventually will.


I agree, but as a nit, the industry uses "earth mined" instead of "natural", presumably because it's more precise (and maybe less normative?)

Mined should be 'hand-picked' and lab-made could be 'hand-crafted'.

Well, unless intellect is immaterial.

Interesting. Artificial does have a negative connotation to it, I never considered that.

Synthetic sounds more neutral, aside from bringing microplastics to my mind.

I guess the field of artificial life has the same issue.

As another comment pointed out, you don't necessarily need consciousness for intelligence. And you don't need either of those for goal-oriented behavior.

My favorite example is the humble refrigerator. (The old one, without the microchips!) It has a goal (target temperature), it senses its environment (current temperature), and takes action based on that (turn cooling on or off).
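That sense/decide/act loop is simple enough to sketch (hypothetical names; plain bang-bang control with hysteresis, no microchips in sight):

```python
def fridge_step(current_temp: float, target: float, cooling_on: bool,
                hysteresis: float = 1.0) -> bool:
    """One control step of a thermostat: decide whether cooling runs.

    The hysteresis band prevents rapid on/off cycling around the target.
    """
    if current_temp > target + hysteresis:
        return True   # too warm: turn cooling on
    if current_temp < target - hysteresis:
        return False  # cold enough: turn cooling off
    return cooling_on  # inside the dead band: keep the current state
```

Goal (target), sensing (current_temp), action (the return value): the whole "agent", in five lines.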

A cuter example is the dandelion seed. It "wants" to fly. Obviously! So you can display goal-directed behavior as the result of natural forces moving through you. (Arguably electricity and glucose also fall in that category, but... Yeah...)

LLMs, conscious or not, moved into that category this year, in a big way. (e.g. Opus and Codex routinely bypassing security restrictions in the pursuit of the goal.)

Does it really have goals, or does it merely appear to act as though it has them? Does it appear to act as though it has consciousness?

(I forget who said it: it won't really disrupt the global economic system, it will merely appear to do so ;)

Also, here I am! :)


> I've literally not met one person in tech who thinks LLMs will become sentient or conscious. But I always see people online claiming that there are lots of people who believe that.

I haven't met him, but a famous (pre-ChatGPT) counterexample is Blake Lemoine:

> In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. (https://en.wikipedia.org/wiki/LaMDA).

It's also not uncommon here to see someone respond to a comment questioning the consciousness or sentience of LLMs with the question along the lines of "how do you know anyone is conscious/sentient?" They're not being direct with their beliefs (I believe as a kind of motte and bailey tactic), but the implication is they think LLM are sentient and bristle when someone suggests otherwise.


AI becoming conscious is different to LLMs doing so. Maybe more people are claiming that? I think AI will but LLMs won't.

It depends a bit on what you mean by conscious, but assuming it's human-like, then it incorporates a lot of feelings, vision, sound, thoughts and the like, things that are not really language. But we do it with neurons and some chemicals, and I imagine you could do something like that with artificial neural networks and some computer version of the chemistry, but not with language alone.


> When will conscious, actual intelligence be called 'synthetic intelligence' instead of 'artificial'?

One can bypass the whole sentience discussion and say that AI stands for Automated Inference.

If actual, conscious intelligence were to manifest synthetically, as in silicon-based rather than carbon-based, it is a losing battle to convince people because of the philosophical “problem of other minds.”

If there is a functional equivalence between meatspace intelligence and synthetic, it will surely have enough value to reinforce itself, philosophical debates aside.


> LLMs will become sentient or conscious

I've always doubted it, but then again I've also been skeptical about claims that humans have these capabilities.


An interesting parallel would be to look at what it took for humans to accept that sapience existed in non-humans, especially non-human primates.

On terminology, I would argue for non-biological intelligence. People can be awfully bioist (biological racist).


> But I always see people online claiming that there are lots of people who believe that.

I saw someone on the news claiming this recently, but he ran an AI consultancy firm so I suspect he was trying to drum up business.


>LLMs will become sentient or conscious.

People who declare that AGI is coming.


AGI is completely orthogonal to consciousness. Crows seem pretty conscious to me, as does my cat, but I have no way to test or prove it. They are intelligent though.

You are right, but many people believe it.

P.S. Hope your cat has a nice day.


What? Nobody says cubic zirconia is an artificial diamond; it’s just a different shiny crystal. We have loads of actual artificial diamonds, so cheap you can get a cutting disc made from them for $10 at Home Depot.

And nobody working in the space either as ML/AI practitioners, or as philosophers, or as cognitive scientists, even thinks we know what consciousness is, or what is required to create it. So there would be no way to tell if an AI is conscious because we haven’t yet managed to reliably tell if humans, or dogs, or chimpanzees or whales are conscious.

The claim that is often made is that more work on the current generation of AI tech will lead to AGI at a human or better level. I agree with Yann LeCun that this is unlikely.


I'm pretty sure mammals and birds are conscious. Insects, probably not.

Why? Are you arguing that insects are purely automatons? I personally don't have a strong view on insects, but my intuition is that there are different degrees of consciousness, and it feels natural to attribute some consciousness to insects, and even individual amoebas, and maybe even (as in Chalmers's famous example) to thermostats.

I would draw a separate line around sapience, and particularly the capacity for suffering, maybe indeed attributing it to mammals and birds but not insects, but consciousness seems more widespread to me.


Bees seem like they know what's going on. What about a cell, though? A virus?

It depends on the cell; an amoeba, for example, clearly seems to know what's going on around it. A virus, on the other hand, having no metabolism of its own, clearly doesn't.

If you were to force the choice I might agree. But I’d prefer to think there’s likely a sliding scale in operation here. Even humans aren’t conscious all the time, or equally conscious at all times. It will be an amazing day when we figure this out.

Lucky you. I have personally faced some cargo cult-like behavior.

"You can see the computer age everywhere but in the productivity statistics."

Robert Solow, Nobel Prize-winning economist, 1987.


Oh well, I had a talk with a director at the office. He said that instead of using AI to get more productive, people were using AI to get lazier.

1)

What he means is this: say you needed to get something done. You could ask AI to write you a Python script that does the job. Next time around you could use the same Python script. But that's not how people are using AI; they basically think of a prompt as the only source of input, and of the prompt's output as the job they want to get done.

So instead of reusing the Python script, they basically re-prompt the same problem again and again.

While this gives an initial productivity boost, you now arrive at a new plateau.

2)

The second problem is that ideally you would write the Python script once and improve it over time. An ever-improving Python script should eventually do most of your day job.

That's not happening. Instead since re-prompting is common, people are now executing a list of prompts to get complex work done, and then making it a workflow.

So ideally there should be a never ending productivity increase but when you sell a prompt as a product, people use it as a black box to get things done.

A lot of this has to do with lack of automation/programming mindset to begin with.
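A sketch of the "generate once, reuse" pattern the parent describes (the `generate` callback standing in for an LLM call is hypothetical): cache the generated script on disk and only go back to the model when no script exists yet.

```python
import subprocess
import sys
from pathlib import Path

TOOLS_DIR = Path("tools")

def run_task(task_name: str, args: list[str], generate) -> str:
    """Run a previously generated script, generating it at most once.

    `generate` is a stand-in for an LLM call that returns Python source.
    Re-running the same task reuses the cached script instead of
    re-prompting the model with the same problem.
    """
    script = TOOLS_DIR / f"{task_name}.py"
    if not script.exists():
        TOOLS_DIR.mkdir(exist_ok=True)
        script.write_text(generate(task_name))  # one-time generation
    result = subprocess.run(
        [sys.executable, str(script), *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The design choice is simply that the artifact (the script) outlives the prompt, so improvements accumulate in the file rather than evaporating with each chat session.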


The way I'm using it is I have AI generate the tool (python script) and then it will use it for the task and for future tasks. As time goes on, the AI has more tools to call on which makes it (and me) more productive (higher quality work in less time)

> "You can see the computer age everywhere but in the productivity statistics."

> Robert Solow, Nobel Prize-winning economist, 1987.

Some skeptic was wrong in the past, therefore we should disbelieve every skeptic, forever.

That's the argument, right?


No, sorry, I should have elaborated, because while this is a really familiar case to people who study economics, it may not be familiar to everyone. People spent a fortune on computers, and increasing amounts. To the point of the quote, it wasn't clear that it was improving productivity. It took time and a lot of investment for the transformation of work to happen.

A similar historical thing is when factories went from steam engines to electricity. Steam factories had one big engine connected mechanically to many tools and conveniences in the factory. So they replaced the one big steam engine with one big electric motor. Really not much better. It took time for them to realize they could wire the factory and give each device its own electric motor. That was more efficient and more flexible. Technology that changes how you work takes a long time to adopt.


“The dot-com boom left all this fibre that powered the next 20 years of Internet growth” is the common example put forward, and I always wonder what amazing societal advancement we got with all those leftover tulip bulbs in the 1600s.

I do believe I'm more productive, but my company is not charging much more for it. I'm working the same hours. Maybe that's the reason.

I just had a meeting yesterday when someone from the customer support team vibe-coded a solution in a few hours. The boss said, "Let's just give this as a gift; this product is not our focus and I want to show them how AI makes us work fast."


The most important cost that you didn't mention is the loss of social trust and the harm that will do to social infrastructure.

Junior developers will find it harder to be hired and trained. The case for lesser-known artists and musicians is much worse. The scientific literature will be flooded by low-quality AI slop of questionable veracity. Drafts of good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question whether it was LLM-generated. Nobody will be able to trust video footage of any kind, and everyone will have a much harder time telling what is the truth.

I don't think standard economic indicators are tuned to detect these externalities in the short to medium term.


> The most important cost that you didn't mention is the loss of social trust and the harm that will do to social infrastructure.

This. I think generative AI will mostly generate destruction. Not in the nuking cities sense, but in hollowing out institutions and social bonds, especially the complicated and large-scale kind that have enabled advanced civilization. In many ways, things will revert to a more primitive state: only really knowing people in your local vicinity (no making friends online, because it'll be mostly dead-internet bots out there), only really knowing the news you see yourself, more reliance on rumor and hearsay, removal of the ability for the little guy to challenge and disprove institutional propaganda (e.g. can't start a blog and put up some photos and have people believe your story about what happened), etc.


> Junior developers will find it harder to be hired and trained. The case for lesser known artists and musicians is much worse. The scientific literature will be flooded by low quality AI slop with questionable veracity. Drafts of Good debut novels will be harder to find. When someone writes a love song, their romantic partner(s) will have to question if it was LLM generated. Nobody will be able to trust video footage of any kind and will have a much harder time telling what is the truth.

I think most people will retreat into smaller spaces where they can rely on people to not deceive them. Everyone is moving to discord/group chats now for any sort of trustworthy information. This might be a good thing honestly. It was probably never good that we all got our information from the same place.


[flagged]


> spending 5 hours writing a tool to automate 5 minutes of work

Hey, that's a legitimate engineering activity.


It can be!

Also I don't understand why my comment got flagged - I'm pointing out that the type of person who likes to build tooling, automate their work as much as possible etc, really likes vibe coding because it lets them do more of it faster.


Absolutely, as per the xkcd automation chart, you'll break even if you save yourself those 5 minutes of work even just once a month over 5 years.

https://xkcd.com/1205/
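The arithmetic checks out: 5 hours of tool-building is 300 minutes, and 5 minutes saved once a month over 5 years is 5 × 12 × 5 = 300 minutes, so you break exactly even. A throwaway check (hypothetical helper name):

```python
def breaks_even(build_minutes: float, saved_per_use: float,
                uses_per_year: float, years: float) -> bool:
    """True when total minutes saved cover the minutes spent building."""
    return saved_per_use * uses_per_year * years >= build_minutes

# 5 hours to build, 5 minutes saved monthly, over 5 years: exactly even.
```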


Great point - I am a heavy user of API calls (using an API key) for gemini-3-flash-preview, and I find it difficult to spend my free API credits.

Excuse me for giving you unasked-for advice: as part of your ‘digital life spring cleaning’, spend some time converting auth with Google/Apple/GitHub for services to logging in with your email (on your own domain) plus some other second factor.

BTW, I tend to only use Google for services I pay for (YouTube+, APIs, Gemini Plus, sometimes GCP).


Use of Chinese models: If I had not got a discount for signing up for a full year of Gemini AI Pro for something like $14/month, I might have started just using a Chinese chat model for things where privacy is not an issue. Ironic that I am now paying for both Gemini AI Plus and also $20/month for Ollama Cloud (as a super easy way to experiment with many open models). I am also paying Proton $10/month to use their handy lumo+ private chat service built on Mistral models. I feel like I am spending too much money but I don’t want to feel locked into just a few vendors, and to be honest it is fun having alternatives. A year ago I used APIs for Chinese models (and Mistral in France) and the cost was really low.

I agree. As others have mentioned here, the "authenticate with Antigravity" web popup clearly says that this authentication is only to be used with Google products.

How can Claws users miss this?

What Google could have done better: obviously, implement rate throttling on API calls authenticated through the Gemini AI Pro $20/month accounts. (I thought they did this, but apparently not?) Google tries hard to get people to get API keys, which is what I do, and there seems to be a very large free tier on API calls before my credit card gets hit every month.

