
This may not be AGI, but I think LLMs as is, with no other innovation, are capable enough for gigantic labor replacement with the right scaffolding. Even humans need a lot of scaffolding at scale (e.g. sales reps use CRMs even though they are generally intelligent). LLMs solve a “fuzzy input” problem that traditional software struggles with. I’m guessing something like 80% of current white collar jobs can be automated with LLMs plus scaffolding.
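To make "LLM plus scaffolding" concrete, here's a toy sketch of the fuzzy-input piece (the field names and the call_llm stand-in are mine, not any particular product's API):

    import json

    def call_llm(prompt: str) -> str:
        # Stand-in for a real chat-completion call; returns canned JSON here.
        return '{"company": "Acme Corp", "intent": "renewal", "seats": 40}'

    def extract_crm_fields(email_body: str) -> dict:
        """Turn a fuzzy free-text email into structured CRM fields."""
        prompt = (
            "Extract company, intent, and seats as JSON from this email:\n"
            + email_body
        )
        fields = json.loads(call_llm(prompt))
        # The scaffolding: validate before anything touches the CRM.
        assert {"company", "intent", "seats"} <= fields.keys()
        return fields

    print(extract_crm_fields(
        "hey, it's Bob from Acme, we want to renew but only for ~40 seats this yr"
    ))

The LLM absorbs the fuzz; plain old software does the routing, validation, and storage.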


> LLMs solve a “fuzzy input” problem that traditional software struggles with

This is basically all of it.

Kind of like how word processors solved the "writing is tedious" struggle and search solved the "can't curate the internet" struggle.


What's a white collar job that can be automated with LLMs plus scaffolding?


My first job out of uni was creating automated tests to validate a set-top box. It involved using a library of "blocks" to operate a remote control. Some of the people I worked with spent their whole careers in this narrow area, building those libraries of blocks and using them for customers, and I have no doubt an LLM today could produce the same tests without any human intervention.
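Roughly the shape of it, as a hypothetical sketch (the block names and the call_llm helper are made up, not the real vendor library we used):

    # An LLM assembling a set-top-box test from a library of
    # remote-control "blocks". Everything here is illustrative.
    BLOCKS = ["power_on", "open_menu", "select_channel", "assert_screen", "power_off"]

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; returns a canned plan here.
        return "power_on\nselect_channel\nassert_screen\npower_off"

    def generate_test(requirement: str) -> list[str]:
        prompt = (
            f"Available blocks: {', '.join(BLOCKS)}\n"
            f"Requirement: {requirement}\n"
            "Reply with one block name per line, in execution order."
        )
        steps = [line.strip() for line in call_llm(prompt).splitlines()]
        # Validate against the block library so hallucinated steps fail fast.
        unknown = [s for s in steps if s not in BLOCKS]
        if unknown:
            raise ValueError(f"unknown blocks: {unknown}")
        return steps

    print(generate_test("Tune to channel 5 and verify the picture renders"))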


Replacing labor doesn't require replacing whole jobs; it's enough to replace specific tasks within those jobs, which reduces the number of workers needed for the same amount of work.


But then it becomes a competitive advantage for another firm to use the same employees to do more work, leading to the jobs not being replaced.


To pick a rather extreme example, the fraction of the population involved in farming is far lower than in the past, due to productivity improvements.


It's not clear why your analogy wouldn't have implied the end of white collar work when computers were first invented or when the internet was invented. Both should have been massive productivity boosts that forced workers to go elsewhere to feed themselves. Instead, Jevons' paradox kicks in every time.


Counterexample, not analogy.


To pick a rather extreme example, the cotton gin...


Well, the question here is whether LLMs are the cotton gin or the combine/tractor that killed all the farming jobs.


I think it was GPS, automation (robotics), bioengineered crops, and conglomerates. My point is, I'm pretty sure it was a lot of factors, even in the cotton gin case. It's probably naïve to give so much credit to one thing.


Most QA, most analyst positions, a good chunk of the kludge in intellectually challenging jobs like medical diagnostics or software engineering, most administrative work (including in education and healthcare), about 80% of customer success, and about 80% of sales are all within striking distance of automation with current-generation LLMs. And that's entirely ignoring the second-order effects in robotics and manufacturing.


You don't need an LLM to replace QA. Just fire them, push some testing onto developers and the rest onto the users. Shareholders will be pleased by the budget efficiency!


CEO.


Business Intelligence Engineers


You sound like a manager that doesn't understand what your employees are doing.


I see LLMs in a similar way: a new UI paradigm that "clicks the right buttons" when you know what you need but don't know the exact names of the buttons to click.

And from my experience there are lots and lots of jobs that are just "clicking the right buttons".
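A minimal sketch of that idea (the button names and the call_llm stand-in are made up, not any vendor's tool-calling API):

    # An LLM as a UI layer: map a fuzzy request onto a fixed set of "buttons".
    ACTIONS = {
        "export_report": lambda: print("exporting report..."),
        "refund_order": lambda: print("issuing refund..."),
        "reset_password": lambda: print("resetting password..."),
    }

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; a real one would choose from ACTIONS.
        return "refund_order"

    def handle(user_request: str) -> None:
        prompt = (
            f"Buttons: {', '.join(ACTIONS)}\n"
            f"User said: {user_request!r}\n"
            "Reply with exactly one button name."
        )
        choice = call_llm(prompt).strip()
        # Scaffolding constrains the model to known buttons only.
        ACTIONS.get(choice, lambda: print("no matching button"))()

    handle("the thing I bought never arrived, I want my money back")

The user never needs to know the button's name; the model finds it for them.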


Not sure it'll really work like that. Company A finds its programmers 2x as productive with LLMs and thinks it can fire half of them, but competitor B sees similar gains and uses the 2x to ship more features, so A has to do the same to keep up.


Such a decision merely tips the scale into brittle-structure territory. It introduces critical points of failure (funneling responsibility through fewer "nodes"; stronger reliance on compute, electricity, and internet; and more) and reduces adaptability (e.g. data bias, data cutoff dates, unawareness of minute evolving human needs, etc.).


They're not deterministic, and real-world work is risk-averse. People are confusing sigmoidal growth with a singularity.


I agree. AI image recognition is already so good that it can tell what someone is doing or what is happening in a picture. Just run that at 30 fps and make the robot's movements align with that understanding, and bam, you effectively have "AGI" in some sense, no? Sure, maybe it doesn't remember anything the way a human would, and it doesn't learn on the fly yet, but it's definitely some kind of intelligent, autonomous thing that will be able to respond to just about anything in the world.

Making it able to learn on demand is something people are working on. ChatGPT already remembers some stuff, after all. It's very small, spotty, weird memory, but hey, it's something. As soon as that gets a tiny bit better, you'll already beat humans at memory.
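The loop I'm imagining, as a toy sketch (the vision model and robot hooks are placeholders, not real libraries):

    import time

    def describe_frame(frame) -> str:
        # Placeholder for an image-recognition model.
        return "a person handing over a cup"

    def plan_motion(description: str) -> str:
        # Placeholder policy: map the scene description to an action.
        return "extend_arm" if "handing over" in description else "idle"

    # Toy perceive-plan-act loop at ~30 fps (bounded here for the example).
    for _ in range(3):
        frame = object()  # stand-in for a camera frame
        print("robot action:", plan_motion(describe_frame(frame)))
        time.sleep(1 / 30)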


Same here. Scale existing chips 100-1000x; there's plenty left to do. With that, we'll likely need 100x more power production too.


Try talking to white collar workers outside your bubble. Better yet, get a job.


This opinion is not based in reality. The only way to understand that is to go outside and talk to real people who are neither techies nor managers, and, better yet, try to do their jobs better than they do.



