This has been my experience as well. I treat LLMs like an intern or junior who can do the legwork that I have no bandwidth to do myself. I have to supervise it and help it along, checking for mistakes, but I do get useful results in the end.
Attitudinally, I suspect the people who get value out of LLMs (paid ones; the free ones are no good) are those who have experience supervising interns or mentoring juniors, rather than grizzled lone individual contributors who never learned how to coax value out of people -- a camp I was in myself for most of my early career.
One of the most interesting aspects of this thread is how it brings us back to the fundamentals of attention in machine learning [1]. Humans have intelligence, but our attention is inherently limited, which is why the idea behind Attention Is All You Need [2] is so relevant to what we're discussing.
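For anyone who hasn't read [2], here is a minimal sketch of the scaled dot-product attention it describes. The NumPy implementation, shapes, and toy data below are my own illustrative choices, not anything taken from the paper.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K: (seq_len, d_k); V: (seq_len, d_v).
        d_k = Q.shape[-1]
        # Similarity between every query and every key, scaled so the
        # softmax stays in a well-behaved range.
        scores = Q @ K.T / np.sqrt(d_k)
        # Softmax per query: each position gets a fixed attention "budget"
        # (a probability distribution) to spread over the sequence.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Toy example: 4 tokens with 8-dimensional representations.
    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)

The softmax budget is the formal version of the point above: attention over a long context is a finite resource that has to be allocated, no matter how capable the model (or person) is.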
My 2 cents: our human intelligence is the glue that binds everything together.