I have the opposite experience. Claude can't fit it all in the context window and makes changes that completely break something on the other side of the program.
Granted that's because the program is incredibly poorly written, but still, the context window will stay a huge barrier for quite some time.
Between yours and GP's comments, I find echoes of my experience:
> Most of software work is maintaining "legacy" code, that is older systems that have been around for a long time and get a lot of use.
> Granted that's because the program is incredibly poorly written
LLMs can't fix big, shitty legacy codebases. That is where most maintenance work (in terms of hours) is, and where it will remain.
I would take it one step further and argue that LLMs and vibe-coding will compound into more big, shitty legacy codebases over time, and therefore, in the long arc, nothing will really change.
It has ever been thus. There are multi-million dollar businesses propped up by .NET applications on a foundation of shunted-around files, and at best, SQL used as APIs/queues. "Working" code is, in the long run, a liability outside the hands of those doing real engineering.
I want to voice the same bad experience. I tried Claude and several others, actually. I could get the AI to understand some things, but it quickly went off the rails trying to comprehend larger complexities, and its suggested changes would have ranged from worse to detrimental had I allowed them to be committed.
Can it though? I thought it was most useful for writing new code, but I have so far never had it correctly refactor existing code. Its refactoring attempts usually change behavior / logic, and sometimes leave the code in a state where it's even harder to read.