I wonder how much of this is an intrinsic limitation of LLMs, and how much is that interdisciplinary thinking and the mashing together of problem domains are simply missing from the training data. It's a pretty rare thing, and these analogies and linkages only get noticed when they happen to work out (and once they do, they no longer seem so far out of left field).