...but why wouldn't they use AI as an oracle? From an outsider's perspective, it seems there's already plenty of incentive to test the margins of acceptable academic practice in order to produce more papers or publish more quickly. Sadly, I feel it'll become the norm to have a chatbot interpret your results and write your paper rather than relying on those expensive grad students.
I don't have answers; just the lingering question "why are we building this?"
We're building this because the ability to make narrow, specific predictions can be narrowly and specifically useful. This works if you have a good understanding of both the tools and the domain you're looking to make predictions in.
Unfortunately, from an outsider's perspective, this looks like being widely and generically useful. If you don't understand your tools, you're going to misuse them, and this hype cycle is the result.