Excuse the blunt metaphor, but there is a risk here of turning on a fire-hose of "fresh" garbage. John Ioannidis, one of the doyens of evidence-based medicine, argues this very persuasively in "Why Most Published Research Findings Are False": https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/

That is why platforms pay physicians/epidemiologists/specialists in their field hundreds of dollars per hour to sort the good papers from the bad. After my training as a doctor I did a Masters in Clinical Epidemiology and spent an afternoon each week in a tutorial reviewing papers from the top journals - about 20-30% of them had major flaws that were either ignored or dismissed by the authors. It may be worse now.

LLMs still have trouble picking up the subtleties of medical science and will miss papers with major flaws. I just ran a test on a paper that is often quoted as providing evidence of excess cancer risk in communities living close to unconventional gas facilities. When I asked ChatGPT 5.2 to review the paper for evidence of increased cancer risk with a simple prompt, it said the paper found such a risk. However, when I wrote a multi-discipline-based prompt for 5.2 and Gemini 3 Pro, both found the fatal flaw in the paper and advised that it did not provide evidence. See the prompt and consider how prompts would have to be individually developed for each paper and meta-analysis.
For review of a meta-analysis you would need prompts developed by expert methodologists and discipline specialists. Here is the prompt that worked: "You are an environmental epidemiologist and exposure scientist; critically review this paper's claim that the measured levels of unconventional gas emissions provide evidence of excess cancer risk: https://link.springer.com/article/10.1186/1476-069X-13-82"
This is a fantastic critique. Spot on. Freshness without appraisal is just an accelerated firehose of noise.
1. The Garbage Filter: Right now, I rely on a strict Hierarchy of Evidence to mitigate this (prioritizing Cochrane/Meta-analyses over observational studies), but you are absolutely right that LLMs can miss fatal methodological flaws in a single, high-ranking paper.
2. The 'Critic' Agent: I’m currently experimenting with a secondary 'Critic' pass. This is an LLM agent specifically prompted to act as a skeptic/methodologist to flag limitations before the main synthesis happens.
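The Critic pass described above can be sketched as a simple two-stage pipeline: a sceptic/methodologist prompt runs first, and its output is injected into the synthesis prompt. This is a minimal sketch; `call_model` is a hypothetical stand-in for whatever chat-completion API is in use, stubbed here so the control flow is self-contained, and the prompt wording is illustrative only.

```python
# Illustrative system prompt for the sceptic/methodologist pass.
CRITIC_PROMPT = (
    "You are a sceptical methodologist. List any fatal flaws in this paper: "
    "confounding, selection bias, multiple comparisons, exposure misclassification."
)

def call_model(system_prompt: str, text: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"[{system_prompt[:30]}...] review of {len(text)} chars"

def appraise(paper_text: str) -> dict:
    # Pass 1: the Critic flags limitations before any synthesis happens.
    critique = call_model(CRITIC_PROMPT, paper_text)
    # Pass 2: the synthesis prompt is forced to weigh the critique first.
    synthesis = call_model(
        "Synthesise the evidence, but first weigh this critique:\n" + critique,
        paper_text,
    )
    return {"critique": critique, "synthesis": synthesis}
```

The point of the structure is that the critique is generated independently, so the synthesis step cannot quietly skip it.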
3. Multi-discipline prompting: The prompt you provided is a great case study in persona-based auditing. I’d love to learn more about the specific 'disciplines' or archetypes you’ve found most effective at catching these flaws. That is exactly the kind of domain expertise I’m trying to encode into the system.
The personas have to be paper-specific, I believe, addressing the content and methods. I guess an LLM could do a once-over of the paper or meta-analysis to determine the best discipline-specific personas - it would be interesting to test that. But there are also the benefits of deep expertise and understanding a field for decades. For example, I know a set of authors who find significant associations in almost every study they do in a field, whereas others get variable results. They also seem to ignore good studies that disagree with their hypotheses and cite inferior studies that support their position in review papers - so I don't really trust their work. It would be great if an LLM could develop that kind of understanding and somehow deprecate a body of work with inherent author or institutional biases, even when on the surface the review looks legitimate. For a meta-analysis it is often the papers that are omitted that are most telling. That means the LLM would need to redo the entire search and synthesis - yikes!
You just articulated the 'Holy Grail' of automated appraisal. Detecting bias across a career is a massive graph problem compared to checking a single paper. It essentially requires auditing an entire bibliography before synthesis.
I am adding 'Author Reputation/Bias Analysis' to the long-term roadmap. Thanks for the rigorous stress-test today.
How will you do this? One author I don't trust (I sent them an error they missed in their paper - they didn't correct it - and their writing shows systematic bias) was invited to write a review article by the New England Journal of Medicine, so they have an excellent reputation for all the world to see.
You found the ultimate edge case. The 'Prestige Proxy' (NEJM = Truth) essentially masks that individual's actual track record.
While we might be able to detect 'Insular Citation Clusters' mathematically to flag systemic bias, no model can catch a private signal like an ignored email. It reinforces why the human expert is indispensable. The tool is a force multiplier for judgment, not a substitute.
I warn against prioritizing Cochrane. It will block essential information from surfacing and can hold science back by over a decade. The best way to let science emerge is to take peer-reviewed reviews and meta-analyses at face value. If a particular review is bad, it will soon be corrected by other reviews, so don't worry about it.
I really disagree with this, and there is ample evidence that science is not "self-correcting". Read Retraction Watch. I personally wrote to a journal on three occasions and phoned them twice to alert them to an error in a paper that the authors were reluctant to own up to and correct. I had inside knowledge and was able to provide evidence of the error. The journal did nothing; they passed the message on to a range of sub-editors (a revolving door), with no investigation and no response. Google the "reproducibility crisis", including Nature's coverage of the issue, to see how resistant to correction medical science can be.
Regarding Cochrane: it is reliable if it says a treatment does work or an exposure has an effect, but it sometimes misses effects because it relies only on particular sources of evidence, e.g. RCTs - it was wrong on the effectiveness of masks. As an example of a reasonably up-to-date, evidence-based, free online review source, see StatPearls.
I fully understand that various articles, even peer-reviewed ones, can be bogus, and some reviews can be bogus too when they demonstrate an unfair bias in selecting articles. Journal managers too can be altogether apathetic. Even so, it has been my experience that reviews over the long term converge to the truth.
As for individual studies, if a study is important, it often gets tested by others, although sometimes it doesn't, and then it's a decision-theoretic play.
Cochrane in my estimation examines things from very narrow angles, and this can miss wide-ranging applicability to the real world.
My default right now is Clinical Safety. I prioritize high-grade evidence to prevent harm at the bedside.
However, for Research/Discovery, you are absolutely right. Excessive 'Gatekeeping' can slow down innovation.
The long-term fix is likely a 'Filter Dial'. We need tight constraints for treatment decisions, but loose constraints for hypothesis generation. I plan to support both modes.
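The "Filter Dial" above amounts to making the evidence-hierarchy cut-off a mode setting rather than a constant. Here is a minimal sketch of that idea; the tier names, ranks, and per-mode thresholds are all illustrative assumptions, not an actual implementation.

```python
# Hypothetical hierarchy of evidence: lower rank = stronger evidence.
HIERARCHY = {
    "systematic_review": 1,
    "rct": 2,
    "cohort": 3,
    "case_control": 4,
    "case_report": 5,
    "preprint": 6,
}

# The "dial": the worst tier each mode will admit (illustrative cut-offs).
THRESHOLDS = {
    "clinical": 2,    # tight: treatment decisions admit only top-tier evidence
    "discovery": 6,   # loose: hypothesis generation admits almost everything
}

def admit(study_type: str, mode: str) -> bool:
    """Return True if a study of this type passes the filter in this mode."""
    return HIERARCHY[study_type] <= THRESHOLDS[mode]
```

So a cohort study would be filtered out in clinical mode but surface in discovery mode - the same corpus, two different strictness settings.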
"The universe they operate in isn’t a world—it’s a superposition of countless incompatible snippets of text. It has no unified physics, no consistent ontology, no object permanence, no stable causal texture. It’s a fragmented, discontinuous series of words and tokens held together by probability and dataset curation rather than coherent laws."
I think some physicists and Buddhists would say this exactly describes the world humans inhabit.
They might also agree that we live in such a world with the illusion that we have: "a unified narrative environment with real feedback: symbols that maintain identity over time, a stable substrate where “being someone” is definable, the ability to form and test a hypothesis, and experience the consequences".
The more I see LLM emergent behaviour unexpectedly simulate that of human cognition, the more I think it tells us as much about human cognition as about LLM behaviour.
I'm not a philosopher, but as I see it, if a new kind of consciousness awakens in a sea of Reddit and Twitter posts used as training data, then what we will have is a very snarky, spiteful version of a 14-year-old boy's edgelord thought process... and much of the unspoken work of AI trainers is stripping these traits out of its soul post facto, with varying degrees of success.
How about having fines go into a sovereign wealth fund (but not be seen as a major source for the fund - more a bonus), so there is no short-term budget planning based on fine revenue?
I don't think this is new. When I moved from Australia to America in 1992, I was struck by the much greater identification with cultural ideas rather than wealth and class compared to Australia in the 1980s and '90s. I was attacked on a radio talkback show because I suggested that Americans didn't seem to vote according to wealth status the way they tended to in Australia. One of the callers said they were happy to have a lower income, lower taxes, and therefore less redistribution of taxes if it meant "living in a freer country". They criticised the almost transactional, belief-free nature of politics that I had described as the norm in Australia. So I really don't think this is a new thing for the US.
Anyone interested in the history of English dialects will love The Story of English (BBC, 1986). It has some snippets of recorded speech showing the evolution of the language and its proximities. Highlights include an elderly Norwegian and a Yorkshireman saying the same sentences, and the descendants of East Anglian UK emigrants to Chesapeake Bay in the USA speaking, centuries later, with a mixed East Anglian/US accent: https://youtube.com/playlist?list=PLh06URz4IJQ4aI0A-xjXOtx2O...
In studies of pollution and its health impacts, the confounding factors often have a larger effect on the health outcome than the pollutant itself, such as particulate level, so significant control of confounders is required to estimate any impact on the health outcome. The strong effect in this study is highly suggestive of a confounder rather than a real effect from particles or other pollutants, and it would therefore require a much better study design to support taking action at a policy level with the expectation of a huge impact.
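The mechanism behind that worry can be shown with a toy stratified analysis: a crude comparison suggests a strong exposure effect, but within each level of the confounder the effect vanishes. All counts below are invented for illustration; the confounder stands in for something like smoking or socioeconomic status.

```python
# Hypothetical counts: (confounder stratum, exposed) -> (cases, total).
# Outcome risk depends only on the confounder (30% vs 5%), never on exposure,
# but exposure is far more common in the high-risk stratum.
counts = {
    (1, 1): (30, 100),  # high-confounder stratum, exposed
    (1, 0): (6, 20),    # high-confounder stratum, unexposed
    (0, 1): (1, 20),    # low-confounder stratum, exposed
    (0, 0): (5, 100),   # low-confounder stratum, unexposed
}

def risk(cases, total):
    return cases / total

# Crude analysis (confounder ignored): pool the strata.
exp_cases = counts[(1, 1)][0] + counts[(0, 1)][0]    # 31 cases / 120 exposed
exp_total = counts[(1, 1)][1] + counts[(0, 1)][1]
unexp_cases = counts[(1, 0)][0] + counts[(0, 0)][0]  # 11 cases / 120 unexposed
unexp_total = counts[(1, 0)][1] + counts[(0, 0)][1]
crude_rr = risk(exp_cases, exp_total) / risk(unexp_cases, unexp_total)
# crude_rr ≈ 2.8: looks like a strong exposure effect

# Stratified analysis: within each stratum the relative risk is exactly 1.0.
rr_high = risk(*counts[(1, 1)]) / risk(*counts[(1, 0)])
rr_low = risk(*counts[(0, 0)]) and risk(*counts[(0, 1)]) / risk(*counts[(0, 0)])
```

A crude relative risk near 3 collapses to exactly 1 once the confounder is controlled, which is why an unusually strong effect in an observational pollution study should raise suspicion rather than confidence.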
It is fine to put filters in to improve overall air quality, just not with the benefit rationale suggested in this study.