Thanks for this. It is not true that researchers think the solution to AI hallucinations and other problems lies merely in ingesting 'more' training data. There is a veritable swarm of research going on into 'how to reason', rather than 'what to reason on': chain-of-logic, chain-of-thought, world models, haptics, and other fancies. My view is that these attempts will approach human-like thinking at some point in the not-too-distant future (I am a Prof and spend a lot of time reading what is happening in the labs - the problems you articulate are top-of-mind).