Thanks for this. It is not true that researchers think the solution to AI hallucinations and other problems lies merely in ingesting 'more' training data. There is a veritable swarm of research going on into 'how to reason', rather than 'what to reason on': chain-of-logic, chain-of-thought, world models, haptics and other fancies. My view is that these attempts will approach human-like thinking at some point in the not-too-distant future (I am a Prof and spend a lot of time reading what is happening in the labs - the problems you articulate are top-of-mind).
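
Of those directions, chain-of-thought is the simplest to illustrate. Here is a minimal sketch, assuming a hypothetical `call_llm` helper that stands in for whatever model API you actually use; the whole technique amounts to asking the model to write out intermediate reasoning steps before committing to an answer.

```python
# Minimal sketch of chain-of-thought prompting, for illustration only.
# `call_llm` is a hypothetical stand-in for a real model-provider API call.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your model provider."""
    print("--- prompt sent to model ---")
    print(prompt)
    return "<model reply would appear here>"

QUESTION = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

# Direct prompting: the model answers in one shot and often pattern-matches
# to the intuitive-but-wrong answer ("10 cents").
direct_answer = call_llm(QUESTION)

# Chain-of-thought prompting: the same question, but the model is nudged to
# lay out intermediate reasoning steps before giving a final answer.
cot_answer = call_llm(
    QUESTION + "\nLet's think step by step, then state the final answer."
)
```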

Written by Steven Boykey Sidley

Award-winning author of 5 novels and 2 works of non-fiction, playwright and columnist covering all things crypto and AI. Professor, JBS, University of Johannesburg.
