The Chinese Room Problem With the 'LLMs only predict the next token' Argument
This article critiques the argument that LLMs aren't truly thinking because they merely predict the next token. It draws a parallel to the human brain, which likewise operates as an opaque system that produces outputs from inputs without our conscious understanding of the process. The author invokes the Chinese Room Argument to argue that both brains and LLMs are 'Chinese rooms,' challenging the notion that one system possesses understanding while the other does not.
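Much of the debate turns on what "predicting the next token" means mechanically. As a rough illustration only (this is not the article's code, and a toy random scoring function stands in for a real neural network), here is a minimal sketch of greedy autoregressive decoding, where generation is just repeatedly picking the highest-probability next token:

```python
import math
import random

# Toy stand-in for an LLM: given the tokens so far, return a score (logit)
# for each candidate next token. A real model computes these with a neural
# network; random scores are enough to show the control flow.
VOCAB = ["the", "room", "understands", "nothing", "everything", "."]

def toy_logits(context):
    random.seed(len(context))  # deterministic for the demo
    return [random.gauss(0.0, 1.0) for _ in VOCAB]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, steps=5):
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probs = softmax(toy_logits(tokens))
        # "Predicting the next token": pick from this distribution.
        # Greedy decoding takes the single most probable candidate.
        next_token = VOCAB[probs.index(max(probs))]
        tokens.append(next_token)
    return tokens

print(generate(["the", "room"]))
```

The point of the sketch is only that the loop is simple and opaque in the same way the article describes: scores go in, a token comes out, and nothing in the loop itself tells you whether "understanding" is present.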
Top of the Week
1. Quoting Thariq Shihipar • Simon Willison • 2 votes
2. Using Browser Apis In React Practical Guide • Jivbcoop • 2 votes
3. Better react-hook-form Smart Form Components • Maarten Hus • 2 votes
4. Top picks — 2026 January • Paweł Grzybek • 1 vote
5. In Praise of --dry-run • Henrik Warne • 1 vote
6. Deep Learning is Powerful Because It Makes Hard Things Easy - Reflections 10 Years On • Ferenc Huszár • 1 vote
7. Vibe coding your first iOS app • William Denniss • 1 vote
8. AGI, ASI, A*I – Do we have all we need to get there? • John D. Cook • 1 vote
9. Dew Drop – January 15, 2026 (#4583) • Alvin Ashcraft • 1 vote