Daniel Miessler 6/8/2025

The Chinese Room Problem With the 'LLMs only predict the next token' Argument

This article critiques the claim that LLMs aren't truly thinking because they "only predict the next token." It draws a parallel to the human brain, which is likewise an opaque system that turns inputs into outputs without our having conscious access to the process. Invoking Searle's Chinese Room Argument, the author contends that both brains and LLMs are "Chinese rooms": if we cannot explain how either system produces understanding from mechanical operations, we have no principled basis for granting understanding to one while denying it to the other.
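
For readers unfamiliar with what "predicting the next token" means mechanically, here is a minimal sketch of the autoregressive loop the argument refers to. The article itself contains no code; the choice of GPT-2 and the Hugging Face transformers library is an assumption made purely for illustration.

```python
# A minimal greedy next-token-prediction loop, assuming GPT-2 via the
# Hugging Face transformers library (illustrative only; the article
# discusses no specific model or code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The man in the room shuffles symbols he does not"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        # The model scores every token in its vocabulary;
        # "prediction" is just choosing from that distribution.
        logits = model(input_ids).logits   # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()   # greedy choice: most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Production systems usually replace the argmax with temperature-controlled sampling, but the underlying loop is the same: each output token is chosen from a distribution conditioned on everything generated so far, which is the mechanism the "only predicting the next token" objection targets.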
