Do large language models understand?

Image taken from The Illustrated Transformer. https://jalammar.github.io/illustrated-transformer/

Large language models based on deep neural networks with the transformer architecture, such as GPT-3, PaLM, and DeBERTa, have become extremely powerful in the past few years. They can hold conversations, make jokes, and explain the answers to algebra problems. It is hard to avoid the impression that they are reasoning about the world. Skeptics say that the performance of large language models depends entirely on statistical associations between words, rather than on an understanding of what those words mean. Others are more impressed, and say that large language models understand words in roughly the way that we do. In my view, both positions are exaggerations. The truth lies somewhere in the middle, but it is also stranger, and more difficult to express, than either of these views. I’m interested in developing better ways of explaining the performance of large language models, both by testing their capacities and by developing new conceptual resources for describing how they work.

Charles Rathkopf
Research Associate at the Institute for Brain and Behavior

I am interested in how mental properties emerge from physical stuff.