Functionalism in the Age of Deep Learning

Functionalism holds that mental states are defined by their functional roles rather than their physical implementation. This suggests that sufficiently similar functional organization should give rise to the same psychological kinds, regardless of substrate—a thesis known as multiple realizability.

Deep learning systems pose a challenge to this framework. As AI systems grow more sophisticated, we face a question that functionalism seemed to have settled in advance: when, if ever, will machines multiply realize the same psychological kinds as humans?

I argue that there may be no determinate fact of the matter. The architectural differences between biological and artificial neural networks may be significant enough that questions about shared psychological kinds become underdetermined by functional considerations alone. This challenges us to rethink functionalist commitments in light of actual attempts to engineer cognition.

Work in progress

This project centers on a paper under development on multiple realizability and the indeterminacy of cross-substrate psychological kind attribution.

Charles Rathkopf

I am interested in how mental properties emerge from physical systems.