Charles Rathkopf
Recent & Upcoming Talks
2026
Is AI deception real?
I examine whether alignment faking in Claude 3 Opus constitutes genuine deception, arguing that it represents shallow deception: genuinely intentional behavior that nonetheless differs systematically from human deception.
Jan 23, 2026 2:00 PM — 3:30 PM
Forschungszentrum Jülich
2025
Hallucination and reliability
Scientific workflows can discipline generative AI to support reliable inference despite hallucination risks.
Dec 12, 2025 1:00 PM
L’Institut d’histoire et de philosophie des sciences et des techniques
Do Large Language Models Believe?
Some normative properties of belief are not instantiated in the belief-like states of LLMs.
Dec 8, 2025 1:00 PM
University of Tübingen, Linguistics Department
Shallow belief in LLMs
LLMs exhibit belief-like behavior but lack the reconciliation mechanisms that stabilize human belief across contexts.
Dec 4, 2025 1:00 PM
Berlin
Anthropocentric bias in language model evaluation
Evaluating LLMs requires overcoming anthropocentric biases beyond anthropomorphism.
Nov 27, 2025 1:00 PM
Online Seminar Tübingen-Nancy
Hallucination and reliability
How scientific workflows can make generative AI reliable despite hallucination.
Jun 12, 2025 1:00 PM
Cambridge University
2024
Hallucination, justification, and the role of generative AI in science
Generative AI systems are disposed to ‘hallucinate,’ or fabricate incorrect answers. But they are also used for a variety of scientific modeling tasks. In this talk I investigate how hallucination threatens the reliability of scientific inference, and how that threat can be mitigated.
Nov 8, 2024
Jülich
Hallucination, justification, and the role of generative AI in science
Generative AI systems are disposed to ‘hallucinate,’ or fabricate incorrect answers. But they are also used for a variety of scientific modeling tasks. In this talk I investigate how hallucination threatens the reliability of scientific inference, and how that threat can be mitigated.
Oct 25, 2024
Uppsala
Hallucination, justification, and the role of generative AI in science
Generative AI systems are disposed to ‘hallucinate,’ or fabricate incorrect answers. But they are also used for a variety of scientific modeling tasks. In this talk I investigate how hallucination threatens the reliability of scientific inference, and how that threat can be mitigated.
Oct 17, 2024
London
Anthropocentric bias and the possibility of artificial cognition
Much has been written about anthropomorphic bias in the study of LLMs. Here we discuss various kinds of anthropocentric bias.
Jul 26, 2024
Vienna
Charles Rathkopf, Raphael Milliere
Extending ourselves with generative AI
The tradeoff between interpretability and predictive power is well known. Here, I argue that the sort of generative AI models currently gaining traction in the natural sciences are bound to make that tradeoff more severe.
Jun 15, 2024
Paris
Deep learning models in science: some risks and opportunities
Under some conditions, we ought to trade interpretability for predictive power.
Jun 11, 2024
Jülich/Düsseldorf
Cognitive ontology for large language models
This talk describes some of the conceptual and methodological difficulties involved in articulating the cognitive capacities of large language models.
Apr 26, 2024
Dubrovnik
Two constraints on the neuroscience of content
This talk describes theoretical constraints on current attempts to decode mental content from brain data.
Mar 21, 2024
Antwerp
2023
Do large language models believe?
This is a talk about whether LLMs can be said to have beliefs.
Nov 9, 2023
Erlangen, Germany
Might deep learning vindicate functionalism?
Deep neural networks optimized to perform object recognition tasks predict patterns of neural activation in humans and monkeys, despite not having been trained on brain data. I discuss whether this can be viewed as a case of multiple realization.
Nov 9, 2023
Warsaw, Poland
Might deep learning vindicate functionalism?
Deep neural networks optimized to perform object recognition tasks predict patterns of neural activation in humans and monkeys, despite not having been trained on brain data. I discuss whether this can be viewed as a case of multiple realization.
Nov 9, 2023
Online
Culpability and control in BCI-mediated action
This is a talk about brain-computer interfaces and their relationship to intentional mental states.
Jul 8, 2023
London
Strange error and the possibility of machine knowledge
Rather than merely demonstrating the fragility of ML models, strange error might be evidence of hidden knowledge.
Jan 1, 2023
Stuttgart
2021
Strange risk in AI ethics
Where ML models are used as the centerpiece of an epistemic classification procedure, reliability is not sufficient for ethical use. The nature of classification errors should be taken into account.
Dec 1, 2021
Delft University of Technology
Knowledge transfer from machine learning to neuroscience
An invited talk for the Max Planck School of Cognition.
Dec 1, 2021
Berlin
Culpability and control in BCI-mediated action
A neuroethics talk for our large neuroscience group in Jülich. The accompanying paper will be a chapter in a forthcoming neuroethics book.
Dec 1, 2021
Jülich