Charles Rathkopf
Is AI deception real?
I examine whether alignment faking in Claude 3 Opus constitutes genuine deception, arguing that it represents shallow deception: genuine intentional behavior that differs systematically from human deception.
Jan 23, 2026 2:00 PM — 3:30 PM
Forschungszentrum Jülich
From Hallucination to Reliability: Generative Modeling and the Structure of Scientific Inference
Generative AI increasingly supports scientific inference, from protein structure prediction to weather forecasting. Yet its distinctive …
Charles Rathkopf
PDF
Anthropocentric bias and the possibility of artificial cognition
Much has been written about anthropomorphic bias in the study of LLMs. Here we discuss various kinds of anthropocentric bias.
Jul 26, 2024
Vienna
Charles Rathkopf, Raphael Milliere
Why it's important to remember that AI isn't human
A popular article arguing that, when evaluating LLMs, anthropocentrism is just as misleading as anthropomorphism.
Raphael Milliere, Charles Rathkopf
Article
Deep learning models in science: some risks and opportunities
Under some conditions, we ought to trade interpretability for predictive power.
Jun 11, 2024
Jülich/Düsseldorf
Cognitive ontology for large language models
This talk describes some of the conceptual and methodological difficulties involved in articulating the cognitive capacities of large language models.
Apr 26, 2024
Dubrovnik
Do large language models believe?
A talk on whether LLMs can properly be said to have beliefs.
Nov 9, 2023
Erlangen, Germany