I am a Permanent Research Associate in Philosophy and Neuroscience at the Jülich Research Center and a lecturer at the University of Bonn. My research is in the philosophy of mind and the philosophy of science. Most of my work concerns neuroscience and artificial intelligence: how these two fields are related to each other, and how scientific work in them relates to older philosophical ideas.
Rather than merely demonstrating the fragility of ML models, strange errors might be evidence of hidden knowledge.
A neuroethics talk for our large neuroscience group in Jülich. The accompanying paper will be a chapter in a forthcoming neuroethics book.
An invited talk for the Max Planck School of Cognition.
In order to tell whether someone is culpable for an action initiated by a brain-computer interface, it is not necessary to work out whether the brain-computer interface correctly decoded their intention.
Where ML models are used as the centerpiece of an epistemic classification procedure, reliability is not sufficient for ethical use. The nature of classification errors should be taken into account.
There can be an objective fact about the number of bits in a biological signal, despite the fact that the signal is receiver-relative.
In the brain, semantic information is intertwined with Shannon information.
Network models support novel forms of discovery, prediction, and explanation. They also raise a philosophical puzzle about unification.
Neural reuse helped to liberate humans from evolutionary constraints faced by our ancestors.
The concept of neural coding makes sense if the codes can be learned by neurons.
Network representation compresses information about complex systems without abstracting away from the properties that make them complex.