Projects
Interpreting neural networks
Artificial neural networks are epistemically opaque: no human can examine every step in the computation that generates a given output. Nevertheless, there is a sense in which those outputs can be explained.
Argument mapping
Argument maps are a great way to represent and evaluate arguments. I use them to teach critical thinking, but also to work out my own ideas.
The neuroscience of mental content
Can mental content be decoded from brain data? With the aid of machine learning, a good case can be made that it can.
Large language models
Machines can now talk. Do they understand what they are saying?
The evolution of cognition
Thinking about how cognition evolved, both biologically and culturally, can help us understand what minds are.