
My PhilPeople page is here.
See my CV for a list of publications and presentations.
My primary areas of research are the philosophy of AI, the philosophy of mind, and the foundations of cognitive science. These three areas come together in much of my recent work, in which I take up standard questions of cognitive science and the philosophy of psychology and mind in the context of recent breakthroughs in machine learning. This integrative approach is motivated by a conviction that AI systems provide philosophers with a quasi-empirical testbed for many of our philosophical convictions about the nature of the mind.
My dissertation was a collection of three independent but thematically connected papers on language models (LMs).

The first assesses a currently popular account of the nature of the linguistic competence ostensibly manifested by LMs. On this account, LMs are "stochastic parrots" whose linguistic outputs, though outwardly meaningful, are merely cleverly stitched-together bits of language that are meaningful only accidentally, if at all. This claim depends on what I call the Communicative Intention argument. In this paper I show that the argument is largely driven by strong Gricean assumptions about the nature of linguistic competence that are, I argue, now known to be empirically untenable. Moreover, I show that once we give up these assumptions and fairly assess the cognitive architectures of both LMs and humans, we find no good reason to suppose that large language models lack communicative intentions.

The second paper addresses the larger debate within which the Communicative Intention argument is made: the debate about meaning in LMs. I argue that two senses of "meaning" are at work in this debate, which I term semantic meaning and content meaning, and that a failure to distinguish them has produced many of the debate's characteristic confusions.

The final paper deals with metalinguistic capacities in LMs. It introduces a novel way of accounting for any ostensible differences between the competences manifested by humans and those manifested by LMs, in terms of what I call metalinguistic agency: roughly, the claim is that the agential structure of linguistic competence differs between LMs and humans, and that any ostensible differences in competence can be explained by this difference in agential structure.
In addition to these foundational conceptual issues, I am interested in the ethical, scientific, and existential ramifications of artificial intelligence systems, especially LMs and automated art systems (such as Midjourney and Stable Diffusion).
I am also interested in the general history and philosophy of science, especially questions about explanation in the life and mind sciences and the history of psychology. Other interests include informal logic, argumentation theory, and Islamic philosophy.