Gabriel Dupre

(What) Can Machine-Learning Contribute to Theoretical Linguistics?

2020/21 Keele Royal Institute of Philosophy Lecture

Machine-Learning (ML) techniques have revolutionised artificial systems' performance on myriad tasks, from playing Go to medical diagnosis. Recent developments have extended these successes to natural language processing, an area once deemed beyond such systems' reach. Despite the differing goals of ML and linguistics (technological development vs. theoretical insight), these successes suggest that ML systems may be pertinent to theoretical linguistics. The competence/performance distinction presents a fundamental barrier to such inferences: while ML systems are trained on linguistic performance, linguistic theories aim at competence. This barrier has traditionally been sidestepped by assuming a fairly close correspondence between the two, treating performance as competence plus noise. I argue that this assumption is unmotivated: competence and performance can differ arbitrarily. Thus, we should not expect ML models to illuminate linguistic theory.
About the speaker:
Gabriel Dupre is a Leverhulme Early Career Fellow in Philosophy at Keele University. He works in the philosophy of linguistics, especially on the ways in which a correct account of contemporary linguistic methodology can inform traditional debates in the philosophy of language, mind, and science. Previously, he was a lecturer at Reading University and the University of California, Los Angeles, where he completed his PhD. 
All Welcome!  

On Zoom – Join Zoom Meeting 

 Meeting ID: 341 824 7201 

Passcode: Keele 

Contact: Dr Jonathan Head
Telephone: 01782 733515