Annual Lecture CCiL 2019

"Neural network linguistics" 

Prof. Marco Baroni
ICREA Research Professor at Universitat Pompeu Fabra (Barcelona). Affiliated with the Linguistics department, UPF.

Date: October 8th, 2019

Time: 16:00h

Place: Sala Gran (4th Floor, Faculty of Philosophy)


Abstract:

In the last decade, deep artificial neural networks have achieved outstanding results on many cognitive tasks, including some involving the uniquely human ability to use natural language. This has rekindled the question of whether these algorithms, which "learn" generalizations simply by being exposed to large amounts of data, can provide insights into language acquisition and processing in humans. Following a 30-year-old suggestion by Michael McCloskey, I argue that we should not look at artificial neural networks as models of the human mind/brain, but rather treat them as an alien animal species with intriguing human-like cognitive abilities. Wearing our neural-network ethologist hats, we should be careful not to interpret the signals emitted by these models as human language, but instead study their own emergent communication abilities in carefully controlled environments. I will discuss recent studies of the emergence of language-like communication systems among neural-network specimens, trying to gauge what they teach us about the uniqueness of human language.


Marco Baroni received a PhD in Linguistics from the University of California, Los Angeles, in 2000. After several research and industry positions, he joined the Center for Mind/Brain Sciences of the University of Trento, where he became associate professor in 2013. In 2016, he joined the Facebook Artificial Intelligence Research team. In 2019, he became ICREA research professor, affiliated with the Linguistics Department of Pompeu Fabra University in Barcelona. His work in the areas of multimodal and compositional distributed semantics has received widespread recognition, including a Google Research Award, an ERC Starting Grant and the IJCAI-JAIR Best Paper Prize. His current research aims at a better understanding of artificial neural networks, focusing in particular on what they can teach us about human language acquisition and processing.