Research Group
in Analytic Philosophy

Artificial Implicature: Chatbots and Conversation

22 May 2024  |  15:00  |  Seminari de Filosofia UB

Abstract

The problem I consider in this talk emerges from the tension between the design and architecture of chatbots on the one hand and their conversational aptitude on the other. Given the way LLM chatbots (such as ChatGPT, Bard and Claude) are designed and built, there seems to be no good reason to suppose that they possess second-order capacities such as intention, belief or knowledge. Yet we have developed theories of conversation that rely heavily on the second-order capacities of speakers and their audiences to explain how aspects of conversation succeed. As we can all now attest, however, in actual use chatbots appear capable of performing language tasks at a level close to that of humans. This creates a tension when we consider, for example, the classic Gricean theory of implicature. On a broad summary of this type of account, to utter p and implicate q requires a reflexive structure: the audience supposes that the speaker believes q, and the speaker believes that the audience can determine that they believe it from their uttering p. Taken at face value, then, if a chatbot lacks the capacity for belief, it would not seem capable, whether in the role of speaker or of audience, of generating or comprehending implicatures. Yet, on the surface at least, chatbots do seem able to deal with (some) implicatures, and this raises the question of how we should square their apparent competence with what we take to occur in cases of implicature involving chatbots.