Conference Programme
Venice, Italy · 21-22 May 2026
The conference opens at 1.00pm on Thursday 21 May 2026 with a brief introduction by the organisers. All talks are scheduled in 35-minute slots, with coffee breaks on both days and a lunch break on Friday 22 May 2026.
Introduction by the Organisers.
Coffee break.
Coffee break.
Lunch break.
Coffee break.
Discussion.
The core of performance art is that one set of people (“the audience”) perceive the body movements made by other people (“the artists”). We have recently extended this model of embodied communication from performance art to at least some forms of visual art, with an interesting agentic twist.
Visual art has long emphasized body likeness, for example through portraiture. A more recent, more embodied form of visual art eschews visual likeness, and moreover blurs the distinction between the artist and the audience, by generating movements sourced from the observer's own body.
We will present a theoretical scheme in which this turn defines a new digital art that is both generative and interactive. Generative, interactive art can potentially provide the same compelling, salient experience as some embodiment illusions studied by laboratory cognitive scientists. Crucially, the viewer becomes an active participant in the production of the artwork, and the participant's control of their own body becomes, in itself, both source and object of aesthetic attention.
How the brain constructs bodily experience has traditionally been framed in terms of how sensory signals are integrated to generate perceptual representations. Embodiment is traditionally explained in line with this framework. This outside-in view assumes that bodily perception arises primarily from the accumulation and combination of sensory evidence. In contrast, inside-out perspectives emphasize the role of internally generated neural dynamics and models, with sensory input serving to constrain, update or correct these endogenous processes.
Most accounts of body perception implicitly assume an outside-in architecture, in which bodily experience is constructed from the integration of sensory signals. Although this approach has been effective in cataloguing relevant signals and cortical pathways, it struggles to explain the stability, speed and affective potency of bodily experience.
Inside-out accounts invert this logic. They propose that the brain continuously generates a model of the body, and that perception reflects the constraint of this endogenous activity by sensory evidence. From this perspective, the body is not inferred from sensory input but predicted, and both bodily self-consciousness and the perception of others' bodies arise from the same generative model. Crucially, self-other distinctions are not encoded a priori but emerge through causal inference about the source of sensory signals.
Awaiting abstract.
Virtual reality offers a powerful platform for manipulating subjective experience and creating novel modes of interaction with the environment. Understanding the neural mechanisms through which the brain constructs subjective experience can inform the development of more effective and biologically grounded virtual reality applications.
A key mechanism underlying the experience of the self here and now is the multisensory integration of bodily and environmental signals within peripersonal space. Peripersonal space refers to the space surrounding the body, supported by neural systems that integrate external and tactile signals as a function of their interaction potential.
This presentation introduces the neural and computational mechanisms underlying peripersonal space in humans, then shows how applying this research in virtual reality opens new translational applications. Recent findings indicate that avatars displaying signs of infection that enter peripersonal space selectively engage threat-related brain regions and can trigger anticipatory immune responses.
Robots, and especially humanoids, have recently received a great deal of attention in the scientific community and beyond. In this talk, however, the focus is on a less explored use of humanoids: as tools to understand human cognition.
Because of their physical bodies, humanoid robots allow researchers to explore the role of embodiment in different cognitive mechanisms. The talk presents two lines of research: first, social cognition studies showing that the physical embodiment of an interaction partner is necessary for evoking mechanisms such as gaze-induced joint attention, cognitive control, or vicarious sense of agency; second, robot teleoperation studies examining what happens when people act through a robot body.
Examples include increased self-other overlap after operating the robot, or increased generosity when controlling the robot from the first-person perspective rather than the third-person perspective. The talk concludes with a broader discussion of embodied AI as a tool for studying human cognition.
Identifying animate agents is essential for fitness. Agents can be recognized based on what they look like and how they move, and the human visual system is highly sensitive to both kinds of information. Yet humans live in a social world, which requires identifying not just agents, but social beings.
This talk argues that there is a perceptual distinction between intentional agents and social beings, and that this distinction is visible in empirical data. Behavioral and neuroimaging findings suggest that the visual system is not only finely tuned to cues of agency, but also implements sophisticated mechanisms for the rapid identification of social beings, defined as individuals connected through relationships with others.
These mechanisms can be dissociated from those involved in the more general representation of animate agents. Together, the findings point to a new perceptual category, that of social beings, and invite a reconsideration of the distinctions between biological agents, intentional agents, and social agents.
Dynamic faces and bodies are important signals in the communication of human and non-human primates. The study of the neural mechanisms that underlie social perception requires highly controlled realistic stimuli that remain suitable for causal manipulations. Highly realistic avatars are ideally suited to this task.
This talk presents work on the development of monkey facial and body avatars for studying neural encoding principles in the visual brain. A highly realistic dynamic monkey face avatar has been used to show that facial expressions may be encoded using norm-referenced representations, helping explain why primate expression recognition transfers between different head shapes, including cartoon faces.
The generation of realistic dynamic monkey body avatars is technically more difficult because of the lack of sufficiently accurate 3D motion data. A new method enables highly realistic dynamic body avatars, together with a dataset of more than 1,000 natural monkey actions and corresponding animations, which can be used to study the neural encoding of body shape and social interactions in body patches of the superior temporal sulcus.
Over a decade ago, the EU VERE project asked whether embodiment could serve as a tool for behavioral transformation and social healing. Since then, the work has advanced from theoretical perspective-taking to large-scale judicial implementation, reaching more than 1,000 users in the prison system.
Along the way, the research investigated neural signatures of embodying the victim and identified shifts in emotion recognition among offenders. The same approach is now being extended to aggression in the metaverse and in corporate environments.
The guiding principle is straightforward: when we change the body, we change the mind.
Social interaction defines who we are, and our relationships are tied to our strongest emotions. Yet social interaction can also be stressful, and our own bias and prejudice can limit the quality and outcome of our social activities.
This talk discusses how embodiment in virtual reality, temporarily becoming someone else, can shift behaviour and cognition. The presentation showcases work on communication skills and bias in medical consultations, behavioural change for pro-climate actions, and the way owning a non-human body can reduce feelings of loneliness.
Moral cognition is often portrayed as an exercise in abstract reasoning. This talk instead starts from the growing evidence that bodily states and sensations shape how we judge what is right or wrong. Within an embodied morality framework, the presentation investigates whether manipulating body ownership and agency influences dishonest behaviour in both real and virtual interactions.
Experimental studies examine how altering ownership and agency over an artificial agent affects honest and dishonest decisions in healthy individuals as well as in patients with Parkinson's disease. The talk also extends the discussion to less studied internal bodily signals, including respiratory and gastrointestinal activity.
These signals may contribute to homeostatic regulation, allostatic interactions, corporeal awareness, and moral decision-making.
The perceptual experience of limbs, body parts, and whole bodies as belonging to oneself is a fundamental component of embodiment and bodily self-consciousness. Over the past two decades, behavioral and neuroimaging studies have established core perceptual rules governing body ownership and identified a distributed network of frontoparietal and subcortical regions involved in this process.
This talk presents recent work combining psychophysics, computational modeling, and neurophysiology. First, studies are described that link individual alpha and beta oscillatory frequencies to the temporal integration of bodily signals and perceptual sensitivity in the rubber hand illusion.
The talk then presents magnetoencephalography results revealing time-resolved whole-brain activity patterns, showing how frontal, parietal, occipital, and subcortical regions contribute in a temporally structured and sequential manner to the emergence of ownership.
Neuroscience traditionally focuses on brains and less on bodies. This presentation discusses the relation of trunks and noses to the brains of pigs, elephants, and primates. The pig is heavily specialized for rostrum sensing, and the pig somatosensory cortex contains a large three-dimensional rostrum model.
The talk also considers other somatosensory cortical three-dimensional body-part models in raccoons and elephants. Elephants have elaborate tactile trunk sensing, an immense olfactory bulb, and one of the largest known repertoires of olfactory receptor genes across vertebrates.
Comparative evidence is used to argue that the size and structure of the body, and in particular trunks and noses, shape brains and genomes.
Episodic autobiographical memory is a building block of self-consciousness, involving the recollection and subjective re-experiencing of personal past experiences. Because life events are intrinsically linked to the subject who experiences them from within the body, the body may play a crucial role in both encoding and retrieving an event.
This talk presents evidence for a bodily contribution to autobiographical memory. It includes work showing that aesthetic experiences in a museum are automatically associated with spatial memory centered on the body, a case study of a patient with severe amnesia in which virtual-reality manipulations of the bodily self during encoding affected memory, and neuroimaging data showing that retrieval automatically reactivates bodily traces associated with the remembered episode.
Overall, the talk argues that bodily signals are automatically associated with our personal memories, shaping them and contributing to our ability to relive past episodes.
Reminiscence therapy is a well-known technique that has been used to improve cognitive and physical functioning in older people. This presentation revisits the famous Counterclockwise study carried out by Ellen Langer in 1979 and then describes virtual counterclockwise studies built around embodiment.
Older participants were placed in a virtual 1960s apartment and from there transported to the Royal Albert Hall in London, where Spain won the 1968 Eurovision Song Contest. A critical difference from earlier work is that participants were not only placed in their past; they were embodied in virtual bodies that looked like themselves in the 1960s, based on their photographs.
The results suggest that this method is viable as a technique to improve cognitive and physical functioning. A second recently completed experiment will also be reported.
To design devices for the human body, engineers often use the body itself as the ideal template. Likewise, for individuals missing a limb, the development of prosthetic limbs often treats embodiment as the ultimate goal. This talk argues that the relevant neurocognitive resources may differ radically depending on the user's life experiences and needs.
The presentation reviews a series of studies investigating the neural basis of artificial limb use for both substitution and augmentation technologies. Collectively, these studies suggest that although opportunities exist for harnessing hand neural and cognitive resources to control artificial limbs, alternative non-biomimetic approaches may also be well suited for successful human-device interfaces.
To be decided.