Much of our lives is social: meeting friends, discussing with colleagues, negotiating space on the bus. It is therefore unsurprising that humans have adapted to recognize the social structure of events in the blink of an eye. Yet we are susceptible to linguistic framing: we may think about the same event differently depending on whether it is described as 'buying' or 'selling'. Linguistic input such as 'Dan buys bananas' triggers expectations to be integrated with visual cues, such as a scene in which Dan's buying is the construed event; 'Tom sells bananas', on the other hand, leads to an event construal of selling. However, what remains unknown is how, and through which neural mechanisms, language impacts event cognition. Despite efforts in cognitive science, philosophy, and linguistics, how the human brain integrates linguistic with non-linguistic information remains an open question.
Hypotheses / research questions / objectives. This project aims to uncover the processes through which linguistic framing influences how we think about events we see. The proposed experiments will enable quantitative testing of cognitive theories on how language guides event construal during visual event processing. In four Objectives, we answer the following questions:
OBJECTIVE 1: How are social event descriptions understood?
OBJECTIVE 2: What is the impact of language on role binding in dynamic events?
OBJECTIVE 3: What is the effect of language on episodic memory?
OBJECTIVE 4: How do these data inform our theories of the language-cognition interface?
The results of this project will allow us to build more precise and cognitively founded models of how language connects to event processing in the human mind.
Methods. The central method combines linguistic priming with two distinct measures: neural oscillations, to trace the ongoing impact of language on event processing, and recognition memory, to capture the event construals that result when linguistic frames prime a visual scene. These approaches are enabled by a cross-linguistic database (LISEDA) built from judgments of how people understand social verbs, such as 'meet' or 'hug', in several possible grammatical forms in three languages: English, German, and Hungarian.