The repository contains:
- an .Rmd file with the analysis script (analysis-ANON.rmd)
- a .csv Testable output file with the results (raw, anonymised data: anon-data.csv) (variables explained here)
- a .csv file with the anonymised IDs of participants who were excluded from the analysis (to_exclude.csv). The file includes the following variables:
  - to.exclude.id (IDs)
  - why (reason for exclusion)
  - response (response provided to the question "What do you think it is the purpose of the experiment? Enter NA if you have no idea")
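The exclusion step itself is implemented in analysis-ANON.rmd; as a minimal illustration of how to_exclude.csv relates to the raw data, the sketch below drops excluded participants using only the Python standard library. The name of the ID column in anon-data.csv is an assumption here (passed as `id_col`); the actual variable names are given in the variable descriptions linked above.

```python
import csv

def load_included(data_path, exclude_path, id_col="id"):
    """Load the raw data and drop rows whose ID appears in to_exclude.csv.

    NOTE: `id_col` is a hypothetical placeholder for the ID column in
    anon-data.csv; check the variable descriptions for the real name.
    """
    # Collect the anonymised IDs marked for exclusion.
    with open(exclude_path, newline="") as f:
        excluded = {row["to.exclude.id"] for row in csv.DictReader(f)}
    # Keep only rows whose ID is not in the exclusion set.
    with open(data_path, newline="") as f:
        return [row for row in csv.DictReader(f) if row[id_col] not in excluded]
```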
Pre-registration: https://doi.org/10.17605/OSF.IO/EZYTH
Speakers entrain at the lexical level, both when interacting with human and with computerised interlocutors. Branigan et al. (2011) showed that people entrain more when they think they are interacting with a computer than with a human interlocutor, and more with an unsophisticated computer than with a sophisticated one. While these results can be explained by a combination of audience design and priming mechanisms, it is still unclear how the two interact. Ivanova et al. (2020) proposed that speakers allocate attention to different interlocutors to different extents, and that the more attention they pay, the more likely they are to be primed and to entrain. In our experiment, we asked participants to play an online picture naming and matching task with a virtual agent presented as either highly or poorly competent. Additionally, we tested the effect of attention by having participants in one group perform a secondary task. All participants also completed a surprise follow-up memory task. Participants who dedicated their full attention to the main task replicated Branigan et al.'s results, while we found the opposite pattern in the participants who performed a secondary task. Moreover, the participants who entrained the most were also those who were more accurate in the surprise task.