SLED Research Lab @ University of Michigan
Overview
Language use in human communication is shaped by our goals, our shared experiences, and our understanding of each other’s abilities, knowledge, and beliefs. At the Situated Language and Embodied Dialogue (SLED) group (formerly, the LAIR group), we develop computational models for language processing that take into account the rich situational and communicative context and build embodied AI agents that can communicate with humans in language.
Contact Info
- Location: Bob and Betty Beyster Building 3632, 2260 Hayward Street, Ann Arbor, MI 48109.
- Email: [email protected]
News
- [Oct. 2021] Our work MindCraft: Theory of Mind Modeling for Situated Dialogue in Collaborative Tasks is selected as the Outstanding Paper of EMNLP 2021! Congratulations to Paul!!
- [Oct. 2021] Congratulations to Yichi for successfully passing the prelim exam!!
- [Aug. 2021] SLED has 3 papers accepted to EMNLP 2021! Congrats to Shane and Paul!
- [June 2021] Congratulations to Emily and Shane for successfully passing their prelim exams!!
- [Apr. 2021] Very excited to have Ziqiao (Martin) Ma and Jianing (Jed) Yang join SLED in Fall 2021!
- [Oct. 2020] Congratulations to Paul for successfully passing the prelim exam!!
Current Members
Faculty
Ph.D. Students
- Cristian-Paul Bara
- Keunwoo Peter Yu
- Yuwei Emily Bao
- Shane Storks
- Yichi Zhang
- Jianing Jed Yang
- Ziqiao Martin Ma
A list of active research assistants and alumni can be found here.
Research
Mission Statement
We develop computational models for language processing that take into account the rich situational and communicative context, and we build embodied AI agents that can communicate with humans in language.
Grounded Language Understanding towards Physical Interaction
With the emergence of a new generation of cognitive robots, the capability to communicate with these robots in natural language has become increasingly important. Verbal communication often involves the use of verbs, for example, to ask a robot to perform a task or to monitor a physical activity. Concrete action verbs often denote a change of state resulting from an action; for example, “slice a pizza” implies that the state of the pizza will change from one piece to several smaller pieces. This change of state can be perceived from the physical world through different sensors. Given a human utterance, if the robot can anticipate the potential change of state signaled by the verbs, it can then actively sense the environment and better connect language with the perceived physical world, for example, identifying who performs the action and what objects and locations are involved. This improved connection will benefit many applications that rely on human-robot communication.
Situated Human-Robot Dialogue
A new generation of interactive robots has emerged in recent years to provide service, care, and companionship for humans. To support natural interaction between humans and these robots, technology enabling situated dialogue has become increasingly important. Situated dialogue is drastically different from traditional spoken dialogue systems, multimodal conversational interfaces, and tele-operated human-robot interaction. In situated human-robot dialogue, human partners and robots are situated and co-present in a shared physical environment. The shared surroundings significantly affect how they interact with each other and how the robot interprets human language and performs tasks. In the last couple of years, we have started several projects on situated human-robot dialogue, focusing in particular on how situatedness affects human-robot language-based interaction, and thus on techniques for situated language processing and conversation grounding.
Commonsense Reasoning, Semantic and Discourse Processing
Given the large amount of textual data (e.g., news articles, Wikipedia, weblogs) available online, techniques that can automatically process this data have become increasingly important, for example, to extract event information, to answer user questions, and to make inferences. Along these lines, we are particularly interested in the role of discourse and pragmatics in natural language processing and their applications.
Situated and Multimodal Language Processing in Virtual Worlds
Using situated dialogue (in virtual worlds) and conversational interfaces as our setting, we have investigated the use of non-verbal modalities (e.g., eye gaze and gestures) in language processing and in conversation grounding. The virtual world setting not only has important applications in education, training, and entertainment, but also provides a simplified simulation environment to support studies on situated language processing toward physical-world interaction.
Prospective Students
- PhD students: Please apply to the CSE department.
- Masters/Undergraduates/Visitors:
Thank you for your interest in participating in our research. To apply for a research position, please complete this Google form.
- For current Michigan students: We expect you to commit at least 15 hours per week to gain meaningful research experience. If you are chosen for the position, you will need to take EECS X99: Independent Studies for this research experience.
Responses may be slow due to the volume of applications we receive. Positions are offered based on availability and qualifications.