This project is a fork of ibm/multidoc2dial.
MultiDoc2Dial: Modeling Dialogues Grounded in Multiple Documents
License: Apache License 2.0
Python 96.23%
Shell 3.77%
multidoc2dial's Introduction
- My research areas: multi-modal learning, human-robot interaction, and natural language processing
- How to reach me: [email protected]
- What I'm interested in: keeping fit and good food
- More details: Google Scholar, Personal Website
- Scene-Aware Prompt for Multi-modal Dialogue Understanding and Generation (NLPCC 2022)
- Learning to Locate Visual Answer in Video Corpus Using Question (ICASSP 2023)
- MedConQA: Medical Conversational Question Answering System based on Knowledge Graphs (EMNLP 2022 demo)
- Towards Visual-Prompt Temporal Answering Grounding in Medical Instructional Video (arXiv)