This repository contains implementations of Retrieval-Augmented Generation (RAG) models aimed at enhancing medical education. These models generate study questions from both text and images, making them valuable tools for medical students.
- Multimodal RAG: Integrates text and image data to create comprehensive questions, aiding in subjects like radiology and pathology.
- Modal RAG: Utilizes text data to generate questions, suitable for text-heavy subjects such as medical literature and case studies.
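Both variants rest on the same retrieval step: rank study-material chunks by similarity to a query before generating questions from the best matches. A minimal sketch of that step, using only the standard library (a bag-of-words cosine ranker; the repository's actual models presumably use learned embeddings, and all names here are illustrative):

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercased bag-of-words vector for a text chunk."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    denom = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return num / denom if denom else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)
    return ranked[:k]
```

In the multimodal variant, image captions or descriptions would be indexed alongside the text chunks so that visual material is retrievable through the same interface.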
The primary purpose of these models is to assist medical students in their studies by automatically generating relevant questions. This can help students better understand and retain complex medical concepts through active learning.
- Text Processing: The Modal RAG model processes text data to generate questions targeting key concepts and details.
- Image Integration: The Multimodal RAG model combines text and image inputs to generate questions that address both visual and textual information.
- Question Generation: The models use natural language processing techniques to formulate questions that are relevant to the source material and pedagogically useful.
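The generation step above can be sketched with a simple template: blank out a key term in a retrieved passage to produce a fill-in-the-blank practice question. This is only an illustration of the pipeline's shape (the repository's models presumably generate questions with a language model; `make_cloze` and its interface are hypothetical):

```python
import re


def make_cloze(passage: str, term: str) -> dict:
    """Turn a retrieved passage into a fill-in-the-blank question.

    Replaces every occurrence of `term` (case-insensitively) with a
    blank and returns the question alongside its answer key.
    """
    if term.lower() not in passage.lower():
        raise ValueError("term not found in passage")
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return {"question": pattern.sub("_____", passage), "answer": term}
```

For example, applied to a retrieved anatomy passage with "myocardium" as the key term, this yields a question a student can self-grade against the stored answer.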
- Medical Education: Helps students prepare for exams by generating practice questions from their study materials.
- Self-assessment: Allows students to test their knowledge on specific topics.
- Interactive Learning: Enhances engagement by providing a mix of question types based on text and images.
We welcome contributions from the community to improve and expand these models. Please feel free to fork the repository, make your changes, and submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for more details.
We hope these RAG implementations will be valuable in supporting your medical education journey. For any questions or feedback, please open an issue in this repository.