
14h034160212 / hhh-an-online-question-answering-system-for-medical-questions

85 stars · 3 watchers · 32 forks · 13.16 MB

HBAM: Hierarchical Bi-directional Word Attention Model

Home Page: https://arxiv.org/pdf/2002.03140.pdf

Python 80.70% Jupyter Notebook 18.25% HTML 1.05%
question-answering deep-learning attention-model medical-chatbot

hhh-an-online-question-answering-system-for-medical-questions's Introduction

Qiming Bao

Homepage   LinkedIn   GitHub   Gmail   Google Scholar   DBLP   Twitter   CV   CV (Chinese)

Qiming Bao is a Ph.D. candidate at the Strong AI Lab, NAOInstitute, University of Auckland, New Zealand, supervised by Professor Michael Witbrock. His research interests include natural language processing and reasoning. He has over five years of research and development experience and has published several papers at top AI/NLP/reasoning venues, including ACL, IJCAI, ICLR, EACL, AAAI/EAAI, LLM@IJCAI, AGI@ICLR, and IJCLR-NeSy. His method AMR-LDA (GPT-4 + AMR-LDA Prompt Augmentation) achieved the #1 ranking on ReClor, one of the most challenging logical reasoning reading comprehension leaderboards, and his group was the first in the world to score above 90% on its hidden test set. Two of his logical reasoning datasets, PARARULE-Plus and AbductionRules, have been included in LogiTorch, ReasoningNLP, Prompt4ReasoningPapers and OpenAI/Evals. Qiming has given public guest talks and made academic visits at Microsoft Research Asia, Samsung AI Center Cambridge UK, the IEEE Vehicular Technology Society, the ZJU-NLP Group at Zhejiang University, The University of Melbourne, the Institute of Automation, Chinese Academy of Sciences, and Shenzhen MSU-BIT University on his main research topic, "Natural Language Processing and Reasoning".

Qiming is an AI engineer at Xtracta in Auckland, New Zealand, where he investigated and implemented alternative attention mechanisms to extend the effective sequence length of multi-modal document processing models such as LayoutLMv3 and ERNIE-LayoutX. He replicated the multi-task, multimodal pre-training code for LayoutLMv3, which Microsoft did not open-source, covering masked language modelling, masked image modelling, and word-patch alignment. He integrated DeepSpeed and adapters into ERNIE-LayoutX and LayoutLMv3, which reduces training costs, yields a smaller model size, and makes deployment to production easier. He successfully applied for Research & Development Tax Incentive (RDTI) grants from Callaghan Innovation (New Zealand's innovation agency) for both 2022 and 2023, each providing a tax credit equal to 15% of eligible R&D expenditure that can be used to reduce the company's income tax. Prior to this role, he worked as a research and development engineer at AIIT, Peking University, where he focused on automatic abstract generation and GPT-2-based dialogue chatbot development. Qiming also has extensive teaching experience, having worked as a teaching assistant for three years. He earned a Bachelor of Science (Honours) in Computer Science (First Class) from the University of Auckland and completed a Summer Research Internship with Scholarship at Precision Driven Health. He was selected as one of ten students to participate in the Summer Research Program funded by Precision Driven Health, where the main topic was developing a medical chatbot based on deep learning and a knowledge graph.
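
For context on the adapter approach mentioned above: the pre-trained backbone stays frozen and only small bottleneck modules inserted into each layer are trained, which is why the trainable footprint and training cost shrink. The sketch below is a generic PyTorch illustration with made-up sizes, not code from Xtracta, LayoutLMv3 or ERNIE-LayoutX.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter added after a frozen transformer sub-layer (illustrative sizes)."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down to a small bottleneck
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up to the model width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection: the frozen backbone's output passes through
        # unchanged plus a small learned correction.
        return x + self.up(self.act(self.down(x)))

# With the backbone frozen, only about 2 * hidden_size * bottleneck parameters
# per adapter are trained, instead of the full layer's weight matrices.
```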

Papers/Projects

  • [17 April 2024] Our paper (Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie) "Large Language Models Are Not Strong Abstract Reasoners" has been accepted by IJCAI 2024 [Paper link] [Source code and evaluation platform].

  • [05 March 2024] Our paper (Qiming Bao, Juho Leinonen, Alex Peng, Wanjun Zhong, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock and Jiamou Liu) "Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models" has been accepted by AGI@ICLR 2024 [Paper link] [Source code].

  • [05 March 2024] Our paper (Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie) "Large Language Models Are Not Strong Abstract Reasoners Yet" has been accepted by AGI@ICLR 2024 [Paper link] [Source code and evaluation platform].

  • [01 February 2024] Our paper (Zhongsheng Wang, Jiamou Liu, Qiming Bao, Hongfei Rong, Jingfeng Zhang) "ChatLogic: Integrating Logic Programming with Large Language Models for Multi-step Reasoning" has been accepted by NucLeaR@AAAI 2024 [Paper link] [Source code].

  • [24 June 2023] Our paper (Qiming Bao, Gaël Gendron, Alex Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu) "A Systematic Evaluation of Large Language Models on Out-of-Distribution Logical Reasoning Tasks" has been accepted by LLM@IJCAI'23 [Paper link] [Source code].

  • [24 June 2023] Our paper (Qiming Bao, Alex Peng, Zhenyun Deng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Neşet Tan, Nathan Young, Yang Chen, Yonghua Zhu, Michael Witbrock and Jiamou Liu) "Enhancing Logical Reasoning of Large Language Models through Logic-Driven Data Augmentation" has been accepted by LLM@IJCAI'23 [#1 on the ReClor Leaderboard] [Paper link] [Source code].

News

Qiming Bao's GitHub stats

WakaTime Stats (since April 24, 2023)


hhh-an-online-question-answering-system-for-medical-questions's People

Contributors

14h034160212


hhh-an-online-question-answering-system-for-medical-questions's Issues

QA pair dataset - answer retrieval

Hi, I didn't find the implementation for retrieving answers from the QA-pair dataset using HBAM. The paper states that when an answer is not found in the knowledge base, it is fetched from the QA-pair dataset by selecting the top-k most similar questions with HBAM.
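
A rough illustration of the fallback flow described in this issue (not code from this repository): the knowledge base is queried first, and only if it returns nothing are the QA-pair questions ranked by similarity. TF-IDF cosine similarity stands in for the HBAM similarity model purely for illustration, and the names kb_lookup, qa_questions and qa_answers are hypothetical placeholders.

```python
# Sketch of the KB-first, QA-pair-fallback retrieval; HBAM similarity is
# replaced by TF-IDF cosine similarity, and kb_lookup / qa_questions /
# qa_answers are hypothetical names, not from this repository.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_answer(query, kb_lookup, qa_questions, qa_answers, k=3):
    kb_answer = kb_lookup(query)          # 1. try the knowledge base first
    if kb_answer is not None:
        return [kb_answer]
    # 2. Fallback: score the query against every stored question and return
    #    the answers of the k most similar QA pairs.
    vectorizer = TfidfVectorizer().fit(qa_questions + [query])
    scores = cosine_similarity(vectorizer.transform([query]),
                               vectorizer.transform(qa_questions))[0]
    top_k = scores.argsort()[::-1][:k]
    return [qa_answers[i] for i in top_k]
```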

How to run code locally

Good afternoon, I would like to run the code locally. Which file should I run to start the GUI? Is there anything I have to do besides loading the data into the database?
