Yixuan Weng's Projects
State-of-the-art solution for the SDU@AAAI-2022 AD task
ATTEMPT: A Two-stage Taxonomy Expansion Framework based on Pre-trained models
A curated collection of open Chinese medical NLP resources: terminology sets, corpora, word embeddings, pre-trained models, knowledge graphs, named entity recognition, QA, information extraction, models, papers, etc.
Champion solution of the CCKS 2021 irrelevant-answer detection competition
A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
Chinese LLaMA & Alpaca large language models with local deployment
CINO: Pre-trained Language Models for Chinese Minority Languages
Chinese Word2vec Medicine: Chinese medical word embeddings
Apply the Circular to the Pretraining Model
A large-scale Chinese medical CQA dataset
CMIVQA
The first Chinese Multimodal Medical Knowledge Graph
A Multi-task, Multi-stage Chinese Minority Pre-trained Language Model
[TMLR'23] Contrastive Search Is What You Need For Neural Text Generation
ControlLM: a method for controlling the personality traits and behaviors of language models in real time at inference, without costly training interventions.
Example models using DeepSpeed
Official implementations for various pre-training models of ERNIE-family, covering topics of Language Understanding & Generation, Multimodal Understanding & Generation, and beyond.
Large Language Models Need Holistic Thought in Medical Conversational QA
A large Chinese medical questionnaire dataset
A keyword-based story generation system
A document-oriented question answering system
LingYi: Multi-modal Medical Conversational Question Answering System based on Knowledge Graph
Bin Li's homepage.
LMTuner: Make the LLM Better for Everyone
Award-winning solution for the iFLYTEK Low-Resource Multilingual Text Translation Challenge