Fangkai Jiao's Projects
An open-source NLP research library, built on PyTorch.
Experiments on the LSAT dataset.
Reading list for research topics in multimodal machine learning
Re-implementation of BiDAF (Bidirectional Attention Flow for Machine Comprehension, Minjoon Seo et al., ICLR 2017) in PyTorch.
PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Official implementation of AAAI2022 paper "I can find you! Boundary-guided Separated Attention Network for Camouflaged Object Detection"
simple and practical code for computer graphics
A repository for converting between CoQA, SQuAD2, and QuAC and visualizing the data.
The baselines used in the CoQA paper
Models run on the CoQA dataset, including the baseline.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Large-scale pretraining for dialogue
Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing".
Dense Passage Retriever: a set of tools and models for the open-domain Q&A task.
Reading Wikipedia to Answer Open-Domain Questions
Slot Self-Attentive Dialogue State Tracking
Source code for ACL 2021 paper "ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning"
Implementation of the conversational QA model FlowQA (with slight improvements).
Shandong University summer innovation practicum: a highly configurable internet data-collection system and a distributed retrieval application.
A list of papers for machine learning, reinforcement learning, NLP or something interesting
A novel contextuaL imAge seaRch sCHeme (LARCH)
The official implementation of ICLR 2020, "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering".
A housing-price crawler for Lianjia and Beike, collecting housing data (residential communities, second-hand homes, rentals, and new homes) from 21 major Chinese cities including Beijing, Shanghai, Guangzhou, and Shenzhen. Stable, reliable, and fast! Supports CSV, MySQL, MongoDB, Excel, and JSON storage; works with both Python 2 and 3; visualizes data with charts; richly commented 🚁. Star to support!
A prototype repo for hybrid training with pipeline parallelism and distributed data parallelism, with comments on core code snippets. Feel free to copy the code and start discussions about any problems you have encountered.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.