[NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning

mmc's Introduction

This is the official GitHub repo of the paper MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning.

News

  • [Jul. 9, 2024] 🔥🔥🔥 Our dataset is now released through Hugging Face Datasets.
  • [Mar. 13, 2024] Our paper is accepted to NAACL 2024.
  • [Nov. 15, 2023] Our paper is available on arXiv.

Highlights

  • We introduce a large-scale MultiModal Chart Instruction (MMC-Instruction) dataset supporting diverse tasks and chart types.
  • We also propose a Multi-Modal Chart Benchmark (MMC-Benchmark), a comprehensive human-annotated benchmark with nine distinct tasks evaluating reasoning capabilities over charts. Extensive experiments on MMC-Benchmark reveal the limitations of existing LMMs on correctly interpreting charts, even for the most recent GPT-4V model.
  • Leveraging this data, we develop Multi-Modal Chart Assistant (MMCA), an LMM that achieves state-of-the-art performance on existing chart QA benchmarks.

Data Release

The chart-text alignment data (MMC-Alignment), chart instruction-tuning data (MMC-Instruction), and benchmark data (MMC-Benchmark) introduced in our paper can be downloaded from Hugging Face Datasets using git clone:

git lfs install
git clone https://huggingface.co/datasets/xywang1/MMC
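
If you prefer not to use git-lfs, the same dataset repo can also be fetched with the huggingface_hub Python client (a minimal sketch, not part of the original instructions):

# Download the whole MMC dataset repo from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="xywang1/MMC", repo_type="dataset")
print("Dataset downloaded to:", local_dir)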

The downloaded repo contains three sub-directories, MMC-Alignment, MMC-Benchmark, and MMC-Instruction:

MMC-Alignment

  • mmc_chart_text_alignment_arxiv_text.jsonl: 250,000 samples for chart-text alignment training.
  • mmc_chart_text_alignment_arxiv_images.tar.gz: images for mmc_chart_text_alignment_arxiv_text.jsonl.

MMC-Benchmark

  • mmc_benchmark_text.jsonl: 2,126 instances for testing and benchmarking.
  • mmc_benchmark_images.tar.gz: images for mmc_benchmark_text.jsonl.

MMC-Instruction

  • mmc_instruction_arxiv_text.jsonl: 300,000 question-answer pairs synthesized with arXiv data for instruction tuning.
  • mmc_instruction_arxiv_images.tar.gz: images for mmc_instruction_arxiv_text.jsonl.
  • mmc_instruction_non-arxiv_text.jsonl: 110,020 extra question-answer pairs for instruction tuning.
  • mmc_instruction_non-arxiv_images.tar.gz: images for mmc_instruction_non-arxiv_text.jsonl.
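
After downloading, a quick sanity check is to unpack the image archives and peek at the first record of each JSONL file. The sketch below assumes the files sit in the cloned MMC directory and does not assume any particular JSONL schema; it only reports the keys it finds:

import json
import tarfile
from pathlib import Path

data_dir = Path("MMC")  # path to the cloned dataset repo (assumed)

# Print the keys of the first record in each .jsonl file.
for jsonl_path in sorted(data_dir.rglob("*.jsonl")):
    with open(jsonl_path, encoding="utf-8") as f:
        first = json.loads(f.readline())
    print(jsonl_path.name, "->", sorted(first.keys()))

# Unpack each image archive next to its .tar.gz file.
for tar_path in sorted(data_dir.rglob("*.tar.gz")):
    with tarfile.open(tar_path) as tar:
        tar.extractall(tar_path.parent)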

Existing Datasets

As mentioned in the paper, chart summarization datasets from Statista, PlotQA, VisText, ChartInfo, and Unichart are used in our experiments for chart-text alignment training. Please refer to the following script for details:

# Existing chart-text alignment images
gdown https://drive.google.com/uc?id=1e1mx_nb5PWjPkuIsJkY8B4xSET9DOWTa
# Existing chart-text alignment text
gdown https://drive.google.com/uc?id=18SJ13V4qEt1ixOQPbRmEnZKQrjS5v14T

For existing Chart QA training data, please refer to the following script:

# Existing chart qa images
gdown https://drive.google.com/uc?id=1Y17wNYdBlPxhB5KKiux2BD8C2FlA5MC9
# Existing chart qa text
gdown https://drive.google.com/uc?id=1tUtntLRgsBJ9v5NcdTMvVI32ruLHAyFe
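
The same files can also be fetched from Python instead of the shell; a minimal sketch using gdown's Python API (the Drive IDs are the ones listed above, and the local file names are whatever Drive reports):

import gdown

# Existing chart QA images and text (same Drive IDs as the shell commands above).
for file_id in ["1Y17wNYdBlPxhB5KKiux2BD8C2FlA5MC9", "1tUtntLRgsBJ9v5NcdTMvVI32ruLHAyFe"]:
    path = gdown.download(f"https://drive.google.com/uc?id={file_id}", quiet=False)
    print("Saved to:", path)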

MMCA Gradio demo

1. Install the environment according to mplug-owl.

We fine-tuned mplug-owl on 8 V100 GPUs. If you run into any issues running it on V100s, feel free to let me know!

2. Download the Checkpoint

gdown https://drive.google.com/uc?id=11KJA8bSNi1yxgcijsG3xfBHvWe8C748F

3. Edit the Code

In mplug-owl/serve/model_worker.py, edit the following code and set lora_path to the path of the LoRA model weights.

# Note: LoraConfig and get_peft_model come from the peft package
# (from peft import LoraConfig, get_peft_model); torch is already imported
# in model_worker.py.

# Load the base mPLUG-Owl processor, tokenizer, and model.
self.image_processor = MplugOwlImageProcessor.from_pretrained(base_model)
self.tokenizer = AutoTokenizer.from_pretrained(base_model)
self.processor = MplugOwlProcessor(self.image_processor, self.tokenizer)
self.model = MplugOwlForConditionalGeneration.from_pretrained(
    base_model,
    load_in_8bit=load_in_8bit,
    torch_dtype=torch.bfloat16 if bf16 else torch.half,
    device_map="auto",
)
self.tokenizer = self.processor.tokenizer

# Wrap the language model's q_proj / v_proj layers with LoRA adapters,
# then load the fine-tuned MMCA LoRA weights.
peft_config = LoraConfig(
    target_modules=r'.*language_model.*\.(q_proj|v_proj)',
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
)
self.model = get_peft_model(self.model, peft_config)
lora_path = 'Your lora model path'  # path to the checkpoint downloaded in step 2
prefix_state_dict = torch.load(lora_path, map_location='cpu')
self.model.load_state_dict(prefix_state_dict)

4. Local Demo

When you launch the demo on a local machine, you might find there is no box for text input. This is caused by a version conflict between Python and Gradio. The simplest solution is to run conda activate LRV and then launch:

python -m serve.web_server --base-model 'the mplug-owl checkpoint directory' --bf16

Contact

If you have any questions about this work, please email Fuxiao Liu at [email protected].

Citation

@article{liu2023mmc,
  title={MMC: Advancing Multimodal Chart Understanding with Large-scale Instruction Tuning},
  author={Liu, Fuxiao and Wang, Xiaoyang and Yao, Wenlin and Chen, Jianshu and Song, Kaiqiang and Cho, Sangwoo and Yacoob, Yaser and Yu, Dong},
  journal={arXiv preprint arXiv:2311.10774},
  year={2023}
}

Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal/research/non-commercial purposes.

mmc's Issues

Request for Chart-Text Alignment Data Download Link

Hello,

I have been exploring the MMC project repository and find it immensely valuable. I am particularly interested in accessing the Chart-Text Alignment Data to further my research. However, I noticed that the download link provided in the README only includes the Chart Instruction-Tuning Data.

Would it be possible for you to kindly provide the download link for the Chart-Text Alignment Data as well? Access to this dataset would greatly contribute to my work and research efforts.

Thank you very much for your time and consideration.

Best regards,

How many epochs and what is the best loss of your model in the Chart-Text Alignment stage?

Hi,
I am trying to replicate the Chart-Text Alignment stage as described in your paper. It's my first time pre-training such a big model on a large-scale dataset, so I don't know when I should stop the pre-training loop. Could you please provide the number of epochs you trained for and the best loss you reached in the Chart-Text Alignment stage?

Thank you for your attention to this matter.

Inquiry Regarding Chart-Text Alignment Data and Instructions

Hello,

I hope this message finds you well. I've been actively exploring the MMC project repository and must say it's an invaluable resource.

I have a few queries regarding the Chart-Text Alignment Data:

  • Regarding the non-arXiv JSON files, is the "text" section utilized for the textual alignment?
  • Regarding the arXiv JSON files, is the "caption" section utilized for the textual alignment?
  • Does the "reference_sentence_in_article" field in the arXiv JSON files play a role in the alignment process?

Your guidance on the construction of instructions for generating the Chart-Text Alignment Data would also be immensely helpful. If possible, could you provide some specifics or some prompts to follow?

Thank you sincerely for your time and consideration.

Warm regards,

Chart-Text Alignment Data

Great work! Can you provide the Chart-Text Alignment Data? Or explain how to separate it from the instruction-tuning data?

Incomplete Arxiv Image Dataset

Hello,

I have downloaded 202k Arxiv images, but it seems incomplete. The JSON data for Scientific (Arxiv) Chart-Caption comprises a total of 250,000 entries. Yet, I found that 79,040 entries reference images that are missing from the dataset provided in this Google Drive link.

Could you kindly provide the missing images or offer insight into why they are not included in the provided dataset?

Thank you for your attention to this matter.

Best regards,
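
For anyone running the same kind of coverage check, a minimal sketch along these lines lists which referenced images are absent (the "image" key below is only a guess at the field that stores the file name; adjust it to the actual JSONL schema):

import json
from pathlib import Path

jsonl_path = Path("mmc_chart_text_alignment_arxiv_text.jsonl")
image_dir = Path("mmc_chart_text_alignment_arxiv_images")

missing = []
with open(jsonl_path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        image_name = record.get("image")  # hypothetical field name
        if image_name and not (image_dir / image_name).exists():
            missing.append(image_name)

print(f"{len(missing)} referenced images are missing")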

Inquiry Regarding MMC Dataset

Thank you very much for providing the data; it has been tremendously helpful to me! However, I have encountered a few points of confusion regarding the dataset obtained through the current download links. I would greatly appreciate clarification on the following:

  1. In the Chart-Text Alignment Data (MMC-Instruction), the Scientific (Arxiv) Chart-Caption section contains 250k samples, whereas the paper mentions 210k. Is this difference due to an addition made after the paper's publication?
  2. The Filtered Existing Datasets part contains 160k samples, with both images and summaries sourced from Unichart. However, the paper mentions 190k from five datasets; are they all included in Unichart?
  3. In the non-arxiv part of the Chart Instruction-Tuning Data (MMC-Instruction), Part1 offers 2M questions, significantly more than the 200k mentioned in the paper. Are they extracted from Unichart's questions?
  4. Do Part2 and Part3 respectively contain the GPT-4 results of Chart Information Extraction and Chart Reasoning QAs, as mentioned in the paper?

Request for summarization data for existing dataset

Thanks for open-sourcing the MMC training data, which is quite helpful for developing document-oriented MLLMs. I notice that the download link for the existing datasets has not been updated yet, and I believe it would be better to include them in our training. Do you plan to upload these images soon?
