
Simplot

The official source code for SIMPLOT: Enhancing Chart Question Answering by Distilling Essentials

Overview

Recently, interpreting charts that require complex and logical reasoning has emerged as a substantial challenge with the development of vision-language models. A prior state-of-the-art (SOTA) model, Deplot, presented an end-to-end method that leverages a vision-language model to convert charts into tables and then uses Large Language Models (LLMs) for reasoning. However, unlike natural images, charts mix information that is essential for chart reasoning with information that is irrelevant to it, and we find that this characteristic can lower the performance of chart-to-table extraction. In this paper, we introduce Simplot, a method designed to extract only the elements necessary for chart reasoning. The proposed method involves two steps: 1) training the model to mimic a simple plot that contains only the essential information of a complex chart for table extraction, followed by 2) performing reasoning based on the extracted table. Our model enables accurate chart reasoning without the need for additional annotations or datasets, and its effectiveness is demonstrated through various experiments. Furthermore, we propose a novel prompt that addresses a shortcoming of the recent SOTA model, which ignores visual attributes such as color.
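
Conceptually, the pipeline looks like the following sketch (illustrative only; answer_chart_question, extractor, and llm are hypothetical names, not the repository's actual API):

def answer_chart_question(chart_image, question, extractor, llm):
    # Step 1: the distilled image-to-table model keeps only the chart
    # elements that matter for reasoning and emits a table.
    table = extractor.generate_table(chart_image)
    # Step 2: an LLM reasons over the extracted table; the prompt can
    # also reference visual attributes such as color.
    prompt = f"Table:\n{table}\n\nQuestion: {question}\nAnswer:"
    return llm.complete(prompt)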

Environment

You can configure the environment used in the experiments by running the following commands.

conda create -n Simplot python=3.8 
conda activate Simplot 
pip install -r requirements.txt
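
To quickly verify the installation (this assumes requirements.txt installs PyTorch, which the .pth checkpoints used later suggest):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"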

Training

Dataset Preparation

You can download the dataset from the following Hugging Face repository: huggingface repository.

The other components used for training and experimentation can be downloaded and preprocessed with the following commands.

cd data/ 
unzip test.zip
cd ..
python preprocess.py

Then organize the directory as follows:

data/
├── data
│   ├── train
│   │   ├── train_augmented.json
│   │   ├── train_human.json
│   │   ├── annotations
│   │   ├── png
│   │   ├── tables
│   │   ├── positive_png          # Positive Chart Images Folder
│   │   ├── negative_png          # Negative Chart Images Folder
│   │   ├── columns               # Ground-Truth Columns Folder
│   │   └── indexes               # Ground-Truth Indexes Folder
│   ├── val
│   │   └── ...
│   └── test
│       ├── gpt_columns           # GPT-Extracted Columns Folder
│       ├── gpt_indexes           # GPT-Extracted Indexes Folder
│       └── ...
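
Before training, you can optionally confirm the layout with a short Python check (paths taken from the tree above):

import os

root = 'data/data'
expected = [
    'train/train_augmented.json', 'train/train_human.json',
    'train/annotations', 'train/png', 'train/tables',
    'train/positive_png', 'train/negative_png',
    'train/columns', 'train/indexes',
    'test/gpt_columns', 'test/gpt_indexes',
]
for sub in expected:
    # Report any folder or file that is missing from the expected layout.
    print(sub, 'OK' if os.path.exists(os.path.join(root, sub)) else 'MISSING')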

Phase 1:

python main.py --phase 1

Phase 2:

python main.py --phase 2 --state_path './state/phase_1_best_model.pth' --lr 1e-5 

Inference

After the training step, the code automatically saves checkpoints in ./state/. With the saved checkpoints, you can evaluate RD and test on ChartQA.
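
If you want to inspect a saved checkpoint before evaluation, here is a minimal sketch (assuming the .pth files are standard PyTorch state dicts, as the --state_path usage above suggests):

import torch

# Load the phase-1 checkpoint on CPU and list a few entries.
state = torch.load('./state/phase_1_best_model.pth', map_location='cpu')
print(type(state))
if isinstance(state, dict):
    for key in list(state)[:5]:
        print(key)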

Evaluate RD

python inference.py

ChartQA

python QA.py --api_key 'your api key' --qa_type 'human'   # or --qa_type 'augmented'

OpenCQA

Dataset Preparation

You can download the OpenCQA dataset from the following GitHub repository: github repository.

The other components used for experimentation can be downloaded and preprocessed with the following commands (you can use a previously trained model without any additional training).

cd data/opencqa/test/
unzip test.zip
cd ../../../
python preprocess_opencqa.py

Then organize the directory as follows:

data/
├── opencqa
│   ├── baseline_models
│   ├── bboxes
│   ├── chart_images
│   ├── ...
│   └── test
│       ├── gpt_columns           # GPT-Extracted Columns Folder
│       ├── gpt_indexes           # GPT-Extracted Indexes Folder
│       └── img

Inference

python inference.py --img_path './data/opencqa/test/img/' --row_path './data/opencqa/test/gpt_indexes/' --col_path './data/opencqa/test/gpt_columns/' --inference_type 'opencqa/'

Open-ended Question Answering

python opencqa.py --api_key 'your api key' 

Unichart

Phase 1:

python unichart/unichart_phase1.py

Phase 2:

python unichart/unichart_phase2.py

Inference

You can obtain predicted.csv with the following command; passing it as an argument to the QA script above lets you proceed with the QA process on the UniChart outputs.

python unichart/unichart_inference.py
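
To peek at the predictions before running QA (assuming pandas is available; the column layout is whatever unichart_inference.py writes, which is not specified here):

import pandas as pd

df = pd.read_csv('predicted.csv')
print(df.shape)   # number of predictions
print(df.head())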
