
haiyang-w / git

[ECCV2024 OralπŸ”₯] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface"

Home Page: https://arxiv.org/abs/2403.09394

License: Apache License 2.0

Dockerfile 0.09% Python 99.03% Shell 0.88%
foundation-models perception transformer unified vision-and-language vision-transformer

git's People

Contributors

haiyang-w


git's Issues

Warning upon loading tokenizer?

I encountered the following warnings when trying fast mode.
I tried loading BlipTokenizer, so why does the "BlipTokenizer" warning below appear?

The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'BertTokenizer'.
The class this function is called from is 'BlipTokenizer'.

Also, my demo runs in an offline environment,
where I cannot download pretrained weights.

Are there any suggestions? Many thanks!
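As a general note (not specific to this repo): the mismatch message usually means the checkpoint's tokenizer_config.json records a different tokenizer_class than the class from_pretrained is called on, so it is a class-mismatch notice rather than a sign of corrupted files. For the offline part, a tokenizer can be saved once and then loaded from a local folder with no network access. A minimal, self-contained sketch using a toy vocabulary (the vocab and paths below are made-up assumptions, not GiT's real files):

```python
# Hedged sketch of running a Hugging Face tokenizer fully offline.
# The vocabulary here is a toy stand-in so the example is self-contained;
# real checkpoints ship their own vocab.txt and tokenizer_config.json.
import os
import tempfile

from transformers import BertTokenizer

# Build a minimal vocabulary file (assumption: toy vocab, not GiT's).
work_dir = tempfile.mkdtemp()
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]", "a", "photo", "of", "cat"]
with open(os.path.join(work_dir, "vocab.txt"), "w") as f:
    f.write("\n".join(vocab))

# save_pretrained is normally run once on a connected machine; afterwards
# the whole folder can be copied to the offline machine.
BertTokenizer(os.path.join(work_dir, "vocab.txt")).save_pretrained(work_dir)

# from_pretrained on a local directory needs no network access at all.
tokenizer = BertTokenizer.from_pretrained(work_dir)
print(tokenizer.tokenize("a photo of a cat"))
```

The same from_pretrained(local_dir) pattern applies to real checkpoints: run save_pretrained on a connected machine and copy the resulting folder to the offline one.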

LoveDA Few Shot training annotation format

Hi,

Thanks for making the code public. I'm trying to run few-shot learning on the LoveDA dataset. Once I start training, I get the following error.

FileNotFoundError: [Errno 2] No such file or directory: 'data/loveDA/ann_dir/train/1040.png'

My loveDA folder structure is as follows:

GiT
├── data
│   ├── loveDA
│   │   ├── img_dir
│   │   │   ├── train
│   │   │   ├── val
│   │   │   ├── test
│   │   ├── ann_dir
│   │   │   ├── train
│   │   │   │   ├── 1040.png.json
│   │   │   ├── val

as described here: https://github.com/Haiyang-W/GiT/blob/main/tools/dataset_preprocess/dataset_prepare.md

However, the same page says I should run python tools/dataset_converters/loveda.py /path/to/loveDA, but I don't see that file in the cloned repo. If this script is needed to fix the issue, could you please provide the dataset converter file for the LoveDA dataset?
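One quick way to diagnose this kind of error locally is a sanity check comparing the filenames the loader asks for against what is actually on disk. A hypothetical sketch (not part of the repo), assuming the loader expects each annotation to share the image's exact filename, as the traceback's data/loveDA/ann_dir/train/1040.png suggests:

```python
# Hypothetical sanity check (not part of the repo): the traceback shows the
# loader looking for ann_dir/train/1040.png, while the tree above contains
# 1040.png.json, so this lists images whose same-named annotation is missing.
import os

def find_missing_annotations(root, split="train"):
    img_dir = os.path.join(root, "img_dir", split)
    ann_dir = os.path.join(root, "ann_dir", split)
    missing = []
    for name in sorted(os.listdir(img_dir)):
        # Assumption: annotation filename == image filename.
        if not os.path.exists(os.path.join(ann_dir, name)):
            missing.append(name)
    return missing
```

Run on the tree above, this would report 1040.png as missing, since only 1040.png.json exists in ann_dir/train.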

AssertionError in runpy.py

I tried to use this awesome model as shown below,

[screenshot]

but it's not working.

Could I ask what the problem is?

Thank you!

Details regarding few-shot and zero shot datasets

Hi,

Thank you for the code and the Readme, which are both very well organized. I am trying to set up the few-shot and zero-shot datasets. Are there any details I need to take into account?

Thank you!

Why does seq_embed during training skip last target token

The following line in git_det_head skips the last token (height) for the input tokens:
# input tokens for parallel training
input_tokens = targets_tokens[:, :-1]

This leads to seq_embed skipping height during training. Is this intended? If so, could you please clarify why this is done?

Thank you!
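For reference, this slicing matches the standard teacher-forcing setup for autoregressive decoders: inputs are shifted right by one position, and the final token is still supervised, just as a label rather than an input. A generic toy sketch (the token ids are made up and not GiT's vocabulary):

```python
# Generic teacher-forcing sketch (illustrative, not GiT's actual code).
# With input = tokens[:-1] and labels = tokens[1:], the last target token
# (height, in a detection sequence) still appears as a label to predict.
targets_tokens = [9, 17, 42, 88, 23]  # hypothetical ids: <task> x y w h
input_tokens = targets_tokens[:-1]    # fed to the decoder in parallel
label_tokens = targets_tokens[1:]     # what each position must predict
pairs = list(zip(input_tokens, label_tokens))
print(pairs)  # each position i reads token i and is trained to emit token i+1
```

So the height token is not dropped from training; it is simply never fed as an input, because nothing comes after it to predict.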

Mismatch in few-shot and zero-shot numbers between paper and code

There are discrepancies in the few-shot results and in one zero-shot result, as shown below:

[screenshot: git_discrepancies]

However, results for zero-shot cs-det, cs-seg, and sun-rgbd seg match with only slight deviations.

Is it possible that I am looking at the wrong metric, or is there some other issue?
I would be glad if you could help me with this. Thank you!

Implementation detail for COCO mAP calculation

First, thanks for sharing the code and models. I wonder how the COCO mAP metric was calculated in this paper, given the next-token-prediction decoder output. Traditionally, computing mAP requires a confidence score for each box; how can this score be obtained from next-token-prediction output? Or am I missing something?
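One common scheme in sequence-based detectors (e.g. Pix2Seq-style models) is to score each box by the geometric mean of the predicted probabilities of its tokens; whether GiT uses exactly this scheme is not confirmed here, but a sketch of the idea:

```python
# Hedged sketch: derive a per-box confidence from the decoder's per-token
# probabilities via their geometric mean. This is a common convention in
# sequence-to-detection models, not a confirmed detail of GiT.
import math

def box_confidence(token_logprobs):
    """Geometric-mean probability of the tokens emitted for one box."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

# Example: four coordinate tokens each predicted with probability 0.9
# yield a box confidence of 0.9.
print(box_confidence([math.log(0.9)] * 4))
```

Such a score can then be fed to the standard COCO evaluator, which only needs some monotonic ranking of boxes per image.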

hugging face

Could you provide a Hugging Face demo to try out?

KeyError: 'Duplicate key is not allowed among bases'

When using the large and huge few-shot configurations for the LoveDA dataset, the config inherits both the loveda base config and the git_large config, which define overlapping keys. The issue can be resolved by creating a copy of loveda_base, replacing git_base with git_large, and changing load_from from the base to the large checkpoint.

Is there any other way to fix this? Or am I missing something here?

Thank you!
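For reference, MMEngine-style configs also allow inheriting a single base and overriding the conflicting fields instead of mixing two bases; a hypothetical sketch (the file names and field values below are assumptions, not the repo's actual configs):

```python
# Hypothetical MMEngine-style config sketch: inherit one base and override
# the fields that differ, rather than inheriting two bases with duplicate
# keys. File names and values are placeholders, not GiT's real configs.
_base_ = ['./few_shot_loveda_base.py']

# `_delete_=True` replaces the base's dict field wholesale instead of
# merging with it, which avoids stale keys from the smaller model.
model = dict(
    backbone=dict(
        _delete_=True,
        embed_dim=1024,   # placeholder large-model values
        depth=24,
        num_heads=16,
    ),
)
load_from = 'path/to/git_large_weights.pth'  # placeholder path
```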
