Comments (9)

lcl6679292 commented on April 25, 2024

Thanks for your interest. We used some of the same optimization techniques as LayoutLMv2; you can refer to the paper for details. In addition, starting from the StructuralLM model, we did continued pre-training on the DocVQA data, mainly to add 2D positions to the question tokens. This follows the approach of the winner of the CVPR'20 DocVQA challenge. We will consider open-sourcing this code and model in the future.
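For illustration, here is a minimal sketch of what "adding 2D positions to the question tokens" can look like in practice. It uses the public HuggingFace LayoutLM interface rather than the authors' StructuralLM code, and the choice of giving every question token the whole-page box is an assumption made only for this example.

```python
# Minimal sketch (not the authors' code): feed question tokens with a 2D position
# into a LayoutLM-style model. Assumption: question tokens get the whole-page box.
import torch
from transformers import LayoutLMTokenizer, LayoutLMModel

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMModel.from_pretrained("microsoft/layoutlm-base-uncased")

question = "what is the invoice date?"
doc_words = ["Invoice", "Date:", "12/03/2014"]
doc_boxes = [[110, 40, 190, 60], [200, 40, 260, 60], [270, 40, 360, 60]]  # OCR boxes, normalized to 0-1000

q_tokens = tokenizer.tokenize(question)
d_tokens, d_boxes = [], []
for word, box in zip(doc_words, doc_boxes):
    for tok in tokenizer.tokenize(word):
        d_tokens.append(tok)
        d_boxes.append(box)

tokens = ["[CLS]"] + q_tokens + ["[SEP]"] + d_tokens + ["[SEP]"]
# Instead of the usual all-zero box for question tokens, give them an explicit
# 2D position; the whole-page box [0, 0, 1000, 1000] is just one possible choice.
question_box = [0, 0, 1000, 1000]
boxes = ([[0, 0, 0, 0]] + [question_box] * len(q_tokens) + [[1000, 1000, 1000, 1000]]
         + d_boxes + [[1000, 1000, 1000, 1000]])

input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
bbox = torch.tensor([boxes])
token_type_ids = torch.tensor([[0] * (len(q_tokens) + 2) + [1] * (len(d_tokens) + 1)])

outputs = model(input_ids=input_ids, bbox=bbox, token_type_ids=token_type_ids)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```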

paulpaul91 commented on April 25, 2024

I have the same problem as @Cppowboy: using the released weights, I cannot reach 83.94. Without any tricks, what ANLS can be reached on DocVQA?

lcl6679292 commented on April 25, 2024

Thanks for your interest. Using only the released weights, you can reach 78+ ANLS on the test set with some post-processing, which is commonly used on this data set. As mentioned above, we will consider open-sourcing the continued pre-training code and model in the future.
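For readers unfamiliar with the metric, below is a small sketch of ANLS (average normalized Levenshtein similarity) as used on DocVQA, together with the kind of light post-processing (lower-casing, whitespace cleanup) that is often applied before scoring; the exact post-processing the authors use is not specified in this thread.

```python
# Sketch of the ANLS metric for DocVQA, with assumed light post-processing.

def levenshtein(a: str, b: str) -> int:
    """Edit distance via dynamic programming."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalize(s: str) -> str:
    # Assumed post-processing: lower-case and collapse whitespace.
    return " ".join(s.lower().split())

def anls(predictions, ground_truths, tau=0.5):
    """predictions: list of strings; ground_truths: list of lists of strings."""
    total = 0.0
    for pred, gts in zip(predictions, ground_truths):
        best = 0.0
        for gt in gts:
            p, g = normalize(pred), normalize(gt)
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            if nl < tau:  # similarities below the threshold count, others score 0
                best = max(best, 1.0 - nl)
        total += best
    return total / len(predictions)

print(anls(["12/03/2014"], [["12/03/2014", "12 March 2014"]]))  # 1.0
```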

paulpaul91 commented on April 25, 2024

Thanks.

paulpaul91 commented on April 25, 2024

Can you briefly describe the continued pre-training and the QG (question generation)? How much benefit does each bring?

lcl6679292 commented on April 25, 2024

The continued pre-training on the DocVQA set (train and validation sets) brings about 2.0+ ANLS. QG brings about 2.4+ ANLS. In addition, merging the train set and dev set brings another 1.8+ ANLS. Note that the results on the test set are strongly affected by the hyperparameters, which can lead to a difference of 1+ ANLS.
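The thread does not describe how the QG data was produced, so the following is only a sketch of one common recipe: a seq2seq model generates questions about chosen answer spans in the OCR text, and each generated (question, answer, document) triple becomes a synthetic training example. The checkpoint name below is a placeholder, not a released model.

```python
# Sketch of seq2seq question generation over OCR text for DocVQA-style augmentation.
# "your-qg-checkpoint" is a hypothetical T5-style QG model, not a real model name.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your-qg-checkpoint")
model = AutoModelForSeq2SeqLM.from_pretrained("your-qg-checkpoint")

def generate_questions(ocr_text: str, answer_span: str, num_questions: int = 3):
    # Mark the chosen answer span in the prompt so the model knows what the
    # generated question should ask about.
    prompt = f"generate question: answer: {answer_span} context: {ocr_text}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, num_beams=5,
                             num_return_sequences=num_questions, max_length=64)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Each (generated question, answer span, document) triple becomes one synthetic
# training example; the thread reports more than one million such examples.
print(generate_questions("Invoice Date: 12/03/2014  Total: $1,280.00", "12/03/2014"))
```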

paulpaul91 commented on April 25, 2024

How much data is used for the continued pre-training, and how much for QG?

lcl6679292 commented on April 25, 2024

The data for continued pre-training is the whole DocVQA data set, and the QG data amounts to more than one million examples.

paulpaul91 commented on April 25, 2024

May I ask whether the model's 83.94 result was obtained with 10-fold cross-validation?
