
Comments (6)

junyongyou commented on June 14, 2024

This is because the input image has a very large resolution, much larger than the images the model has been trained on. If you look at the method create_triq_model in triq_model.py, maximum_position_encoding=193, while the position encoding of your image is 661. A quick and dirty way is to increase this argument to 661, for example. But I am not sure whether that will work, as the model has not been trained with such big images. Give it a try and please let me know what you get.
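
A minimal sketch of that quick-and-dirty fix, assuming create_triq_model exposes maximum_position_encoding as a keyword argument as described above; the import path and the other argument name below are placeholders, not the exact signature in triq_model.py:

    # Rebuild TRIQ with a larger positional-encoding table so a 661-token
    # feature map fits. Only maximum_position_encoding is taken from the
    # comment above; the import path and n_quality_levels are assumptions.
    from triq_model import create_triq_model

    model = create_triq_model(
        n_quality_levels=5,             # assumed placeholder argument
        maximum_position_encoding=661,  # raised from the default 193
    )

Note that the released weights were trained with 193 positions, so loading them into a 661-position model is expected to fail with a shape mismatch (as the next comment shows); the larger table only helps if the model is retrained.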


Usernamezhx commented on June 14, 2024

I don't think it can work:

Traceback (most recent call last):
  File "examples/image_quality_prediction.py", line 24, in <module>
    predict_mos = predict_image_quality(model_weights_path, image_path)
  File "examples/image_quality_prediction.py", line 13, in predict_image_quality
    model.load_weights(model_weights_path)
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 250, in load_weights
    return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/keras/engine/network.py", line 1266, in load_weights
    hdf5_format.load_weights_from_hdf5_group(f, self.layers)
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 707, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3384, in batch_set_value
    x.assign(np.asarray(value, dtype=dtype(x)))
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 846, in assign
    self._shape.assert_is_compatible_with(value_tensor.shape)
  File "/data2/zhx3/env_python3.7_pytorch1.5/lib/python3.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 1117, in assert_is_compatible_with
    raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (1, 661, 32) and (1, 193, 32) are incompatible

The max size may be 1024 × 768. OK, I will randomly crop the image into 1024 × 768 patches and take the mean MOS value.
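
A rough sketch of that crop-and-average idea, assuming a Keras-style model whose output is a distribution over quality levels scored 1..n; mean_mos_over_random_crops and its arguments are hypothetical helpers, not part of the triq examples, and any preprocessing the examples apply is omitted:

    import numpy as np

    def mean_mos_over_random_crops(model, image, crop_hw=(768, 1024), n_crops=5):
        """Average predicted MOS over random crops of a large image (hypothetical helper)."""
        h, w = crop_hw
        H, W = image.shape[:2]
        scores = []
        for _ in range(n_crops):
            top = np.random.randint(0, H - h + 1)
            left = np.random.randint(0, W - w + 1)
            crop = image[top:top + h, left:left + w]
            # Assumed output shape: (1, n_quality_levels), a distribution over levels 1..n
            pred = model.predict(crop[np.newaxis, ...])[0]
            scores.append(float(np.sum(pred * np.arange(1, len(pred) + 1))))  # expected MOS
        return float(np.mean(scores))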


junyongyou commented on June 14, 2024

(Quoting the traceback and the 1024 × 768 crop plan from the previous comment.)

Then it might be the only way for now, even though patching the image and averaging the scores is the approach I want to avoid. The model could also be retrained with maximum_position_encoding set to a larger value while still using small-resolution images. However, I am not sure that would work, because the positional weights beyond 193 would probably not be trained well.
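
For reference, a back-of-the-envelope estimate of how many positions a given resolution needs, assuming the feature map handed to the transformer is downsampled by 32 by the ResNet-50 backbone and by a further 2 by the pooling discussed in the next comment, plus one extra quality token. This factor-of-64 assumption is mine, not stated in the thread, but it reproduces the default 193 for a 1024 × 768 input:

    import math

    def required_position_encoding(height, width, downsample=64, extra_tokens=1):
        # Estimate maximum_position_encoding for an input resolution (assumed /64 downsampling).
        return math.ceil(height / downsample) * math.ceil(width / downsample) + extra_tokens

    print(required_position_encoding(768, 1024))  # -> 193, matching the default in create_triq_model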


junyongyou commented on June 14, 2024

There is another way to solve the problem: increase the pooling size at line 146 of transformer_iqa.py. For example, change it to self.pooling_small = MaxPool2D(pool_size=(4, 4)) or even larger.
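
To see why this helps, here is a small runnable illustration of how the pool size changes the token count, using a feature-map shape chosen so that the (2, 2) case reproduces the 661 positions from the traceback; the actual input resolution and channel count were not given, and the extra quality token is an assumption:

    import tensorflow as tf

    # Hypothetical ResNet-50 output (stride 32) for a large image of roughly 1408 x 1920.
    feature_map = tf.zeros((1, 44, 60, 32))  # (batch, H/32, W/32, channels) -- illustrative only

    for pool in [(2, 2), (4, 4)]:
        pooled = tf.keras.layers.MaxPool2D(pool_size=pool)(feature_map)
        n_tokens = pooled.shape[1] * pooled.shape[2] + 1  # +1 for the assumed quality token
        print(pool, n_tokens)                             # (2, 2) -> 661, (4, 4) -> 166

With a (4, 4) pool the sequence drops to roughly a quarter of its length, so it fits back under the 193 positions the released weights were trained with.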


Usernamezhx commented on June 14, 2024

OK, I will try. Thanks for your reply.


junyongyou commented on June 14, 2024

If possible, please also let me know your results. I am very curious. Thanks.

