
Comments (5)

D-X-Y commented on August 19, 2024

Yes, you are right.

BTW, if you want to resize the CIFAR-10 input size, you need to add a resize transform here and also revise the shape here, which is used to compute the FLOPs.

from nats-bench.
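To see why the shape used for FLOP counting must be revised alongside the resize transform, here is a minimal, self-contained sketch (not the benchmark's actual FLOP counter): the cost of a convolution scales with the spatial area of its input, so resizing the data without updating the recorded shape under-reports FLOPs.

```python
def conv_flops(c_in, c_out, h, w, k=3):
    """Multiply-accumulate count for a k x k convolution with 'same'
    padding: every output position does c_in * k * k MACs per filter."""
    return c_in * c_out * k * k * h * w

# Shape used for CIFAR-10 in the benchmark, (C, H, W) = (3, 32, 32):
flops_32 = conv_flops(3, 16, 32, 32)

# If you resize inputs to 64x64 but keep the old shape in the FLOP
# computation, the reported cost is 4x too small:
flops_64 = conv_flops(3, 16, 64, 64)

print(flops_32, flops_64, flops_64 / flops_32)
```

Doubling each spatial side quadruples the area, hence the 4x factor.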

D-X-Y avatar D-X-Y commented on August 19, 2024

Thanks for your questions.
The code above will direct you to execute these lines, which create the macro structure defined in NATS-Bench (https://github.com/D-X-Y/AutoDL-Projects/blob/58733c18becf18cd5c66392eb0ca6a80e2d14d23/xautodl/models/cell_infers/tiny_network.py#L10).

This macro structure downsamples twice and has a global pooling layer before the last FC layer, so it is resolution-agnostic. You can use 16x16 inputs as in ImageNet-16-120, 32x32 for CIFAR, or 256x256 for your own datasets.

Having said that, for a 256x256 input resolution, two downsampling layers may not be enough in terms of model capacity, though that is another question.

In sum, the config you obtained from the code above is resolution-agnostic; you can use it directly for inputs with different resolutions.

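The resolution-agnostic behavior described above can be traced with a small back-of-the-envelope sketch (not the actual TinyNetwork code; the base width C=16 and last-stage width of 64 channels are assumptions matching the benchmark's defaults): two stride-2 reductions shrink the spatial size, and global average pooling then collapses whatever remains to 1x1, so the FC layer always receives the same-length vector.

```python
def fc_input_size(side, channels=64, num_downsamples=2):
    """Trace the spatial size through the macro structure: each
    reduction block halves H and W, then global average pooling
    collapses H x W to 1 x 1, so the FC input is just `channels`."""
    for _ in range(num_downsamples):
        side = side // 2      # stride-2 reduction block
    pooled_h = pooled_w = 1   # global average pooling -> 1 x 1
    return channels * pooled_h * pooled_w

# Same FC input size for ImageNet-16-120, CIFAR, or 256x256 inputs:
print(fc_input_size(16), fc_input_size(32), fc_input_size(256))
```

This is also why the FLOPs differ across resolutions even though the weights are identical: the convolutions before the pooling still operate on larger feature maps.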

Mshz2 commented on August 19, 2024


Thanks a lot for your fast response.
So, the config below

config = api.get_net_config(12, 'cifar10')

obtains a config appropriate for CIFAR-10 that is completely agnostic to the pre-defined input resolution in the get_datasets function, right? And if I simply resize CIFAR-10 to e.g. 64 or 128 in the get_datasets function, can the appropriate results still be obtained (apart from the downsampling issue you mentioned)?


Mshz2 commented on August 19, 2024


Hi! I was thinking about some changes in my project and came back to our topic again :)
You mentioned that the structure is resolution-agnostic and that the same config used for 16x16 inputs also works for my custom datasets with 128x128 resolution. But wouldn't it reduce the dimensionality of my images in such a way that small objects in them disappear?

  • Where and how can I increase the number of downsample layers here? How many do you suggest for inputs of 128 and 256?

  • I changed the channel size via config['C'] = 32 after config = api.get_net_config(arch, 'cifar10'). Do you consider changing config['C'] to 32 or other values an effective way to increase accuracy for custom datasets with higher resolutions?

Thanks a lot for your help! <3

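As a rough sketch of what widening config['C'] does (illustrative only, not the benchmark's exact parameter count): the cells operate at widths derived from the base channel count C, so a 3x3 convolution from C channels to C channels grows roughly quadratically in C. Doubling C therefore buys capacity at about 4x the per-layer parameter cost.

```python
def conv3x3_params(c_in, c_out):
    """Weight count of a 3x3 convolution, ignoring bias."""
    return c_in * c_out * 3 * 3

# A cell convolution at the default base width C=16 vs. C=32:
base = conv3x3_params(16, 16)
wider = conv3x3_params(32, 32)
print(base, wider, wider / base)
```

Whether that extra capacity translates into accuracy on a higher-resolution dataset is an empirical question, which is what the comment below addresses for the other knob, the number of downsample layers.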

D-X-Y commented on August 19, 2024

To increase the number of downsample layers, you need to change the definition of TinyNetwork. Please see here: https://github.com/D-X-Y/AutoDL-Projects/blob/58733c18becf18cd5c66392eb0ca6a80e2d14d23/xautodl/models/cell_infers/tiny_network.py#L21 (a True entry means a downsample layer).

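A hypothetical sketch of the boolean schedule the linked line builds (the function name and the 5-cells-per-stage default are assumptions for illustration; check the actual TinyNetwork constructor for the real layout): False entries are normal searched cells, True entries are the fixed reduction layers, so adding one more True adds one more downsample.

```python
def reduction_schedule(n_cells_per_stage, n_downsamples):
    """Build a TinyNetwork-style layer schedule: False = a normal
    cell, True = a downsample (reduction) layer between stages."""
    schedule = [False] * n_cells_per_stage
    for _ in range(n_downsamples):
        schedule += [True] + [False] * n_cells_per_stage
    return schedule

default = reduction_schedule(5, 2)  # the benchmark's two downsamples
deeper = reduction_schedule(5, 3)   # one extra downsample, e.g. for 128x128+
print(default.count(True), deeper.count(True), len(default))
```

Note that the pretrained benchmark results only cover the two-downsample topology; a modified schedule means training from scratch.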
