
FeatureLearningRotNet's Issues

Problem in data transformations

In dataloader.py, when preparing the rotated images:

rotated_imgs = [
    self.transform(img0),
    self.transform(rotate_img(img0,  90)),
    self.transform(rotate_img(img0, 180)),
    self.transform(rotate_img(img0, 270))
]

the following error arises:

ValueError: some of the strides of a given numpy array are negative. This is currently not supported, but will be added in future releases.

How to solve this error?
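A likely fix (my suggestion, not from the repo): torch.from_numpy(), which the ToTensor transform calls internally, cannot wrap numpy views with negative strides, and np.rot90/np.flipud return exactly such views. Copying the rotated array into contiguous memory first avoids the error. A minimal sketch, with rotate_img simplified to np.rot90:

import numpy as np

def rotate_img(img, rot):
    # simplified stand-in for the repo's rotate_img; for 90/180/270 degrees
    # np.rot90 returns a view with negative strides
    return np.rot90(img, rot // 90)

img0 = np.zeros((32, 32, 3), dtype=np.uint8)  # placeholder image

# np.ascontiguousarray() copies the view into contiguous memory, so the
# downstream torch.from_numpy() call no longer sees negative strides:
rotated = np.ascontiguousarray(rotate_img(img0, 90))

In the list above, wrapping each rotate_img(...) call in np.ascontiguousarray(...) before self.transform should make the error go away.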

Reproducing the results in the Paper

Dear Spyros Gidaris, Praveer Singh and Nikos Komodakis,

I have read your paper "Unsupervised Representation Learning by Predicting Image Rotations" and was impressed by your work and the astonishing results achieved by pretraining a "RotNet" on the rotation task and later training classifiers on top of the feature maps.

I have downloaded your code from GitHub and tried to reproduce the values in Table 1 for a RotNet with 4 conv. blocks. However, running "run_cifar10_based_unsupervised_experiments.sh" and altering line 33 (and, for 'conv1', also line 31) in the config file "CIFAR10_MultLayerClassifier_on_RotNet_NIN4blocks_Conv2_feats.py", I obtain slightly lower values than in the paper, especially for the fourth block:

Rotation task: 93.65 (running your code) / --- (paper)
ConvBlock1: 84.65 (running your code) / 85.07 (paper)
ConvBlock2: 88.89 (running your code) / 89.06 (paper)
ConvBlock3: 85.88 (running your code) / 86.21 (paper)
ConvBlock4: 54.04 (running your code) / 61.73 (paper)

Are there further things I need to consider before running the code to achieve the results in the paper? I have used a GeForce GTX 1070 to run the experiment.

Question regarding fine-tuning phase of ImageNet classification task

The paper published here explains how the pretext task training is conducted, but not how the transfer learning is conducted. I had some questions regarding the procedure for transfer learning for the ImageNet classification task.

The entire procedure can be described as:
a) Train an AlexNet using the rotation prediction pretext task on the entire ImageNet dataset.
b) Freeze all layers except the fully connected layers.
c) Train the AlexNet on the ImageNet dataset using the ImageNet labels.
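For concreteness, a minimal PyTorch sketch of steps (b) and (c), using torchvision's AlexNet as a stand-in for the paper's model; the optimizer hyper-parameters below are placeholders, not values from the paper (they are exactly what question 2 below asks about):

import torch
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(num_classes=1000)
# ...load the rotation-pretrained weights here...

# (b) freeze everything except the fully connected classifier
for param in model.features.parameters():
    param.requires_grad = False

# (c) fine-tune the classifier on labeled ImageNet
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=0.01, momentum=0.9, weight_decay=5e-4)  # placeholder values
criterion = nn.CrossEntropyLoss()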

  1. During phase (c), is the entire ImageNet dataset used, or only a fraction of it? I would expect self-supervised learning to fine-tune using a relatively small dataset.
  2. What hyper-parameters are used during phase (c), such as learning rate and weight decay?

Thank you.

Problem about weight rescaling technique

I am wondering what parameters you used when you rescaled the model. There are many parameters in magic_init.py, such as -t, -nit, and -d. Could you please give more details? Thanks.

How do you create attention map?

I read your paper; it says:
"In order to generate the attention map of a conv. layer we first compute the feature maps of this layer, then we raise each feature activation on the power p, and finally we sum the activations at each location of the feature map. For the conv. layers 1, 2, and 3 we used the powers p = 1, p = 2, and p = 4 respectively."
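The quoted procedure maps directly to a per-location sum over channels; a minimal sketch (assuming post-ReLU feature maps of shape (C, H, W); the upsampling step for visualization is my assumption, not stated in the excerpt):

import torch
import torch.nn.functional as F

def attention_map(fmap: torch.Tensor, p: int) -> torch.Tensor:
    # Sum of per-channel activations raised to the power p, as quoted above.
    # fmap: post-ReLU feature maps of one conv. layer, shape (C, H, W).
    return fmap.pow(p).sum(dim=0)  # -> (H, W)

fmap = torch.randn(96, 27, 27).relu()   # dummy post-ReLU feature maps
amap = attention_map(fmap, p=1)

# For display, upsample to the input resolution (assumed visualization step;
# no backpropagation or deconvolution is required):
amap_img = F.interpolate(amap[None, None], size=(224, 224),
                         mode='bilinear', align_corners=False)[0, 0]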

What do you do after summing up each neuron's power of activations in these layers? I guess some backpropagation or deconvolution is needed to generate such an attention map.

Horizontal flip augmentation changes angle

Hi all, this might be a basic question, but in this line, and other lines in the dataloader, RandomHorizontalFlip() is one of the augmentations. Would that not change the rotation angle of the image, and consequently the ground truth? Is this handled anywhere (i.e. is the ground-truth label changed when the RandomHorizontalFlip() augmentation happens)?
Thank you!

Accuracy of AlexNet on the ImageNet validation set with conv4 features

Hi,
I just re-implemented your idea myself by following the details in this repository.
The experiments on CIFAR10 obtained about 1% lower accuracy than the published results.
However, the experiments on ImageNet with AlexNet achieved about 3% higher accuracy on the ImageNet validation set than the published results.

supervised: 59.48 (59.70 in the paper)

conv4: 52.92 (50 in the paper)

conv5: 46.06 (43.8 in the paper)

Could you give more details about the training?

CIFAR10 self.data does not have attributes test_labels, test_data

When running CIFAR10_ConvClassifier_on_RotNet_NIN4blocks_Conv2_feats_K1000.py, there are some errors in dataloader.py.

    if self.dataset_name == 'cifar10':
        labels = self.data.test_labels if (self.split == 'test') else self.data.train_labels
        data = self.data.test_data if (self.split == 'test') else self.data.train_data

I found that self.data does not have the attributes test_labels, test_data, train_labels, and train_data. How can I solve this? Thanks.
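A likely cause (my reading, based on torchvision's changelog): newer torchvision releases removed the per-split attributes in favor of unified data and targets attributes on the dataset object:

import torchvision

# Newer torchvision versions expose `data` and `targets` for both splits,
# replacing train_data/test_data and train_labels/test_labels:
cifar_test = torchvision.datasets.CIFAR10(root='./data', train=False, download=True)
data = cifar_test.data        # numpy array, shape (10000, 32, 32, 3)
labels = cifar_test.targets   # list of 10000 class indices

So a possible fix in dataloader.py is to read self.data.data and self.data.targets regardless of split, assuming the split was already chosen when self.data was constructed.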

Unable to reproduce the ImageNet Linear Classification Results

Hello, I am really attracted by your work, which is precise and effective. But when I cloned your code and ran the run_imagenet_based_unsupervised_feature_experiments.sh script (I used your pretrained model, so I commented out the training command) to test the linear classification, I got the following results after running around 25 epochs:

[results screenshot]

The results are especially low for conv3, conv4, and conv5. Maybe I have omitted some important details. Can you give me some advice?

about accuracy.

I ran the script and it began to train.
What I want to know is the accuracy; what should I do to see it?

conv2d() arguments error

Hello!
While executing the code as given for training using algorithm.solve(), I have been experiencing the following error. Can someone give some clarity regarding it?

[error screenshot]

question about learning rate and rotate strategy

Hi,

Thanks for providing this awesome work to us!!!

After reading the code, I am not sure whether I have fully understood it, so I feel I better open an issue to ask:

  1. The original CIFAR-10 model is trained with a learning rate of 0.1 when the batch size is 128. With the RotNet method, the batch size is amplified to 512 (128 x 4), but the learning rate is still kept at 0.1, is that right?

  2. I see in the paper that the strategy of "simultaneously rotating the input image to all 4 angles and enlarging the batch size 4 times" outperforms "randomly choosing one angle and keeping the batch size unchanged". Does the "randomly choose" method give a significantly worse result, or is it only slightly outperformed by the proposed "4 rotations" method?

I would be very happy to have your reply. Would you please share your thoughts on these details?
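For reference, a minimal sketch of the "4 rotations, 4x batch" scheme from question 1 (a hypothetical helper, not the repo's dataloader; the repo may order the rotated samples differently):

import torch

def expand_batch_with_rotations(images: torch.Tensor):
    """Turn a batch of B images (B, C, H, W) into 4B rotated samples
    with rotation-class labels 0..3 (0, 90, 180, 270 degrees)."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

# A batch of 128 CIFAR images becomes 512 training samples:
imgs = torch.randn(128, 3, 32, 32)
rot_imgs, rot_labels = expand_batch_with_rotations(imgs)
assert rot_imgs.shape[0] == 512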

How can I generate a log file, which is needed during training the RotNet?

I was training a RotNet on CIFAR10. When I ran the command "python main.py --exp=CIFAR10_RotNet_NIN4blocks", an error occurred.

The error occurs at

algorithm = getattr(alg, config['algorithm_type'])(config)

in main.py. It says:

OSError: [Errno 22] Invalid argument: 'E:\FeatureLearningRotNet-master\experiments\CIFAR10_RotNet_NIN4blocks\logs\LOG_INFO_2020-08-06_17:02:54.623166.txt'

I suppose a log file is needed here, but I didn't see one in the logs folder. Should I add some lines to generate the log?
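For what it's worth, the OSError looks like a file-creation failure rather than a missing file: ':' is not a valid character in Windows filenames, and the timestamp in the log name contains colons. A minimal sketch of a workaround (an assumption, not the repo's code):

import datetime

# ':' is not allowed in Windows filenames; replacing it in the timestamp
# yields a valid log filename on all platforms:
timestamp = str(datetime.datetime.now()).replace(':', '-').replace(' ', '_')
log_name = 'LOG_INFO_' + timestamp + '.txt'
# e.g. LOG_INFO_2020-08-06_17-02-54.623166.txt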
