
gmayday1997 / scenechangedet

229 stars · 9 watchers · 74 forks · 14.31 MB

pytorch implementation of scene change detection

License: MIT License

Python 100.00%
pytorch scene-change-detection change-detection contrastive-loss

scenechangedet's People

Contributors

gmayday1997


scenechangedet's Issues

Strange Loss and not training?

I've noticed strange loss values during training, and I am unable to get the network to train.
I have been trying to use the CD2014 dataset, more specifically the PTZ/twoPositionPTZCam/ images.

I'm not sure what really to say or what documentation to provide, so if I need to add anything please let me know.
It's quite possible that I'm just doing something or many things wrong but I'd appreciate any help.

[screenshots: loss1, loss2]

How to get VL-CMU-CD Dataset?

Hi, I am very interested in your paper, but I can't get the VL-CMU-CD dataset from the URL you provided. Is there any other way to get the VL-CMU-CD dataset?

Layer-balancing

I have a question regarding the layer-balancing weights β for layers 5, 6, and 7. Do you use the THRESHS = [0.1,0.3,0.5] in the cfg files for them? If so, it seems they are never used in the code. Does it mean you scale the loss of layer 5 by 0.1, layer 6 by 0.3, and layer 7 by 0.5? Could you please elaborate on that?

Thanks.

Why is the default BATCH_SIZE 1? I changed it and got an error

With the default batch_size=1 the program runs fine, but once I change it to 4, I get the following error:

***/layer/function.py", line 33, in forward
return self.scale * x * x.pow(2).sum(dim).clamp(min=1e-12).rsqrt().expand_as(x)
RuntimeError: The expanded size of the tensor (512) must match the existing size (4) at non-singleton dimension 1. Target sizes: [4, 512, 51, 51]. Tensor sizes: [4, 51, 51]

I don't know the reason.
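For what it's worth, the shape mismatch in the traceback is consistent with the channel reduction dropping its dimension: `x.pow(2).sum(dim)` turns a `[4, 512, 51, 51]` tensor into `[4, 51, 51]`, which only happens to `expand_as(x)` cleanly when the batch size is 1. A minimal sketch of an L2-normalization layer that keeps the reduced dimension (`keepdim=True` is my assumption about the intended fix, not code from this repo):

```python
import torch

def l2_normalize(x, scale=1.0, dim=1):
    # keepdim=True preserves the channel axis as size 1, so the norm
    # broadcasts against x for any batch size instead of relying on
    # expand_as() lining up by accident when the batch is 1.
    norm = x.pow(2).sum(dim, keepdim=True).clamp(min=1e-12).rsqrt()
    return scale * x * norm

out = l2_normalize(torch.randn(4, 512, 51, 51))  # works with batch size 4
```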

Is there a problem with the CD2014 test set?

@gmayday1997 Hi, when I use the CD2014 test set downloaded from the Baidu Cloud link you provided, I found that all the images in gt_binary under PTZ/twoPositionPTZcam/ contain only zeros. Surely that makes training impossible?

Questions about training on CDNet2014

@gmayday1997 Hi, I found that training is particularly slow, with most of the time spent in the eval phase. Is this normal? Also, could you share the model you trained on CDNet2014 with me? Thanks.

How should I test the trained model?

I have successfully trained the model, but I don't know how to test it. There is a test program referenced on this website (#19), but it fails with a "KeyError: 'conv1.0.weight'" error.
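A common cause of a `KeyError` like this when loading a checkpoint is a name mismatch between the saved state dict and the model, e.g. a `module.` prefix added by `nn.DataParallel`. A hedged sketch of the workaround (the tiny `nn.Sequential` is a stand-in for the real model, which is not reproduced here):

```python
import torch.nn as nn

# Stand-in model; a checkpoint saved under nn.DataParallel gets every
# key prefixed with 'module.', which no longer matches the keys the
# bare model expects (hence errors like missing "conv1.0.weight").
model = nn.Sequential(nn.Conv2d(3, 8, 3))
prefixed = {'module.' + k: v for k, v in model.state_dict().items()}

# Strip the prefix before loading; strict=False additionally tolerates
# keys that exist in only one of the two dicts.
cleaned = {k.replace('module.', '', 1): v for k, v in prefixed.items()}
model.load_state_dict(cleaned)
```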

Running error

Hi, I got the following error when running the code: a file cannot be found. Could you provide this file (trainval.txt)?

The error:
OSError: /media/admin228/0007A0C30005763A/datasets/dataset_/TSUNAMI\trainval.txt not found.

question about label resize

Hey, nice work! I have a question about the loss calculation. In your training code, you resize the label images to compute the loss, but I think the result maps should instead be upsampled to compute the loss or metrics. Am I right? Looking forward to your reply.
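For comparison, upsampling the prediction to the label's resolution before computing the loss looks roughly like this (the shapes and the BCE loss are illustrative assumptions, not this repo's code):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(1, 1, 51, 51)                       # coarse output map
label = torch.randint(0, 2, (1, 1, 256, 256)).float()  # full-size label

# Upsample the prediction to the label's resolution instead of
# shrinking the label down to the feature-map size.
pred_up = F.interpolate(pred, size=label.shape[2:], mode='bilinear',
                        align_corners=False)
loss = F.binary_cross_entropy_with_logits(pred_up, label)
```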

Dimension problem

Hi, when I run the code with the TSUNAMI dataset, I get an upsampling error. [error screenshot]

Model does not converge on CD2014 dataset

I am currently trying to train the model on the CD2014 dataset (deeplabv2 as a backbone).
I tried two different methods for training:

  • Method 1: I used the predefined parameters for learning, except for the learning rate, which I set to 1e-10 (because in an earlier test I had the impression that 1e-7 was too large). I also adjusted the code so that I can train with batch_size > 1.
  • Method 2: basically the same as for method 1, but I applied gradient clipping with max_norm=5.

The problem is that the model does not converge for either training method. Did anybody also have that issue? How did you solve it? What parameters did you use for training? How long did you have to train?
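The gradient-clipping step in method 2 presumably corresponds to something like `torch.nn.utils.clip_grad_norm_`; a minimal sketch of where it sits in a training step (the linear model and random data are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                       # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=1e-10)

x, y = torch.randn(4, 10), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
# Clip the global gradient norm to 5 before the optimizer step.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5)
opt.step()
```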

How to run the pre-trained models?

So I got the pre-trained model. But how to run it? Is there a sample code to do inference with the pre-trained model?
I just want to pass two images and show the difference.
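In the absence of a bundled inference script, one plausible sketch (not the repo's actual API; the feature extractor, threshold, and shapes are all assumptions) is to embed both images with the trained network, take the per-pixel feature distance, and threshold it into a change mask:

```python
import torch

def change_map(model, img_t0, img_t1, thresh=1.0):
    # Embed both images with the same network, measure the per-pixel
    # L2 distance between feature maps, and threshold into a mask.
    with torch.no_grad():
        f0, f1 = model(img_t0), model(img_t1)   # (N, C, H, W) features
    dist = (f0 - f1).pow(2).sum(dim=1).sqrt()   # (N, H, W) distances
    return (dist > thresh).float()              # binary change mask

# Stand-in "model" purely for demonstration:
model = torch.nn.Conv2d(3, 16, 3, padding=1)
mask = change_map(model, torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```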

Problem with the F1 score on the VL-CMU-CD dataset

Hello, I tried to train and test on the VL-CMU-CD dataset, but I get an F1 score of 0.658, a bit lower than the 0.71 on the test set reported in the paper. However, when I test with the trained VL-CMU-CD model you provided, I get an F1 score of 0.798. I am very confused; can you describe in detail the training process and parameters of the model you provide?

Looking forward to your reply and guidance!
2021.08.26

CMU datasets error

I downloaded the CMU dataset and reproduced the experiments, but an error occurred in the upsampling step.

Do you know the reason?

"See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
File "train.py", line 282, in
main()
File "train.py", line 246, in main
label_rz_conv5 = Variable(util.resize_label(label.data.cpu().numpy(),size=out_conv5_t0.data.cpu().numpy().shape[2:]).cuda())
File "/home/nhkim/Desktop/SceneChangeDet/src/utils/utils.py", line 228, in resize_label
label_resized[:,:,:,:] = interp(labelVar).data.numpy()
File "/home/nhkim/anaconda3/envs/cosimNet/lib/python2.7/site-packages/torch/nn/modules/module.py", line 547, in call
result = self.forward(*input, **kwargs)
File "/home/nhkim/anaconda3/envs/cosimNet/lib/python2.7/site-packages/torch/nn/modules/upsampling.py", line 131, in forward
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
File "/home/nhkim/anaconda3/envs/cosimNet/lib/python2.7/site-packages/torch/nn/functional.py", line 2509, in interpolate
raise NotImplementedError("Got 5D input, but bilinear mode needs 4D input")
NotImplementedError: Got 5D input, but bilinear mode needs 4D input
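The error itself just says that bilinear `F.interpolate` requires a 4D `(N, C, H, W)` input, so the label tensor reaching `resize_label` apparently carries an extra singleton axis. A generic sketch of the fix (the 5D shape is a guess at what the repo produces, not taken from its code):

```python
import torch
import torch.nn.functional as F

label = torch.rand(2, 1, 1, 24, 24)       # hypothetical 5D label batch
label_4d = label.reshape(2, 1, 24, 24)    # drop the stray singleton axis
resized = F.interpolate(label_4d, size=(51, 51), mode='bilinear',
                        align_corners=False)
```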
