
lhc1224 / cross-view-ag

Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022

License: MIT License

Python 99.92% Shell 0.08%

cross-view-ag's People

Contributors: lhc1224

cross-view-ag's Issues

TypeError: MODEL() got an unexpected keyword argument 'align_channels'

When I run test.py, a TypeError comes up, and I found that in model.py the function MODEL is defined as:

def MODEL(args, num_classes=36,
          align_w=0.2, distil_w=0.1,
          cls_w=1, pretrained=True, n=3, D=512):

but in test.py it is called as:

model = MODEL(args, num_classes=len(aff_list),
              align_channels=256, align_w=0.5, distil_w=0.5,
              cls_w=1, pretrained=False).cuda()

Is the parameter align_channels irrelevant? Shall we just delete it? It ran in the PyCharm IDE, but it does not work on the server unless I delete that parameter.
By the way, could I ask for the plotting code? I am also a student from the Department of Automation, USTC.
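For reference, a minimal, self-contained sketch of the mismatch described in this issue, based only on the signatures quoted above (the stub, the stand-in args, and aff_list are placeholders, not the repo's real objects):

# Stub that mirrors the MODEL signature quoted from model.py.
def MODEL(args, num_classes=36,
          align_w=0.2, distil_w=0.1,
          cls_w=1, pretrained=True, n=3, D=512):
    return object()  # placeholder for the real network

args = None           # stand-in for the parsed command-line arguments
aff_list = range(36)  # stand-in for the affordance class list

# Fails with "MODEL() got an unexpected keyword argument 'align_channels'":
# MODEL(args, num_classes=len(aff_list), align_channels=256,
#       align_w=0.5, distil_w=0.5, cls_w=1, pretrained=False)

# Works once the unused keyword is dropped from the call in test.py
# (or, alternatively, once align_channels is added to the MODEL signature):
model = MODEL(args, num_classes=len(aff_list),
              align_w=0.5, distil_w=0.5, cls_w=1, pretrained=False)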

about weight

Thank you for the nice work (good task and dataset) 👍. Could you provide the model weights, please?

No such file or directory: 'Seen_gt.t7'

Thanks for your great work. Before trying to train a model, I ran process_gt.py to generate Unseen_gt.t7, and then replaced Unseen with Seen to generate Seen_gt.t7:
gt_root = "dataset/AGD20K/Seen/testset/GT/"
torch.save(dict_1, "Seen_gt.t7")
But when I train a model it does not work. Do you know what the problem is?
Train begining!
train Epoch:[1/35] step:[50/1254] cls_loss: 7.113 dist_loss: 1.722 exo_acc: 22.75% ego_acc: 21.88%
train Epoch:[1/35] step:[100/1254] cls_loss: 4.066 dist_loss: 1.937 exo_acc: 30.00% ego_acc: 28.69%
train Epoch:[1/35] step:[150/1254] cls_loss: 5.073 dist_loss: 2.349 exo_acc: 35.25% ego_acc: 32.38%
train Epoch:[1/35] step:[200/1254] cls_loss: 3.748 dist_loss: 2.042 exo_acc: 37.56% ego_acc: 34.66%
train Epoch:[1/35] step:[250/1254] cls_loss: 2.537 dist_loss: 1.704 exo_acc: 39.25% ego_acc: 35.85%
train Epoch:[1/35] step:[300/1254] cls_loss: 4.081 dist_loss: 1.695 exo_acc: 40.94% ego_acc: 37.40%
train Epoch:[1/35] step:[350/1254] cls_loss: 3.147 dist_loss: 1.032 exo_acc: 42.61% ego_acc: 38.57%
train Epoch:[1/35] step:[400/1254] cls_loss: 2.145 dist_loss: 1.638 exo_acc: 43.98% ego_acc: 39.78%
train Epoch:[1/35] step:[450/1254] cls_loss: 5.803 dist_loss: 1.145 exo_acc: 45.15% ego_acc: 40.65%
train Epoch:[1/35] step:[500/1254] cls_loss: 3.255 dist_loss: 1.598 exo_acc: 46.45% ego_acc: 41.49%
train Epoch:[1/35] step:[550/1254] cls_loss: 1.971 dist_loss: 1.693 exo_acc: 47.32% ego_acc: 42.10%
train Epoch:[1/35] step:[600/1254] cls_loss: 1.749 dist_loss: 1.391 exo_acc: 48.07% ego_acc: 42.68%
train Epoch:[1/35] step:[650/1254] cls_loss: 2.975 dist_loss: 1.919 exo_acc: 48.81% ego_acc: 43.12%
train Epoch:[1/35] step:[700/1254] cls_loss: 2.258 dist_loss: 1.279 exo_acc: 49.47% ego_acc: 43.54%
train Epoch:[1/35] step:[750/1254] cls_loss: 2.977 dist_loss: 1.647 exo_acc: 49.87% ego_acc: 43.95%
train Epoch:[1/35] step:[800/1254] cls_loss: 2.014 dist_loss: 1.469 exo_acc: 50.30% ego_acc: 44.44%
train Epoch:[1/35] step:[850/1254] cls_loss: 2.583 dist_loss: 1.326 exo_acc: 50.64% ego_acc: 44.65%
train Epoch:[1/35] step:[900/1254] cls_loss: 2.099 dist_loss: 1.432 exo_acc: 51.14% ego_acc: 45.11%
train Epoch:[1/35] step:[950/1254] cls_loss: 2.247 dist_loss: 1.262 exo_acc: 51.71% ego_acc: 45.61%
train Epoch:[1/35] step:[1000/1254] cls_loss: 3.252 dist_loss: 1.017 exo_acc: 52.01% ego_acc: 45.85%
train Epoch:[1/35] step:[1050/1254] cls_loss: 1.915 dist_loss: 1.219 exo_acc: 52.30% ego_acc: 46.11%
train Epoch:[1/35] step:[1100/1254] cls_loss: 2.588 dist_loss: 1.692 exo_acc: 52.57% ego_acc: 46.27%
train Epoch:[1/35] step:[1150/1254] cls_loss: 2.674 dist_loss: 1.557 exo_acc: 52.88% ego_acc: 46.49%
train Epoch:[1/35] step:[1200/1254] cls_loss: 2.227 dist_loss: 1.067 exo_acc: 53.17% ego_acc: 46.79%
train Epoch:[1/35] step:[1250/1254] cls_loss: 2.756 dist_loss: 1.217 exo_acc: 53.41% ego_acc: 47.00%
Traceback (most recent call last):
  File "train.py", line 221, in <module>
    masks = torch.load(args.divide+"_gt.t7")
  File "/home/chenchangmao/anaconda3/envs/affordance/lib/python3.7/site-packages/torch/serialization.py", line 699, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/chenchangmao/anaconda3/envs/affordance/lib/python3.7/site-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/chenchangmao/anaconda3/envs/affordance/lib/python3.7/site-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'Seen_gt.t7'
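For context, a minimal sketch of the failure mode, based only on the line quoted in the traceback: train.py loads args.divide + "_gt.t7" via a relative path, so the file is resolved against the directory train.py is launched from. The divide variable and the error message below are illustrative, not the repo's code:

import os
import torch

divide = "Seen"               # in train.py this comes from args.divide
gt_file = divide + "_gt.t7"   # relative path, resolved against the current working directory

if not os.path.isfile(gt_file):
    # The file written by process_gt.py (torch.save(dict_1, "Seen_gt.t7")) has to end up
    # here, next to train.py, before training starts; otherwise torch.load fails exactly
    # as in the traceback above.
    raise FileNotFoundError(f"{gt_file} not found in {os.getcwd()}")

masks = torch.load(gt_file)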

testsetv2 dataset

Hello, thank you for providing your work. Could you post a link to the testsetv2 dataset in the description?

can you confirm your pretrained model?

@lhc1224
Hello, thank you for providing your work.
I have checked your code with the pretrained model (downloaded through Google Drive).
However, a size mismatch occurs between the shapes in the checkpoint and the shapes in the current model.
Can you confirm that the released pretrained model matches the code?
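A size mismatch like this can be narrowed down by comparing the checkpoint's state_dict shapes with the model's. A minimal, self-contained sketch (the nn.Linear stand-in and the hand-made checkpoint dict are placeholders for the repo's MODEL(...) and the downloaded weights):

import torch
import torch.nn as nn

# Placeholders: substitute the repo's MODEL(...) and torch.load(<checkpoint>, map_location="cpu").
model = nn.Linear(10, 5)
checkpoint_state = {"weight": torch.zeros(6, 10), "bias": torch.zeros(6)}

model_state = model.state_dict()
for name, tensor in checkpoint_state.items():
    if name not in model_state:
        print(f"unexpected key in checkpoint: {name}")
    elif tensor.shape != model_state[name].shape:
        print(f"shape mismatch for {name}: checkpoint {tuple(tensor.shape)} "
              f"vs model {tuple(model_state[name].shape)}")

# Note: load_state_dict(..., strict=False) only tolerates missing or unexpected keys;
# shape mismatches still raise, so the checkpoint has to match the model definition.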

The difference between result of test and gt

These are my results from test.py:
kld = 1.521207194771439
sim = 0.34196439919030536
nss = 0.9306758723494548
But when I visualize the affordance region in test.py, it differs a lot from the ground-truth region.
result: [image: swing_baseball_bat_baseball_bat_000452.png]
ground truth: [image: swing_baseball_bat_baseball_bat_000452.png (GT)]
result: [image: swing_baseball_bat_baseball_bat_000595.png]
ground truth: [image: swing_baseball_bat_baseball_bat_000595.png (GT)]
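For reference, the three reported numbers are standard heatmap-evaluation metrics (lower KLD and higher SIM/NSS are better). A minimal sketch using the definitions commonly used in the saliency and affordance-grounding literature; the repo's own implementation in test.py may differ in details such as normalization or thresholding:

import numpy as np

def kld(pred, gt, eps=1e-12):
    # KL divergence of the GT distribution from the prediction, both normalized to sum to 1.
    pred = pred / (pred.sum() + eps)
    gt = gt / (gt.sum() + eps)
    return float(np.sum(gt * np.log(gt / (pred + eps) + eps)))

def sim(pred, gt, eps=1e-12):
    # Histogram intersection of the two maps after normalizing each to sum to 1.
    pred = pred / (pred.sum() + eps)
    gt = gt / (gt.sum() + eps)
    return float(np.minimum(pred, gt).sum())

def nss(pred, gt, eps=1e-12):
    # Mean of the z-scored prediction over the positive ground-truth locations.
    pred = (pred - pred.mean()) / (pred.std() + eps)
    mask = gt > 0
    return float(pred[mask].mean()) if mask.any() else 0.0

# Tiny usage example with random maps.
rng = np.random.default_rng(0)
pred = rng.random((224, 224))
gt = (rng.random((224, 224)) > 0.9).astype(np.float32)
print(kld(pred, gt), sim(pred, gt), nss(pred, gt))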
