
lu-feng / dhe-vpr

Official repository for the AAAI 2024 paper "Deep Homography Estimation for Visual Place Recognition".

License: MIT License

Python 100.00%
image-localization loop-closure-detection visual-geolocalization visual-place-recognition deep-homography relocalization visual-slam

dhe-vpr's Issues

Can't reproduce pitts30k results

Hey,

I've installed the dependencies specified in requirements.txt, downloaded the models referenced in the README, obtained pitts30k after contacting Relja, and formatted it using the convenient VPR-datasets-downloader.

I then used eval.py with the following arguments:

python -m eval --resume_fe=finetunedCCT14_pitts30k.torch --resume_hr=finetunedDHE_pitts30k.torch --datasets_folder=./datasets --dataset_name=pitts30k

The only difference is that I had to replace line 120 in model/cct/cct.py as follows:
(screenshot of the modified line)

because the link on line 25 in model/cct/cct.py appears to be broken:
(screenshot of the broken link)

I am getting these results:

(hmgrphy) PS C:\dev\projects\DHE-VPR> python -m eval --resume_fe=finetunedCCT14_pitts30k.torch --resume_hr=finetunedDHE_pitts30k.torch --datasets_folder=./datasets --dataset_name=pitts30k
2024-07-28 13:45:19 python C:\dev\projects\DHE-VPR\eval.py --resume_fe=finetunedCCT14_pitts30k.torch --resume_hr=finetunedDHE_pitts30k.torch --datasets_folder=./datasets --dataset_name=pitts30k
2024-07-28 13:45:19 Arguments: Namespace(brightness=None, cache_refresh_rate=1000, contrast=None, criterion='triplet', dataset_name='pitts30k', datasets_folder='./datasets', device='cuda', efficient_ram_testing=False, epochs_num=1000, exp_name='default', freeze_te=5, horizontal_flip=False, hue=None, infer_batch_size=32,
l2='before_pool', lr=1e-05, majority_weight=0.01, margin=0.1, mining='partial', neg_samples_num=1000, negs_num_per_query=2, num_reranked_preds=32, num_workers=8,
optim='adam', patience=3, queries_per_epoch=5000, rand_perspective=None, random_resized_crop=None, random_rotation=None, recall_values=[1, 5, 10, 20], resize=[384, 384], resume_fe='finetunedCCT14_pitts30k.torch', resume_hr='finetunedDHE_pitts30k.torch', saturation=None, save_dir='default', seed=0, test_method='hard_resize', train_batch_size=4, train_positives_dist_threshold=10, trunc_te=8, val_positive_dist_threshold=25)
2024-07-28 13:45:19 The outputs are being saved in logs_test/default/2024-07-28_13-45-19
C:\dev\projects\DHE-VPR\dataset_geoloc.py:43: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.gallery_utms = np.array([(path.split("@")[1], path.split("@")[2]) for path in self.gallery_paths]).astype(np.float)
C:\dev\projects\DHE-VPR\dataset_geoloc.py:44: DeprecationWarning: np.float is a deprecated alias for the builtin float. To silence this warning, use float by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.float64 here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
self.queries_utms = np.array([(path.split("@")[1], path.split("@")[2]) for path in self.queries_paths]).astype(np.float)
2024-07-28 13:45:22 Geoloc test set: < GeolocDataset, pitts30k - #gallery: 10000; #queries: 6816 >
DeprecationWarning: np.int is a deprecated alias for the builtin int. To silence this warning, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
correct_bool_mat = np.zeros((geoloc_dataset.queries_num, max_recall_value), dtype=np.int)

2024-07-28 13:47:28 baseline test: R@1: 1.2, R@5: 4.7, R@10: 8.2, R@100: 39.3
Testing: 100%|██████████████████████████████████████████████████| 6816/6816 [15:17<00:00, 7.42it/s]
2024-07-28 14:02:47 test after re-ranking - R@1: 1.1, R@5: 5.2, R@10: 9.2, R@20: 14.8
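
(Side note: the DeprecationWarnings above come from the np.float / np.int aliases that newer NumPy versions remove; the fix the warnings themselves suggest is to switch to the builtin types, e.g.:)

# dataset_geoloc.py, the lines from the first two warnings: use the builtin float
self.gallery_utms = np.array([(path.split("@")[1], path.split("@")[2]) for path in self.gallery_paths]).astype(float)
self.queries_utms = np.array([(path.split("@")[1], path.split("@")[2]) for path in self.queries_paths]).astype(float)

# the line from the third warning (its file path is truncated in the log): use the builtin int
correct_bool_mat = np.zeros((geoloc_dataset.queries_num, max_recall_value), dtype=int)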

Please let me know if I did anything wrong, and how to reproduce the results correctly.
Thank you,
and thanks for the great work.

Data set error

Hi Feng, thanks for your great work; I am trying to reproduce it. When I run eval according to the README, it shows an error like "Folder {gallery_folder} does not exist". I checked the code and found no gallery folder in my pitts30k/pitts250k directories (which were generated with https://github.com/gmberton/VPR-datasets-downloader). Any tips? Thank you @Lu-Feng

My pitts30k dataset tree is as follows:
(screenshot of the dataset directory tree)

Question about code release

Hello,
Thanks for your interesting research!
Can you give me a rough date for when the code will be released?

About compute_similarity

Hi @Lu-Feng, I have a question about the compute_similarity function in network.py: why do you apply features_a.transpose(2, 3)? In my understanding, the shapes of features_a and features_b are both N*C*H*W, yet your code seems to do transpose(2, 3) on features_a while applying no transform to features_b.
(screenshot of the compute_similarity code)
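
For context, here is a rough sketch of what I assume the intent is (hypothetical code, not the actual network.py): if both feature maps are flattened to (N, C, H*W), only one operand needs its channel and spatial dimensions swapped so that the matmul contracts over C and yields an (H*W) x (H*W) similarity matrix.

import torch

# Hypothetical sketch, not the code from network.py: dense pairwise similarity
# between the local descriptors of two (N, C, H, W) feature maps.
def compute_similarity_sketch(features_a, features_b):
    N, C, H, W = features_a.shape
    a = features_a.reshape(N, C, H * W)        # (N, C, HW)
    b = features_b.reshape(N, C, H * W)        # (N, C, HW)
    # Only one operand is transposed so the matmul contracts over the channel dim:
    # (N, HW, C) @ (N, C, HW) -> (N, HW, HW)
    return torch.matmul(a.transpose(1, 2), b)

fa = torch.randn(2, 256, 24, 24)
fb = torch.randn(2, 256, 24, 24)
print(compute_similarity_sketch(fa, fb).shape)  # torch.Size([2, 576, 576])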

CUDA error: out of memory

Hello,
I'm currently attempting to replicate the remarkable work you've shared, but I've hit a hurdle. Following the training pipeline outlined in the repository and the provided requirements, I tried to train DHE-VPR on an RTX 4090 with 24 GB of graphics memory (the same amount as the 3090). Unfortunately, after training for 1 epoch, the program crashed and reported the following issue:

Traceback (most recent call last):
File "/home/xx/Documents/codespace/DHE-VPR/train_dhe.py", line 151, in
REIloss = homography_project.reprojection_error_ofinliers(model, queries_fw, positives_fw, weights=random_weights)
File "/home/xx/Documents/codespace/DHE-VPR/homography_project.py", line 102, in reprojection_error_ofinliers
reproject_error[i] = match_batch_tensor(query, pred, theta, trainflag=True, img_size=(384,384))
File "/home/xx/Documents/codespace/DHE-VPR/homography_project.py", line 31, in match_batch_tensor
max1 = torch.argmax(M, dim=1) #(N,l)
RuntimeError: CUDA error: out of memory

I believe this issue is related to memory allocation on the CUDA device. Your expertise and guidance in resolving this problem would be immensely valuable to me.
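
For what it's worth, one generic way to cap peak GPU memory in this kind of dense matching step is to process the query descriptors in chunks instead of materializing the full similarity matrix at once. A rough, hypothetical sketch (not your match_batch_tensor; desc_q and desc_r are placeholder names for the flattened local descriptors):

import torch

# Hypothetical sketch: row-wise argmax of a large similarity matrix computed
# in chunks, so the full (L_q x L_r) buffer never has to live on the GPU at once.
def chunked_argmax_match(desc_q, desc_r, chunk=1024):
    # desc_q: (L_q, C) query descriptors, desc_r: (L_r, C) reference descriptors
    out = []
    for start in range(0, desc_q.shape[0], chunk):
        sim = desc_q[start:start + chunk] @ desc_r.t()   # (chunk, L_r)
        out.append(sim.argmax(dim=1))
        del sim                                          # release the chunk before the next one
    return torch.cat(out)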

Thank you for your time

HomographyRegression output visualization

Have you tried to visualize the four points on the query and reference images generated by HomographyRegression? I tried to plot these four points on query and reference images resized to 384x384, but it seems many of them fall outside the image boundary (less than 0 or beyond 384). Is this normal? @Lu-Feng
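
For reference, this is roughly how such an overlay could be done (a hypothetical helper; it assumes points is a (4, 2) array or tensor of (x, y) pixel coordinates in the 384x384 frame, and the widened axis limits just keep out-of-range points visible):

import matplotlib.pyplot as plt
from PIL import Image

# Hypothetical helper: overlay four predicted corner points on a resized image.
# Coordinates outside [0, 384) will land outside the drawn image area.
def show_points(image_path, points, img_size=384):
    img = Image.open(image_path).convert("RGB").resize((img_size, img_size))
    pts = points.detach().cpu().numpy() if hasattr(points, "detach") else points
    plt.imshow(img)
    plt.scatter(pts[:, 0], pts[:, 1], c="red", marker="x")
    plt.xlim(-64, img_size + 64)     # widen the axes so stray points stay visible
    plt.ylim(img_size + 64, -64)     # keep the y axis flipped, as in image coordinates
    plt.show()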
