
The error samples are due to issues with the ground truth annotations rather than errors in the model predictions · about mixvpr · OPEN


Comments (6)

amaralibey commented on September 24, 2024

Hello @libenchong,

Thank you again for your valuable feedback. Yes, I've noticed some errors in that regard, but I never did a thorough investigation like you did. That would make a great case study, I think.

Best regards,


libenchong commented on September 24, 2024

Hello @amaralibey. In these error cases, the images the model predicts to be most similar to the query are either not in the ground truth, or the geographical distance between the predicted images and the query is slightly larger than the ground-truth threshold. I'm not sure how to handle these cases; do you have any good suggestions?

Here is an error sample. I think the images predicted by the model are more accurate than the ground truth.

[image: error sample comparing the model's top predictions with the ground-truth images]
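To make this failure mode concrete, here is a minimal sketch (not from the MixVPR codebase; the coordinates and the 25 m threshold are illustrative assumptions) of checking whether a top-1 prediction that is counted as an error actually lies only slightly beyond the ground-truth radius:

```python
# Rough sketch: is a "wrong" top-1 prediction really just outside the GT radius?
# All coordinates below are hypothetical; 25 m is the threshold commonly used
# by VPR benchmarks, as discussed in this thread.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GT_RADIUS_M = 25.0  # typical positive threshold

query_gps = (48.85837, 2.29448)  # hypothetical query coordinates
top1_gps = (48.85861, 2.29455)   # hypothetical top-1 prediction coordinates

dist = haversine_m(*query_gps, *top1_gps)
if dist <= GT_RADIUS_M:
    print(f"top-1 is a true positive ({dist:.1f} m from the query)")
else:
    print(f"top-1 counted as an error, yet only {dist:.1f} m from the query")
```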


amaralibey commented on September 24, 2024

Hello @libenchong,

Great observation!

In most benchmarks, the ground truth (GT) typically includes images located within a 25-meter perimeter of the query, based on GPS coordinates. However, since Mapillary images are primarily crowdsourced from phones and dashboard cameras, mapillary_sls often exhibits a considerable amount of noise, as demonstrated in the example you provided.

Although the model's prediction appears to be close to the query (judging by the door on the right), the noisy coordinates associated with the image indicate a distance greater than 25 meters, resulting in an inaccurate label.

I would propose manual verification when creating a benchmark, specifically for positive matches in the 25-40 meter range from the query, particularly when we are relying solely on GPS coordinates that are prone to a lot of noise. This way, we can ensure a more precise evaluation of the model's performance.

Additionally, it's worth noting that these errors in the labels apply to all evaluated techniques and might affect their performance in a similar manner. This helps maintain fairness in the evaluation process.

Again, thank you for your valuable observations.
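The manual-verification proposal above could be sketched as follows, reusing the haversine_m helper from the earlier sketch (the 25-40 m band and all names here are assumptions for illustration, not part of mapillary_sls or MixVPR): flag every retrieved image whose GPS distance to its query falls in the ambiguous band so it can be checked by hand.

```python
# Sketch of the verification idea: collect retrievals whose distance to the
# query falls in the ambiguous 25-40 m band for manual review.
REVIEW_BAND_M = (25.0, 40.0)  # assumed review band, per the discussion above

def flag_for_review(query_gps, retrieved_gps_list, distance_fn=haversine_m):
    """Return (index, distance) pairs of retrieved images whose distance
    to the query lies in the ambiguous review band."""
    flagged = []
    for i, (lat, lon) in enumerate(retrieved_gps_list):
        d = distance_fn(query_gps[0], query_gps[1], lat, lon)
        if REVIEW_BAND_M[0] < d <= REVIEW_BAND_M[1]:
            flagged.append((i, round(d, 1)))
    return flagged
```

Restricting the check to the ambiguous band keeps the amount of manual work small while still catching the noisy labels discussed in this thread.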

