
ODM Benchmarks

This repository is part of the ODM project and is dedicated to benchmarking the sample datasets found in the ODMData repository, plus a few other third-party sources, to better understand the behavioural characteristics of ODM.

OpenDroneMap benchmarking data is summarized in the following pages:

Datasets for Benchmarking

Preferred datasets for benchmarking at this time are:

  • Brighton Beach (18 photos)
  • Toledo (87 photos)
  • Wietrznia (225 photos)
  • Shitan (493 photos)

These and many other sample datasets are indexed at the ODMData repository.

How to Contribute Benchmarks

This section provides instructions for contributing to the ODM Benchmarks project.

  • Review Previous Benchmarks - Look at the existing benchmarks in the benchmarks.csv file, located in the data directory, to see, for example, which datasets have been benchmarked less often than others.

  • Select a Dataset - Choose any of the datasets listed above. Download links are provided in the same list.

  • Run the Dataset - Create a new task in ODM and process the images.

  • Verify the Results - Ensure that the task completed successfully by viewing the generated maps and models.

  • Submit Your Results - Open the benchmarks.csv file, note the information required by each column header, and edit the CSV file accordingly. Submissions are accepted via a pull request, or you can post your results on the ODM community forum. The table below explains the structure of the CSV, and a sketch of appending a row follows it.

Attribute | Explanation | Example
--- | --- | ---
ID | Benchmark number | 1
DATASET | Dataset name | Toledo
PROCESSING_TIME | Time taken to process the dataset | 1h 9m
PROCESSING_SUCCESS | Whether processing completed successfully | Y
ERROR_TYPE | Error encountered, if any | -
RAM_SIZE | Amount of RAM allocated | 16 GB
RAM_CLOCK_SPEED | RAM frequency of the machine | 2133 MT/s
CPU_TYPE | Make and model of the CPU | Intel i5
CPU_CLOCK_SPEED | Clock speed of the CPU | 2.3 GHz
CPU_NUM_CORES | Number of CPU cores | 4
STORAGE_TYPE | Storage type of the system (HDD/SSD/NVMe) | SSD
OS | Operating system of the machine | Ubuntu 18.04
VM_TYPE | Virtual machine or container ODM is running in | Docker
ODM_VERSION | Version of ODM used for the benchmark | 1.3.1
ODM_CLUSTER | Whether ClusterODM was used | N
CONFIG_NAME | Name of the configuration used to process the dataset | Default
CONFIG_RESIZE | Image size, if images were resized | 2048 px
CONFIG_OTHER | Other configuration options worth mentioning | -
TEST_DATE | Date of the test | 2020-03-07
TEST_BY | Name of the person who ran the test | Corey Snipes
INCLUDE_IN_SUMMARY | Whether the data has been included in the summary | Y
NOTES | Additional notes | -
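As a rough illustration only (this script is not part of the repository), the Python sketch below appends one row to data/benchmarks.csv using the csv module. The column names mirror the table above and the values are taken from its Example column; verify the order against the actual header row of benchmarks.csv before opening a pull request.

```python
import csv
from pathlib import Path

# Column order as described in the table above; assumption only --
# check it against the real header row of data/benchmarks.csv.
FIELDS = [
    "ID", "DATASET", "PROCESSING_TIME", "PROCESSING_SUCCESS", "ERROR_TYPE",
    "RAM_SIZE", "RAM_CLOCK_SPEED", "CPU_TYPE", "CPU_CLOCK_SPEED",
    "CPU_NUM_CORES", "STORAGE_TYPE", "OS", "VM_TYPE", "ODM_VERSION",
    "ODM_CLUSTER", "CONFIG_NAME", "CONFIG_RESIZE", "CONFIG_OTHER",
    "TEST_DATE", "TEST_BY", "INCLUDE_IN_SUMMARY", "NOTES",
]

# Example values taken from the "Example" column of the table above.
row = {
    "ID": "1", "DATASET": "Toledo", "PROCESSING_TIME": "1h 9m",
    "PROCESSING_SUCCESS": "Y", "ERROR_TYPE": "-", "RAM_SIZE": "16 GB",
    "RAM_CLOCK_SPEED": "2133 MT/s", "CPU_TYPE": "Intel i5",
    "CPU_CLOCK_SPEED": "2.3 GHz", "CPU_NUM_CORES": "4", "STORAGE_TYPE": "SSD",
    "OS": "Ubuntu 18.04", "VM_TYPE": "Docker", "ODM_VERSION": "1.3.1",
    "ODM_CLUSTER": "N", "CONFIG_NAME": "Default", "CONFIG_RESIZE": "2048 px",
    "CONFIG_OTHER": "-", "TEST_DATE": "2020-03-07", "TEST_BY": "Corey Snipes",
    "INCLUDE_IN_SUMMARY": "Y", "NOTES": "-",
}

# Append the new row (assumes the file already exists with a header row).
path = Path("data/benchmarks.csv")
with path.open("a", newline="") as f:
    csv.DictWriter(f, fieldnames=FIELDS).writerow(row)
```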

Queries and Contributions

  • Questions - Any queries you may have can be posted on the ODM community forum.

  • Contributing New Benchmark Data - If you have benchmark data to share, see this page for details on contributing.

License

MIT License

odm-benchmarks's People

Contributors

coreysnipes · dependabot[bot] · franchyze923 · ichsan2895 · jeongyong-park · manand881 · pierotofy · rado0x54


odm-benchmarks's Issues

Add CPU generation or complete CPU details

A benchmark could vary a lot between a 1st-generation CPU and a 10th-generation CPU.
Example: an i7-930 (1st generation) desktop CPU is quite slow compared to an i5-10210U (10th generation) mobile CPU. The same goes for Ryzen CPUs.

Docker penalty

I don't know if this deserves its own issue, but I just did a side-by-side of running a big job with and without Docker, and I am curious if anyone else has done it.

For the record, with Docker it took 10h 25m; without, 10h 5m, yielding about a 3.5% penalty which (short of further testing) I attribute to running on Docker.

I am super curious if anyone else has done a similar comparison. If so, we might have enough data points to add to the docs.

Processing big datasets

I don't know if this needs to be part of the odm-benchmarks project, but I have a particularly large dataset to process, so I am doing some monitoring of individual stages so that I can do a better job predicting processing time over the life of the project. I thought I would document that here in case it is useful to see.

Change "ODM_CLUSTER" column to "SPLIT_PARAMS"

Currently the ODM_CLUSTER column is just a flag to indicate whether split was used. All current benchmark data is "N".

To support more detailed split/merge info in future benchmarks, I suggest changing the column header to "SPLIT_PARAMS", and changing all existing data to "-" (to indicate that split was not used).

Recommended notation for split params in future data: "sX soY"
Derived from: "--split X --split-overlap Y"

e.g., notation in benchmarks data would be "s200 so150" for --split 200 --split-overlap 150
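For illustration only, a small hypothetical helper (not part of the repository) that derives the proposed notation from the --split / --split-overlap values:

```python
def split_params_notation(split=None, split_overlap=None):
    """Format --split / --split-overlap values as the proposed "sX soY" string.

    Returns "-" when split was not used, as suggested above.
    """
    if split is None:
        return "-"
    return f"s{split} so{split_overlap}"


print(split_params_notation(200, 150))  # -> "s200 so150"
print(split_params_notation())          # -> "-"
```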

Split benchmarks.csv into separate files according to dataset

The benchmarks.csv file contains the benchmarking results for all datasets in a single file. When benchmarking, one typically compares how results vary for a single dataset at a time. I suggest splitting the benchmarking data into separate files according to the dataset being processed, so we can build a static website hosted on GitHub Pages in this repository that visualizes the benchmark data.
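A minimal sketch of the proposed split, assuming the current data/benchmarks.csv layout with a DATASET column; the data/by-dataset/ output directory and file naming are purely illustrative, nothing has been decided:

```python
import csv
from collections import defaultdict
from pathlib import Path

# Group rows of the combined CSV by the DATASET column.
rows_by_dataset = defaultdict(list)
with open("data/benchmarks.csv", newline="") as f:
    reader = csv.DictReader(f)
    fieldnames = reader.fieldnames
    for row in reader:
        rows_by_dataset[row["DATASET"]].append(row)

# Write one CSV per dataset (hypothetical output location).
out_dir = Path("data/by-dataset")
out_dir.mkdir(parents=True, exist_ok=True)
for dataset, rows in rows_by_dataset.items():
    out_path = out_dir / f"{dataset.lower().replace(' ', '-')}.csv"
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```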
