
codem's People

Contributors

chambbj, clglennie, hobu, j9ac9k, pjhartzell, willforker


codem's Issues

Have PointCloud output default to COPC

In codem, the output defaults to the format of the input files (.las vs. .laz); in the VCD, it defaults to LAS 1.2 for ESRI/ArcGIS compatibility.

Both CODEM and VCD should default to writing COPC files for output, with a command line option to write LAS 1.2 instead (so we can maintain ArcGIS compatibility). A sketch of the writer selection is below.
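
A minimal sketch of what that could look like with PDAL's Python bindings; the write_las12 flag and file names are hypothetical:

    import pdal

    write_las12 = False  # hypothetical flag, e.g. set by a --las12 CLI option
    if write_las12:
        # LAS 1.2 output for ArcGIS compatibility
        writer = pdal.Writer.las("registered.las", minor_version=2)
    else:
        # COPC output as the new default
        writer = pdal.Writer.copc("registered.copc.laz")
    pipeline = pdal.Reader("input.laz") | writer
    pipeline.execute()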

Toolbox PDAL Driver Path

When loading the ArcGIS Pro toolbox, the PDAL driver path often causes an error on import pdal. Thus, the line

    os.environ["PDAL_DRIVER_PATH"] = "C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\neggs-pyproj901\\bin;C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\neggs-pyproj901\\Lib\\site-packages\\bin"

has to be incorporated into the toolbox to set the environment correctly for certain edge cases.

Limit the search area for registration if CRSs allow for it

If the foundation and complement have the same coordinate system, and the two differ substantially in size, then allow the user to pass a command line argument to limit the search area for keypoint matches.

    graph TD;
        B{Fnd CRS Defined?} -->|No| C(Raise Error about lack of specified CRS)
        B -->|Yes| G{Floating CRS Defined?}
        G -->|No| C
        G -->|Yes| H{Same As Foundation?}
        H -->|No| L(Raise Error for mismatched CRSs)
        H-->|Yes| J(Compute Bounding Box <br> as Floating Dataset +/- 50% <br> and clip Foundation to bounding box)
        J--> K(Run Codem!)
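
A minimal sketch of the "+/- 50%" step from the flowchart; plain geometry, no PDAL calls assumed:

    def expand_bounds(minx, miny, maxx, maxy, pct=0.5):
        """Grow the floating dataset's bounding box by pct on each side."""
        dx = (maxx - minx) * pct
        dy = (maxy - miny) * pct
        return (minx - dx, miny - dy, maxx + dx, maxy + dy)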

edge_size should be a parameter when calculating resolution of PointClouds

Issue created from this comment: #74 (comment)

To calculate PointCloud.native_resolution, we take the units_factor and multiply it by the PDAL pipeline metadata["filters.hexbin"]["avg_pt_spacing"]. The hexbin filter as it stands is pdal.Filter.hexbin(edge_size=25, threshold=1). The edge_size parameter for this filter should be user-configurable, as a hard-coded value is unlikely to be suitable for all lidar data collections. A sketch of surfacing the parameter is below.
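
A minimal sketch of how edge_size might be surfaced, following the metadata access described above (names are hypothetical):

    import pdal

    def native_resolution(filename: str, units_factor: float,
                          edge_size: float = 25.0) -> float:
        # expose edge_size instead of hard-coding it
        pipeline = pdal.Reader(filename) | pdal.Filter.hexbin(
            edge_size=edge_size, threshold=1)
        pipeline.execute()
        spacing = pipeline.metadata["metadata"]["filters.hexbin"]["avg_pt_spacing"]
        return units_factor * spacing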

Respect different raster pixel types (point vs. area)

Currently, CODEM is ignoring RasterPixelIsPoint and RasterPixelIsArea. For example, registering two DSMs that are both RasterPixelIsPoint results in registered output that is RasterPixelIsArea.

The registered output should take on the pixel type of the foundation DSM.

If there is a mismatch between pixel types, any necessary shifts need to be handled internally by CODEM.
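
For reference, GDAL exposes the pixel interpretation through the AREA_OR_POINT metadata item; a minimal sketch of carrying it over (paths are placeholders):

    from osgeo import gdal

    fnd = gdal.Open("foundation_dsm.tif")
    out = gdal.Open("registered_output.tif", gdal.GA_Update)
    # inherit the foundation's pixel type (RasterPixelIsPoint -> "Point")
    pixel_type = fnd.GetMetadataItem("AREA_OR_POINT") or "Area"
    out.SetMetadataItem("AREA_OR_POINT", pixel_type)
    out.FlushCache()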

SRS error/issue with VCD

In VCD > preprocessing.py, around line 103, the SRS is taken from pipeline.quickinfo, but the code notes that if there is more than one SRS, it won't work.

    import pdal
    from pyproj import CRS

    def _get_utm(pipeline: pdal.Pipeline) -> pdal.Pipeline:
        data = pipeline.quickinfo
        is_reader = [[k.split(".")[0] == "readers", k] for k in data.keys()]
        for k in is_reader:
            if k[0]:  # we are a reader
                reader_info = data[k[1]]
                bounds = reader_info["bounds"]
                srs = CRS.from_user_input(reader_info["srs"]["compoundwkt"])
                # we just take the first one. If there's more we are screwed
                break

Encountered this error with Open Data LiDAR from DC, which carries both a projected and a vertical CRS. Possibly we can make a workaround, e.g. the pyproj sketch below? If not, I will enhance the toolbox to warn people accordingly.
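
One possible direction, a hedged sketch: pyproj can split a compound CRS into its horizontal and vertical parts, which might let _get_utm cope with data like the above (the EPSG codes are purely illustrative):

    from pyproj import CRS

    crs = CRS.from_user_input("EPSG:26918+5703")  # e.g. UTM 18N + NAVD88 height
    if crs.sub_crs_list:                          # non-empty only for compound CRSs
        horizontal, vertical = crs.sub_crs_list
    else:
        horizontal = crs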

Unequal X and Y Cell Sizes

Currently, when a DEM is provided with X and Y cell sizes that are not equal (even if the difference is as minute as X=11.67888888888889 vs. Y=11.67888888888888), the tool fails in preprocess.py, line 398, in _calculate_resolution:

    assert np.trace(A) == 0, "X scale and Y scale must be identical."
    AssertionError: X scale and Y scale must be identical

I have two proposals that I would like feedback on regarding how to proceed with this issue.

The much simpler proposal (but more NEGGS- and ArcGIS-specific) would be to add a warning in the CODEM\arcgispro\src\coregister\esri\toolboxes\3D Data Co-Registration.pyt file, under the updateMessages function, checking whether the arcpy.Describe meanCellHeight and meanCellWidth are equal and producing a warning if they are not.

The other proposal would be to implement some sort of GDAL function that detects and resamples the input data so that the X and Y cell sizes are equal prior to processing; a rough sketch is below. I am not sure what implications this would have downstream in processing, however. It would most likely depend on how different the X and Y cell sizes are (do people utilize rasters with non-square cells?).
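
A hedged sketch of that second proposal, using GDAL's Warp to resample to square cells (file names are placeholders, and whether bilinear resampling is appropriate is an open question):

    from osgeo import gdal

    src = gdal.Open("input_dem.tif")
    gt = src.GetGeoTransform()
    res = min(abs(gt[1]), abs(gt[5]))  # pick the finer of the two cell sizes
    gdal.Warp("square_cells.tif", src, xRes=res, yRes=res, resampleAlg="bilinear")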

Either way, I am curious about this issue, as I have stumbled upon it both with DEMs produced from ODM and with DEMs that were projected in ArcGIS (CODEM doesn't like unprojected data either).

Let me know your thoughts, happy to discuss further, thank you for reading this long message!

-Will

Add support for --bounds argument to clip AOI

At present, codem will attempt to register an AOI to a foundation dataset and will attempt to identify keypoint matches over the entirety of both inputs. This can lead to poor results when the inputs cover a significant area with many features. To address this, we propose adding a --bounds argument whose contents could be relayed to the respective PDAL reader.
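
A hypothetical sketch of relaying such a --bounds value; filters.crop is used here because not every PDAL reader accepts a bounds option directly:

    import pdal

    bounds = "([635619.85, 638982.55], [848899.70, 853535.43])"  # illustrative
    pipeline = pdal.Reader("foundation.laz") | pdal.Filter.crop(bounds=bounds)
    pipeline.execute()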

Option for Enhanced Resolution Parameter Selection

Tangentially related to my last issue post: I am wondering what the thoughts would be on using arcpy in the Python toolbox interface (CODEM\arcgispro\src\coregister\esri\toolboxes\3D Data Co-Registration.pyt) to automatically generate the best Minimum Resolution (m) parameter, optimizing both results and processing time (only if the user wishes, of course). Obtaining the resolution of input DEMs is entirely possible, and it might be possible for LAS files as well (point count and area could yield an average point density, as sketched below), but a relationship between the input data resolution and the ideal CODEM resolution parameter would have to be known. Is this something that has been tested or studied yet? If not, I am willing to experiment, as I believe this would be a beneficial feature. Would love to discuss further.
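
A rough sketch of the density idea using pdal.Pipeline.quickinfo (the spacing estimate assumes a roughly uniform point distribution):

    import pdal

    info = pdal.Reader("input.las").pipeline().quickinfo
    reader_key = next(k for k in info if k.startswith("readers."))
    num_points = info[reader_key]["num_points"]
    b = info[reader_key]["bounds"]
    area = (b["maxx"] - b["minx"]) * (b["maxy"] - b["miny"])
    density = num_points / area    # points per square unit
    avg_spacing = density ** -0.5  # approximate average point spacing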

-Will

Replace RuntimeError Exceptions with Subclassed Exceptions

Currently, when the registration objects are initialized, they check whether the GeoData objects have had their data "prepped". If not, they raise a RuntimeError with a message about ensuring the prep method on the GeoData object was run.

As other code in the stack can also raise RuntimeErrors, we should create our own exception subclass that users can specifically try/except; a sketch is below.
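
A minimal sketch; the class and helper names are hypothetical:

    class GeoDataNotPreparedError(RuntimeError):
        """Raised when a GeoData object's prep method has not been run."""

    def check_prepped(geo) -> None:
        # mirrors the current initialization check, but raises the subclass
        if not getattr(geo, "prepped", False):
            raise GeoDataNotPreparedError(
                "GeoData has not been prepped; run its prep method first."
            )

Subclassing RuntimeError keeps any existing except RuntimeError handlers working while letting callers catch the new exception specifically.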

Add propagated error to the registration.txt output

Current registration.txt output ends with

RMSEs:
X = +/-0.488,
Y = +/-0.805,
Z = +/-0.861,
3D = +/-1.276

If we assume 3m spherical error of the photogrammetrically derived foundation DSMs, then the total error is computed as

Total Error = SQRT(3^2 + 3D_RMSE^2)

In the case above, that equates to 3.26 m.

We had discussed potentially breaking that into horizontal and vertical components, but that can be added at a later time. (We'd either need a provider to specify their estimated horizontal and vertical error, or we could break the spherical error down based on some Gaussian assumptions.)

Maybe this is best shown as a new block, summarizing the assumptions and equations used, e.g.,

PROPAGATED ERROR
----------------
Assumed global 3D error = +/-3
3D_RMSE = +/-1.276
Total Error is computed as SQRT( global_3D_error^2 + 3D_RMSE^2 )
Total Error = +/-3.260
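
For what it's worth, the arithmetic is a one-liner; a minimal sketch reproducing the numbers above:

    import math

    global_3d_error = 3.0  # assumed spherical error of the foundation DSM
    rmse_3d = 1.276        # 3D RMSE from registration.txt
    total_error = math.hypot(global_3d_error, rmse_3d)  # sqrt(3.0**2 + 1.276**2)
    print(f"Total Error = +/-{total_error:.3f}")        # -> +/-3.260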

Annotate raster outputs with GeoTIFF tags

We should write CODEM and its version information into GeoTIFF tags on CODEM's raster output so that someone has information about how the data came into being.

We might also include tags about the fixed and floating data that were used to create the output along with gross error information.
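
A minimal sketch using GDAL's standard TIFF tag metadata items; the path and version lookup are placeholders:

    from osgeo import gdal

    codem_version = "0.0.0"  # e.g. importlib.metadata.version("codem")
    ds = gdal.Open("registered_dsm.tif", gdal.GA_Update)
    ds.SetMetadataItem("TIFFTAG_SOFTWARE", f"CODEM {codem_version}")
    ds.SetMetadataItem("TIFFTAG_IMAGEDESCRIPTION",
                       "foundation=fnd_dsm.tif; floating=flt.laz")
    ds.FlushCache()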

Retain vertical CRS information

CODEM is dropping any vertical CRS information that is present in the input files. The vertical CRS of the registered product should be inherited from the foundation DSM.
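
A hedged sketch of re-attaching the foundation's vertical component with pyproj (the EPSG codes are purely illustrative):

    from pyproj import CRS
    from pyproj.crs import CompoundCRS

    fnd_crs = CRS.from_user_input("EPSG:26918+5703")  # horizontal + vertical
    out_crs = CRS.from_user_input("EPSG:26918")       # horizontal only
    if fnd_crs.sub_crs_list:
        _, vertical = fnd_crs.sub_crs_list
        out_crs = CompoundCRS(name=f"{out_crs.name} + {vertical.name}",
                              components=[out_crs, vertical])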

Consider swapping out enlighten for rich

We might consider switching from enlighten to Rich for progress reporting.

Some bullets to make the case:

  • Rich has a larger developer community and is more actively developed
  • Rich is known to work well on all three major platform terminals
  • Integration with Textual if we wanted to make a really fancy interface
  • Emoji support?
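
For a sense of the API, a minimal sketch of Rich's progress helper as a drop-in for an enlighten counter:

    import time
    from rich.progress import track

    for step in track(range(100), description="Registering..."):
        time.sleep(0.01)  # stand-in for per-iteration registration work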

Naming conventions for VCD outputs

This might be nitpicking, but it could be useful to incorporate the names of the before/after data into the output ground and non-ground products (meshes, rasters, point clouds) so that multiple runs of the tool with different inputs can be differentiated easily in the Arc interface. Simply food for thought, since it is not too hard to rename the layer itself in Arc.
