ncalm-uh / codem
License: Apache License 2.0
Currently `--version` is not an accepted argument for `codem` or `vcd`.
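A minimal sketch of wiring up such a flag with argparse's built-in `version` action (the `prog` name and version string here are placeholders, not the real package metadata):

```python
import argparse

def build_parser(version: str = "0.0.0") -> argparse.ArgumentParser:
    # argparse's "version" action prints the string and exits with code 0
    parser = argparse.ArgumentParser(prog="codem")
    parser.add_argument("--version", action="version", version=f"codem {version}")
    return parser
```

The same three lines would apply to the `vcd` entry point.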
In CODEM, the output defaults to the format of the input files (.las vs. .laz); in VCD, it defaults to LAS 1.2 for ESRI/ArcGIS compatibility. Both CODEM and VCD should instead default to writing COPC files, with a command line option to write LAS 1.2 output (so we can maintain ArcGIS compatibility).
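A sketch of how the writer selection could look, assuming a hypothetical `--las12` flag and plain PDAL pipeline-style stage dictionaries (not CODEM's actual internals):

```python
import argparse

def output_writer(args: argparse.Namespace) -> dict:
    # COPC by default; LAS 1.2 point format 3 when --las12 is requested
    if args.las12:
        return {"type": "writers.las", "minor_version": 2, "dataformat_id": 3}
    return {"type": "writers.copc"}

parser = argparse.ArgumentParser()
parser.add_argument("--las12", action="store_true",
                    help="write LAS 1.2 instead of COPC for ArcGIS compatibility")
```

The stage dict would then be appended to whatever pipeline each tool already builds.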
When loading the ArcGIS Pro toolbox, the PDAL driver path often results in an error when trying to `import pdal`. Thus, the line `os.environ["PDAL_DRIVER_PATH"] = "C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\neggs-pyproj901\\bin;C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\neggs-pyproj901\\Lib\\site-packages\\bin"` has to be incorporated into the toolbox to set the environment correctly for certain edge cases.
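A sketch of how the toolbox could set this defensively before the import (the paths are the environment-specific examples from the issue; `setdefault` avoids clobbering a value the user already set):

```python
import os

# Environment-specific paths from the issue; adjust for the actual
# ArcGIS Pro conda environment in use.
env = r"C:\Program Files\ArcGIS\Pro\bin\Python\envs\neggs-pyproj901"
os.environ.setdefault(
    "PDAL_DRIVER_PATH", env + r"\bin;" + env + r"\Lib\site-packages\bin"
)
# import pdal  # safe to attempt once the driver path is set
```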
If the foundation and complement have equal coordinate systems, and the two datasets differ substantially in size, then allow the user to pass a command line argument to limit the search area for keypoint matches.
graph TD;
B{Fnd CRS Defined?} -->|No| C(Raise Error about lack of specified CRS)
B -->|Yes| G{Floating CRS Defined?}
G -->|No| C
G -->|Yes| H{Same As Foundation?}
H -->|No| L(Raise Error for mismatched CRSs)
H -->|Yes| J(Compute Bounding Box <br> as Floating Dataset +/- 50% <br> and clip Foundation to bounding box)
J --> K(Run Codem!)
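The bounding-box step in the flowchart can be sketched as a small helper (the 50% padding figure comes from the chart; the function name is illustrative):

```python
def expanded_bounds(minx: float, miny: float, maxx: float, maxy: float,
                    pad: float = 0.5) -> tuple:
    # Grow the floating dataset's bounding box by +/-50% in each axis;
    # the result would be used to clip the foundation dataset.
    dx = (maxx - minx) * pad
    dy = (maxy - miny) * pad
    return (minx - dx, miny - dy, maxx + dx, maxy + dy)
```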
To calculate the `PointCloud.native_resolution` we take the `units_factor` and multiply it by the pdal pipeline `metadata["filters.hexbin"]["avg_pt_spacing"]`. The hexbin filter as it stands is `pdal.Filter.hexbin(edge_size=25, threshold=1)`. The `edge_size` parameter for this filter should be a parameter that users can configure if they want, as it is unlikely to be suitable for all lidar data collections.
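A sketch of exposing `edge_size` as a configurable option, using a plain stage-options dict rather than CODEM's actual internals:

```python
def hexbin_options(edge_size: float = 25, threshold: int = 1) -> dict:
    # Options for PDAL's filters.hexbin stage; edge_size is surfaced as a
    # user-facing parameter instead of being hard-coded to 25.
    return {"type": "filters.hexbin", "edge_size": edge_size, "threshold": threshold}
```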
Simply holdovers from early experimentation that can be removed: `tree2d` and everything derived from it (CODEM/src/vcd/preprocessing/preprocess.py, line 159 in 5ed1cb2).
Currently, CODEM is ignoring `RasterPixelIsPoint` and `RasterPixelIsArea`. For example, registering two DSMs that are both `RasterPixelIsPoint` results in registered output that is `RasterPixelIsArea`.
The registered output should take on the pixel type of the foundation DSM.
If there is a mismatch between pixel types, any necessary shifts need to be handled internally by CODEM.
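The internal shift is half a pixel: a `RasterPixelIsPoint` raster anchors coordinates at pixel centers, while `RasterPixelIsArea` anchors at the top-left corner. A sketch assuming a signed-resolution geotransform (names are illustrative):

```python
def point_to_area_origin(x0: float, y0: float, xres: float, yres: float) -> tuple:
    # For PixelIsPoint, (x0, y0) is the center of the top-left pixel; the
    # equivalent PixelIsArea origin is half a (signed) pixel away in each
    # axis. With the usual negative y resolution, the corner lands up-left.
    return (x0 - xres / 2.0, y0 - yres / 2.0)
```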
In VCD > preprocessing.py, around line 103, the SRS is taken from `pipeline.quickinfo`, with a note that if there is more than one SRS it won't work.
def _get_utm(pipeline: pdal.Pipeline) -> pdal.Pipeline:
    data = pipeline.quickinfo
    for key, info in data.items():
        if key.split(".")[0] == "readers":  # we are a reader
            bounds = info["bounds"]
            srs = CRS.from_user_input(info["srs"]["compoundwkt"])
            # we just take the first reader; if there's more than one SRS we are screwed
            break
Encountered this error with Open Data LiDAR from DC, which has both a PCS and a VCS (see image). Perhaps we can add a workaround? If not, I will enhance the toolbox to warn people accordingly.
This should instead be used to reject unclustered points only, and a new argument should be created for class filtering.
There is no argparse argument that can toggle `ICP_SAVE_RESIDUALS` (https://github.com/NCALM-UH/CODEM/blob/main/src/codem/main.py#L323). We should add a `store_true` action `--icp-save-residuals` argparse argument.
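A sketch of the proposed flag (the help text is illustrative; the option name comes from the issue):

```python
import argparse

parser = argparse.ArgumentParser()
# store_true defaults to False, so residuals are only written on request
parser.add_argument("--icp-save-residuals", action="store_true",
                    help="write ICP residuals alongside the registration output")
args = parser.parse_args(["--icp-save-residuals"])
```

`args.icp_save_residuals` would then gate the existing `ICP_SAVE_RESIDUALS` behavior.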
https://viewer.copc.io/?state=d3d6a54f96222c326657186ecb2879b69fa338380e2cc0306a5077759e115acc is an example that does 🔴 and 🔵 colorization of the change point clouds. We could do the same for an after product.
Currently, when a DEM is provided with X and Y cell sizes that are not equal (even if the difference is as minute as X=11.67888888888889 and Y=11.67888888888888), the tool fails in preprocess.py, line 398, in _calculate_resolution:
assert np.trace(A) == 0, "X scale and Y scale must be identical."
AssertionError: X scale and Y scale must be identical
I have two proposals that I would like feedback on regarding how to proceed with this issue.
The much simpler proposal (but more NEGGS- and ArcGIS-specific) would be to add a warning in the `CODEM\arcgispro\src\coregister\esri\toolboxes\3D Data Co-Registration.pyt` file, under the `updateMessages` function, checking whether `arcpy.Describe.meanCellHeight` and `arcpy.Describe.meanCellWidth` are equal and producing a warning if they are not.
The other proposal would be to implement some sort of gdal
function that would detect and modify the input data so that the X and Y cell sizes were equal prior to processing. Not sure what sort of implications this would have downstream in the processing, however. This would most likely depend on how different the X and Y cell sizes are (do people utilize rasters with non-square cells?).
Either way, curious about this issue as I have stumbled upon it both with DEMs produced from ODM as well as DEMs that were projected in ArcGIS (CODEM doesn't like unprojected data either).
Let me know your thoughts, happy to discuss further, thank you for reading this long message!
-Will
#54 added support for colorizing point clouds in the VCD output; that feature should be added to the rasterization output as well.
Some references to keep in mind while implementing this feature:
https://www.github.com/qgis/QGIS/issues/22427
https://www.github.com/noaa-ocs-hydrography/qgis-raster-attribute-table-plugin
At present, codem will attempt to register an AOI to a foundation dataset and will try to identify key-point matches over the entirety of both inputs. This can lead to poor results when the AOI covers a significant area with many features. To address this issue, we propose adding a `--bounds` argument whose contents could be relayed to the respective pdal reader.
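A sketch of relaying a bounds string to a reader stage (hypothetical helper; note that only some PDAL readers, e.g. readers.copc and readers.ept, accept a `bounds` option, so other readers would need a filters.crop stage instead):

```python
def reader_with_bounds(path: str, bounds: str = None) -> dict:
    # Build a reader stage dict; if the user supplied --bounds, pass it
    # through so the reader only loads the area of interest.
    stage = {"type": "readers.copc", "filename": path}
    if bounds:
        stage["bounds"] = bounds
    return stage
```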
Tangentially related to my last issue post: wondering what the thoughts would be on using `arcpy` in the python toolbox interface (`CODEM\arcgispro\src\coregister\esri\toolboxes\3D Data Co-Registration.pyt`) to automatically generate the best Minimum Resolution (m) parameter, balancing result quality and processing time (only if the user wishes, of course). Obtaining the resolution of input DEMs is totally possible, and it might be possible for LAS files as well (point count and possibly area could yield an average point density?), but a relationship between the input data resolution and the ideal CODEM resolution parameter would have to be known. Is this something that has been tested or studied yet? If not, I am willing to play around, as I believe this would be a beneficial feature. Would love to discuss further.
-Will
Currently, when the registration objects are initialized, they check whether the GeoData objects have had their data "prepped". If not, a `RuntimeError` is raised, with a message about ensuring the `prep` method on the GeoData object was run.
As other code in the stack can also raise RuntimeErrors, we should likely create our own exception that users can specifically try and catch.
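A sketch of what such an exception could look like (the class and helper names are hypothetical; subclassing `RuntimeError` keeps any existing `except RuntimeError` handlers working):

```python
class GeoDataNotPreparedError(RuntimeError):
    """Raised when a GeoData object's prep() method has not been run."""

def require_prepped(geodata) -> None:
    # Registration init would call this instead of raising a bare RuntimeError.
    if not getattr(geodata, "prepped", False):
        raise GeoDataNotPreparedError(
            "GeoData not prepared; call its prep() method before registering"
        )
```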
Current registration.txt output ends with
RMSEs:
X = +/-0.488,
Y = +/-0.805,
Z = +/-0.861,
3D = +/-1.276
If we assume 3m spherical error of the photogrammetrically derived foundation DSMs, then the total error is computed as
Total Error = SQRT(3^2 + 3D_RMSE^2)
In the case above, that equates to 3.26 m.
We had discussed potentially breaking that into horizontal and vertical components, but that can be added at a later time. (We'd either need a provider to specify their estimated horizontal and vertical error, or we could break the spherical error down based on some Gaussian assumptions.)
Maybe this is best shown as a new block, summarizing the assumptions and equations used, e.g.,
PROPAGATED ERROR
----------------
Assumed global 3D error = +/-3
3D_RMSE = +/-1.276
Total Error is computed as SQRT( global_3D_error^2 + 3D_RMSE^2 )
Total Error = +/-3.260
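The proposed block's arithmetic is just a root-sum-of-squares; a one-line helper for it (names illustrative):

```python
import math

def total_error(global_3d_error: float, rmse_3d: float) -> float:
    # Propagated error: SQRT( global_3D_error^2 + 3D_RMSE^2 )
    return math.sqrt(global_3d_error ** 2 + rmse_3d ** 2)
```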
We should put CODEM version information in the GeoTIFF tags of output from CODEM so that someone has information about how the data came into being.
We might also include tags about the fixed and floating data that were used to create the output, along with gross error information.
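A sketch of what those tags could look like. `TIFFTAG_SOFTWARE` is a tag GDAL recognizes for TIFF output; the `CODEM_*` names are hypothetical and the dict would be handed to the raster writer (e.g. GDAL's `SetMetadata`):

```python
def provenance_tags(version: str, foundation: str, floating: str) -> dict:
    # Metadata recording how the registered output came into being.
    return {
        "TIFFTAG_SOFTWARE": f"CODEM {version}",
        "CODEM_FOUNDATION": foundation,  # hypothetical tag name
        "CODEM_FLOATING": floating,      # hypothetical tag name
    }
```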
CODEM is dropping any vertical CRS information that is present in the input files. The vertical CRS of the registered product should be inherited from the foundation DSM.
We might consider switching to using Rich instead of enlighten for progress reporting.
Some bullets to make the case:
Might be nitpicking, but it could be useful to incorporate the names of the before/after data into the output ground and non-ground products (meshes, rasters, point clouds) so that multiple runs of the tool with different inputs can be differentiated easily in the Arc interface. Simply food for thought, since it is not too hard to change the name of the layer itself in Arc.