Comments (6)
This is a lazy 3D cube created by your code:
sbas_disps = sbas.open_grids(dates, 'disp', func=[sbas.los_displacement_mm])
sbas_disps_total_ll = sbas.intf_ra2ll(sbas_disps.cumsum('date'))
And you saved it into NetCDF, but it cannot be saved to GeoTIFF because it is a 3D dataset:
delayed = sbas_disps_total_ll.to_netcdf(filename_nc, engine=sbas.engine, compute=False)
tqdm_dask(dask.persist(delayed), desc='Saving Total Displacement as NetCDF')
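GeoTIFF is a 2D raster format (bands over one fixed grid), so the 3D (date, y, x) cube has to be split into per-date 2D slices before any raster export. A minimal sketch of that split with a synthetic xarray cube (illustrative data only, not PyGMTSAR output):

```python
import numpy as np
import pandas as pd
import xarray as xr

# synthetic stand-in for a (date, y, x) displacement cube
dates = pd.date_range('2023-01-01', periods=3)
cube = xr.DataArray(
    np.arange(3 * 2 * 2, dtype=float).reshape(3, 2, 2),
    dims=('date', 'y', 'x'),
    coords={'date': dates},
)

# selecting a single date drops the 'date' dimension, leaving a plain 2D raster
for date in cube.date:
    slice2d = cube.sel(date=date)
    assert slice2d.dims == ('y', 'x')  # 2D: suitable for a GeoTIFF writer
```

Each `slice2d` could then be handed to any 2D-capable writer (for example rioxarray's `rio.to_raster`, if that package is installed).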
It is more effective to rearrange the commands and use a loop that adds the current raster to a running total, instead of calling cumsum('date') (a computationally expensive operation), when you just need a set of 2D rasters:
import pandas as pd

disp = sbas.open_grids(dates, 'disp', func=[sbas.los_displacement_mm])
# convert the coordinate to valid dates
disp['date'] = pd.to_datetime(disp.date)
# save NetCDF files into a separate directory
dirname = f'{WORKDIR}.displacement'
!mkdir -p {dirname}
!rm -f {dirname}/displacement*.nc
# for faster processing, add each new grid to the materialized sum of the previous ones;
# do not call .cumsum() on the lazy stack to process all the grids at once
disp_total = None
for idx, disp_current in enumerate(disp):
    date = disp_current.date.dt.strftime('%Y%m%d').item()
    if disp_total is None:
        disp_total = disp_current.compute()
    else:
        disp_total += disp_current.compute()
    #print (idx, date)
    filename = f'{dirname}/displacement.{date}.nc'
    sbas.intf_ra2ll(disp_total).to_netcdf(filename, engine=sbas.engine)
    print('.', end='')
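The running-total loop above produces, at the last date, exactly the same grid as cumsum over the date axis, while keeping only one materialized grid in memory. A quick NumPy check of that equivalence on synthetic data:

```python
import numpy as np

# synthetic stack of per-date displacement grids: shape (date, y, x)
rng = np.random.default_rng(42)
stack = rng.normal(size=(5, 4, 4))

# expensive approach: materialize the whole cumulative stack at once
cumulative = np.cumsum(stack, axis=0)

# incremental approach: keep only a single running total in memory,
# saving each intermediate total as its own 2D raster
total = None
for grid in stack:
    total = grid.copy() if total is None else total + grid

# the final running total equals the last slice of the cumulative stack
assert np.allclose(total, cumulative[-1])
```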
We can optimize the code further by adding the "sbas.intf_ra2ll" call directly to the sbas.open_grids call (disp_ll = sbas.open_grids(dates, 'disp', func=[sbas.los_displacement_mm, sbas.intf_ra2ll])) and so on. Actually, this is not related to PyGMTSAR itself but to big data processing in Python, and I share such examples on my Patreon: https://www.patreon.com/pechnikov
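Assuming func=[...] applies its functions left to right on each grid (so the composed call behaves like sbas.intf_ra2ll(sbas.los_displacement_mm(grid))), such a pipeline is ordinary function composition; a generic sketch with hypothetical stand-in functions:

```python
from functools import reduce

def compose(funcs):
    # apply a list of functions left to right, like a func=[...] pipeline
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

# hypothetical stand-ins for sbas.los_displacement_mm and sbas.intf_ra2ll
to_mm = lambda value: value * 1000
geocode = lambda value: value + 0.5

pipeline = compose([to_mm, geocode])
assert pipeline(2.0) == 2000.5  # to_mm is applied first, then geocode
```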
By the way, what is the reason to produce GeoTIFF files when NetCDF files can be opened in almost any GIS software, such as GDAL, QGIS, etc.?
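For reference, NetCDF round-trips cleanly through generic tooling; a minimal sketch writing and re-reading a hypothetical 2D displacement grid with xarray's scipy backend (classic NetCDF3, which GDAL and QGIS read directly):

```python
import os
import tempfile

import numpy as np
import xarray as xr

# hypothetical per-date 2D displacement grid in geographic coordinates
disp = xr.DataArray(
    np.zeros((3, 4), dtype=np.float32),
    dims=('lat', 'lon'),
    coords={'lat': [10.0, 10.1, 10.2], 'lon': [30.0, 30.1, 30.2, 30.3]},
    name='displacement',
)

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, 'displacement.20230101.nc')
    # the scipy backend writes classic NetCDF3 files
    disp.to_netcdf(path, engine='scipy')
    # read it back; .load() pulls the data into memory before the file is removed
    back = xr.open_dataarray(path, engine='scipy').load()

assert back.shape == (3, 4)
assert float(back.lat[0]) == 10.0
```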
from pygmtsar.
Thank you for your detailed suggestion; it definitely produces a result where none existed before. However, I'm a little concerned that the different methods of outputting the NetCDF/GeoTIFF are somehow causing an offset. I'd carried out an export using the lazy grid methodology on a smaller batch of data and got the following result:
However, when using the new methodology, I get a completely different output:
I thought that it might be the coherence stack mask somehow causing less data to be available with a longer time series; however, on closer inspection it looks as if the data has "shifted" northwards. See the example below:
As you can see, there's an exceedingly similar-looking displacement output at slightly different locations. To answer your other question as to why I am attempting to convert the NetCDF to GeoTIFF within the code instead of after the fact: I believe there was a bug in the version of xarray outside of the Docker environment, and at the time I had problems using GDAL or other geospatial packages to work around it. The conversion of a 3D data stack to GeoTIFF seemed to work perfectly with lower-resolution imagery, and it still seems to work with a high-resolution 2D array. Is there any particular reason why I should not be doing it this way? Do you have any suggestions as to why I am experiencing a shift? Many thanks in advance!
You might try the example notebook https://colab.research.google.com/drive/1xhVedrIvNS66jGKgS30Dgqy0S31uJ8gm?usp=sharing
and export disp_total.nc. As it uses the OSM mask, the results should fit well with the OSM map in QGIS:
I had a look at the example notebooks you mentioned above and couldn't see any differences in the way you're geocoding, so I attempted to replicate my theory on the exact same small batch of data (3 full SLC images). Both output methodologies produced the exact same data grid, which fits nicely with the OpenStreetMap layer available in QGIS.
What I am therefore confused about is how changing the total number of dates processed and the master image of the InSAR dataset can cause a geographic shift in my output result. To save space I removed 6 months of data, so I would have ended up with a different master image (at the moment I am not specifically setting one).
Is this something you have experienced before, or do you think it may well be a mistake on my part? I know that the critical baseline is an important concept in InSAR theory, but I didn't believe it could lead to an actual shift in the resulting displacement raster.
Many thanks in advance for your response!
Maybe you are using the same geocode matrix for different master images? For the complete processing, I cannot imagine how the shift would be possible.
No, everything is calculated independently for each stack of imagery. I'll run some further tests this week and let you know if I can replicate the problem.