alexeypechnikov / pygmtsar

PyGMTSAR (Python InSAR): Powerful and Accessible Satellite Interferometry

Home Page: http://insar.dev/

License: BSD 3-Clause "New" or "Revised" License

Python 0.21% Jupyter Notebook 99.79% Dockerfile 0.01% C 0.01% Shell 0.01%
earth-observation earth-science earthquake flooding insar natural-disasters python3 remote-sensing sbas-insar scientific-computing

pygmtsar's People

Contributors

alexeypechnikov, bjmarfito, calefmt, dsandwell, ikselven, kmaterna, paulwessel, rtburns-jpl, steffandavies, xiaohua-eric-xu, xiaopengtong


pygmtsar's Issues

[Bug]: sbas_parallel - PicklingError: Could not pickle the task to send it to the workers.

System: Ubuntu / 32 CPU / 128 GB RAM

When running sbas_parallel I get the following error:

%%time
# process the full unwrap grid (correlation grid will be cropped if needed)
sbas.sbas_parallel(pairs, mask=composite_road_mask)

Computing: 10%
65/620 [01:17<11:23, 1.23s/it]

_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.10/site-packages/joblib/externals/loky/backend/queues.py", line 125, in feed
obj_ = dumps(obj, reducers=reducers)
File "/home/ubuntu/.local/lib/python3.10/site-packages/joblib/externals/loky/backend/reduction.py", line 211, in dumps
dump(obj, buf, reducers=reducers, protocol=protocol)
File "/home/ubuntu/.local/lib/python3.10/site-packages/joblib/externals/loky/backend/reduction.py", line 204, in dump
_LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
File "/home/ubuntu/.local/lib/python3.10/site-packages/joblib/externals/cloudpickle/cloudpickle_fast.py", line 632, in dump
return Pickler.dump(self, obj)
File "/home/ubuntu/.local/lib/python3.10/site-packages/distributed/client.py", line 465, in getstate
return self.key, self.client.scheduler.address
AttributeError: 'NoneType' object has no attribute 'address'
"""

The above exception was the direct cause of the following exception:

PicklingError Traceback (most recent call last)
File :2

File ~/.local/lib/python3.10/site-packages/pygmtsar/SBAS_sbas.py:202, in SBAS_sbas.sbas_parallel(self, pairs, mask, data, corr, chunks, chunksize, n_jobs)
200 # process all the chunks
201 with self.tqdm_joblib(tqdm(desc='Computing', total=ys*xs)) as progress_bar:
--> 202 filenames = joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(func)(iy, ix)
203 for iy in range(ys) for ix in range(xs))
205 # rebuild the datasets to user-friendly format
206 das = [xr.open_dataarray(f, engine=self.engine, chunks=chunksize) for f in filenames]

File ~/.local/lib/python3.10/site-packages/joblib/parallel.py:1098, in Parallel.__call__(self, iterable)
1095 self._iterating = False
1097 with self._backend.retrieval_context():
-> 1098 self.retrieve()
1099 # Make sure that we get a last message telling us we are done
1100 elapsed_time = time.time() - self._start_time

File ~/.local/lib/python3.10/site-packages/joblib/parallel.py:975, in Parallel.retrieve(self)
973 try:
974 if getattr(self._backend, 'supports_timeout', False):
--> 975 self._output.extend(job.get(timeout=self.timeout))
976 else:
977 self._output.extend(job.get())

File ~/.local/lib/python3.10/site-packages/joblib/_parallel_backends.py:567, in LokyBackend.wrap_future_result(future, timeout)
564 """Wrapper for Future.result to implement the same behaviour as
565 AsyncResults.get from multiprocessing."""
566 try:
--> 567 return future.result(timeout=timeout)
568 except CfTimeoutError as e:
569 raise TimeoutError from e

File /usr/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
456 raise CancelledError()
457 elif self._state == FINISHED:
--> 458 return self.__get_result()
459 else:
460 raise TimeoutError()

File /usr/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None

PicklingError: Could not pickle the task to send it to the workers.
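
A minimal diagnostic sketch, assuming the 'NoneType' object has no attribute 'address' line means no Dask distributed client is alive in the session when the delayed tasks are pickled; starting a local client before the call is one thing worth trying (the worker counts and memory limit below are placeholders, not recommendations):

from dask.distributed import Client

# Hypothetical workaround: make sure a Dask distributed client exists in this session
# before launching sbas_parallel, so any Dask futures referenced by the delayed tasks
# still have a scheduler address to pickle.
client = Client(n_workers=4, threads_per_worker=8, memory_limit='24GB')  # placeholder values
print(client)

# then re-run the failing cell:
# sbas.sbas_parallel(pairs, mask=composite_road_mask)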

[Help]: Issues writing high resolution los results to netcdf

Describe the problem you met
Thank you for all your help so far. I seem to be having a similar problem to last time with regard to not being able to output a NetCDF file from the LOS displacement measurements. However, this time I believe the issue has been caused by my attempt to increase the resolution of the data from 90 metres to 30 metres. This is causing a significant increase in RAM usage, so much so that the Dask worker restarts several times before finally failing. I see in your Yamchi Dam 4th notebook that you export the last date from the xarray. As I need all results between the first and last dates, I attempted to iterate through the dates with a simple "for in" loop. However, even with this, I get a similar problem: the worker fails consistently at about 90% when writing a single date. Would you be able to point me in the right direction to resolve this issue? I've made sure to check the shape properly this time and everything looks appropriate (72 dates with lat and long). I've included the script underneath the logs; the log itself is relatively small as I am using your sbas.pickle approach to avoid unnecessary processing. Lines 220 to 240, which are currently commented out, are my proposed method for iterating through the array.

As before, I am using your suggestions from your Yamchi Dam example, i.e.:

  • chunk size: 1024
  • n_worker: 1
  • Dask increase of timeouts to 60s
  • Dask Persist/ tqdm_dask for the NetCDF export

Resources:

  • 12th Gen Intel© Core™ i7-12700H × 14 (20 threads)
  • 32GB RAM

OS and software version
OS: Linux Mint (Cinnamon 21.1) with mobigroup/pygmtsar docker (id: a25ad61c3c30)

Log file
inSAR-30m_res_log_test.txt
script.txt
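
A minimal per-date export sketch, assuming the LOS results live in an xarray object with a date dimension; the variable name disps_ll below is an assumption based on the description, not the exact name in the attached script. Writing one date at a time keeps only a single slice in worker memory:

import dask

# Hypothetical loop: select one date, write it to its own NetCDF file, and only then
# move on to the next date, so the full high-resolution stack is never materialized at once.
for date in disps_ll.date.values:
    one = disps_ll.sel(date=date)
    delayed = one.to_netcdf(f'los_{str(date)[:10]}.nc', engine='h5netcdf', compute=False)
    dask.compute(delayed)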

[Please note that if no new responses are made over six months from users, the issue will get closed]

[Also if this is a general question that's not particularly related to GMTSAR, such as "how to correct atmospheric noise" or "how to interpret my interferogram", consider using the discussions page]

LOS displacement for SBAS

Hello @mobigroup ,

Many thanks for this project. I've been trying to recreate the maps on my own OS (I use Fedora 36 and everything goes well with the standard installation).
I have a question about how to generate the correct LOS displacement after running sbas_parallel.

It seems that in the Google Colab example for the S1A stack the results are positive:
Cum_LOS_Displacement_googlecolab
But the deformation should be negative, right?

I tried to replicate that but I obtained positive results as well.
Cum_LOS_Displacement

The simplest solution is just to multiply the grid array by -1, because then the values seem to make sense.

But I wanted to know whether you know why the cumulative displacement is positive, or if I did something wrong.

Thanks in advance
Simon

[Bug]: Cannot detrend pairs - ValueError: Chunk shape must not be greater than data shape in any dimension. (512, 512) is not compatible with (3750, 375)

sbas.detrend_parallel(resolution_meters=15)

Detrending and Saving: 10%
87/873 [00:12<01:57, 6.70it/s]

ValueError Traceback (most recent call last)
Cell In[81], line 2
1 datagrid.chunksize=128
----> 2 sbas.detrend_parallel(resolution_meters=15)

File ~/.local/lib/python3.10/site-packages/pygmtsar/SBAS_detrend.py:40, in SBAS_detrend.detrend_parallel(self, pairs, n_jobs, interactive, **kwargs)
38 label = 'Detrending and Saving' if not interactive else 'Detrending'
39 with self.tqdm_joblib(tqdm(desc=label, total=len(pairs))) as progress_bar:
---> 40 results = joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(func)(pair, **kwargs) for pair in pairs)
42 if interactive:
43 return results

File ~/.local/lib/python3.10/site-packages/joblib/parallel.py:1098, in Parallel.__call__(self, iterable)
1095 self._iterating = False
1097 with self._backend.retrieval_context():
-> 1098 self.retrieve()
1099 # Make sure that we get a last message telling us we are done
1100 elapsed_time = time.time() - self._start_time

File ~/.local/lib/python3.10/site-packages/joblib/parallel.py:975, in Parallel.retrieve(self)
973 try:
974 if getattr(self._backend, 'supports_timeout', False):
--> 975 self._output.extend(job.get(timeout=self.timeout))
976 else:
977 self._output.extend(job.get())

File ~/.local/lib/python3.10/site-packages/joblib/_parallel_backends.py:567, in LokyBackend.wrap_future_result(future, timeout)
564 """Wrapper for Future.result to implement the same behaviour as
565 AsyncResults.get from multiprocessing."""
566 try:
--> 567 return future.result(timeout=timeout)
568 except CfTimeoutError as e:
569 raise TimeoutError from e

File /usr/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
456 raise CancelledError()
457 elif self._state == FINISHED:
--> 458 return self.__get_result()
459 else:
460 raise TimeoutError()

File /usr/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
406 self = None

ValueError: Chunk shape must not be greater than data shape in any dimension. (512, 512) is not compatible with (3750, 375)
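
A hypothetical workaround sketch: the merged grid here is 3750 x 375, so a 512 x 512 chunk cannot fit the second dimension. Picking a global chunk size no larger than the smallest grid dimension, and setting it before the grids are created or reopened (the traceback above still shows the 512 default despite the chunksize=128 assignment), may avoid the ValueError; the datagrid import mirrors the one quoted in another report on this page:

from pygmtsar import datagrid

datagrid.chunksize = 256   # must not exceed the smallest grid dimension (375 in the error above)
# ... re-create or reload the sbas object here so the new chunk size takes effect ...
sbas.detrend_parallel(resolution_meters=15)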

[Bug]: FileNotFoundError: [Errno 2] No such file or directory: 'senTest/F123_20150625_20150812_corr.PRM' -> 'senTest/S1_20150625_ALL_F123.PRM'

Describe the bug
Merging appears to "trip up" when generating what I assume are temporary files (corr.PRM, ALL_F123.PRM). I have attempted to limit the number of jobs to 6 at a time (20 threads available on my machine) in the hope that it might help, but it has not.

To Reproduce
Steps to reproduce the behaviour: I am generating a large time series of all-swath Sentinel-1 data. For now, all of the year 2015 for a particular stack (identified via an ASF search stack), though I hope to expand this to multiple years in the future. I've been following your recent Türkiye example and can't see any obvious mistakes that I've made, but please let me know otherwise!

Screenshots
I'll attach a few text files with log, files in processing directory and code.

System and software version:

  • OS: Linux Mint 21.1 Cinnamon (pygmtsar docker)
  • GMTSAR version:

Processing Log
If applicable, attach a log file from terminal output
log.txt
scriptExample.txt
[processing_list.txt](https://github.com/mobigroup/gmtsar/files/11003705/processing_list.txt)

[Please note that if no new responses are made over six months from users, the issue will get closed. If the report involves a simple fix in the code, the change could be done quickly; if it involves developing new features, do expect them to be done over the long run. Under both cases, the report will remain open until the issue is addressed one way or another.]

[Bug]: AssertionError: Inverse geocoding matrix chunks are not equal to interferogram chunks

Describe the bug
Assertion error midway through the geocode_parallel command. I am processing multiple Sentinel-1 scenes (20+) and merging the 3 subswaths before geocoding/unwrapping. It seems to fail at the "build ll2ra transform" step. The problem seems to occur when particular Sentinel-1 scenes are included in the stack; however, I cannot determine which scenes are causing the problem or why.

To Reproduce
Steps to reproduce the behavior:
Run merging and geocoding on the following datasets; the data will process perfectly (full script attached: scriptExample.txt):

  • S1A_IW_SLC__1SDV_20160103T082126_20160103T082153_009327_00D7B4_6190.SAFE
  • S1A_IW_SLC__1SDV_20160115T082126_20160115T082153_009502_00DCB7_2058.SAFE
  • S1A_IW_SLC__1SDV_20160127T082126_20160127T082153_009677_00E1D8_4515.SAFE

Add the following two files to the stack for process to fail:

  • S1A_IW_SLC__1SDV_20160208T082125_20160208T082152_009852_00E6DB_C86F.SAFE
  • S1A_IW_SLC__1SDV_20160220T082120_20160220T082146_010027_00EC0A_8A04.SAFE

System and software version:

  • OS: Linux Mint (Cinnamon 21.1) using pygmtsar docker
  • GMTSAR version:

Processing Log
Full log attached below:
log2.txt

[Help]: sbas.get_dem() problem

Hello @mobigroup,
I started to use pygmtsar for academic use in my thesis and tried to test the script you provide on my PC with other orbit scenes. I am stuck at the static plot where we convert heights to the ellipsoidal model. I selected product=SRTM3 for the 90 m resolution SRTM DEM. I think I have a problem with the sbas variable. Can you help me?

error1
error2

[Feature]: Pros and cons for modified Goldstein adaptive filtering

@SteffanDavies See below the mapped original interferogram phase (left) and the result of modified Goldstein adaptive filtering (right) with the GMTSAR default psize=32:

image image

GMTSAR applies the filter to already decimated grids and for wavelength=400 (S1A_Stack_CPGF_T173 test case) we have:

gmt grdinfo amp1.grd 
...
amp1.grd: Grid file format: nf = GMT netCDF format (32-bit float), CF-1.7
amp1.grd: x_min: 0 x_max: 21568 x_inc: 16 name: x n_columns: 1348
amp1.grd: y_min: 0 y_max: 5484 y_inc: 4 name: y n_rows: 1371
...

It means the 32x32 filter is applied at a grid resolution of ~60 m. For the highest processing resolution of ~15 m we get ~120 m spatial accuracy (1/4 of the filter size). That's often useless for infrastructure monitoring (such as a bridge collapse). The question is, did you try to compute without the Goldstein filter? Typically, PSI excludes the filter completely to preserve the details. Actually, the filter is designed to highlight fringes, and it might make cloud borders stronger, increasing atmospheric effects.
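
For reference, the back-of-the-envelope arithmetic behind the numbers above (psize and resolution are the values quoted in this report):

psize = 32          # GMTSAR default Goldstein filter window, pixels
resolution = 15     # highest processing resolution, metres
window_m = psize * resolution   # 480 m filter footprint at full resolution
accuracy_m = window_m / 4       # ~120 m effective spatial accuracy (1/4 of the filter size)
print(window_m, accuracy_m)     # 480 120.0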

memory leak error while executing 'SBAS LOS Displacement in Geographic Coordinates'

Hi, as can be seen in the screenshot below, I ran out of memory and then got a killed process while running the 'SBAS LOS Displacement in Geographic Coordinates, [mm]' static plot.
I changed the Dask worker and thread configurations in several different ways, but the result was the same.
I used 8 SLC scenes and only 1 subswath, which created 11 interferograms.
I used an Ubuntu 20.04 machine with a 9th-gen Core i7 (8 cores), 64 GB of RAM, and 25 GB of swap space. The program was run under Anaconda 2023.10.3.
If I must provide any further information, please let me know.
Screenshot from 2023-06-13 19-05-52

[Help]: topo_ra_parallel() problem with all cores

Hello @mobigroup , I am getting an error when I try to use all the processing cores in my workstation to generate the topo in radar coordinates with topo_ra_parallel()
image

When I limit the number of cores, the error is fixed and the topo_ra file and the trans data file are created.
image

Do you have any idea what the problem might be? I remember that in previous versions of pygmtsar this worked well.

Thank you in advance

[Bug]: MASTER KeyError

Describe the bug

When I run the S1A_Stack_CPGF_T173.ipynb example using 14 Sentinel-1 scenes on pygmtsar 2022.11.19, everything is fine until:

baseline_pairs = sbas.baseline_pairs(days=BASEDAYS, meters=BASEMETERS)
baseline_pairs
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~/miniconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3360             try:
-> 3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:

~/miniconda3/lib/python3.7/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

~/miniconda3/lib/python3.7/site-packages/pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: '2022-05-09'

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
<ipython-input-30-cd271f1f8166> in <module>
----> 1 baseline_pairs = sbas.baseline_pairs(days=BASEDAYS, meters=BASEMETERS)
      2 baseline_pairs

~/miniconda3/lib/python3.7/site-packages/pygmtsar/SBAS_sbas.py in baseline_pairs(self, days, meters, invert, n_jobs, debug)
     56         import pandas as pd
     57 
---> 58         tbl = self.baseline_table(n_jobs=n_jobs, debug=debug)
     59         data = []
     60         for line1 in tbl.itertuples():

~/miniconda3/lib/python3.7/site-packages/pygmtsar/SBAS_sbas.py in baseline_table(self, n_jobs, debug)
     35         # after merging use unmerged subswath PRM files
     36         # calc_dop_orb() required for SAT_baseline
---> 37         master_dt = datetimes[self.master]
     38         prm_ref = PRM().from_file(get_filename(master_dt)).calc_dop_orb(inplace=True)
     39         data = []

~/miniconda3/lib/python3.7/site-packages/pandas/core/series.py in __getitem__(self, key)
    940 
    941         elif key_is_scalar:
--> 942             return self._get_value(key)
    943 
    944         if is_hashable(key):

~/miniconda3/lib/python3.7/site-packages/pandas/core/series.py in _get_value(self, label, takeable)
   1049 
   1050         # Similar to Index.get_value, but we do not fall back to positional
-> 1051         loc = self.index.get_loc(label)
   1052         return self.index._get_values_for_loc(self, loc, label)
   1053 

~/miniconda3/lib/python3.7/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
   3361                 return self._engine.get_loc(casted_key)
   3362             except KeyError as err:
-> 3363                 raise KeyError(key) from err
   3364 
   3365         if is_scalar(key) and isna(key) and not self.hasnans:

KeyError: '2022-05-09'

But pygmtsar 2022.10.11.6 works.

To Reproduce

Use the following datasets to reproduce this error:

datetime | orbit | mission | polarization | subswath | datapath | metapath | orbitpath | geometry

2022-01-09 10:34:03 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220109t103403-20220109t103428-041383-04eb92-005.tiff | raw_orig/s1a-iw2-slc-vv-20220109t103403-20220109t103428-041383-04eb92-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220129T121556_V20220108T225942_20220110T005942.EOF | POLYGON ((112.54432 23.94818, 112.85013 22.40597, 113.75270 22.58495, 113.44688 24.12716, 112.54432 23.94818))
2022-02-02 10:34:02 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220202t103402-20220202t103427-041733-04f74c-005.tiff | raw_orig/s1a-iw2-slc-vv-20220202t103402-20220202t103427-041733-04f74c-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220222T081603_V20220201T225942_20220203T005942.EOF | POLYGON ((112.54641 23.94717, 112.85218 22.40496, 113.75473 22.58391, 113.44896 24.12612, 112.54641 23.94717))
2022-02-26 10:34:02 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220226t103402-20220226t103427-042083-05036e-005.tiff | raw_orig/s1a-iw2-slc-vv-20220226t103402-20220226t103427-042083-05036e-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220318T081639_V20220225T225942_20220227T005942.EOF | POLYGON ((112.54497 23.94702, 112.85071 22.40506, 113.75324 22.58402, 113.44751 24.12597, 112.54497 23.94702))
2022-03-22 10:34:02 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220322t103402-20220322t103427-042433-050f51-005.tiff | raw_orig/s1a-iw2-slc-vv-20220322t103402-20220322t103427-042433-050f51-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220411T081600_V20220321T225942_20220323T005942.EOF | POLYGON ((112.54411 23.94658, 112.84985 22.40463, 113.75240 22.58359, 113.44666 24.12554, 112.54411 23.94658))
2022-04-15 10:34:02 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220415t103402-20220415t103428-042783-051b28-005.tiff | raw_orig/s1a-iw2-slc-vv-20220415t103402-20220415t103428-042783-051b28-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220505T081715_V20220414T225942_20220416T005942.EOF | POLYGON ((112.54577 23.94704, 112.85144 22.40520, 113.75392 22.58412, 113.44825 24.12596, 112.54577 23.94704))
2022-05-09 10:34:04 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220509t103404-20220509t103429-043133-0526bc-005.tiff | raw_orig/s1a-iw2-slc-vv-20220509t103404-20220509t103429-043133-0526bc-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220529T082109_V20220508T225942_20220510T005942.EOF | POLYGON ((112.54577 23.94770, 112.85151 22.40573, 113.75398 22.58467, 113.44825 24.12664, 112.54577 23.94770))
2022-06-02 10:34:06 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220602t103406-20220602t103431-043483-05311f-005.tiff | raw_orig/s1a-iw2-slc-vv-20220602t103406-20220602t103431-043483-05311f-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220622T081913_V20220601T225942_20220603T005942.EOF | POLYGON ((112.54481 23.94718, 112.84935 22.40624, 113.74083 22.58243, 113.43628 24.12337, 112.54481 23.94718))
2022-06-26 10:34:07 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220626t103407-20220626t103432-043833-053b9d-005.tiff | raw_orig/s1a-iw2-slc-vv-20220626t103407-20220626t103432-043833-053b9d-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220716T081922_V20220625T225942_20220627T005942.EOF | POLYGON ((112.54517 23.94650, 112.84972 22.40615, 113.72068 22.57835, 113.41613 24.11870, 112.54517 23.94650))
2022-07-20 10:34:09 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220720t103409-20220720t103434-044183-054610-005.tiff | raw_orig/s1a-iw2-slc-vv-20220720t103409-20220720t103434-044183-054610-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220809T081813_V20220719T225942_20220721T005942.EOF | POLYGON ((112.54523 23.94670, 112.84953 22.40653, 113.71763 22.57804, 113.41332 24.11822, 112.54523 23.94670))
2022-08-13 10:34:10 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220813t103410-20220813t103435-044533-055092-005.tiff | raw_orig/s1a-iw2-slc-vv-20220813t103410-20220813t103435-044533-055092-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220902T081914_V20220812T225942_20220814T005942.EOF | POLYGON ((112.54597 23.94684, 112.85052 22.40590, 113.73189 22.58009, 113.42734 24.12103, 112.54597 23.94684))
2022-09-06 10:34:11 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220906t103411-20220906t103437-044883-055c68-005.tiff | raw_orig/s1a-iw2-slc-vv-20220906t103411-20220906t103437-044883-055c68-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20220926T081722_V20220905T225942_20220907T005942.EOF | POLYGON ((112.54346 23.94662, 112.84798 22.40605, 113.72934 22.58027, 113.42482 24.12083, 112.54346 23.94662))
2022-09-30 10:34:12 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20220930t103412-20220930t103437-045233-056820-005.tiff | raw_orig/s1a-iw2-slc-vv-20220930t103412-20220930t103437-045233-056820-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20221020T081824_V20220929T225942_20221001T005942.EOF | POLYGON ((112.54477 23.94726, 112.84931 22.40657, 113.73069 22.58079, 113.42614 24.12148, 112.54477 23.94726))
2022-10-24 10:34:12 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20221024t103412-20221024t103437-045583-057334-005.tiff | raw_orig/s1a-iw2-slc-vv-20221024t103412-20221024t103437-045583-057334-005.xml | raw_orig/S1A_OPER_AUX_POEORB_OPOD_20221113T081748_V20221023T225942_20221025T005942.EOF | POLYGON ((112.54563 23.94718, 112.85017 22.40636, 113.73159 22.58057, 113.42705 24.12139, 112.54563 23.94718))
2022-11-05 10:34:11 | A | S1A | VV | 2 | raw_orig/s1a-iw2-slc-vv-20221105t103411-20221105t103437-045758-05791b-005.tiff | raw_orig/s1a-iw2-slc-vv-20221105t103411-20221105t103437-045758-05791b-005.xml | raw_stack/S1A_OPER_AUX_RESORB_OPOD_20221105T141804_V20221105T102809_20221105T134539.EOF | POLYGON ((112.54595 23.94704, 112.85053 22.40623, 113.73196 22.58046, 113.42738 24.12128, 112.54595 23.94704))

and define parameters as:

MASTER       = '2022-05-09'
WORKDIR      = 'raw_stack'
DATADIR      = 'raw_orig'
DEMFILE      = 'topo/DEM_WGS84.nc'
BASEDAYS     = 100
BASEMETERS   = 150
DEFOMAX      = 0

System and software version:

  • OS: ubuntu 20.04
  • RAM: 128G
  • Python: v3.7.13
  • GMTSAR version: updated master
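
A small diagnostic sketch, assuming the KeyError simply means the '2022-05-09' label is not present in the index used by baseline_table() in that release; inspecting the scene dataframe before setting the master shows which labels are actually available (both calls appear in other reports on this page):

# Hypothetical check: list the scene index labels, then set the master only if the
# requested date is really among them.
df = sbas.to_dataframe()
print(df.index)
assert MASTER in df.index, f'{MASTER} not found among scene dates'
sbas.set_master(MASTER)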

ValueError in topo_ra_parallel()

Hi devs

There is a problem with the function topo_ra_parallel().
It raises an error:
ValueError: zero-size array to reduction operation maximum which has no identity

I've tested the S1A_2016_Kumamoto_Earthquake_vs_ESA_Sentinel_1_Toolbox project in Google Colab.

[Help]: Low CPU utilization during sbas_parallel

I have been trying different combinations of settings for sbas_parallel on large grids. However, CPU utilization doesn't exceed approximately 3-4% on a 128-CPU machine.

CPU: 128
RAM: 500GB
Swap: 200GB
SSD: ~4 GBPS R/W

Using a chunksize of 2048 exploded memory and swap; reducing the chunk size to 512 helps keep memory manageable.

image

image

image

image

Changing Dask setup (num_workers=1 / num_workers=32) makes little difference.
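
A sketch of the knobs discussed in this thread, with placeholder values rather than a known fix: a moderate chunk size to bound per-task memory, an explicit Dask cluster, and an explicit n_jobs so the joblib side of sbas_parallel is not limited to a single process (sbas_parallel exposes n_jobs per the traceback earlier on this page; the datagrid import mirrors another report here):

from dask.distributed import Client
from pygmtsar import datagrid

datagrid.chunksize = 512   # 2048 exploded memory per the report above
client = Client(n_workers=16, threads_per_worker=8, memory_limit='30GB')  # placeholder sizing
sbas.sbas_parallel(pairs, n_jobs=16)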

[Help]: Regarding parallel SBAS step

Hello,
I started to use pygmtsar and tried to test the script you provide on my machine. I was stuck at the SBAS parallel processing step due to an encoding error. Should I change SBAS.py? Can you help me, please?

error1
error_2

[Feature]: Range cropping based on pins

GMTSAR (and PyGMTSAR) crop bursts based on pins. However, this only applies to the azimuth axis. Cropping the range axis based on pins would limit unnecessary processing for interferograms and unwrapping for a smaller AOI. I haven't seen any notebook examples where this is performed.

[Help]:

Describe the problem you met
In the Docker Desktop interface, after running the image and getting the URLs from the log page, none of the URLs can be opened in the browser.

OS and software version
If applicable, what operating system and software version you are using?
OS: macOS Ventura 13.3
GMTSAR version: latest

Reproduce the problem
If applicable, how to reproduce the problem from scratch?

Log file
2023-07-25 15:05:24 Entered start.sh with args: jupyter lab
2023-07-25 15:05:24 Executing the command: jupyter lab
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.073 ServerApp] Package jupyterlab took 0.0000s to import
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.075 ServerApp] Package jupyter_server_fileid took 0.0015s to import
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.078 ServerApp] Package jupyter_server_terminals took 0.0027s to import
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.091 ServerApp] Package jupyter_server_ydoc took 0.0124s to import
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.091 ServerApp] Package nbclassic took 0.0000s to import
2023-07-25 15:05:25 [W 2023-07-25 13:05:25.092 ServerApp] A _jupyter_server_extension_points function was not found in nbclassic. Instead, a _jupyter_server_extension_paths function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.092 ServerApp] Package notebook_shim took 0.0000s to import
2023-07-25 15:05:25 [W 2023-07-25 13:05:25.092 ServerApp] A _jupyter_server_extension_points function was not found in notebook_shim. Instead, a _jupyter_server_extension_paths function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.572 ServerApp] Package panel.io.jupyter_server_extension took 0.4798s to import
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.575 ServerApp] jupyter_server_fileid | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.577 ServerApp] jupyter_server_terminals | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.580 ServerApp] jupyter_server_ydoc | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.582 ServerApp] jupyterlab | extension was successfully linked.
2023-07-25 15:05:25 [W 2023-07-25 13:05:25.584 NotebookApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
2023-07-25 15:05:25 [W 2023-07-25 13:05:25.584 NotebookApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.585 ServerApp] nbclassic | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.586 ServerApp] Writing Jupyter server cookie secret to /home/jovyan/.local/share/jupyter/runtime/jupyter_cookie_secret
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.684 ServerApp] notebook_shim | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.684 ServerApp] panel.io.jupyter_server_extension | extension was successfully linked.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 ServerApp] notebook_shim | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 FileIdExtension] Configured File ID manager: ArbitraryFileIdManager
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 FileIdExtension] ArbitraryFileIdManager : Configured root dir: /home/jovyan
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 FileIdExtension] ArbitraryFileIdManager : Configured database path: /home/jovyan/.local/share/jupyter/file_id_manager.db
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 FileIdExtension] ArbitraryFileIdManager : Successfully connected to database file.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.715 FileIdExtension] ArbitraryFileIdManager : Creating File ID tables and indices with journal_mode = DELETE
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.720 FileIdExtension] Attached event listeners.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.720 ServerApp] jupyter_server_fileid | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.720 ServerApp] jupyter_server_terminals | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.721 ServerApp] jupyter_server_ydoc | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.721 LabApp] JupyterLab extension loaded from /opt/conda/lib/python3.10/site-packages/jupyterlab
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.721 LabApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.722 ServerApp] jupyterlab | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.724 ServerApp] nbclassic | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] panel.io.jupyter_server_extension | extension was successfully loaded.
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] Serving notebooks from local directory: /home/jovyan
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] Jupyter Server 2.4.0 is running at:
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] http://505ad591e11d:8888/lab?token=70925d2a69407ca2c823bb4772962bb9c75e51d9a7654c66
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] http://127.0.0.1:8888/lab?token=70925d2a69407ca2c823bb4772962bb9c75e51d9a7654c66
2023-07-25 15:05:25 [I 2023-07-25 13:05:25.725 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
2023-07-25 15:05:25 [C 2023-07-25 13:05:25.727 ServerApp]
2023-07-25 15:05:25
2023-07-25 15:05:25 To access the server, open this file in a browser:
2023-07-25 15:05:25 file:///home/jovyan/.local/share/jupyter/runtime/jpserver-7-open.html
2023-07-25 15:05:25 Or copy and paste one of these URLs:
2023-07-25 15:05:25 http://505ad591e11d:8888/lab?token=70925d2a69407ca2c823bb4772962bb9c75e51d9a7654c66
2023-07-25 15:05:25 http://127.0.0.1:8888/lab?token=70925d2a69407ca2c823bb4772962bb9c75e51d9a7654c66

Screenshot
Screenshot 2023-07-25 at 15 07 40

[Please note that if no new responses are made over six months from users, the issue will get closed]

[Also if this is a general question that's not particularly related to GMTSAR, such as "how to correct atmospheric noise" or "how to interpret my interferogram", consider using the discussions page]

[Help]: Initializing SBAS post-processing - open_grids does not recognize merged grd files

Hello,
After running most of the processing on the data, including merging of the sub-swaths and unwrapping, Python crashed (unrelated).
In order to continue processing and analyzing the data, I want to run sbas.open_grids, detrending, sbas.sbas() etc., and therefore need to initialize a new sbas object (with force=False in order to keep the older data).

sbas.open_grids is failing because it does not find files for the separate sub-swaths; the output folder only has grd files for the F123 merged subswaths.

How do I get the sbas object to recognize and use only the merged-subswath files in these additional processing steps?

OS: Ubuntu 20.04
GMTSAR version: 6.2.2
Python version: 3.10.11

Log file when running unwraps_ra = sbas.open_grids(pairs, 'unwrap',add_subswath=True)
Traceback (most recent call last):
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/file_manager.py", line 210, in _acquire_with_cache_info
    file = self._cache[self._key]
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/lru_cache.py", line 56, in __getitem__
    value = self._cache[key]
KeyError: [<class 'h5netcdf.core.File'>, ('/home/name/disk-one/DS_Data_2014_2023/asc/data_output/F1_20141210_20150115_unwrap.grd',), 'r', (('decode_vlen_strings', True), ('invalid_netcdf', None)), '90889ae3-3a2a-4162-a8c1-d1fdc5482508']

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-21-e238d5766e82>", line 1, in <module>
    unwraps_ra = sbas.open_grids(pairs, 'unwrap', add_subswath=True)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/pygmtsar/SBAS_base.py", line 365, in open_grids
    da = open_grid(filename)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/pygmtsar/SBAS_base.py", line 321, in open_grid
    da = xr.open_dataarray(filename, engine=self.engine, chunks=chunksize)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/api.py", line 686, in open_dataarray
    dataset = open_dataset(
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/api.py", line 525, in open_dataset
    backend_ds = backend.open_dataset(
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 413, in open_dataset
    store = H5NetCDFStore.open(
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 176, in open
    return cls(manager, group=group, mode=mode, lock=lock, autoclose=autoclose)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 127, in __init__
    self._filename = find_root_and_group(self.ds)[0].filename
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 187, in ds
    return self._acquire()
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/h5netcdf_.py", line 179, in _acquire
    with self._manager.acquire_context(needs_lock) as root:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/file_manager.py", line 198, in acquire_context
    file, cached = self._acquire_with_cache_info(needs_lock)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/xarray/backends/file_manager.py", line 216, in _acquire_with_cache_info
    file = self._opener(*self._args, **kwargs)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/h5netcdf/core.py", line 973, in __init__
    self._h5file = self._h5py.File(
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/h5py/_hl/files.py", line 567, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/h5py/_hl/files.py", line 231, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 106, in h5py.h5f.open
FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = '/home/name/disk-one/DS_Data_2014_2023/asc/data_output/F1_20141210_20150115_unwrap.grd', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Regarding coherence

Hello Alexey,
I have a general question: where can I control the coherence threshold? I found only this line related to coherence, cleaner = lambda corr, unwrap: xr.where(corr>=CORRLIMIT, unwrap, np.nan), but it is not clear to me on what basis the software selects points as low coherence. Can you explain it, please?
Thank you
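
A minimal sketch of how that lambda acts as the coherence threshold: pixels whose correlation falls below CORRLIMIT are replaced by NaN before unwrapping, so changing CORRLIMIT changes the cut-off (the commented unwrap_parallel call mirrors usage shown in another report on this page):

import numpy as np
import xarray as xr

CORRLIMIT = 0.10   # example threshold: pixels with correlation below 0.10 are masked out
cleaner = lambda corr, unwrap: xr.where(corr >= CORRLIMIT, unwrap, np.nan)
# sbas.unwrap_parallel(pairs, func=cleaner)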

datagrid and sbas.find_pairs couldn't be imported

Hi @mobigroup, I am trying to run SBAS on my data on my local computer using the Yamchi_dam notebooks. I get these errors when importing some modules. I ignored that part, created the interferograms, and saved a backup file. As in the second notebook, I am trying to restore the saved pickle and use find_pairs, but it seems there is no such attribute. Any ideas for solving these issues?
OS and software version
OS: Ubuntu 20.04.3 LTS
GMTSAR version: master

Log file
from pygmtsar import datagrid
datagrid.chunksize = 1024

ImportError Traceback (most recent call last)
Cell In [8], line 3
1 # default chunksize (512) is enough suitable for a single subswath processing using resolution 15m
2 # select higher chunk size (1024) to process multiple subswaths and scenes using resolution 15m
----> 3 from pygmtsar import datagrid
4 datagrid.chunksize = 1024
ImportError: cannot import name 'datagrid' from 'pygmtsar' (/home/meysam/.local/lib/python3.8/site-packages/pygmtsar/__init__.py)

pairs = sbas.find_pairs()
pairs

AttributeError Traceback (most recent call last)
Cell In [11], line 1
----> 1 pairs = sbas.find_pairs()
2 pairs
AttributeError: 'SBAS' object has no attribute 'find_pairs'.
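
Both symbols are version-dependent, so a quick check of the installed release is a reasonable first step (a generic Python sketch, not pygmtsar-specific API):

from importlib.metadata import version

# The notebooks and the installed package can drift apart; confirm which release is in use
# before assuming that datagrid or SBAS.find_pairs should exist.
print(version('pygmtsar'))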

[Bug]: PyGMTSAR (2023.5.3) - cannot unwrap, pandas error

image


KeyError Traceback (most recent call last)
File ~/.local/lib/python3.10/site-packages/pandas/core/indexes/base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:

File ~/.local/lib/python3.10/site-packages/pandas/_libs/index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()

File ~/.local/lib/python3.10/site-packages/pandas/_libs/index.pyx:165, in pandas._libs.index.IndexEngine.get_loc()

File pandas/_libs/hashtable_class_helper.pxi:5745, in pandas._libs.hashtable.PyObjectHashTable.get_item()

File pandas/_libs/hashtable_class_helper.pxi:5753, in pandas._libs.hashtable.PyObjectHashTable.get_item()

KeyError: 0

The above exception was the direct cause of the following exception:

KeyError Traceback (most recent call last)
File :11

File ~/.local/lib/python3.10/site-packages/pandas/core/frame.py:3807, in DataFrame.__getitem__(self, key)
3805 if self.columns.nlevels > 1:
3806 return self._getitem_multilevel(key)
-> 3807 indexer = self.columns.get_loc(key)
3808 if is_integer(indexer):
3809 indexer = [indexer]

File ~/.local/lib/python3.10/site-packages/pandas/core/indexes/base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)

KeyError: 0

[Help]: PS processing

I am having issues with sbas.ps_parallel; some indications on the workflow would be helpful.

Is ps_parallel supposed to be run after intf_parallel and merge_parallel? Doing so causes a {final date}_F{merged subswaths}.PRM missing-file error.

FileNotFoundError: [Errno 2] No such file or directory: 'raw_asc/S1_20230530_ALL_F23.PRM'

Removing this file from dates and passing custom dates causes another error:

sbas.ps_parallel(dates=sbas.df.index[:-1])
Exception: "ValueError('mmap length is greater than file size')"

sbas.stack_parallel() fails with FileNotFoundError: [Errno 2] No such file or directory: 'make_s1a_tops' error

Hi,

I'm having trouble replicating the example notebooks that are available. Everything seems to be working as expected until I try to stack the two subswaths on top of each other in order to begin the actual InSAR processing.

In doing so, I get the following error:

Reference: 100%
1/1 [00:00<00:00, 1.08it/s]
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
Cell In[14], line 1
----> 1 sbas.stack_parallel()

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/site-packages/pygmtsar/SBAS_stack.py:232, in SBAS_stack.stack_parallel(self, dates, n_jobs, **kwargs)
    230 print("hello world 1")
    231 with self.tqdm_joblib(tqdm(desc='Reference', total=len(subswaths))) as progress_bar:
--> 232     joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(self.stack_ref)(subswath, **kwargs) for subswath in subswaths)
    234 # prepare secondary images
    235 with self.tqdm_joblib(tqdm(desc='Aligning', total=len(dates)*len(subswaths))) as progress_bar:

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/site-packages/joblib/parallel.py:1098, in Parallel.__call__(self, iterable)
   1095     self._iterating = False
   1097 with self._backend.retrieval_context():
-> 1098     self.retrieve()
   1099 # Make sure that we get a last message telling us we are done
   1100 elapsed_time = time.time() - self._start_time

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/site-packages/joblib/parallel.py:975, in Parallel.retrieve(self)
    973 try:
    974     if getattr(self._backend, 'supports_timeout', False):
--> 975         self._output.extend(job.get(timeout=self.timeout))
    976     else:
    977         self._output.extend(job.get())

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/site-packages/joblib/_parallel_backends.py:567, in LokyBackend.wrap_future_result(future, timeout)
    564 """Wrapper for Future.result to implement the same behaviour as
    565 AsyncResults.get from multiprocessing."""
    566 try:
--> 567     return future.result(timeout=timeout)
    568 except CfTimeoutError as e:
    569     raise TimeoutError from e

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/concurrent/futures/_base.py:458, in Future.result(self, timeout)
    456     raise CancelledError()
    457 elif self._state == FINISHED:
--> 458     return self.__get_result()
    459 else:
    460     raise TimeoutError()

File /opt/anaconda3/envs/inSAR-new/lib/python3.10/concurrent/futures/_base.py:403, in Future.__get_result(self)
    401 if self._exception:
    402     try:
--> 403         raise self._exception
    404     finally:
    405         # Break a reference cycle with the exception in self._exception
    406         self = None

FileNotFoundError: [Errno 2] No such file or directory: 'make_s1a_tops'

Any help you can provide would be hugely appreciated!

[Help]: Facing issue installing PyGMTSAR

Describe the problem you met
Write a brief description of the problem you met, e.g. my intf_tops.csh could not run through

OS and software version
If applicable, what operating system and software version you are using?
WhatsApp Image 2023-03-13 at 4 55 10 PM
I am facing an issue installing PyGMTSAR and have attached a snapshot. I have done the docker pull and run multiple times; what is the next step?
OS:
GMTSAR version:

Reproduce the problem
If applicable, how to reproduce the problem from scratch?

Log file
Attach a log file output from your terminal.

Screenshot
If applicable, attach a Screenshot to help us understand your case

[Please note that if no new responses are made over six months from users, the issue will get closed]

[Also if this is a general question that's not particularly related to GMTSAR, such as "how to correct atmospheric noise" or "how to interpret my interferogram", consider using the discussions page]

Timed out trying to connect to tcp://ip-addr-of-scheduler:8786 after 30 s

Describe the bug
Hi, I am going through ASF_Downloading_2020_Ardabil_Earthquake.ipynb and ended up with the error "Timed out trying to connect to tcp://ip-addr-of-scheduler:8786 after 30 s" after I execute sbas.topo_ra_parallel() on Ubuntu 22.04.1 LTS. Do you have any suggestions to fix this issue?

To Reproduce
Steps to reproduce the behavior:

Screenshots
If applicable, add screenshots to help explain your problem.
pyGRMsar

System and software version:

  • OS: Ubuntu 22.04.1 LTS
  • GMTSAR version:

Processing Log
If applicable, attach a log file from the terminal output

[Please note that if no new responses are made over six months from users, the issue will get closed. If the report involves a simple fix in the code, the change could be done quickly; if it involves developing new features, do expect them to be done over the long run. Under both cases, the report will remain open until the issue is addressed one way or another.]

UnicodeDecode Error while running sbas.stack_parallel()

Hi, I was testing PyGMTSAR for the first time. I started an SBAS process with 6 SLC images and a 12-day temporal baseline. I followed along with the live tutorials and then faced the following problem:


UnicodeDecodeError Traceback (most recent call last)
/tmp/ipykernel_7140/1628302268.py in
----> 1 sbas.stack_parallel()

~/anaconda3/lib/python3.9/site-packages/pygmtsar/SBAS_stack.py in stack_parallel(self, dates, n_jobs, **kwargs)
233 # prepare secondary images
234 with self.tqdm_joblib(tqdm(desc='Aligning', total=len(dates)*len(subswaths))) as progress_bar:
--> 235 joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(self.stack_rep)(subswath, date, **kwargs)
236 for date in dates for subswath in subswaths)

~/anaconda3/lib/python3.9/site-packages/joblib/parallel.py in __call__(self, iterable)
1054
1055 with self._backend.retrieval_context():
-> 1056 self.retrieve()
1057 # Make sure that we get a last message telling us we are done
1058 elapsed_time = time.time() - self._start_time

~/anaconda3/lib/python3.9/site-packages/joblib/parallel.py in retrieve(self)
933 try:
934 if getattr(self._backend, 'supports_timeout', False):
--> 935 self._output.extend(job.get(timeout=self.timeout))
936 else:
937 self._output.extend(job.get())

~/anaconda3/lib/python3.9/site-packages/joblib/_parallel_backends.py in wrap_future_result(future, timeout)
540 AsyncResults.get from multiprocessing."""
541 try:
--> 542 return future.result(timeout=timeout)
543 except CfTimeoutError as e:
544 raise TimeoutError from e

~/anaconda3/lib/python3.9/concurrent/futures/_base.py in result(self, timeout)
444 raise CancelledError()
445 elif self._state == FINISHED:
--> 446 return self.__get_result()
447 else:
448 raise TimeoutError()

~/anaconda3/lib/python3.9/concurrent/futures/_base.py in __get_result(self)
389 if self._exception:
390 try:
--> 391 raise self._exception
392 finally:
393 # Break a reference cycle with the exception in self._exception

UnicodeDecodeError: 'ascii' codec can't decode byte 0x86 in position 0: ordinal not in range(128)


What should I do to solve this problem?
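
A first diagnostic sketch, assuming the decode failure comes from the encoding the worker processes inherit rather than from the notebook itself (a later report on this page sees the same error even with LANG=en_US.UTF-8, so this is only a starting point, not a known fix):

import locale, os

# What Python will use by default when decoding subprocess output
print(locale.getpreferredencoding())
print(os.environ.get('LANG'), os.environ.get('LC_ALL'))

# Forcing UTF-8 for child processes before running the stack step may help in some setups
os.environ['PYTHONIOENCODING'] = 'utf-8'
os.environ['LC_ALL'] = 'en_US.UTF-8'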

[Help]: HTTPError: 401 Client Error: Unauthorised for URL

Describe the problem you met
Hi again, thank you for your help so far! Data processing worked fine yesterday; however, this morning there seems to be an issue when attempting to download orbit files. Have you encountered this problem? Usually I'd wait to see if ASF itself was down, but there doesn't seem to be any notification on their end as of yet, and the style of error had me concerned that it may be something more serious.

OS and software version
OS: Linux Mint (21.1 Cinnamon) , pygmtsar docker (mobigroup/pygmtsar, 6cd0e53e4011)

Reproduce the problem
I've attempted to run my own script and one of your jupyter notebooks - same issue both times.

Log file
log.txt

[Please note that if no new responses are made over six months from users, the issue will get closed]

[Also if this is a general question that's not particularly related to GMTSAR, such as "how to correct atmospheric noise" or "how to interpret my interferogram", consider using the discussions page]

[Bug]: No such file or directory: 'gmtsar_sharedir.csh'

Describe the bug
I am following the Kumamoto Earthquake Colab but on my local system (macOS Big Sur). I have downloaded the dataset and all cells work well until the cell that executes sbas.download_dem(), where I get the following error with this code line:

File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1845, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gmtsar_sharedir.csh'

(detailed error log below)

System and software version:

  • OS: MacOS BigSur Version 11.7.7
  • GMTSAR version:

Processing Log

Downloading S1A_OPER_AUX_POEORB_OPOD_20210311T113922_V20160407T225943_20160409T005943.EOF: 100%|█| 4.41M/4.41M [00:01<00:
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:02<00:00,  2.95s/product]
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.72product/s]
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.58product/s]
Downloading S1A_OPER_AUX_POEORB_OPOD_20210311T153048_V20160419T225943_20160421T005943.EOF: 100%|█| 4.41M/4.41M [00:02<00:
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:03<00:00,  3.53s/product]
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.59product/s]
Downloading products: 100%|███████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.73product/s]
Traceback (most recent call last):
  File "/Users/apple/Desktop/IIRS/sample.py", line 59, in <module>
    sbas.download_dem(backend="GMT")
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pygmtsar/SBAS_dem.py", line 50, in download_dem
    return self.download_dem_gmt(**kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pygmtsar/SBAS_dem_gmt_gdal.py", line 47, in download_dem_gmt
    gmtsar_sharedir = PRM().gmtsar_sharedir()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pygmtsar/PRM_gmtsar.py", line 17, in gmtsar_sharedir
    p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 969, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1845, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'gmtsar_sharedir.csh'
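
A diagnostic sketch: the error means the GMTSAR command-line tools are not visible on PATH inside the Python process; the GMTSAR install location below is a placeholder to adjust for the actual system:

import os, shutil

print(shutil.which('gmtsar_sharedir.csh'))   # None means the GMTSAR binaries are not on PATH
# Prepend the GMTSAR bin directory (placeholder path) before using pygmtsar
os.environ['PATH'] = '/usr/local/GMTSAR/bin:' + os.environ['PATH']
print(shutil.which('gmtsar_sharedir.csh'))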

[Help]: UnicodeDecode Error while running sbas.stack_parallel() and sbas.baseline_pairs()

When trying to run sbas.stack_parallel() and sbas.baseline_pairs() I get the error:
"UnicodeDecodeError: 'ascii' codec can't decode byte 0x86 in position 0: ordinal not in range(128)"

(Running code in pycharm console, 2023.1)

OS: Ubuntu 20.04
GMTSAR version: 6.2.2
Python version: 3.10.11
Unicode: echo $LANG output: en_US.UTF-8

Log file when running sbas.stack_parallel()
Reference: 100%|██████████| 3/3 [00:10<00:00, 3.59s/it]
Aligning: 13%|█▎ | 39/291 [00:03<00:23, 10.51it/s]
Traceback (most recent call last):
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3508, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-6-935b05484946>", line 1, in <module>
    sbas.stack_parallel()
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/pygmtsar/SBAS_stack.py", line 235, in stack_parallel
    joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(self.stack_rep)(subswath, date, **kwargs) \
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/joblib/parallel.py", line 1098, in __call__
    self.retrieve()
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/joblib/parallel.py", line 975, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/home/name/pythonProject/venv/lib/python3.10/site-packages/joblib/_parallel_backends.py", line 567, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
UnicodeDecodeError: 'ascii' codec can't decode byte 0x86 in position 0: ordinal not in range(128)

[Help]: Understanding each process of InSAR

Thank you for providing Python-based code about InSAR.
I am a beginner who has just started studying SAR/InSAR, and I would like to understand the principles behind each process (D-InSAR, SBAS) while looking at the code.
Did you refer to any specific resources while writing the code? I assume you based it on GMTSAR, but were there any additional references you used to understand the theory?

Or do you have any recommended resources?

Thanks

[Help]: sbas_parallel stalled?

sbas_parallel claims 100% but hasn't concluded for over an hour, and is only using 100% of one CPU.

image
image
image

Is this normal?

The last files created in the processing dir were disp_chunk*.grd.

[Help]: Cannot complete unwrapping - Terminated / semlock + semaphore errors

System:
Python 3.10.6
Ubuntu 22.04
RAM 240 GB
Processor 16 vCores

My code:

import platform, sys, os

from pygmtsar import SBAS

import xarray as xr
import numpy as np
import pandas as pd

# suppress numpy warnings

import warnings
warnings.filterwarnings('ignore')

sbas = SBAS('raw_orig', dem_filename='topo/DEM_WGS84.nc', basedir='raw_stack', landmask_filename='landmask/landmask.nc').set_master('2021-06-04')
sbas.download_orbits()

# About 16GB RAM for this stack, 4 minutes (12)

sbas.stack_parallel()

baseline_pairs = sbas.baseline_pairs(days=50, meters=100)

# Under 1 min

sbas.topo_ra_parallel()
topo_ra = sbas.get_topo_ra()

pairs = baseline_pairs[['ref_date', 'rep_date']]

# About 27 min (34)

sbas.intf_parallel(pairs, wavelength=100)

# Fix merging bug

sbas.to_dataframe().sort_values(by=["date","subswath"],inplace=True)

# About 14GB RAM, 3.5 min (34 ss) (72 topo)

sbas.merge_parallel(pairs)

phasefilts = sbas.open_grids(pairs, 'phasefilt')
phasefilts_ll = sbas.open_grids(pairs, 'phasefilt', geocode=True)
corrs = sbas.open_grids(pairs, 'corr')

landmask_ll = sbas.get_landmask()
landmask_ra = sbas.get_landmask(inverse_geocode=True)
filter = lambda corr, unwrap: xr.where(corr_stack>=0.1, unwrap, np.nan)
snaphuconf = sbas.snaphu_config(defomax=0, NTILEROW=4, NTILECOL=4, ROWOVRLP=400, COLOVRLP=400)

# About 38GB RAM (4x4 tiles, 16 procs, 1 job)

sbas.unwrap_parallel(pairs, n_jobs=1, func=filter, conf=snaphuconf, mask=landmask_ra)

image

[Help]:Installation help

Describe the problem you met
I have installed Docker Desktop, but why can't I open it?

OS and software version
If applicable, what operating system and software version you are using?
OS:
GMTSAR version:

Reproduce the problem
If applicable, how to reproduce the problem from scratch?

Log file
Attach a log file output from your terminal.

Screenshot
If applicable, attach a Screenshot to help us understand your case
Screenshot from 2023-02-17 13-18-46

[Please note that if no new responses are made over six months from users, the issue will get closed]

[Also if this is a general question that's not particularly related to GMTSAR, such as "how to correct atmospheric noise" or "how to interpret my interferogram", consider using the discussions page]

[Bug]: Initializing SBAS post-merge

If Python crashes after merge (such as during unwrapping due to lack of memory etc.), then a new SBAS object must be initialized in a new Python session (with force=False to prevent wiping data). The problem is that it does not detect the existing merged products in the workdir, and it creates a dataframe with separate products (2, 3; 2, 3... instead of 23; 23...).

To resume from the previous state, it is necessary to run only the part of the sbas.merge_parallel() Python code related to grouping and skip the rest related to processing, because that has already been done before.
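
A minimal sketch of what I mean (hypothetical paths; only the re-initialization is shown, the grouping code itself is omitted here):

from pygmtsar import SBAS

# re-attach to the existing work directory; force=False keeps the processed files
sbas = SBAS('data', dem_filename='topo/dem.nc', basedir='raw', force=False)

# the restored dataframe still lists subswaths 2 and 3 separately instead of '23';
# only the grouping part of merge_parallel() would need to be re-run to fix the
# records, skipping the grid processing that is already on disk
print(sbas.to_dataframe())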

[Bug]: numpy>=1.24 issue

numpy 1.23.5 shows a warning for the CI test S1A_Stack_CPGF_T173.py:

Radar Topography Computing sw1: 100%|██████████| 988/988 [00:32<00:00, 30.82it/s]
<xarray.DataArray 'topo_ra' (y: 2742, x: 10786)>
/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/numpy/core/fromnumeric.py:43: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  result = getattr(asarray(obj), method)(*args, **kwds)
dask.array<getitem, shape=(2742, 10786), dtype=float32, chunksize=(512, 512), chunktype=numpy.ndarray>
Coordinates:
  * y        (y) int32 1 3 5 7 9 11 13 15 ... 5471 5473 5475 5477 5479 5481 5483
  * x        (x) int32 1 3 5 7 9 11 13 ... 21561 21563 21565 21567 21569 215

FileNotFoundError: [WinError 2] The system cannot find the file specified

I am trying to reframe the images for time-series analysis using SBAS, setting the pins to None to take into account the full master image, but an error pops up when calling sbas.reframe_parallel(dates=None, n_jobs=6).
Software used: Spyder.
I am using the following code.
from pygmtsar import SBAS
from pygmtsar import PRM
import numpy as np
import xarray as xr
import os
import glob
WORKDIR = 'D:/output'
DATADIR = 'D:/insar'
file_path = r'D:/insar'
CORRLIMIT = 0.10
DEFOMAX = 0
files = os.listdir(file_path)
print(files)
sbas = SBAS(DATADIR, basedir=WORKDIR)
print (sbas.to_dataframe())
sbas.download_orbits()
print (sbas.to_dataframe())
sbas.set_pins([None,None],[None,None],[None,None])
sbas.reframe_parallel(dates=None, n_jobs=12)
print (sbas.get_pins())
sbas.to_dataframe()

The output is as follows:
sbas.reframe_parallel(dates=None, n_jobs=12)
Reframing: 0%| | 0/6 [00:00<?, ?it/s]
_RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\joblib\externals\loky\process_executor.py", line 428, in _process_worker
r = call_item()
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\joblib\externals\loky\process_executor.py", line 275, in call
return self.fn(*self.args, **self.kwargs)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\joblib_parallel_backends.py", line 620, in call
return self.func(*args, **kwargs)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\joblib\parallel.py", line 288, in call
return [func(*args, **kwargs)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\joblib\parallel.py", line 288, in
return [func(*args, **kwargs)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\pygmtsar\SBAS.py", line 937, in reframe
self.make_s1a_tops(subswath, date, debug=debug)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\site-packages\pygmtsar\SBAS.py", line 1686, in make_s1a_tops
p = subprocess.Popen(argv, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=self.basedir)
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\subprocess.py", line 969, in init
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Arslaan\anaconda3\envs\insar\lib\subprocess.py", line 1438, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified
"""

Screenshot (22)

[Help]: error: too many values to unpack (expected 2)

Hello @mobigroup,
Sorry for another question, but I am getting an error when I use decimation.
I don't understand why the code sometimes works fine with different scenes from the same area of interest, but at other times it fails with the error message I've attached here.

Screenshot (49)
Screenshot (50)

Error in initializing SBAS dataframe

I'm analyzing the 2017_Iran–Iraq_Earthquake example in Google Colab. When initializing the SBAS dataframe, I got the following error.

sbas = SBAS(DATADIR, DEMFILE, basedir=WORKDIR,
            filter_orbit=ORBIT,
            filter_subswath=SUBSWATH,
            filter_polarization=POLARIZATION)

---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

<ipython-input-25-993443c965c9> in <cell line: 1>()
----> 1 sbas = SBAS(DATADIR, DEMFILE, basedir=WORKDIR,
      2             filter_orbit=ORBIT,
      3             filter_subswath=SUBSWATH,
      4             filter_polarization=POLARIZATION)


/usr/local/lib/python3.10/dist-packages/pygmtsar/SBAS.py in __init__(self, datadir, dem_filename, basedir, landmask_filename, filter_orbit, filter_mission, filter_subswath, filter_polarization, force)
    139             [['datetime','orbit','mission','polarization','subswath','datapath','metapath','orbitpath','geometry']]
    140 
--> 141         err, warn = self.validate(df)
    142         #print ('err, warn', err, warn)
    143         assert not err, 'ERROR: Please fix all the issues listed above to continue'

/usr/local/lib/python3.10/dist-packages/pygmtsar/SBAS.py in validate(self, df)
    170             print ('ERROR: Found multiple scenes for a single date from different missions')
    171         # note: df.unique() returns unsorted values so it would be 21 instead of expected 12
--> 172         subswaths = int(''.join(map(str,np.unique(df.subswath))))
    173         if not int(subswaths) in [1, 2, 3, 12, 23, 123]:
    174             error = True

ValueError: invalid literal for int() with base 10: ''

Can anyone tell me how I can fix this error?

[Help]: error in sbas.stack_parallel()

Hi @mobigroup, I am facing this error in the "Align a Pair of Images" step. I tried it both on my local computer and on Colab.
https://colab.research.google.com/drive/1TM1ClRpEX1cIiQCUxXVZj9hfkEwmfvVx?usp=sharing
sbas.stack_parallel()
KeyError Traceback (most recent call last)
Cell In [32], line 1
----> 1 sbas.stack_parallel()

File /usr/local/lib/python3.8/dist-packages/pygmtsar-2022.12.1-py3.8.egg/pygmtsar/SBAS_stack.py:235, in SBAS_stack.stack_parallel(self, dates, n_jobs, **kwargs)
233 # prepare secondary images
234 with self.tqdm_joblib(tqdm(desc='Aligning', total=len(dates)*len(subswaths))) as progress_bar:
--> 235 joblib.Parallel(n_jobs=n_jobs)(joblib.delayed(self.stack_rep)(subswath, date, **kwargs)
236 for date in dates for subswath in subswaths)

File ~/.local/lib/python3.8/site-packages/joblib/parallel.py:1098, in Parallel.__call__(self, iterable)
1095 self._iterating = False
1097 with self._backend.retrieval_context():
-> 1098 self.retrieve()
1099 # Make sure that we get a last message telling us we are done
1100 elapsed_time = time.time() - self._start_time

File ~/.local/lib/python3.8/site-packages/joblib/parallel.py:975, in Parallel.retrieve(self)
973 try:
974 if getattr(self._backend, 'supports_timeout', False):
--> 975 self._output.extend(job.get(timeout=self.timeout))
976 else:
977 self._output.extend(job.get())

File ~/.local/lib/python3.8/site-packages/joblib/_parallel_backends.py:567, in LokyBackend.wrap_future_result(future, timeout)
564 """Wrapper for Future.result to implement the same behaviour as
565 AsyncResults.get from multiprocessing."""
566 try:
--> 567 return future.result(timeout=timeout)
568 except CfTimeoutError as e:
569 raise TimeoutError from e

File /usr/lib/python3.8/concurrent/futures/_base.py:444, in Future.result(self, timeout)
442 raise CancelledError()
443 elif self._state == FINISHED:
--> 444 return self.__get_result()
445 else:
446 raise TimeoutError()

File /usr/lib/python3.8/concurrent/futures/_base.py:389, in Future.__get_result(self)
387 if self._exception:
388 try:
--> 389 raise self._exception
390 finally:
391 # Break a reference cycle with the exception in self._exception
392 self = None

KeyError: "None of [Index(['lon_tie_point'], dtype='object', name='name')] are in the [index]"

[Suggestions]: Some suggestions on the Colab results

I was trying to make some suggestions in another thread (a while ago) but maybe this is a better place for such discussions.

  1. In your Colab page, things look fine before doing SBAS. Then after SBAS, things start to look strange, e.g., below, the time series has a spatial jump that does not exist in the input data. I am not sure how it happened, but it does not look correct, as time-series analysis should not produce something that does not exist in the input data.
    (figure: SBAS time-series result showing the spatial jump)

  2. It is mentioned in the Colab that "Remove trend and apply gaussian filter to fix ionospheric effects and solid Earth's tides." I think I mentioned in prior discussions that this is a dangerous thing to do, but it seems that you didn't buy it. Here are some things to consider. a) C-band is not much affected by the ionosphere compared to L-band data. b) Deformation contains components at all wavelengths, so a simple bandpass may remove deformation as well (see the sketch after this list). c) Effects of solid Earth tides are prominent only when you start to look at large regions and can be removed with a simple calculation rather than detrending/high-pass filtering. d) Atmospheric noise also has components at all wavelengths, and the part below (atmospheric noise produced by gravity waves) remains after high-pass filtering.
    (figure: atmospheric noise produced by gravity waves)

  3. Personally I love to see advanced users making their own processing chains with their own flavors, but if you would like to constantly publish your results under the GMTSAR repository, please make sure your results are produced in a correct way (that's good for both you and us). Otherwise it may mislead users (especially beginners), and I would recommend that you keep it private. If in any case you don't believe my suggestions, feel free to verify the points above with any other InSAR scientists.
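
To illustrate point 2b with synthetic numbers (a toy example of my own, not taken from the notebooks): a Gaussian high-pass removes most of a long-wavelength deformation signal while leaving the short-wavelength noise in place.

import numpy as np
from scipy.ndimage import gaussian_filter

np.random.seed(0)

# long-wavelength "deformation" (~5 cm peak-to-peak, wavelength ~400 pixels)
deformation = 0.025 * np.sin(2 * np.pi * np.arange(500) / 400) * np.ones((500, 1))
# short-wavelength "noise" (~1 cm standard deviation)
noise = 0.01 * np.random.randn(500, 500)
signal = deformation + noise

# detrend / high-pass by subtracting a Gaussian low-pass (sigma = 30 pixels)
highpass = signal - gaussian_filter(signal, sigma=30)

# most of the deformation amplitude is removed together with the long-wavelength noise
print('deformation std before filtering:', deformation.std())
print('deformation std after filtering: ', (highpass - noise).std())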

[Help]: sbas.topo_ra_parallel() taking too long

Processing with 32 CPUs / 128 GB RAM / 2 SSDs in RAID 0 on Jupyter.

3 subswaths from 151 SLCs.

The DEM is set to 15 m resolution.

(screenshots attached)

sbas.topo_ra_parallel() is running very slowly: after about 2 hours it is only at 15%.

(screenshot attached)

[Help]: Fail to export netcdf file even with dask modifications

Describe the problem you met
I've been reading your Yamchi Dam notebooks and adapting my own script accordingly. I have now added the SBAS.pickle file and certain checks to avoid unnecessary reprocessing. However, when running on a large enough dataset, dask seems to fail due to a memory error. I've attempted to export a netCDF with all dates, and when that didn't work I iterated through the dates individually with a simple for-in loop, which gets me further but still fails before completion. Problems start at line 8386 in the log.txt supplied below. I have increased the chunk size, and I am using tqdm_dask with a dask cluster at n_workers=1 and dask.persist instead of dask.compute, since from what I'm reading that works better for large datasets (see the sketch after the attached files below). I've also increased the timeouts to 60 seconds. I'm attempting to process 94 Sentinel-1 SLC IW images (with all three subswaths merged).

Keeping an eye on my memory usage, I do not see it exceeding anything unreasonable (~60%). Specs:

  • 12th Gen Intel® Core™ i7-12700H × 14 (20 threads)
  • 32GB RAM

OS and software version
OS: Linux Mint (Cinnamon 21.1) with the mobigroup/pygmtsar docker image (id: 6cd0e53e4011)

Log file
log.txt
script.txt
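
For clarity, here is a minimal sketch (synthetic data, not my actual script; the file and variable names are placeholders) of the dask setup I mean, with a single worker and persist() before the netCDF export:

import xarray as xr
import dask.array as da
from dask.distributed import Client, LocalCluster

# single-worker cluster so chunks are processed sequentially and memory stays bounded
cluster = LocalCluster(n_workers=1, threads_per_worker=8, memory_limit='24GB')
client = Client(cluster)

# lazy stand-in for a merged displacement stack: 94 dates on a large raster grid
disp = xr.Dataset({
    'displacement': xr.DataArray(
        da.random.random((94, 2000, 2000), chunks=(1, 512, 512)),
        dims=('date', 'y', 'x'))
})

# persist materializes the chunks on the worker instead of pulling the whole
# array into the driver process, which a plain compute() would do before writing
disp = disp.persist()
disp.to_netcdf('disp_stack.nc', engine='netcdf4')

client.close()
cluster.close()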

[Help]: install

Hi dear Aleksei

Is there a manual for downloading the software and learning this technique?

Best regards

[Bug]: Orbit download not working - quick fix for ASF

SciHub POD is down, and the sbas.download_orbits() ASF alternative is not working due to authentication.

Quick fix for sentineleof: add credentials and change the URL in _download_and_write:

def _download_and_write(url, save_dir="."):
    """Wrapper function to run the link downloading in parallel

    Args:
        url (str): url of orbit file to download
        save_dir (str): directory to save the EOF files into

    Returns:
        list[str]: Filenames to which the orbit files have been saved
    """
    fname = os.path.join(save_dir, url.split("/")[-1])
    if os.path.isfile(fname):
        logger.info("%s already exists, skipping download.", url)
        return [fname]

    logger.info("Downloading %s", url)
    # Fix URL
    if 's1qc.asf.alaska.edu' in url:
        url='https://urs.earthdata.nasa.gov/oauth/authorize?response_type=code&client_id=BO_n7nTIlMljdvU6kRRB3g&redirect_uri=https://auth.asf.alaska.edu/login&state='+url+'&app_type=401'
    # Add credentials
    response = requests.get(url, auth=(user, password))
    response.raise_for_status()
    logger.info("Saving to %s", fname)
    with open(fname, "wb") as f:
        f.write(response.content)
    if fname.endswith(".zip"):
        _extract_zip(fname, save_dir=save_dir)
        # Pass the unzipped file ending in ".EOF", not the ".zip"
        fname = fname.replace(".zip", "")
    return fname


Replace user and password with your NASA Earthdata credentials.
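
For reference, a standalone usage sketch of the same workaround outside sentineleof (the credentials and the orbit file name below are placeholders, not real values):

import requests

EDL_USER = 'my_earthdata_user'        # placeholder Earthdata login
EDL_PASS = 'my_earthdata_password'    # placeholder Earthdata password
url = 'https://s1qc.asf.alaska.edu/aux_poeorb/S1A_OPER_AUX_POEORB_EXAMPLE.EOF'  # placeholder orbit link

# wrap the ASF link in the Earthdata authorization URL, as in the patch above
auth_url = ('https://urs.earthdata.nasa.gov/oauth/authorize?response_type=code'
            '&client_id=BO_n7nTIlMljdvU6kRRB3g'
            '&redirect_uri=https://auth.asf.alaska.edu/login'
            '&state=' + url + '&app_type=401')

response = requests.get(auth_url, auth=(EDL_USER, EDL_PASS))
response.raise_for_status()
with open(url.split('/')[-1], 'wb') as f:
    f.write(response.content)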
