cryotools / cosipy

49 stars · 49 watchers · 29 forks · 254.89 MB

Coupled snowpack and ice surface energy and mass balance model in Python

License: GNU General Public License v3.0

Python 99.63% Shell 0.37%

cosipy's People

Contributors

ansarn, apfelbuzza, emilycollier, gampnico, marcusgastaldello, richteny


cosipy's Issues

Double assignment in radCor.py

First of all: this looks like a really cool project! Thanks for this great effort! :-)

While searching for a way to correct incoming solar radiation for terrain slope and azimuth, I stumbled upon your implementation of it. I didn't test whether it works properly, but I think it has two assignments to the same variable, one of which should be deleted for code clarity:

cosipy/modules/radCor.py

Lines 130 to 134 in 9b8645a

# Correct beam component for angle and azimuth of pixels
cf = (math.cos(zeni) * math.cos(angslo / 180.0 * math.pi) + math.sin(zeni) *
      math.sin(angslo / 180.0 * math.pi) * math.cos(azi - azislo / 180.0 * math.pi)) / math.cos(zeni)
cf = (math.cos(zeni) * math.cos((angslo * math.pi) / 180.0) + math.sin(zeni) *
      math.sin((angslo * math.pi) / 180.0) * math.cos(((azi - azislo) * math.pi) / 180.0)) / math.cos(zeni)

In my humble opinion, the second assignment is the correct one. Let me know if I should make a PR.
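For reference, here is a sketch of the second (correct) conversion rewritten with math.radians, which makes each degree-to-radian conversion explicit. Following the snippet's mixed units, it assumes zeni (solar zenith) is already in radians while angslo, azi and azislo are in degrees; this is an illustration, not the repository's code:

```python
import math

def beam_correction_factor(zeni, angslo, azi, azislo):
    """Beam correction factor on a tilted pixel (second assignment, rewritten).

    Assumes zeni is in radians; angslo (slope), azi (solar azimuth) and
    azislo (slope azimuth) are in degrees, matching the snippet.
    """
    slope = math.radians(angslo)
    d_azi = math.radians(azi - azislo)
    return (math.cos(zeni) * math.cos(slope)
            + math.sin(zeni) * math.sin(slope) * math.cos(d_azi)) / math.cos(zeni)

# Sanity check: a flat pixel (slope = 0) must leave the beam unchanged.
print(beam_correction_factor(0.5, 0.0, 120.0, 90.0))  # -> 1.0
```

A quick way to see why the first assignment is faulty: inside its last cosine, only azislo is divided by 180 and multiplied by pi, so azi is mixed in without conversion.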

Remove the node in the grid

Hello, I come from China and I'm now a graduate student. It's my pleasure to communicate with you here.

In your model, the Grid class has a remove_node method, which directly removes a node when that node's height is less than minimum_snow_layer_height. Would there be any adverse effect if I instead added the removed node's height to the layer below?
I hope for your reply, and please forgive any errors of grammar and spelling.

Thank you very much!
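For what it's worth, the proposal can be sketched independently of the COSIPY Grid API. The tuple representation below is hypothetical, but it shows why merging the thin layer into the layer below (rather than discarding it) conserves mass:

```python
# Layers as (height_m, density_kg_m3) tuples -- a stand-in, not the Grid class.

def merge_thin_layer(layers, idx):
    """Merge layer idx into the layer below it (idx + 1), conserving mass.

    The combined layer keeps the summed height; its density is the
    height-weighted mean, so height * density (mass per unit area) is conserved.
    """
    h1, rho1 = layers[idx]
    h2, rho2 = layers[idx + 1]
    h_new = h1 + h2
    rho_new = (h1 * rho1 + h2 * rho2) / h_new
    return layers[:idx] + [(h_new, rho_new)] + layers[idx + 2:]

layers = [(0.001, 200.0), (0.10, 300.0)]   # a 1 mm layer above a 10 cm layer
merged = merge_thin_layer(layers, 0)       # total mass is unchanged by the merge
```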

Reduce repository size

A git clone of cosipy is about 280 MB, which is a lot for a git repository.

There is some data in the data and docs folders (27 MB and 18 MB), but most of this size is taken up by the .git folder. I think it would be beneficial to the repository to:

  • delete all unused branches
  • if this is not enough, you might want to squash all commits related to data (this is more involved and requires force-pushing to master)

aws2cosipy point model failure

Dear developers,

As I would like to run your model in a point-wise approach, I tried to set
point_model = True
in the aws2cosipyConfig.py file when running the example for the Zhadang glacier.

aws2cosipy.py fails to execute in this way; the full log and error traceback follow:

ERROR: Unable to locate a modulefile for 'intel64'
ERROR: Unable to locate a modulefile for 'netcdf'


Create input

Read input file ../../data/input/Zhadang/Zhadang_ERA5_2009_2018_small.csv
Empty DataFrame
Columns: [T2, PRES, N, U2, RH2, RRR, SNOWFALL, G, LWin]
Index: []
Empty DataFrame
Columns: [T2, PRES, N, U2, RH2, RRR, SNOWFALL, G, LWin]
Index: []
T2 RH2 U2 ... RRR N SNOWFALL
TIMESTAMP ...
2000-01-01 06:00:00 251.091015 60.239344 1.362225 ... 0.000954 0.855011 5.028486e-06
2000-01-01 07:00:00 249.608837 66.086411 1.254843 ... 0.000954 0.688080 3.279448e-06
2000-01-01 08:00:00 250.118542 64.491801 1.196519 ... 0.000477 0.532532 1.749039e-06
2000-01-01 09:00:00 252.241070 61.477761 1.065327 ... 0.000000 0.332764 1.093149e-06
2000-01-01 10:00:00 256.592175 70.192980 0.997920 ... 0.000000 0.093475 2.186298e-07

[5 rows x 8 columns]
Read static file ../../data/static/Zhadang_static.nc

Traceback (most recent call last):
File "aws2cosipy.py", line 626, in
create_1D_input(args.csv_file, args.cosipy_file, args.static_file, args.start_date, args.end_date)
File "aws2cosipy.py", line 85, in create_1D_input
ds = ds.isel(lat=plat,lon=plon,method='nearest', missing_dims="raise")
File "/home/atraxoo/.local/lib/python3.6/site-packages/xarray/core/dataset.py", line 1955, in isel
return self._isel_fancy(indexers, drop=drop, missing_dims=missing_dims)
File "/home/atraxoo/.local/lib/python3.6/site-packages/xarray/core/dataset.py", line 2002, in _isel_fancy
indexers_list = list(self._validate_indexers(indexers, missing_dims))
File "/home/atraxoo/.local/lib/python3.6/site-packages/xarray/core/dataset.py", line 1810, in _validate_indexers
indexers = drop_dims_from_indexers(indexers, self.dims, missing_dims)
File "/home/atraxoo/.local/lib/python3.6/site-packages/xarray/core/utils.py", line 768, in drop_dims_from_indexers
f"dimensions {invalid} do not exist. Expected one or more of {dims}"
ValueError: dimensions {'method'} do not exist. Expected one or more of Frozen(SortedKeysDict({'lon': 13, 'lat': 7}))

I am running the script on Ubuntu with Python 3.6.9.

Does anyone have an idea how to fix this?

All the best,
Manuel Theurl
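The traceback points at the cause: method='nearest' is being passed to Dataset.isel, which takes integer indexers and has no method argument; nearest-neighbour lookup belongs to the label-based Dataset.sel. A sketch of what that lookup reduces to (plat is the point latitude from the config):

```python
# Likely fix in aws2cosipy.py: ds.sel(lat=plat, lon=plon, method="nearest").
# The nearest-neighbour lookup that sel performs boils down to picking the
# index of the closest coordinate value, which isel could then use directly:

def nearest_index(coords, value):
    """Index of the coordinate closest to value (what method='nearest' does)."""
    return min(range(len(coords)), key=lambda i: abs(coords[i] - value))

lats = [30.0, 30.25, 30.5, 30.75]
print(nearest_index(lats, 30.4))  # -> 2, since 30.5 is the closest latitude
```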

Output variables at reduced frequency

Sub-surface variables can quickly reach a very high output volume in gridded runs: a 10-year hourly simulation on a 20 x 20 grid with 200 layers produces more than 7 billion simulated values for each sub-surface variable.

At the same time, sub-surface variables tend to evolve less quickly than surface variables (for example, firn temperatures at 10 m depth have annual oscillations usually not exceeding +/- 1 °C; density at depth also changes very slowly).

Thus, it would be very convenient to have the option to change the output frequency of sub-surface variables, for example writing a value to the output file only every N time steps.
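The idea can be sketched in a few lines; write_subsurface is a hypothetical callback standing in for the model's output routine:

```python
def run_steps(n_steps, output_every, write_subsurface):
    """Advance the model n_steps times, writing sub-surface output every Nth step."""
    written = []
    for t in range(n_steps):
        # ... advance the model one time step here ...
        if t % output_every == 0:
            write_subsurface(t)   # only every Nth step reaches the output file
            written.append(t)
    return written

steps = run_steps(n_steps=24, output_every=6, write_subsurface=lambda t: None)
print(steps)  # -> [0, 6, 12, 18]
```

With output_every=6 on hourly data, the sub-surface output volume drops by a factor of six while surface variables can keep their full frequency.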

Dynamic tolerance for layer merging

For some model runs (especially on systems with limited memory or compute power) it can be hard to find a proper value for max_layers.

It would be very convenient if the model, upon reaching the max_layers threshold, could increase the merging thresholds (with a warning), so as to remain within the max_layers limit and still finish the simulation run.
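A rough sketch of that behaviour (names hypothetical; COSIPY's actual remeshing is more involved): keep merging the thinnest adjacent pair of layers, relaxing the threshold with a warning whenever no pair is thin enough to merge.

```python
import warnings

def enforce_max_layers(layer_heights, max_layers, merge_threshold, factor=2.0):
    """Merge adjacent layers until the column fits within max_layers.

    If no adjacent pair is below merge_threshold, the threshold is doubled
    (with a warning) so the run can still finish.
    """
    heights = list(layer_heights)
    while len(heights) > max_layers:
        # find the adjacent pair with the smallest combined height
        i = min(range(len(heights) - 1), key=lambda k: heights[k] + heights[k + 1])
        if heights[i] + heights[i + 1] > merge_threshold:
            merge_threshold *= factor
            warnings.warn(f"merge threshold relaxed to {merge_threshold}")
            continue
        heights[i:i + 2] = [heights[i] + heights[i + 1]]   # merge conserves height
    return heights, merge_threshold

heights, threshold = enforce_max_layers([0.1] * 10, max_layers=5, merge_threshold=0.05)
```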

Error when running example data

Hi team,

I'm using COSIPY on my Windows laptop for a point energy balance of a glacier for my Master's dissertation, but I have run into an error with the example data which I suspect is related to a port issue. I believe a similar issue occurs when attempting to run COSIPY on my university's computers, which restrict port features for security purposes. The console repeatedly prints:

Traceback (most recent call last):
File "C:\Users\Henry\anaconda3\envs\runCOSIPY\lib\site-packages\distributed\worker.py", line 1200, in get_data
response = await comm.read(deserializers=serializers)
File "C:\Users\Henry\anaconda3\envs\runCOSIPY\lib\site-packages\distributed\comm\tcp.py", line 204, in read
convert_stream_closed_error(self, e)
File "C:\Users\Henry\anaconda3\envs\runCOSIPY\lib\site-packages\distributed\comm\tcp.py", line 132, in convert_stream_closed_error
raise CommClosedError("in %s: %s" % (obj, exc))
distributed.comm.core.CommClosedError: in : Stream is closed
distributed.core - INFO - Lost connection to 'tcp://127.0.0.1:50399': in : Stream is closed
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:50371
distributed.nanny - INFO - Worker closed
distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:50374 -> None
Traceback (most recent call last):
File "C:\Users\Henry\anaconda3\envs\runCOSIPY\lib\site-packages\distributed\comm\tcp.py", line 184, in read
n_frames = await stream.read_bytes(8)
tornado.iostream.StreamClosedError: Stream is closed

During handling of the above exception, another exception occurred:

Any guidance or advice on what the error could be would be hugely appreciated.

Refreezing does not conserve column water content

Issue:
Refreezing does not conserve column water content when $\theta_{i} + \Delta\theta_{i} > 1$.

In the following snippet in refreezing.py:

theta_r = 0.0
[...]

if ((GRID.get_node_ice_fraction(idxNode) + dtheta_i + theta_r) >= 1.0):
    GRID.set_node_liquid_water_content(idxNode, theta_r)
    GRID.set_node_ice_fraction(idxNode, 1.0)
else:
    [...]

So when $\theta_{i} + \Delta\theta_{i}+\theta_{r} \geq 1$, the new liquid water content $\theta_{w}$ is set to $\theta_{r} = 0.0$, and the new ice fraction $\theta_{i} = 1.0$.

However, this means water is not conserved, as the excess ice is discarded; the layer therefore regains less mass from refreezing than it should.

This also leads to an overestimate for the refreeze parameter ($\theta_{i}h$) and an underestimate for the returned refrozen water ($\sum{-\Delta\theta_{w}z}$).

Possible Solutions:

  • Thicken the layer height in response to the excess ice: this may create a discrepancy with subsurface melt.
  • Convert excess ice back into liquid water, and percolate in the next timestep: this lets liquid water exist below freezing temperatures and requires yet another fix if the new $\theta_{w} \geq 1$.
  • Refreeze during percolation (Wheler & Flowers, 2011; Vionnet et al., 2012; etc.): this forces percolation.py to depend on refreezing.py.
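As a sketch of the second option (not the COSIPY implementation; the density constants are assumptions): refreeze only as much ice as the pore space allows and keep the remainder liquid. This conserves the layer's water mass, but, as noted above, it leaves liquid water below freezing temperatures to be handled in a later time step.

```python
ICE_RHO, WATER_RHO = 917.0, 1000.0   # assumed density constants (kg m-3)

def clamp_refreeze(theta_i, theta_w, dtheta_i):
    """Refreeze at most the available pore space; the excess stays liquid.

    Conserves the layer's total water mass per unit height:
    theta_i * ICE_RHO + theta_w * WATER_RHO is unchanged.
    """
    allowed = min(dtheta_i, 1.0 - theta_i)                   # ice fraction capped at 1
    theta_i_new = theta_i + allowed
    theta_w_new = theta_w - allowed * ICE_RHO / WATER_RHO    # water consumed by refreezing
    return theta_i_new, theta_w_new

theta_i_new, theta_w_new = clamp_refreeze(theta_i=0.95, theta_w=0.2, dtheta_i=0.1)
# Only 0.05 of the requested 0.1 refreezes; the rest remains as liquid water.
```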

Update ReadMe.md

The README.md needs to be updated to reflect the latest developments, e.g. regarding the structure of the project, input data, and roles of people.

Cosipy for windows

Hello team, I want to use COSIPY on my Windows system.
Is it possible, or does this package only work on Linux/Ubuntu/macOS?

How to use the CSV data to run the model

First, thank you very much for your model; I'm studying it.
To be honest, NetCDF is a good format for storing data, but I don't know how to convert my meteorological data to a NetCDF file.
There is a meteorological CSV file in the model directory, so I want to know how to use the CSV data to run this model.
Hoping for your answer.
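For what it's worth, the aws2cosipy utility in this repository performs exactly this conversion; the bare-bones version with pandas and xarray looks roughly like this (the column names here are only examples):

```python
import io
import pandas as pd

csv_text = (
    "TIMESTAMP,T2,RH2\n"
    "2009-01-01 00:00,265.1,60.0\n"
    "2009-01-01 01:00,264.8,62.5\n"
)
# Parse the timestamp column and make it the index ...
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["TIMESTAMP"], index_col="TIMESTAMP")
# ... so that it becomes the time dimension of the xarray Dataset:
ds = df.to_xarray()
# NETCDF3_CLASSIC can be written by either the scipy or the netCDF4 backend:
ds.to_netcdf("meteo.nc", format="NETCDF3_CLASSIC")
```

In practice you would replace io.StringIO(csv_text) with the path to your own CSV file.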

Read config file with configparser

Currently, the config file is a plain Python file. For security reasons, it would be better to read it with the configparser module.
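A minimal sketch with the standard-library configparser; the section and option names below are only examples, not COSIPY's actual settings:

```python
import configparser

config_text = """
[SIMULATION]
time_start = 2009-01-01T06:00
max_layers = 200
point_model = yes
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)          # for a file on disk: cfg.read("config.ini")
max_layers = cfg.getint("SIMULATION", "max_layers")        # typed accessors
point_model = cfg.getboolean("SIMULATION", "point_model")
print(max_layers, point_model)  # -> 200 True
```

Unlike importing a Python config module, parsing an INI file cannot execute arbitrary code.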

Error when running COSIPY

I was running the model on my own computer and encountered the following problem when simulating the energy and mass balance at a single point. At first I suspected the installed package versions, but after reinstalling, the problem persisted. Here is the error message:

2023-11-18 18:08:51,849 - distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
2023-11-18 18:08:51,852 - distributed.scheduler - INFO - State start
2023-11-18 18:08:51,868 - distributed.scheduler - INFO - Clear task state
2023-11-18 18:08:51,869 - distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:8786
2023-11-18 18:08:51,869 - distributed.scheduler - INFO - dashboard at: 127.0.0.1:8787
C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\nanny.py:147: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
2023-11-18 18:08:52,023 - distributed.scheduler - INFO - Scheduler closing...
2023-11-18 18:08:52,024 - distributed.scheduler - INFO - Scheduler closing all comms
Traceback (most recent call last):
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\deploy\spec.py", line 262, in init
self.sync(self._correct_state)
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 320, in sync
return sync(
^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 387, in sync
raise exc.with_traceback(tb)
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 360, in f
result = yield future
^^^^^^^^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\tornado\gen.py", line 762, in run
value = future.result()
^^^^^^^^^^^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\deploy\spec.py", line 351, in _correct_state_internal
await asyncio.wait(workers)
File "E:\Anaconda\Lib\asyncio\tasks.py", line 418, in wait
return await _wait(fs, timeout, return_when, loop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Anaconda\Lib\asyncio\tasks.py", line 522, in _wait
f.add_done_callback(_on_completion)
^^^^^^^^^^^^^^^^^^^
AttributeError: 'Nanny' object has no attribute 'add_done_callback'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\xie\Desktop\cosipy-master\cosipy-master\COSIPY.py", line 366, in
main()
File "C:\Users\xie\Desktop\cosipy-master\cosipy-master\COSIPY.py", line 88, in main
with LocalCluster(scheduler_port=local_port, n_workers=workers, local_dir='logs/dask-worker-space', threads_per_worker=1, silence_logs=True) as cluster:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\deploy\local.py", line 236, in init
super().init(
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\deploy\spec.py", line 264, in init
self.sync(self.close)
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 320, in sync
return sync(
^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 387, in sync
raise exc.with_traceback(tb)
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\utils.py", line 360, in f
result = yield future
^^^^^^^^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\tornado\gen.py", line 762, in run
value = future.result()
^^^^^^^^^^^^^^^
File "C:\Users\xie\AppData\Roaming\Python\Python311\site-packages\distributed\deploy\spec.py", line 421, in _close
assert w.status in {
^^^^^^^^^^^^^
AssertionError: Status.init

Here is the version information of the installed packages:
dask 2022.2.0
dask-jobqueue 0.7.3
distributed 2022.2.0

Any advice and help is much appreciated!

COSIPY tutorials

Hi COSIPY devs,

@linznicholson and I have just been given some money to hire students to work on educational material for our lectures. Our aim is to use a similar strategy to http://edu.oggm.org and develop COSIPY tutorials / exercises in a similar fashion. Where / how these notebooks will be hosted we don't know yet, but of course we will keep you in the loop about it.

We plan to start by November (ish). With this issue we just wanted to touch base and let you know about it. What's the best way to communicate with you? Slack? Would you mind adding Lindsey and me (and, later, our student(s)) to your discussion channels?

Thanks a lot!

Fabien

CF-Convention

Reorder the output-file dimensions from lat/lon/time to time/lat/lon, as recommended by the CF Conventions.
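In xarray this is a single transpose on the output dataset before writing; a minimal sketch with dummy data:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"T2": (("lat", "lon", "time"), np.zeros((2, 3, 4)))},   # current order
    coords={"lat": [30.0, 30.25], "lon": [90.0, 90.25, 90.5], "time": range(4)},
)
ds = ds.transpose("time", "lat", "lon")   # CF-preferred T, Y, X order
print(ds["T2"].dims)  # -> ('time', 'lat', 'lon')
```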

TypeError when running COSIPY on the example files

When running COSIPY on the Zhadang example in a newly built Anaconda environment, we get the error message below. Note that we did manage to install and run COSIPY on previous Windows installs of Anaconda. Could this be a problem with the version of xarray or pandas?

Traceback (most recent call last):

File "D:\cosipy-master\cosipy\COSIPY.py", line 366, in
main()

File "D:\cosipy-master\cosipy\COSIPY.py", line 62, in main
RESULT = IO.create_result_file()

File "D:\cosipy-master\cosipy\cosipy\cpkernel\io.py", line 96, in create_result_file
self.init_result_dataset()

File "D:\cosipy-master\cosipy\cosipy\cpkernel\io.py", line 305, in init_result_dataset
self.add_variable_along_latlon(self.RESULT, self.DATA.HGT, 'HGT', 'm', 'Elevation')

File "D:\cosipy-master\cosipy\cosipy\cpkernel\io.py", line 781, in add_variable_along_latlon
ds[name] = ((northing,easting), var)

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\dataset.py", line 1563, in setitem
self.update({key: value})

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\dataset.py", line 4208, in update
merge_result = dataset_update_method(self, other)

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\merge.py", line 984, in dataset_update_method
return merge_core(

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\merge.py", line 632, in merge_core
collected = collect_variables_and_indexes(aligned)

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\merge.py", line 294, in collect_variables_and_indexes
variable = as_variable(variable, name=name)

File "C:\Users\khadkaa\anaconda3\envs\cosipy\lib\site-packages\xarray\core\variable.py", line 121, in as_variable
raise TypeError(

TypeError: Using a DataArray object to construct a variable is ambiguous, please extract the data using the .data property.
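The final TypeError states the fix directly: extract the raw array before building the variable. In add_variable_along_latlon that would mean passing var.data (or .values) instead of the DataArray itself. A minimal reproduction of the fixed assignment:

```python
import numpy as np
import xarray as xr

hgt = xr.DataArray(np.ones((2, 3)), dims=("lat", "lon"))   # stands in for DATA.HGT
ds = xr.Dataset(coords={"lat": [0, 1], "lon": [0, 1, 2]})

# ds["HGT"] = (("lat", "lon"), hgt) raises the TypeError above in recent xarray;
# extracting the underlying numpy array is unambiguous:
ds["HGT"] = (("lat", "lon"), hgt.data)
print(ds["HGT"].shape)  # -> (2, 3)
```

Older xarray versions silently accepted the DataArray, which would explain why the previous installs worked.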

IndexError: list index out of range

I am trying to run COSIPY using my own NetCDF file, but while executing I get this error:

Traceback (most recent call last): ] | 0% Completed | 0.1s
File "e:/Code/GITHUB/main/GUI.py", line 1199, in run_Code
CosipyOutput(date_start= self.StartDate_edit.dateTime().toPyDateTime(),
File "e:\Code\GITHUB\main\CosipyEvaluation.py", line 338, in CosipyOutput
run_cosipy(cluster, IO, DATA, RESULT, RESTART, futures)
File "e:\Code\GITHUB\main\COSIPY.py", line 165, in run_cosipy
LAYER_IRREDUCIBLE_WATER,LAYER_REFREEZE,stake_names,stat,df_eval = future.result()
File "C:\Users\Sarthak\anaconda3\envs\dissertation\lib\site-packages\distributed\client.py", line 220, in result
raise exc.with_traceback(tb)
File "e:\Code\GITHUB\main\cosipy\cpkernel\cosipy_core.py", line 280, in cosipy_core
lwc_from_melted_layers = GRID.remove_melt_weq(melt - sublimation - deposition)
File "e:\Code\GITHUB\main\cosipy\cpkernel\grid.py", line 513, in remove_melt_weq
SWE = self.get_node_height(idx) * (self.get_node_density(idx)/water_density)
File "e:\Code\GITHUB\main\cosipy\cpkernel\grid.py", line 705, in get_node_height
return self.grid[idx].get_layer_height()
IndexError: list index out of range

AttributeError: 'Client' object has no attribute '_periodic_callbacks'

Hi all,
We cannot run the Zhadang example on three different computers, with Ubuntu (18.04-18.10) and Manjaro (18.1), on Python 3.6 and 3.7.4.

I attach the error when running COSIPY.py.

Any advice is helpful.
Thanks for your work.
Greetings from Chile


 Maximum available time interval from 2009-01-01T00:00 until 2009-12-31T23:00. Time steps: 8760 

Integration from 2009-01-01T06:00 to 2009-01-10T00:00

Checking input data .... 

Temperature data (T2) ... ok 
Please check the input data, its seems they are out of range T2 MAX: 262.77 MIN: 236.27 

Relative humidity data (RH2) ... ok 
Shortwave data (G) ... ok 
Wind velocity data (U2) ... ok 
Precipitation data (RRR) ... ok 
Cloud cover data (N) ... ok 
Pressure data (PRES) ... ok 
Snowfall data (SNOWFALL) ... ok 

 Glacier gridpoints: 16 




Output dataset ... ok
Restart ddataset ... ok 

--------------------------------------------------------------

distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:37895
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:37895
distributed.worker - INFO - Waiting to connect to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                    4.17 GB
distributed.worker - INFO -       Local Directory: /home/marquez/cosipy/dask-worker-space/worker-0tsy4zbr
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:44047
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:44047
distributed.worker - INFO - Waiting to connect to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                    4.17 GB
distributed.worker - INFO -       Local Directory: /home/marquez/cosipy/dask-worker-space/worker-mst64rfw
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:38171
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:38171
distributed.worker - INFO - Waiting to connect to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                    4.17 GB
distributed.worker - INFO -       Local Directory: /home/marquez/cosipy/dask-worker-space/worker-3cthrjcz
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:34015
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:34015
distributed.worker - INFO - Waiting to connect to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                    4.17 GB
distributed.worker - INFO -       Local Directory: /home/marquez/cosipy/dask-worker-space/worker-dqdhsk_8
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:       tcp://127.0.0.1:8786
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
LocalCluster('tcp://127.0.0.1:8786', workers=4, threads=4, memory=16.69 GB)
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34015
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44047
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37895
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38171
Traceback (most recent call last):
  File "COSIPY.py", line 345, in <module>
    main()
  File "COSIPY.py", line 90, in main
    run_cosipy(cluster, IO, DATA, RESULT, RESTART, futures)
  File "COSIPY.py", line 145, in run_cosipy
    with Client(cluster,processes=False) as client:
  File "/usr/lib/python3.7/site-packages/distributed/client.py", line 647, in __init__
    "Unexpected keyword arguments: {}".format(str(sorted(kwargs)))
ValueError: Unexpected keyword arguments: ['processes']
Exception ignored in: <function Client.__del__ at 0x7f570f363f80>
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/distributed/client.py", line 1118, in __del__
  File "/usr/lib/python3.7/site-packages/distributed/client.py", line 1323, in close
AttributeError: 'Client' object has no attribute '_periodic_callbacks'


Create initial 2D snowfield

The snow field is currently initialized homogeneously with a constant snow height. In real cases it might be useful to read the 2D snow field directly from the input NetCDF file. Therefore, the input-file reader needs to be extended to read the snow field, and the grid must be initialized accordingly.
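A sketch of the proposed extension (the SNOWHEIGHT variable name and the default value are only examples): use the 2D field from the input file if present, otherwise fall back to the constant initialization.

```python
import numpy as np
import xarray as xr

def initial_snowheight_field(ds, default=0.2):
    """2D initial snow height: from the input file if available, else constant."""
    if "SNOWHEIGHT" in ds:
        return ds["SNOWHEIGHT"].values
    return np.full((ds.sizes["lat"], ds.sizes["lon"]), default)

ds = xr.Dataset(coords={"lat": [30.0, 30.25], "lon": [90.0, 90.25, 90.5]})
field = initial_snowheight_field(ds)   # no SNOWHEIGHT variable -> constant fallback
print(field.shape)  # -> (2, 3)
```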

Geothermal heat flux

Until now, the temperature is prescribed at the bottom of the domain (a Dirichlet condition). This should be changed to a Neumann condition (prescribed heat flux) to ensure correct results even with low ice thickness.
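The change can be sketched for an explicit 1D conduction step (all names and values are hypothetical, not COSIPY's solver): a ghost node below the column carries the prescribed geothermal flux Q_g instead of a fixed bottom temperature.

```python
def step_temperature(T, dz, dt, kappa, k, Q_g):
    """One explicit step of dT/dt = kappa * d2T/dz2 with a Neumann bottom condition.

    The surface node T[0] stays prescribed; the bottom node uses a ghost node
    chosen so that the conductive flux from below equals Q_g (W m-2).
    """
    c = kappa * dt / dz ** 2
    T_new = list(T)
    for i in range(1, len(T) - 1):
        T_new[i] = T[i] + c * (T[i + 1] - 2 * T[i] + T[i - 1])
    ghost = T[-1] + Q_g * dz / k          # ghost node encodes the prescribed flux
    T_new[-1] = T[-1] + c * (ghost - 2 * T[-1] + T[-2])
    return T_new

# Zero geothermal flux and a uniform column is a steady state:
T = step_temperature([270.0] * 5, dz=1.0, dt=3600.0, kappa=1.2e-6, k=2.1, Q_g=0.0)
print(T == [270.0] * 5)  # -> True
```

With a positive Q_g the bottom node warms from below, independent of the ice thickness, which is the point of the Neumann formulation.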

Evaluation Tool

  • Compare results with ablation stakes (file with stake information)
  • Ice temperatures

Required hotfix for radiation module Wohlfahrt2016 in aws2cosipy utility

With the newest version there is a mistake in the aws2cosipy utility when running the radiation module after Wohlfahrt et al. (2016):

The variable radiationModule has the options Wohlfahrt2016, Moelg2009, or none, and corresponds to the specified routine. However, when invoking Wohlfahrt2016, a faulty if statement leads to constant radiation values over the whole glacier domain.

See the code from lines 460 to 480 in aws2cosipy.py: the inner branch still tests radiationModule == 'old', which is never true there.

if radiationModule == 'Wohlfahrt2016':

    print('Run the Radiation Module Wohlfahrt2016')
    # Change aspect to south==0, east==negative, west==positive
    aspect = ds['ASPECT'].values - 180.0
    ds['ASPECT'] = (('lat', 'lon'), aspect)

    for t in range(len(dso.time)):
        doy = df.index[t].dayofyear
        hour = df.index[t].hour
        for i in range(len(ds.lat)):
            for j in range(len(ds.lon)):
                if (mask[i, j] == 1):
                    if radiationModule == 'old':   # <-- faulty check: never true inside this branch
                        G_interp[t, i, j] = np.maximum(0.0, correctRadiation(lats[i], lons[j], timezone_lon, doy, hour, slope[i, j], aspect[i, j], sw[t], zeni_thld))
                    else:
                        G_interp[t, i, j] = sw[t]   # so every glacier grid point just gets sw[t]
