slcs-jsc / mptrac

Massive-Parallel Trajectory Calculations (MPTRAC) is a Lagrangian particle dispersion model for the analysis of atmospheric transport processes in the free troposphere and stratosphere.

License: GNU General Public License v3.0

Shell 7.06% Makefile 1.12% C 90.23% Python 1.59%
atmospheric-modelling dispersion-model stratosphere trajectories dispersion atmospheric-science climate climate-science troposphere meteorology

mptrac's People

Contributors

bolarinwasaheed, codacy-badger, fkhosrawi, holke, hydrogencl, janhclem, jonassonnabend, kaveh01, laomangio, lars2015, sabinegri


mptrac's Issues

Performance degradation with Stage 2024 on JUWELS Booster?

Describe the bug

MPTRAC nightly builds (https://datapub.fz-juelich.de/slcs/mptrac/nightly_builds/) show that the runtime of the physics timers on JUWELS Booster significantly increased on 3 Nov 2023, when the new stage 2024 was enabled. Switching back to stage 2023 on 10 Nov reproduced the original runtime:

[Figure: runtime of the physics timers on JUWELS Booster]

Additionally, the new stage causes an issue in the gpu_test sample.tab output:

[Screenshot from 2023-11-10 of the gpu_test sample.tab output]

To Reproduce

Rerun the test case at: /p/fastdata/slmet/slmet111/model_data/mptrac/nightly_builds/juwels-booster/run.sh

Expected behavior

This issue needs further investigation to find the root cause.

Perhaps other changes to the JUWELS Booster system configuration were introduced along with the software stage update?

Build a C to Fortran Interface for the Library

C to Fortran Interface for the Library...

  • Define structs in the interface
  • Define functions in the interface
  • Adapt the Makefile to compile everything correctly ...
  • Implement high-level functions in Fortran to initialize, run and finalize MPTRAC (see the C-side sketch after this list)
  • Write a trac.f90-like Fortran time loop to run a test case...
  • Write tests for the interface / establish workflow that makes interface errors transparent...
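
A minimal sketch of what the C-side entry points could look like, assuming an interface built on plain C types so that Fortran can bind to it via ISO_C_BINDING; all names and signatures are illustrative, not the actual MPTRAC API:

/* Hypothetical C-side entry points for a Fortran interface; the bodies are
   assumed to be implemented in the library (libtrac.c). */
int mptrac_init(const char *ctlfile);          /* read control parameters */
int mptrac_load_met(double t);                 /* read meteo data valid at time t */
int mptrac_run_timestep(double t, double dt);  /* advance all air parcels by dt */
int mptrac_finalize(void);                     /* write output and free resources */

/* Example time loop, corresponding to what a trac.f90 driver would do: */
int example_driver(double t0, double t1, double dt) {
  if (mptrac_init("config.ctl") != 0)
    return 1;
  for (double t = t0; t < t1; t += dt) {
    mptrac_load_met(t);
    mptrac_run_timestep(t, dt);
  }
  return mptrac_finalize();
}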

Extend the possible interpolation methods for different grids (mostly ICON grid data)

We already have interpolation functions and data structures for cartesian grids. Could we extend the interpolation functions to deal with icosahedral grids?

  • Literature research: Can libraries be used for it?
  • What routines can be shared with LaMETTA, YAC, ICON?
  • Implement an icosahedral met grid type.
  • Write a reading function for ico met type.
  • Implement index search functions for the ICON grid.
  • Implement a horizontal interpolation from the ICON grid to air parcel positions (a rough sketch follows this list)...
  • Implement a horizontal interpolation from the ICON grid to the lat-lon support grid, which is required for parameterizations.
  • Formulate a hybrid coordinate with potential temperature in the stratosphere and a non-hydrostatic coordinate in the troposphere.
  • Generalize the vertical coordinate system in MPTRAC.
  • Implement exact interpolations for winds on pressure levels that consider the tilt of the levels against the eta coordinate.
  • Implement the vertical interpolation function and combine it with the horizontal interpolations.
  • Implement a control flow to select between cartesian, icosahedral or any other potential grid geometry.
  • Test the procedures with ICON output data...
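
As a starting point for the horizontal interpolation to air parcel positions, here is a rough sketch of barycentric interpolation on a triangular ICON cell, assuming the index search has already identified the three enclosing vertices; the data layout and function name are illustrative:

/* Illustrative barycentric interpolation of a scalar field on a triangular
   ICON cell, treating the cell as locally planar in (lon, lat). */
double intpol_icon_tri(const double lon[3], const double lat[3],
                       const double val[3], double plon, double plat) {

  /* barycentric weights of the parcel position within the triangle */
  double d = (lat[1] - lat[2]) * (lon[0] - lon[2])
    + (lon[2] - lon[1]) * (lat[0] - lat[2]);
  double w0 = ((lat[1] - lat[2]) * (plon - lon[2])
               + (lon[2] - lon[1]) * (plat - lat[2])) / d;
  double w1 = ((lat[2] - lat[0]) * (plon - lon[2])
               + (lon[0] - lon[2]) * (plat - lat[2])) / d;
  double w2 = 1.0 - w0 - w1;

  /* weighted average of the three vertex values */
  return w0 * val[0] + w1 * val[1] + w2 * val[2];
}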

bug in ERA5 polar trajectories?

  • The CLaMS group identified a bug in polar trajectories using ERA5 data. Air parcels sometimes stick to the pole (see plot below):

[Figure: comparison of polar trajectories]

  • The issue seems to be related to a new interpolation scheme of the meteo data on MARS implemented in 2019.

  • More information can be found in the meteocloud issue tracker: https://jugit.fz-juelich.de/slcs/meteocloud/-/issues/66

  • Identify and further investigate a test case for MPTRAC

  • Add a fix to the meteo data during meteo data preprocessing?

assessment of simulated diffusion

Collection of ideas for assessing the diffusion schemes in MPTRAC:

  • check for dependencies of simulated diffusion on the time step of the model or of the meteo data (see the random-walk sketch after this list)

  • estimates of vertical diffusivities from aircraft measurements (Wagner et al., 2017)

  • look for diffusivity coefficients applied in Eulerian models? literature review?

  • compare diffusion of isentropic and kinematic trajectories

  • look at wind fluctuations from ESA Aeolus measurements: https://earth.esa.int/eogateway/missions/aeolus
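
For reference, a minimal sketch of the random-walk step underlying such diffusion schemes (not MPTRAC's actual implementation): a diffusivity K applied over a time step dt corresponds to Gaussian displacements with standard deviation sqrt(2 K dt), so a correctly implemented scheme should be largely insensitive to the choice of time step:

#include <math.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

/* Illustrative vertical random-walk step: diffusivity K [m^2/s] over time
   step dt [s] gives Gaussian displacements with sigma = sqrt(2*K*dt). */
void diffusion_step(gsl_rng *rng, double *z, size_t np, double K, double dt) {
  double sigma = sqrt(2.0 * K * dt);
  for (size_t ip = 0; ip < np; ip++)
    z[ip] += gsl_ran_gaussian(rng, sigma);
}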

implementation of asynchronous file-I/O

Is your feature request related to a problem?

  • a significant fraction of the simulation runtime is spent reading meteo data (in particular for ERA5 data)

Describe the solution you'd like

  • implement asynchronous file-I/O so that meteo data for the next time interval are read ahead of time while trajectory calculations for the current time interval are conducted in parallel (see the prefetch sketch at the end of this issue)

Describe alternatives you've considered

  • compression of meteo data to reduce waiting time for file-I/O

Additional context

  • Example of runtime measurements of a recent GPU run on JUWELS Booster with 100 million particles and using ERA5 data (packed binary):
TIMER_READ_MET_BIN = 35.306 s    (min= 1.09297 s, mean= 1.41225 s, max= 2.2265 s, n= 25)
TIMER_GROUP_INIT = 0.553 s
TIMER_GROUP_MEMORY = 19.416 s
TIMER_GROUP_INPUT = 36.961 s
TIMER_GROUP_PHYSICS = 89.866 s
TIMER_GROUP_OUTPUT = 32.075 s
TIMER_TOTAL = 178.871 s
  • Reading the ERA5 requires about 20% of the total runtime.
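
A minimal sketch of the prefetching idea using POSIX threads and double buffering; read_met(), run_physics() and the met buffers are placeholders, not the actual MPTRAC code:

#include <pthread.h>

/* Hypothetical prefetch of the next meteo time step in a background thread
   while trajectory calculations for the current interval run on the main thread. */
typedef struct { const char *filename; met_t *met; } prefetch_t;

static void *prefetch_met(void *arg) {
  prefetch_t *p = (prefetch_t *) arg;
  read_met(p->filename, p->met);                     /* assumed meteo reader */
  return NULL;
}

/* ... inside the time loop ... */
pthread_t tid;
prefetch_t job = { "met_next.nc", met_buffer[1] };   /* double buffering */
pthread_create(&tid, NULL, prefetch_met, &job);      /* start reading ahead */
run_physics(met_buffer[0]);                          /* compute current interval */
pthread_join(&tid, NULL);                            /* next meteo data is ready */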

build and make issue

Hi, I have run into some issues when installing MPTRAC on a Linux server.
Describe the bug

  • Sometimes the build script succeeded, but sometimes it failed with the same issue as #4.
[Screenshot: successful build]

[Screenshot: build failure]

  • When the build succeeded, I tried to compile and check the installation, but it failed with the following errors (more details in the attachment). The errors point to issues with the varname and longname sizes, but the check still failed after changing varname[LEN] and longname[LEN] (from 5000 to 5100) in libtrac.h.
[Screenshot: check error messages]

[Screenshot: further error messages]

Environment
I am using a Rocky Linux server.
[Screenshot: environment details]

log_check.txt

Unable to read in variables - ERA5

  • I downloaded surface variables corresponding to this selection - https://apps.ecmwf.int/data-catalogues/era5/?class=ea&stream=oper&expver=1&type=an&year=2005&month=jun&levtype=sfc - at a resolution of 0.3 x 0.3
  • ML level data was also downloaded at the same resolution and the two were merged in Python (as the two files had different GRIB editions and CDO was apparently not able to handle that)
  • Renamed u10 to u10m and modified the attribute GRIB_cfVarName accordingly (as an attempt to address the 'invalid name' issue that followed); similarly for v10m and t2m
  • Linux output/error as follows:
  Read meteo data: data_in/ea_2005_06_20_00.nc
  Read meteo grid information...
  Time: 172540800.00 (2005-06-20, 00:00 UTC)
  Number of longitudes: 434
  Number of latitudes: 217

Error (libtrac.c, read_met_grid, l3578): NetCDF: Invalid dimension ID or name

Could you please help me debug? Thank you.

definition of pressure level sets

Is your feature request related to a problem?

  • Check the definition of pressure level sets in level_definitions() in libtrac.c.

Describe the solution you'd like

  • Add additional pressure levels between 1013.25 and 1044 hPa to achieve finer grid spacing. This might be critical for the proper calculation of meteo variables that depend on near-surface data (e.g. CAPE or PBL); an illustrative extension is sketched below.
  • Compare the selected levels with the previous definition used for the CDO ml2pl operator to convert ECMWF data from model levels to pressure levels.
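
Purely illustrative sketch of what an extended near-surface level set in level_definitions() could look like; the values below are placeholders, not a vetted definition:

/* Illustrative only: additional levels between 1013.25 and 1044 hPa to refine
   the near-surface spacing. */
static const double press_sfc[] = {
  1013.25, 1020.0, 1026.0, 1032.0, 1038.0, 1044.0
};
static const int np_sfc = sizeof(press_sfc) / sizeof(press_sfc[0]);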


bug in netCDF grid output?

Hi Mingzhao,

there is a bug in the netCDF grid output, I think.

When you use strcat() to build the netCDF variable names, it will overwrite the original data of the ctl struct. So the code crashes here with a netCDF error message:

NC_PUT_DOUBLE(strcat(ctl->qnt_name[iq], "_mean"), help, 0);

You need to create a new string to make this work, e.g.

char varname[LEN];
sprintf(varname, "%s_mean", ctl->qnt_name[...]);
NC_PUT_DOUBLE(varname, help, 0); 

Can you please take a look?

Thanks
Lars

improve documentation for new users

Is your feature request related to a problem?

  • new users need more detailed documentation to get started with MPTRAC

Describe the solution you'd like

  • check/revise the README file
  • revise and extend the mkdocs manual, including descriptions of individual apps/tools
  • add/check links to references and other sources of information
  • revise and extend the doxygen manual

Additional ideas

  • implement `-help' flags in the apps (see the sketch after this list)
  • create and add a small logo for MPTRAC to the doxygen and mkdocs manuals
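
A tiny sketch of what a -help flag in the apps could look like; the usage string is a placeholder:

#include <stdio.h>
#include <string.h>

/* Illustrative -help/--help handling for the command line apps. */
int main(int argc, char *argv[]) {
  for (int i = 1; i < argc; i++)
    if (strcmp(argv[i], "-help") == 0 || strcmp(argv[i], "--help") == 0) {
      printf("usage: %s <ctl> [arguments...]\n", argv[0]);
      return 0;
    }
  /* ... existing app code ... */
  return 0;
}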

improve library build script

Is your feature request related to a problem?

  • the library build script (libs/build.sh) can be improved by additional functionality

Describe the solution you'd like

  • add a command line argument so that the number of threads for parallel builds can be selected (`make -j')
  • add command line arguments so that the user can select which libraries to build (to avoid time-consuming builds of GSL or netCDF, if the software is already available)
  • add an option to download the tarballs of the libraries from their respective sources from the web rather than including them in the MPTRAC git repository?
  • perhaps implement this as a Makefile rather than a bash script?

Additional context

  • pull request by Yen-Sen: #14

./build.sh nc4

After running ./build.sh nc4 there are some errors, but ./build.sh nc2 works without problems:

***** 12 FAILURES! *****
Command exited with non-zero status 1
0.34user 0.29system 0:00.62elapsed 103%CPU (0avgtext+0avgdata 4236maxresident)k
0inputs+0outputs (0major+18632minor)pagefaults 0swaps
Makefile:3451: recipe for target 'dt_arith.chkexe_' failed
make[4]: *** [dt_arith.chkexe_] Error 1
make[4]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
Makefile:3436: recipe for target 'build-check-s' failed
make[3]: *** [build-check-s] Error 2
make[3]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
Makefile:3431: recipe for target 'test' failed
make[2]: *** [test] Error 2
make[2]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
Makefile:3207: recipe for target 'check-am' failed
make[1]: *** [check-am] Error 2
make[1]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
Makefile:715: recipe for target 'check-recursive' failed
make: *** [check-recursive] Error 1
root@LAPTOP-9C6L8HRA:/usr/local/meteo/mptrac/libs# cd build/

Adapt mixing module for usage on GPUs

Is your feature request related to a problem?

  • The new mixing module in trac.c is limited to operating on CPUs. For GPU runs, particle data will be transferred from GPU to CPU memory, mixing will be calculated on the CPUs, and data will be transferred back to the GPUs.

Describe the solution you'd like

  • Mixing should be calculated directly on the GPUs, avoiding the data transfers between CPU and GPU memory.

Additional context

  • The code might be rewritten for GPUs, following the example of module_sort().
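
A rough OpenACC sketch of how the grid-box means could be computed and applied directly in device memory, avoiding the host transfers; the array names, the grid-box index ixs[] and the mixing parameter are illustrative, not the actual mixing module:

/* Illustrative OpenACC mixing step: accumulate grid-box sums and counts,
   then relax each parcel quantity toward its grid-box mean on the GPU. */
void mixing_gpu(const int *ixs, double *q, double *boxsum, int *boxn,
                int np, int nbox, double mixparam) {
#pragma acc parallel loop present(boxsum, boxn)
  for (int ib = 0; ib < nbox; ib++) {
    boxsum[ib] = 0;
    boxn[ib] = 0;
  }
#pragma acc parallel loop present(ixs, q, boxsum, boxn)
  for (int ip = 0; ip < np; ip++) {
#pragma acc atomic update
    boxsum[ixs[ip]] += q[ip];
#pragma acc atomic update
    boxn[ixs[ip]]++;
  }
#pragma acc parallel loop present(ixs, q, boxsum, boxn)
  for (int ip = 0; ip < np; ip++) {
    double mean = boxsum[ixs[ip]] / boxn[ixs[ip]];
    q[ip] += mixparam * (mean - q[ip]);   /* relax toward the grid-box mean */
  }
}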

modifications of library build scripts

Suggestions to modify the library build script:

  • Limit the set of libraries built with the default option/selection to those that are mandatory for building the model (i.e. GSL, HDF5, netCDF, and zlib). Optional libraries (thrustsort, zfp, zstd, KPP) should only be compiled if the user selects them. This avoids initial problems when installing MPTRAC if the builds of optional libraries fail. (suggested by @laomangio)

  • Consider integrating the libraries as git submodules rather than adding the source packages to the MPTRAC git repository. (suggested by @hydrogencl)

list of case studies for MPTRAC

Run MPTRAC within the MESSy environment

Use the MPTRAC library to run trajectories as a replacement for the CLaMS trajectory module.

  • Install MESSy/CLaMS and set up a test case with a climatological run.
  • Identify potential "access points" to load functions of the MPTRAC library and to replace code from CLaMS
  • Understand how to compile and use libraries in MESSy
  • Implement converter functions from CLaMS to MPTRAC and vice versa.
  • Implement functions to read the MPTRAC config, either as namelists or by passing the path of the config.ctl file via the CLaMS namelist.
  • Test using the library in MESSy/CLaMS by using read_ctl
  • ... then by using get_met
  • ... and so on ...
  • When all components can be used, run the entire script and intercompare the results with the default run.

update of meteo binary data files

Is your feature request related to a problem?

  • Some additional variables (ul, vl, wl, o3c, ...) have recently been added to the meteo pre-processing code and meteo data structs of MPTRAC. These data should be added to the meteo binary files as they would otherwise be missing during the simulations.

Describe the solution you'd like

  • Double check whether all the variables of the meteo data struct met_t in libtrac.h are considered for input/output in read_met() and write_met() in libtrac.c. Add the missing variables (a version-gated I/O sketch is given below).

Additional context

  • As the binary file format will change, a new version number should be assigned.
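
A minimal sketch of how the new fields could be written with a bumped version number, assuming plain fwrite()-based binary output; the actual I/O code, data types and array dimensions in write_met()/read_met() may differ:

#include <stdio.h>

/* Illustrative only: write the newly added met_t fields (ul, vl, wl, o3c)
   behind a bumped format version so read_met() can stay backward compatible. */
void write_new_fields(FILE *out, const met_t *met, size_t nx, size_t ny, size_t nz) {
  int version = 101;                       /* placeholder for the new version number */
  fwrite(&version, sizeof(int), 1, out);
  fwrite(met->ul, sizeof(float), nx * ny * nz, out);
  fwrite(met->vl, sizeof(float), nx * ny * nz, out);
  fwrite(met->wl, sizeof(float), nx * ny * nz, out);
  fwrite(met->o3c, sizeof(float), nx * ny, out);
}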

make -j4

Using nc2, make -j4 fails with the following errors:

atm_dist.c:47:3: optimized: Inlining printf/47 into main/185 (always_inline).
/usr/bin/ld: cannot find -lhdf5_hl
/usr/bin/ld: cannot find -lhdf5
collect2: error: ld returned 1 exit status
make: *** [Makefile:142: atm_select] Error 1
/usr/bin/ld: cannot find -lhdf5_hl
/usr/bin/ld: cannot find -lhdf5
collect2: error: ld returned 1 exit status
make: *** [Makefile:142: atm_init] Error 1
/usr/bin/ld: cannot find -lhdf5_hl
/usr/bin/ld: cannot find -lhdf5
collect2: error: ld returned 1 exit status
make: *** [Makefile:142: atm_dist] Error 1

Python code examples

Is your feature request related to a problem?

  • Provide some simple examples of Python scripts to help new users work with MPTRAC simulation output.

Describe the solution you'd like

  • Examples should show how to read atm- and grid-files in ASCII file format and make map plots of the data.
  • Put Python code examples in a new directory ./projects/python

Additional context

  • Add some context on python usage to the mkdocs documentation?

Restructure MPTRAC to serve as a library.

MPTRAC routines could be made available to other models (e.g. CLaMS) more easily if MPTRAC is properly organized into a library containing those routines (libtrac.h/libtrac.c) and a time loop (trac.c).

  • Move routine declarations, macros, etc. from trac.c to libtrac.h.
  • Eventually, rename and reorder the routines.
  • Create high-level functions for the initialisation, running and finalization of the library tools.
  • Change the compile process if needed ...
  • Test library in a set-up separate from MPTRAC...

visualization of particle data

Is your feature request related to a problem?

  • looking for different methods for visualization of MPTRAC particle data

Describe the solution you'd like

  • identify different visualization tools suitable for displaying the particle data in more advanced ways (3-D plots, animations, ...)
  • possibly implement new I/O functions in MPTRAC for writing/converting particle data in file formats best suited for the visualization tools

Describe alternatives you've considered

  • particle data output is currently provided in the form of ASCII tables for use with gnuplot
  • particle data can also be written as netCDF files for use with other visualization tools, python, ...
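
A small sketch of the netCDF alternative mentioned above, writing particle positions with the netCDF C API so that Python or other visualization tools can read them directly; variable and dimension names are illustrative:

#include <netcdf.h>

/* Illustrative netCDF output of particle positions for visualization tools. */
int write_particles_nc(const char *path, const double *lon, const double *lat,
                       const double *p, size_t np) {
  int ncid, dimid, vlon, vlat, vp;
  nc_create(path, NC_CLOBBER, &ncid);
  nc_def_dim(ncid, "np", np, &dimid);
  nc_def_var(ncid, "lon", NC_DOUBLE, 1, &dimid, &vlon);
  nc_def_var(ncid, "lat", NC_DOUBLE, 1, &dimid, &vlat);
  nc_def_var(ncid, "press", NC_DOUBLE, 1, &dimid, &vp);
  nc_enddef(ncid);
  nc_put_var_double(ncid, vlon, lon);
  nc_put_var_double(ncid, vlat, lat);
  nc_put_var_double(ncid, vp, p);
  return nc_close(ncid);
}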

EasyBuild installation of MPTRAC

Is your feature request related to a problem?

  • Users might have difficulties compiling MPTRAC before they can run it on Juelich HPC systems.

Describe the solution you'd like

  • Provide an EasyBuild installation of MPTRAC on JURECA and JUWELS so that the model can be used by loading it as a module.
  • Check back with Sabine about EasyBuild configuration.
  • Installation in $USERSOFTWARE of meteocloud data project.

Describe alternatives you've considered

  • Prepare binaries with static compilation that can be copied to/from different machines.

Additional context

  • Think about proper solutions for the automatic deployment of code?

../build.sh nc4 error always

When I changed the Ubuntu version to Ubuntu 20.04.4 LTS under Windows 10,
after ./build.sh nc4 there are also some problems:
***** 12 FAILURES! *****
Command exited with non-zero status 1
0.37user 0.20system 0:00.61elapsed 94%CPU (0avgtext+0avgdata 4200maxresident)k
0inputs+0outputs (0major+18348minor)pagefaults 0swaps
make[4]: *** [Makefile:3451: dt_arith.chkexe_] Error 1
make[4]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
make[3]: *** [Makefile:3437: build-check-s] Error 2
make[3]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
make[2]: *** [Makefile:3431: test] Error 2
make[2]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
make[1]: *** [Makefile:3208: check-am] Error 2
make[1]: Leaving directory '/usr/local/meteo/mptrac/libs/build/src/hdf5-1.12.1/test'
make: *** [Makefile:715: check-recursive] Error 1

Trajectory calculations for the boundary layer

Is your feature request related to a problem?

  • MPTRAC may have limited capabilities for calculating trajectories in the boundary layer due to using pressure as the vertical coordinate. Meteo data are converted from model levels to pressure levels for usage with MPTRAC, which causes interpolation errors, and the pressure coordinate is not terrain-following.
  • More recently, Jan added the option to calculate diabatic trajectories, which provides the option to use meteo data on model levels rather than pressure levels.

Describe the solution you'd like

  • Conduct a comparison of trajectory calculations for the boundary layer (model level versus pressure level meteo data) so that we can get an idea of how large the deviations are.
  • Conduct a comparison with HYSPLIT online trajectories (or other models such as FLEXPART or CLaMS) as a reference.
  • Assess what would be needed to make MPTRAC suitable for boundary layer applications.
