
rainfa's Introduction

Rainfall Frequency Analysis (RainFA) package

Description

RainFA implements a methodology, programmed in Python, for data curation and frequency analysis of rainfall series. It allows: (1) homogenization of the temporal step; (2) data quality control; (3) delineation of homogeneous regions using the L-moments method, starting from a prior clustering process supported by the Silhouette width and Mantel statistics as well as Ward's dendrogram; and (4) calculation of the L-moment ratios of candidate homogeneous regions and their assessment with the Discordancy and Heterogeneity measures.

Further work will focus on extending the methodology to improve the geospatial and temporal distribution of precipitation in homogeneous regions by incorporating new tools into the RainFA gallery.

How it works

Inputs

  • Station data (.csv): a single file including, for each station, the station ID, station name and WGS84 coordinates (latitude, longitude, elevation).
  • Precipitation time series (.csv): a separate file for each gauge station including the station ID, date-time and instantaneous rainfall (IR) in mm. A minimal loading sketch follows this list.
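
A minimal sketch, assuming hypothetical file names (stations.csv, station_8025.csv) and column labels that are not prescribed by RainFA, of how these two inputs could be loaded with pandas:

```python
# Sketch only: file names and column labels below are assumptions, not RainFA requirements.
import pandas as pd

# Station data: one row per gauge station
stations = pd.read_csv(
    "stations.csv",                      # hypothetical file name
    dtype={"station_id": str},
)  # assumed columns: station_id, station_name, latitude, longitude, elevation

# Precipitation time series: a separate file per station
series = pd.read_csv(
    "station_8025.csv",                  # hypothetical file name
    parse_dates=["datetime"],
    dtype={"station_id": str},
)  # assumed columns: station_id, datetime, ir_mm (instantaneous rainfall in mm)
series = series.set_index("datetime").sort_index()
```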

Calculations

Firstly, data curation and trend testing help to clean the rainfall time series:

  • Precipitation time-series database (.parquet): database with the date-time at a homogenized time step and the instantaneous rainfall in mm.
  • Boxplot graphs (.ipynb): Jupyter notebook for the calculation and graphical representation of single station boxplots.
  • Double-mass graphs (.ipynb): Jupyter notebook for the calculation and graphical representation of single station double-mass graphs against average rainfall data.
  • Mann-Kendall test (.ipynb): Jupyter notebook for the calculation of the Mann-Kendall trend test and associated statistics to assess the stationarity of the data (a minimal sketch follows this list).
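
As an illustration of the last item, a minimal sketch of a Mann-Kendall stationarity check on annual totals, assuming the pymannkendall package and the hypothetical file and column names introduced above (this is not the package's own notebook):

```python
# Sketch only: the parquet file name and column names are assumptions.
import pandas as pd
import pymannkendall as mk

db = pd.read_parquet("rainfall_db.parquet")                  # hypothetical homogenized database
station = db[db["station_id"] == "8025"]                     # one gauge station
annual = station.set_index("datetime")["ir_mm"].resample("YS").sum()

result = mk.original_test(annual)                            # non-parametric Mann-Kendall test
print(result.trend, result.p, result.z)                      # trend label, p-value, Z statistic
```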

Next, cluster classification is supported by the graphs listed after the sketch below. For this purpose, the station data file is supplemented with two rainfall statistics summarised from the precipitation time series file, namely the annual rainfall and the number of days with rainfall above 0.2 mm.
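
A minimal sketch, under the same assumed file and column names, of computing these two features and attaching them to the station data:

```python
# Sketch only: file and column names are assumptions, not RainFA requirements.
import pandas as pd

db = pd.read_parquet("rainfall_db.parquet")                  # hypothetical homogenized database
daily = (
    db.set_index("datetime")
    .groupby("station_id")["ir_mm"]
    .resample("D")
    .sum()
)                                                            # daily totals (station_id, datetime)

by_year = ["station_id", pd.Grouper(level="datetime", freq="YS")]
annual_rain = daily.groupby(by_year).sum()                   # annual rainfall per station and year
wet_days = (daily > 0.2).groupby(by_year).sum()              # days above 0.2 mm per station and year

summary = pd.DataFrame({
    "annual_rainfall_mm": annual_rain.groupby("station_id").mean(),
    "wet_days_per_year": wet_days.groupby("station_id").mean(),
})

stations = pd.read_csv("stations.csv", dtype={"station_id": str})
stations = stations.merge(summary, left_on="station_id", right_index=True)
```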

  • Mantel graphs (.ipynb): Jupyter notebook for the calculation and graphical representation of the Mantel graph clustering method.
  • Silhouette width graphs (.ipynb): Jupyter notebook for the calculation and graphical representation of the Silhouette width clustering method.
  • Ward's dendrogram graph (.ipynb): Jupyter notebook for the calculation and graphical representation of Ward's dendrogram (a minimal sketch follows this list).
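
Continuing from the feature table sketched above, silhouette widths and Ward's dendrogram can be computed with scikit-learn and SciPy. A minimal sketch, not the package's own notebooks, with assumed feature columns:

```python
# Sketch only: the feature columns are assumptions; 'stations' is the table built above.
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

features = ["latitude", "longitude", "elevation", "annual_rainfall_mm", "wet_days_per_year"]
X = StandardScaler().fit_transform(stations[features])       # standardize before clustering

# Silhouette width for a range of candidate numbers of clusters
for k in range(2, 8):
    labels = AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))

# Ward's dendrogram
Z = linkage(X, method="ward")
dendrogram(Z, labels=stations["station_id"].to_numpy())
plt.show()
```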

Finally, L-moment ratios allow the homogeneity of a candidate region to be assessed:

  • L-moments discordancy measure (.ipynb): Jupyter notebook for the calculation of the Discordancy measure with L-moments (a minimal sketch follows this list).
  • L-moments heterogeneity measure (.ipynb): Jupyter notebook for the calculation of the Heterogeneity measure with L-moments.
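
As an illustration of the first item, the Hosking and Wallis discordancy measure can be written compactly with NumPy. A minimal sketch, not the package's own notebook, assuming per-station arrays of annual maxima and using lmoments3 only to obtain the sample L-moment ratios:

```python
# Sketch only: inputs are assumed per-station arrays of annual maxima.
import numpy as np
import lmoments3 as lm

def station_ratios(series):
    """Sample L-CV, L-skewness and L-kurtosis of one station's series."""
    l1, l2, t3, t4 = lm.lmom_ratios(series, nmom=4)
    return l2 / l1, t3, t4

def discordancy(U):
    """Hosking-Wallis discordancy D_i for an (N, 3) array of station L-moment ratios."""
    N = U.shape[0]
    diff = U - U.mean(axis=0)                        # u_i - u_bar
    A = diff.T @ diff                                # 3x3 cross-product matrix
    return (N / 3.0) * np.einsum("ij,jk,ik->i", diff, np.linalg.inv(A), diff)

# Hypothetical usage: annual_maxima is a dict of station_id -> 1-D array of annual maxima
# U = np.array([station_ratios(x) for x in annual_maxima.values()])
# D = discordancy(U)                                 # compare each D_i against the critical value
```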

Case study

The package contains the information needed to reproduce an example run of the different subroutines for a hypothetical region of Spain.

Authors

Acknowledgments

The authors acknowledge Vicente M. Candela Canales for supporting the R&D investment and programs within the Vielca companies.


rainfa's Issues

Trimmed L-moments and `lmo`

I was just notified of your publication by Google Scholar; great work!

If you're still planning on further research, I'd like to point out the Lmo package, of which I am the author.
Unlike lmoments3, it has support for generalized trimmed L-moments (i.e. $\lambda_r^{(s, t)}$ with $(s, t) \in \mathbb{R}^2_{>1/2}$), and (trimmed) L-comoments, in both parametric and nonparametric contexts.
I believe that this functionality, especially the trimmed L-moments, can prove to be a great addition to RainFA.

To illustrate: unbiased sample estimates of TL-location and TL-scale can be found with `tl_loc, tl_scale = lmo.l_moments(data, [1, 2], trim=(1, 1))`, and the population LL-moments $\lambda_r^{(0, 2)}$ of a GEV(-1.2) distribution can be evaluated with `scipy.stats.genextreme(-1.2).l_moment([1, 2], trim=(0, 2))` (after `lmo` has been imported).

Also, I'll soon release a new version that includes support for the generalized method of L-moments for distribution fitting, which I can imagine to be relevant here, as well.
