

row_tasks's Issues

Particle Tracking Workflow

The idea is to build a routine for executing particle tracking, and possibly SWAT analysis, on any given .dac file. The .dac files will be generated via RunModel.py; risk-assessment-related exports are generated via WSSU_Tool_DataPrep.py.

Tasks:

  • ensure dac has 'PT_Poro'
  • load dac
  • Backward particle tracking
    • Generate backward particle release locations
    • Municipal wells only etc.
    • 25 m radius, 25 particles per ring, starting with top/middle/bottom of screen (see the ring-generation sketch after this list)
    • find the water table (WT) at the top-ring release locations; where a top-ring release point is above the WT, move it down to the WT
    • adjust the middle ring commensurately
    • execute particle tracking and attribute
    • Some routine for cutting out the unsaturated zone:
      • use id_layer_shifts to identify when particles move into/out of the unsat zone
      • define a threshold length or consecutive time spent in the unsat zone beyond which the particle is truncated
    • create 3D lines, binned by 2, 5, 25 year travel
    • TBD --> how to translate into WHPA polygons --> concave hull with QA steps, varying alpha by wellfield, vs "rasterization"
  • Forward particle tracking
    • release grid over domain (find WT)
    • receptor polygons
    • execute forward PT
    • join to receptor polygons
    • trim to captured particles
    • attribute those and export for potential SWAT analysis
    • 2, 5, 25 year bins calculated from time of capture
    • create 3D lines
    • TBD --> how to translate into WHPA polygons --> concave hull with QA steps, varying alpha by wellfield, vs "rasterization"
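A minimal sketch of the ring release-point generation mentioned above, assuming numpy; the radius, particle count, and three screen elevations come from the task list, while the function name and coordinate values are illustrative only:

    import numpy as np

    def ring_release_points(x0, y0, screen_elevs, radius=25.0, n_particles=25):
        """Generate particle release points on rings around a well.

        x0, y0       : well coordinates
        screen_elevs : elevations for each ring (e.g. top/middle/bottom of screen)
        radius       : ring radius in metres
        n_particles  : particles per ring
        Returns an (n_rings * n_particles, 3) array of x, y, z.
        """
        theta = np.linspace(0, 2 * np.pi, n_particles, endpoint=False)
        rings = [np.column_stack([x0 + radius * np.cos(theta),
                                  y0 + radius * np.sin(theta),
                                  np.full(n_particles, z)]) for z in screen_elevs]
        return np.vstack(rings)

    # illustrative well location; the top ring is later lowered to the WT where needed
    pts = ring_release_points(550000.0, 4810000.0, screen_elevs=[305.0, 300.0, 295.0])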

Key exports:

  • Attributed Backward particles
  • Attributed Forward particles (captured only)
  • Backward binned lines
  • Forward binned lines

XS Tool Preparation

AS OF v2.html

  • Hovertool Updates: WellName displayed in place of sys_loc_code on:
    • Plan view Lith
    • Plan view Picks
    • Plan View Boreholes
    • XS Lith
    • XS Picks
    • XS WLs/Screens
  • DESC lith field added to BHViewer hover
  • WellName now displayed in BHViewer on click --> pull request for XS lib req'd to pass additional fields to pv_src in BHViewer
  • Additional Materials added to mat lith tables in XS repo --> pull request
  • 2020 avg WLs calculated (side-script export from Hydrographs)
  • "Kill switch" setup dev'd and added
  • Downloaded and using most recent model from Karl (V3.dac)
  • Truncated ROW lith to only the Moraine area (i.e. locations NOT in the Cambridge model)

AS OF v3.html

  • Full QA of pick data:
    1. Identifying most recent pick for each sys_loc_code for each unit by parsing consultant_reference field to get a Date
    2. Identifying the shallowest available pick for each sys_loc_code for each unit, since some consultants picked multiple intervals across the hole for the same unit
    3. Identification of sys_loc_codes where step 2 applies and the duplicate picks are not in sequence with each other --> e.g. AFB1 is picked, then ATB2, then AFB1 again
    4. JL then looked over these and determined whether taking the shallowest was appropriate and, if not, overrode manually
    5. With this custom filter applied, only duplicate picks (same depth, same unit, same date) remain, which can be safely removed from the dataset
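A hedged pandas sketch of steps 1 and 2 above; the column names (consultant_reference, sys_loc_code, unit, depth) and the date format inside consultant_reference are assumptions to be adjusted against the real export:

    import pandas as pd

    picks = pd.read_csv('picks.csv')

    # step 1: parse a date out of consultant_reference (regex assumes an
    # embedded YYYY-MM-DD; adjust to the actual reference format)
    picks['ref_date'] = pd.to_datetime(
        picks['consultant_reference'].str.extract(r'(\d{4}-\d{2}-\d{2})')[0],
        errors='coerce')

    # keep only the most recent pick set per sys_loc_code/unit pair
    latest = picks.loc[picks.groupby(['sys_loc_code', 'unit'])['ref_date']
                            .transform('max') == picks['ref_date']]

    # step 2: where the same unit was picked at multiple depths, take the shallowest
    shallowest = (latest.sort_values('depth')
                        .drop_duplicates(['sys_loc_code', 'unit'], keep='first'))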

AS OF v4.html

  • URGENT: get missing borehole location data:

    • "1_BasicWellInfo_wRefElev_v12_UR_20220412_183325" from eWRAS provides XY locations for wells
    • "1_Reference_ELEV_v7_20220412_183332" from eWRAS provides reference elevations for wells
    • "1_SQL_X-YInfo_v4_20220518_200909" from eWRAS provides XY locations for all locations in eWRAS (boreholes and wells), but does not have ref elev.
    • Currently this approximately halves the total number of lithologies/picks we can display
    • Need to discuss with Karl to obtain this data
    • Karl: Use "Coordinates & Ground Elevations.xlsx" -->
    • With this applied:
      • 109 sys_loc_codes with lith_data missing location data
      • 1,275 sys_loc_codes with loc_data missing lith_info
      • 4,852 sys_loc_codes with both lith data and location data
  • Picks and Liths: Populated WellName with sys_loc_code if WellName absent

  • XS How-to doc --> Joelle completed --> GM completed minor edits; now up on the team site accompanying XSv2

Hydrograph PNGs

  • instantiate two figures (one for pumping, one for hydrograph)
  • assemble a layout with the three figures (well, hydrograph and pumping)
  • remove hydrograph x-axis labels
  • remove extra y axis
  • add lowest safe waterlevel to well figure
  • add SAAD to well figure colored by available drawdown
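A minimal Bokeh sketch of the layout assembly for two of the three figures (the hydrograph and pumping figures; names, sizes, and data are placeholders for what the real generation script produces):

    from bokeh.plotting import figure
    from bokeh.layouts import column
    from bokeh.io import export_png

    # one figure for water levels, one for pumping, stacked on a shared x-range
    hyd = figure(height=300, width=800, x_axis_type='datetime')
    pmp = figure(height=150, width=800, x_axis_type='datetime', x_range=hyd.x_range)

    # hide the hydrograph x-axis labels; the pumping figure below carries them
    hyd.xaxis.major_label_text_font_size = '0pt'

    export_png(column(hyd, pmp), filename='hydrograph.png')  # needs selenium + a webdriver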

Hydrograph Preparation

AS OF v3.html (Both Cambridge and Moraine)

  • Truncated ROW wldata to only the Moraine area (i.e. locations NOT in the Cambridge model)
  • Queries set up against both the Cambridge and Moraine models to extract the model layer at the middle of each location's screen
  • Mapping setup to map model layer to geounit and color (CAM colour theme added to XS repo)
  • Color theme by geounit applied to both continuous and discontinuous points:
    • Plan view
    • Hydrograph
    • Key learning about factor_cmap: sometimes better to take the size hit than to force on-the-fly color mapping
  • Discontinuous diamond marker size increased
  • Legend built for unit name/color
  • WellName hover in place of sys_loc_code
  • Checkbox groups for filtering by geounit added (relevant PR)
    • Workaround req'd to have the checkbox group control both continuous and discontinuous data types (trigger one callback with another callback; see the sketch after this list)
  • Calculation of avg WLs for 2020 for XS tool
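A hedged sketch of that checkbox workaround: one CustomJS refilters the continuous source, then fires the discontinuous source's callback by emitting a change event on it. All names and the filtering bodies are illustrative:

    from bokeh.models import CheckboxGroup, ColumnDataSource, CustomJS

    c_src = ColumnDataSource(data=dict(geounit=[]))  # continuous WL points
    d_src = ColumnDataSource(data=dict(geounit=[]))  # discontinuous WL points

    # second callback: listens for 'data' changes on the discontinuous source
    d_cb = CustomJS(args=dict(src=d_src), code="""
        // ... refilter the discontinuous glyphs from the checkbox state ...
    """)
    d_src.js_on_change('data', d_cb)

    # first callback: refilters the continuous source, then triggers the second
    cb = CustomJS(args=dict(c_src=c_src, d_src=d_src), code="""
        // ... refilter the continuous glyphs from cb_obj.active ...
        c_src.change.emit();
        d_src.properties.data.change.emit();  // fires d_cb
    """)
    checks = CheckboxGroup(labels=['AFA1', 'ATB1', 'AFB1'], active=[0, 1, 2])
    checks.js_on_change('active', cb)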

GM TO DO:

Issues to raise with Karl

Landfill data:

  • Parse spreadsheets --> Completed except for "049378-Waterloo Landfill- AFB2 Water Levels- 1997 to 2020.xlsx"
    • Impossible to parse without major effort
    • 049378-Triannual WL Surfer.xlsx provides reasonable coverage/overlap in lieu
  • No common field between "049378- Well coordinates- 11-2020.xlsx" and "049378-RPT40-APPC.2 (Well Completion Details).xlsx" to join on (sys_loc_codes do not align at all)
    • How to reconcile?
  • WAIT FOR MORE DATA FROM KARL FOR CAMBRIDGE AND KITCHENER - Need Cambridge Screens, Kitchener Coords and Screens
  • TO MOVE FORWARD ON WATERLOO LANDFILL: use coords from "049378-Triannual WL Surfer.xlsx" and screens from "049378-RPT40-APPC.2 (Well Completion Details).xlsx"

Other

  • Incorporate Landfill WL data to hydrograph tool once wrangling is complete
  • Incorporate PGMN data

JL TO DO

  • Add middle of screen to tables
  • "Bonus" Add Wellfield "field" to loc_dfs based on nearest well (Do Last)
    • Cross join pumping wells locations (mp_locdf) to continous/discontinuous well locations (cloc_df,dloc_df)
    • Calculate distance between
    • Retrieve the row corresponding the minimum distance for each obs location
    • Retrieve the distance and the pumping well for each
    • If within x km, assign wellfield, else None
    • Get the corresponding wellfield for that well
    • Add that info to loc_dfs (cloc_df and dloc_df)
    • Add to table
    • Extra bonus, add checkbox group
  • Placement of tentative repWLs i.e. Calib Targets and Quality Rankings --> MORAINE MODEL ONLY
    • Establish using transducer 2020 avg vs manual
    • Establish using some other avg year if poor data in 2020 relative to others
    • End goal is a table like:
    sys_loc_code | RepWL  | RepDateStart | RepDateEnd  | RepSource     | Comment                                     | QualRank
    6503...      | 303.21 | Jan 1 2020   | Dec 31 2020 | Continuous    | Avg Transducer Data in 2020                 | High
    6502...      | 305.35 | Jan 1 2018   | Dec 31 2018 | Discontinuous | No 2020 data, 2018 next most representative | Medium
  • Complete documentation/table summaries of calib target methodology:
    • Data "coverage" pivot table conditionally formatted
    • Hierarchy of quality/preference ranking
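A minimal sketch of the nearest-well assignment described in the "Bonus" item above; a KD-tree stands in for the full cross join, and the x/y/WellName/Wellfield column names plus the 2 km cutoff are assumptions:

    import numpy as np
    import pandas as pd
    from scipy.spatial import cKDTree

    def assign_wellfield(loc_df, mp_locdf, max_dist=2000.0):
        """Tag each obs location with the wellfield of its nearest pumping
        well, or None if that well is farther than max_dist (m)."""
        tree = cKDTree(mp_locdf[['x', 'y']].to_numpy())
        dist, idx = tree.query(loc_df[['x', 'y']].to_numpy())
        out = loc_df.copy()
        out['nearest_well'] = mp_locdf['WellName'].to_numpy()[idx]
        out['well_dist'] = dist
        out['Wellfield'] = np.where(dist <= max_dist,
                                    mp_locdf['Wellfield'].to_numpy()[idx], None)
        return out

    cloc_df = assign_wellfield(cloc_df, mp_locdf)  # continuous locations
    dloc_df = assign_wellfield(dloc_df, mp_locdf)  # discontinuous locations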

Merge Dundee North Updates with Moraine Model uploaded by Region (v3.fem)

Key Tasks

  • @gmerritt123 to upload latest Dundee North fem to sharepoint
  • Establish extent of Dundee updates
  • Attempt layer elev update based on an XY-slice join (because the meshes will not be consistent); if that fails, re-do the Dundee North update interpolation onto V3.fem's mesh (@jlangford92 has that script)
  • kZone check:
    • All kZones in v3 have unique Kx/Anis combo
    • Check for kZones where Kx/Anis between Dundee North Model and v3 are not the same
    • Incorporate Dundee North kZones/kZone values into v3.
  • Manual update of Type 1 BCs at Strasburg Creek headwaters (Look at Dundee North fem and V3.fem)
  • Leave Recharge for now
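A hedged pandas sketch of the kZone check; it assumes each model's kZone table can be exported with kZone, Kx and Anis columns, and the file names are placeholders:

    import pandas as pd

    v3 = pd.read_csv('v3_kzones.csv')      # columns assumed: kZone, Kx, Anis
    dn = pd.read_csv('dundee_kzones.csv')  # same columns, Dundee North model

    # confirm every kZone in v3 has a unique Kx/Anis combination
    assert not v3.duplicated(subset=['Kx', 'Anis']).any(), 'non-unique Kx/Anis combo in v3'

    # flag kZones whose Kx/Anis differ between the two models
    cmp = v3.merge(dn, on='kZone', suffixes=('_v3', '_dn'))
    diff = cmp[(cmp['Kx_v3'] != cmp['Kx_dn']) | (cmp['Anis_v3'] != cmp['Anis_dn'])]
    print(diff)  # candidates for incorporating Dundee North values into v3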

XS Tool --> QGIS figures

Need a means of translating the Bokeh XS output to QGIS, where we can manipulate it to make nice-looking figures.

Bokeh Side

Bokeh_Util JS functions:

  • cds_to_polygeojson function: translates bokeh cds data to nested dict following geojson standard
  • download_string function: writes a string to text file
  • genExportComponents function to export data as CSV (probably suitable for lith data, WLs, picks etc. with proper QGIS post-processing)
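The Bokeh_Util functions above are JavaScript; as a sketch of the same cds-to-GeoJSON idea in Python (a CDS-style dict of coordinate lists becomes a FeatureCollection; field names are illustrative):

    import json

    def cds_to_polygeojson(data, prop_fields=()):
        """Convert CDS-style dict data, with 'xs'/'ys' holding one coordinate
        list per polygon, to a GeoJSON FeatureCollection dict."""
        features = []
        for i, (xs, ys) in enumerate(zip(data['xs'], data['ys'])):
            ring = [[x, y] for x, y in zip(xs, ys)]
            ring.append(ring[0])  # close the ring per the GeoJSON spec
            features.append({
                'type': 'Feature',
                'geometry': {'type': 'Polygon', 'coordinates': [ring]},
                'properties': {f: data[f][i] for f in prop_fields},
            })
        return {'type': 'FeatureCollection', 'features': features}

    gj = cds_to_polygeojson({'xs': [[0, 1, 1]], 'ys': [[0, 0, 1]], 'unit': ['ATB2']},
                            prop_fields=('unit',))
    print(json.dumps(gj)[:80])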

XS Library Update:

QGIS Side:

  • Workflow where within a folder:
    • User sets desired vertical exag
    • Lith data is converted into rectangle polys based on x,y,h,w
    • "Collars" is generated from Lith_Data
    • Q (not GRASS) affine function is applied on all objects to apply the vertical exag
  • XS "Template" pieces like vertical/horizontal scales etc.

Picks to Surfaces Process

Need a means of getting from Stantec Picks to Model surfaces.

Stantec provides picks as exported out of the XS tool. We should also request a polygon of the update area for each particular submission of picks.

  • Potential feature request for XS tool --> draw update area polygon and export geom -- to add to #1


1) Merge latest pick datasets with the previous dataset (picks_recon.py)

  • Global QA/QC of new pick dataset
    - [x] Duplicates check
    - [x] Order check
    - [x] na in Z field check
    - if new pick dataset fails these checks, review and send back to pick makers for fixing before continuing
  • Merge the new QAed picks with the previous dataset (function called pick_merge_check; see the classification sketch after this list):
    - [x] round Z to 2 decimals to reduce rounding errors
    - [x] populate a pick_dict which houses:
    - [x] unchanged: picks that didn't change (in any field) from the previous dataset
    - [x] base_unique: picks that are unique to the original dataset (e.g. if you're merging a subset into a larger dataset)
    - [x] new: new picks (picks where the 'PickType' has been flagged as new)
    - [x] modified: picks where the Z, Comment, Reviewed, Omit or NewPick field has changed
    - [x] tricky because there may be reasons to carry older comments forward... manually select whether to keep the old or new value in the variable explorer; this shouldn't be functionized, someone needs to review at this step
    • group these pick types together to develop a new merged dataset
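A hedged pandas sketch of the pick_dict classification; the key and field column names are taken from the tables in this issue, the 'new' category is approximated here by right-only rows rather than the PickType flag, and NaN handling is left to the real routine:

    import pandas as pd

    def pick_merge_check(base, new, keys=('sys_loc_code', 'Formation'),
                         fields=('Z', 'Comment', 'Reviewed', 'Omit', 'NewPick')):
        """Classify picks into the pick_dict categories described above."""
        base, new = base.copy(), new.copy()
        for df in (base, new):
            df['Z'] = df['Z'].round(2)  # reduce rounding errors
        m = base.merge(new, on=list(keys), how='outer', indicator=True,
                       suffixes=('_old', '_new'))
        both = m[m['_merge'] == 'both']
        # NB: NaN != NaN evaluates True here, so NaN fields need real handling
        changed = pd.concat([both[f + '_old'] != both[f + '_new'] for f in fields],
                            axis=1).any(axis=1)
        return {'base_unique': m[m['_merge'] == 'left_only'],
                'new': m[m['_merge'] == 'right_only'],
                'unchanged': both[~changed],
                'modified': both[changed]}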

2) Local QA Process (@gmerritt123 )

Need to locally QA picks a number of ways:

  • ID locations that don't have control point pinchouts and review
  • ID locations that have grades >20% between their nearest neighbours
  • get_profile_data from existing model at these locations
  • Consolidate with model nodes:
    sys_loc_code | Reviewed | PickZ | MdlZ | Comment       | Formation | MdlSlice | XY    | WellName
    920005A      | 1        | 352   | 355  | I Picked This | ATB2      | 5        | 54,32 | WT-OW1
    920005A      | 1        | NaN   | 352  |               | AFB2      | 6        | 54,32 | WT-OW1
    2695421      | 0        | 356   | 356  | NodeValue     | GS        | 1        | 60,35 |
  • Routine to calculate thicknesses, with "intentional" Nan returns where no explicit thickness is defined across a unit
  • Format Reviewed field according to code
  • Missing picks check (check that every layer is picked at locations where picks have been made)
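A minimal sketch of the >20% grade check from the list above; grade is taken as elevation difference over horizontal distance to the nearest same-formation pick, and k, the threshold form, and column names are assumptions:

    import numpy as np
    import pandas as pd
    from scipy.spatial import cKDTree

    def flag_steep_grades(picks, max_grade=0.20):
        """Flag picks whose grade to their nearest same-formation neighbour
        exceeds max_grade (rise over run). Assumes >= 2 picks per formation."""
        flagged = []
        for fm, grp in picks.groupby('Formation'):
            xy = grp[['x', 'y']].to_numpy()
            dist, idx = cKDTree(xy).query(xy, k=2)  # k=2 skips the self-match
            run, nbr = dist[:, 1], idx[:, 1]
            z = grp['PickZ'].to_numpy()
            g = grp.copy()
            g['grade'] = np.where(run > 0, np.abs(z - z[nbr]) / run, np.nan)
            flagged.append(g[g['grade'] > max_grade])
        return pd.concat(flagged)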

3) Kriging dataset prep (PickQA.py and QGIS)

PickQA.py:

  • Reads in csv of picks and concave hull around new/modified picks
  • Reads in model, pulls existing nodes within update area (shapefile made in QGIS)
  • Translate csv to shp or gdf and reproject etc.
  • Filter picks based on being within update polygon
  • Obtain Control Points from existing model using concave hull+buffer
  • Export of update hull
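A hedged geopandas sketch of the csv-to-gdf, reprojection, and update-polygon filtering steps; file names, the CRS, and the buffer width are placeholders:

    import geopandas as gpd
    import pandas as pd

    picks = pd.read_csv('merged_picks.csv')
    gdf = gpd.GeoDataFrame(picks,
                           geometry=gpd.points_from_xy(picks['x'], picks['y']),
                           crs='EPSG:26917')                  # assumed UTM 17N

    hull = gpd.read_file('update_hull.shp').to_crs(gdf.crs)   # made in QGIS
    upd = hull.unary_union

    # filter picks to within the update polygon
    in_upd = gdf[gdf.within(upd)]

    # control points: existing model nodes inside hull+buffer but outside the hull
    nodes = gpd.read_file('model_nodes.shp').to_crs(gdf.crs)
    ctrl = nodes[nodes.within(upd.buffer(500.0)) & ~nodes.within(upd)]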

QGIS:

  • read in the merged pick dataset (filtered down to the update area in PickQA.py) and split into model layers (export result into shapefiles)
    • For reviewing all our flags and changes to the model we need:
    • Import of model nodal elevations in the update area and make tins of the elevations and thicknesses for each fm
    • "Elevation" Map
      • Basic Basemaps
      • Model Node "Border" control points
      • Pick Data with themes for:
        • Location with picks missing a pick for the currently viewed layer (Make it really big)
        • Whether Location is reviewed/not reviewed (Shape)
        • Label for Pick and Existing Model Elev
        • Color --> based on difference between model existing elev and picked elev
        • something to show who/when picked
      • "Thickness" Map
        • Same theming/Basemaps as above
        • Except color, which gets a bit more complicated:
          • Color based on pick thickness vs existing model thickness
    • Ability to make edits on pick data - we can edit our picks in QGIS (and flag that we edited them) and read these shapefiles into the kriging script
    • Ability to add new records in QGIS

4) Kriging/Interpolation Routine @jlangford92

  • Note: punctual kriging > universal kriging or nearest neighbour

Kriging Routine

  • Extra piece --> use DataExtraction fns to pull type 1 BCs and flag nodes in upd area that are Type 1 BCs
  • Dictionary to map kriging params to each slice (sill, range, account for drift etc)
  • Perform kriging and map to nodes
  • Variogram plots
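A minimal pykrige sketch of the per-slice kriging with a parameter dictionary; the slice keys and variogram values are placeholders, not calibrated parameters:

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    # per-slice variogram parameters (sill, range, nugget) -- placeholder values
    krig_params = {5: dict(sill=12.0, range=800.0, nugget=0.5),
                   6: dict(sill=20.0, range=1200.0, nugget=1.0)}

    def krige_slice(x, y, z, node_x, node_y, slc):
        """Krige pick elevations for one model slice and map onto node XYs."""
        ok = OrdinaryKriging(x, y, z, variogram_model='spherical',
                             variogram_parameters=krig_params[slc])
        zhat, ss = ok.execute('points', node_x, node_y)  # ss = kriging variance
        # ok.display_variogram_model()  # variogram plot for QA
        return np.asarray(zhat), np.asarray(ss)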

Constraint Logic

  • Function to generate "weighting raster" for "blending in" the DEM (for ground surface and for HGUs; completed in PickQA.py and a weight is mapped to the nodes)

  • Negative buffer the hull back to the "true" concave hull --> obtain line geom of that, and segmentize to high res, assign "weight" of 1 (meaning this area will get new DEM fully weighted)

  • Segmentize the original hull, assign "weight" of 0 (meaning this area will be fully old DEM weighted) (look at TargetLineSegmentize in https://github.com/jlangford92/GeoSpatialTools/blob/GM_Dump/GeospatialTools.py)

  • Interpolate (method TBD) a weight value for the model nodes in between these two

  • Goal is to assign a "weight" to every XY nodal location within the buffered update area hull (Inside inner buffer always = 1)

  • Transfer height of new DEM to all nodes in the full update area (JL needs big DEM, GM to provide height transfer script)

  • Apply weighted average to Z

  • Constrain model layers to this "new" DEM
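A hedged sketch of the weight interpolation and final blend; linear-in-distance between the two hulls is one candidate for the "method TBD", and shapely plus the node layout are assumptions:

    import numpy as np
    from shapely.geometry import Point

    def blend_weights(nodes_xy, inner_hull, outer_hull):
        """Weight = 1 inside the inner hull, 0 on the outer hull boundary,
        and linear in relative distance in between."""
        w = np.empty(len(nodes_xy))
        for i, (x, y) in enumerate(nodes_xy):
            p = Point(x, y)
            if inner_hull.contains(p):
                w[i] = 1.0
            else:
                d_in = p.distance(inner_hull.exterior)
                d_out = p.distance(outer_hull.exterior)
                w[i] = d_out / (d_in + d_out)  # -> 1 at inner hull, 0 at outer
        return w

    # weighted average of new and old DEM heights at each node:
    # z_blend = w * z_new_dem + (1 - w) * z_old_dem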

Model Set Up

Goal is to get a preliminary calibration check and steady state model run setup.

Key Tasks

  • Calculate 2020 avg. pumping rates --> use hydrograph tool gen script as basis

  • Put into model:

    • Get MLW BCs using DataExtraction lib
    • Establish 1-1 relationship between well name in pumping dataset and MLW BC Name in model (may take some manual changes)
    • Generate WellRates.csv with "2020Q" as RateID with correct WellNames etc.
    • Update fem using MLBC class in Pre lib
  • ExecuteFEFLOW setup

    • Can be pretty basic, no need for parameter update setup just yet
    • locations.csv / observation data locations input to be generated based on @jlangford92 's calibration dataset
    • Ensure get_layer is True
    • writing of simulated results to a sub folder
  • SSRes setup

    • Generate "obsdata.pkl" or .csv that contains @jlangford92 's calibration dataset including quality and wellfield groups
    • Mapping of layer to geounit:

    geounit_dict = {1:'Ground Surface',2:'AFA1',3:'ATB1',4:'AFB1',5:'ATB2',6:'AFB2' ,7:'AFB2',8:'ATB3',9:'AFB3',10:'ATC1',11:'AFD1',12:'ATE1',13:'AFF1',14:'Bedrock'}
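A minimal sketch of the WellRates.csv generation; the column layout and sign convention are assumptions, since the real format is whatever the MLBC class in the Pre lib expects:

    import pandas as pd

    # avg 2020 rate per well, from the hydrograph-gen script (placeholder values,
    # m3/d, negative = extraction)
    avg_2020 = {'W1': -1250.0, 'W2': -890.0}

    rates = pd.DataFrame({'WellName': list(avg_2020),
                          'RateID': '2020Q',
                          'Rate': list(avg_2020.values())})
    rates.to_csv('WellRates.csv', index=False)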

PTTW Model Update

Initial plan of attack after chat with Patty:

  • Export out existing PTTWs from current model
  • Download/obtain current PTTW database
  • ID expired/still active/new permits in the model domain and whether we have them in the model already or not
  • Use that to guide the permits we target for information once we get the WTRS database
