gns-science / toshi-hazard-store

Library for conveniently saving and retrieving NZ NSHM OpenQuake hazard results (uses AWS DynamoDB).

Home Page: https://gns-science.github.io/toshi-hazard-store/

License: Other

Makefile 0.27% Python 99.73%

toshi-hazard-store's Introduction

toshi-hazard-store


Features

  • The main purpose is to upload OpenQuake hazard results to the DynamoDB tables defined herein.
  • Relates the results to the Toshi hazard ID identifying the OQ hazard job run.
  • Extracts metadata from the OpenQuake HDF5 solution.

Credits

This package was created with Cookiecutter and the waynerv/cookiecutter-pypackage project template.

toshi-hazard-store's People

Contributors

chrisbc, chrisdicaprio

toshi-hazard-store's Issues

Feature: grids + decomposed hazard checklist

a) new Source Logic trees have several thousand branches so we must decompose these and do aggregation in the cloud.
b) we need gridded sites to produce hazard maps and for site-specific results (to nearest grid-site)

The checklist then is:

  • check building XML from new LTB JSON
  • publish new grids into a new configuration
  • add new grid options into task config
  • set up truncated decomposition for testing/tuning (subset of full LTB set)
  • TEST RUN (local + DynamoDB with multi-proc)
  • validate number of rlzs per LTB (should equal GMPE TECT_REGS * SOURCES in LTB = 24, i.e. 6 sources * 4 TECT_REGS)
  • BIGGER LOCAL TEST RUN (GRD_NZ_0_1: 3600 * 1 LTB => STORE multi-proc = 101 mins)
  • new RLZ table: add new indexing/fields to DynamoDB rlzs; hash = downsample_grid, sort = grid + GTID + rlzid (see the sketch after this list)
  • push to new tables: store_hazard_v3 4 T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTA1MzU2 R2VuZXJhbFRhc2s6MTA1MzUz -v -c
  • Add latest GMM logic tree from Sanjay (WAIT until science team supply new config)
  • new HazardAggregation table
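
A minimal pynamodb sketch of the proposed realizations table keys from the checklist item above. The class, table name, and attribute names are assumptions for illustration only:

from pynamodb.models import Model
from pynamodb.attributes import UnicodeAttribute


class GriddedRlz(Model):
    class Meta:
        table_name = "THS_GriddedRlz-TEST"   # hypothetical table name
        region = "ap-southeast-2"

    # hash key: the downsampled grid cell
    downsample_grid = UnicodeAttribute(hash_key=True)
    # range key: full-resolution grid point + GeneralTask id + realization id
    grid_gtid_rlz = UnicodeAttribute(range_key=True)
    # serialised curve values (placeholder)
    values = UnicodeAttribute()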

PARKED / DEFERRED

  • set up us-east-1 global table replicas
  • using AWS compute, test/build a subset of the sample config (local), pushing to a new table (use SPLIT_TRUNCATION config)
  • ToshiAPI: add an update mutation for HazTask and use it in oq_run_hazard.py (to track hazard jobs better)
  • new DISAGG table: we want to store disaggs (PARKED)
  • create an aggregation function - 1 aggregation per grid + GTID

Feature: disaggregations

We want to be able to calculate disaggregations for our aggregated hazard calculation.

  • User can specify a PoE (or several PoEs), IMTs, and which quantile curves (e.g. mean, 0.1, 0.5, etc.)
  • calculate the disagg by finding the realization that is closest to the shaking level at the PoE of the quantile curve of interest (this will differ for each location); see the sketch after this list
  • The realization consists of a set of gsims and a set of sources to cover all TRTs/regions
  • create a Python configuration that can be copied to runzi to launch OpenQuake engine disaggregation calculations
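
A minimal sketch of the realization-selection step described above, assuming each realization's hazard curve is available as (levels, poes) arrays; the helper name and data shapes are illustrative, not the package's API:

import numpy as np


def nearest_realization(rlz_curves: dict, target_level: float, poe: float):
    """Return the id of the realization whose curve is closest to target_level at poe."""
    best_id, best_diff = None, np.inf
    for rlz_id, (levels, poes) in rlz_curves.items():
        # interpolate the shaking level at the requested PoE
        # (poes decrease with level, so reverse both arrays for np.interp)
        level_at_poe = np.interp(poe, poes[::-1], levels[::-1])
        diff = abs(level_at_poe - target_level)
        if diff < best_diff:
            best_id, best_diff = rlz_id, diff
    return best_id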

Output configuration example:

disagg_config = dict(
    vs30=400,
    source_ids=[
        'SW52ZXJzaW9uU29sdXRpb25Ocm1sOjExMDk5Mg',
        'RmlsZToxMTEyMjQ',
        'SW52ZXJzaW9uU29sdXRpb25Ocm1sOjExMTA2Mw==',
        'RmlsZToxMTEyMTM=',
        'SW52ZXJzaW9uU29sdXRpb25Ocm1sOjExMTEzNQ==',
        'RmlsZToxMTE5MTM=',
        'RmlsZToxMTEyMzk=',
    ],
    imt='PGA',
    level=0.954,
    location='-36.870~174.770',
    gsims={
        'Active Shallow Crust': 'Stafford2022_Central',
        'Subduction Interface': 'Atkiinson2022Crust_Upper',
        'Subduction Intraslab': 'AbrahamsonEtAl2014',
    },
)

We will need a mapping between the gsim name (e.g. Stafford2022_Central) and the logic tree branch XML:

<logicTreeBranch branchID="STF22_center">
    <uncertaintyModel>[Stafford2022]
        mu_branch = "Central"</uncertaintyModel>
    <uncertaintyWeight>0.136</uncertaintyWeight>
</logicTreeBranch>
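
For illustration, the mapping might be a simple lookup keyed on the gsim name. This is a hypothetical structure, with the branch id, model arguments, and weight copied from the XML fragment above:

GSIM_BRANCH_MAP = {
    "Stafford2022_Central": {
        "branch_id": "STF22_center",
        "uncertainty_model": '[Stafford2022]\nmu_branch = "Central"',
        "weight": 0.136,
    },
    # ... one entry per gsim name used in the disagg configs
}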

Sanity-check performance v1 vs v2 tables

Use the new v2 local storage options to compare performance characteristics for a typical openquake-hazard extractor task.

Using a single output HDF5 from NSHM_v1.0.4:

  • 1) check that the V2 local storage adapter works OK with the existing THS hazard tables (Meta + Realisations)
  • 2) get baseline performance stats from the above

Deferred:

  • 3) create new v2 hazard tables for realisations - using new nzshm-model to identify rlzs
  • 4) test these in the same way as item 2 above

FIX: remove WIP from Dynamodb ToshiOpenquakeMeta v3 table name

The DynamoDB table THS_WIP_OpenquakeMeta-{STAGE} in region ap-southeast-2 should be renamed to
THS_OpenquakeMeta-{STAGE}, where STAGE = PROD or TEST, i.e. the _WIP substring should be removed. This needs to be synchronised in code and deployment.

There's no automatic table renaming process - see https://stackoverflow.com/questions/21410851/how-to-rename-a-dynamodb-table
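
Since there is no rename API, the manual process amounts to creating the new table with the same key schema, copying every item across, then retiring the old table. A minimal boto3 sketch (PROD stage shown; error handling and write-throughput management omitted):

import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-southeast-2")
src = dynamodb.Table("THS_WIP_OpenquakeMeta-PROD")
dst = dynamodb.Table("THS_OpenquakeMeta-PROD")  # must already exist with the same key schema

with dst.batch_writer() as batch:
    scan_kwargs = {}
    while True:
        page = src.scan(**scan_kwargs)
        for item in page["Items"]:
            batch.put_item(Item=item)
        if "LastEvaluatedKey" not in page:
            break
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]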

Done when:

  • push PR with new table name to main
  • complete the manual rename process in ap-southeast-2

Feature: store "aggregate" disaggregation

We want to be able to store an aggregate disagg (e.g. typically, a mean of all disaggregations of a logic tree)

The unique identifiers of a disaggregation are:

  • vs30
  • location (lat, lon)
  • hazard_model_id (THS)
  • annual PoE (what about PoO?)
  • IMT
  • agg of the hazard curve (e.g. 'mean', '0.5', etc)
  • agg of the disaggregation

Data to be stored

  • the hazard level of shaking: a single floating-point value
  • the bins (i.e. the values of magnitude, distance, etc) at which we've calculated the disagg: equivalent to levels in a hazard aggregation
  • the disaggregation probabilities: equivalent to values in a hazard aggregation

Data Structure

The disaggregation is a multi-dimensional matrix (typically 4 dimensions). In the example data provided the dimensions are:

  • magnitude
  • distance
  • trt (tectonic region type)
  • epsilon

It is possible to have a disagg over a subset of those dimensions. It may also be that we have more or other dimensions (e.g. lat and lon).
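
A minimal sketch of how the matrix and its bins could be held in memory; the bin values and spacings below are illustrative, not taken from the sample data:

import numpy as np

# bin centres (assumed spacings, for illustration)
mag_bins = np.arange(5.25, 9.75, 0.5)
dist_bins = np.arange(5.0, 350.0, 10.0)
trt_bins = ["Active Shallow Crust", "Subduction Interface", "Subduction Intraslab"]
eps_bins = np.arange(-3.5, 4.0, 1.0)

# 4-D probability matrix indexed (mag, dist, trt, eps)
disagg = np.zeros((len(mag_bins), len(dist_bins), len(trt_bins), len(eps_bins)))

# marginalising out dimensions recovers lower-order disaggs, e.g. mag-dist:
mag_dist = disagg.sum(axis=(2, 3))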

Code References

Sample Data

sample_data.zip

For the sample data provided, the axes of the 4D disagg array are:

  • 0: magnitude
  • 1: distance
  • 2: trt
  • 3: epsilon

Similarly, bins[i] holds the bin values for dimension i, and bins[4] stores the value of the hazard level of shaking.

vs30 = 750
location = lat: -39.000 lon: 175.930
hazard_model_id = 'SLT_v8_gmm_v2_FINAL'
annual probability = 0.038559172
IMT = 'SA(0.5)'
agg = 'mean'
deagg_SLT_v8_gmm_v2_FINAL_-39.000~175.930_750_SA(0.5)86_eps-dist-mag-trt.npy
bins_SLT_v8_gmm_v2_FINAL_-39.000~175.930_750_SA(0.5)_86_eps-dist-mag-trt.npy

vs30 = 400
location = lat: -42.780 lon: 171.540
hazard_model_id = 'SLT_v8_gmm_v2_FINAL'
annual probability = 0.019688642
IMT = 'SA(1.5)'
agg = 'mean'
deagg_SLT_v8_gmm_v2_FINAL_-42.780~171.540_400_SA(1.5)63_eps-dist-mag-trt.npy
bins_SLT_v8_gmm_v2_FINAL_-42.780~171.540_400_SA(1.5)_63_eps-dist-mag-trt.npy
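
A sketch of reading one pair of the sample files above (filenames abbreviated with '...'); treating the bins file as an object array holding one bin array per dimension plus the shaking level, as described above, is an assumption:

import numpy as np

disagg = np.load("deagg_SLT_v8_gmm_v2_FINAL_..._eps-dist-mag-trt.npy")
bins = np.load("bins_SLT_v8_gmm_v2_FINAL_..._eps-dist-mag-trt.npy", allow_pickle=True)

mag_bins, dist_bins, trt_bins, eps_bins = bins[0], bins[1], bins[2], bins[3]
shaking_level = bins[4]   # hazard level of shaking for this disagg
assert disagg.shape == (len(mag_bins), len(dist_bins), len(trt_bins), len(eps_bins))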

Feature: support for logic branch query

We want to interact with data in the store using branch attributes instead of Toshi IDs, for:

  • ad hoc analysis
  • performing aggregations on OQ rlz data

So, we want to be able to query/filter on attributes such as:

  • timestamp
  • toshi id / latest version (most recent inversions; see timestamp)
  • filter by tectonic region (SLAB??)
  • inversion solution vs distributed
  • bN pair group (TBD)
  • scaling group (e.g. 2 low, unscaled, 2 high)
  • polygon rescaled (variations on yes or no)
  • deformation model

Feature: store realizations

We want to store the level and PoE (hazard) for every realization calculated by aggregate_rlzs or aggregate_rlz_mp. They must be indexed in a way that identifies the realization by its source and gsim branches, so a dict may be the best storage option, though we must test for memory usage and speed (350,000+ realizations).
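
A minimal sketch (illustrative names only) of keying realization curves by their source and gsim branches, with a rough memory estimate for the dict option:

from typing import Dict, NamedTuple

import numpy as np


class RlzKey(NamedTuple):
    source_branch: str   # identifier of the source logic tree branch
    gsim_branch: str     # identifier of the ground motion model branch


rlz_store: Dict[RlzKey, np.ndarray] = {}
rlz_store[RlzKey("SLT:example_branch", "GMM:Stafford2022_Central")] = np.zeros(44)

# with ~350,000 realizations of 44 levels each, the raw float64 payload is
# roughly 350_000 * 44 * 8 bytes ≈ 123 MB, before dict and key overhead.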

POC/Arrow evaluation

As we delve deeper into EPIC #50 it becomes apparent that big-data tech like Arrow may help. So, can we do this?

Basic questions

  • convert THS objects into an arrow/parquet dataset that can be worked on easily using regular filesystem-like storage (including local and S3)
  • compare performance querying and processing a large task (e.g. hazard aggregation in THP)
  • enumerate the pros/cons
  • is parquet the preferred serialisation format?

Sub questions:

  • can we use Arrow's in-memory features and/or IPC techniques (e.g. Plasma) to boost performance and minimise file IO?
  • can we do partitioning in Arrow (not just parquet)? how does that work?
  • can we easily reshape datasets to optimise for different use-cases (3rd party, internal heavy compute)?
  • can we use SQL-like queries (SELECT ...)?
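
As a starting point for the basic questions above, a minimal pyarrow sketch (column names are assumptions) of writing THS-like records to a partitioned parquet dataset and filtering it back; the same code works over local disk, and over S3 via pyarrow filesystems:

import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

table = pa.table({
    "nloc_001": ["-36.870~174.770", "-41.300~174.780"],
    "vs30": [400, 400],
    "imt": ["PGA", "PGA"],
    "values": [[1e-2, 1e-3], [2e-2, 2e-3]],
})

# partition by the attributes we expect to filter on
pq.write_to_dataset(table, root_path="ths_rlz_dataset", partition_cols=["vs30", "imt"])

dataset = ds.dataset("ths_rlz_dataset", format="parquet", partitioning="hive")
subset = dataset.to_table(filter=(ds.field("vs30") == 400) & (ds.field("imt") == "PGA"))
print(subset.num_rows)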

Future possibly

feature: db_adapter - use alternate database(s) for primary storage.

This could be DynamoDB, a local DB (e.g. sqlite), or another DB system. Provide an API so that users may extend the available options.
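
A minimal sketch of the adapter idea; the interface and method names are illustrative, not the package's actual API:

from abc import ABC, abstractmethod
from typing import Any, Iterable


class StorageAdapter(ABC):
    """Common interface so model classes can target DynamoDB, sqlite, or another backend."""

    @abstractmethod
    def save(self, item: Any) -> None:
        ...

    @abstractmethod
    def batch_save(self, items: Iterable[Any]) -> None:
        ...

    @abstractmethod
    def query(self, hash_key: str, range_condition: Any = None) -> Iterable[Any]:
        ...


# concrete adapters (e.g. a sqlite3-backed SQL store, or a pynamodb wrapper for
# A/B testing) implement this interface and are selected by configuration.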

Done when:

  • db adapter pattern established
  • add adapter for sqlite3 (SQL store)
  • add adapter for pynamodb for A/B testing
  • support batch operations
  • migrate legacy tests to use adapters
  • update ths_test script
  • update ths_cache script
  • update API user docs
  • #57

Fix: gridded hazard table vs30 index

We cannot query gridded hazard via toshi_hazard_store.query.get_gridded_hazard() for a list of vs30s containing both 3- and 4-digit values. The sort key string for vs30 is probably not formatted correctly.
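
A likely fix is to zero-pad the vs30 component of the sort key so that lexicographic ordering matches numeric ordering (a sketch; the real key layout may differ):

def vs30_sort_token(vs30: int) -> str:
    # without padding, '250' > '1000' when compared as strings
    return f"{vs30:04d}"

assert vs30_sort_token(250) < vs30_sort_token(1000)   # '0250' < '1000'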

Migration tool to copy / move curves as needed

As a THS user I want to manage the storage of hazard realisation curves with a CLI script.

We need a way to move curves into different places / states:

  • migrate curve[s] from V3 (OpenQuake) to Rev 4 (HazardRealisationCurve):
    • Local-> Local
    • cloud -> cloud (intra-region)
    • cloud -> cloud (inter-region)
  • Migrate curves from one storage system to another:
    • local to cloud
    • cloud to local
  • restrict options to sane ones in combination with version migration
  • provide a means to filter what is migrated (maybe different for storage vs migration).

Fix: Disaggregations

Disaggregations are producing unexpected results. The disagg for AKL has a large Subduction Intraslab contribution (0.5 s, 10% in 50 yrs, among others). AKL should have no slab events within ~100 km. The mag-dist disagg plot shows most contribution from M < 7, dist < 25 km, which is inconsistent with the TRT disagg.

Possible sources for the error:

  • disagg is incorrectly labeled after re-running oq engine in disagg mode
  • wrong disagg retrieved by end-user
  • disagg run incorrectly
  • incorrect labeling of disagg in toshi-hazard-store stage
  • retrieval of rlz is incorrect in toshi-hazard-store stage
  • sources are incorrect (slab dist model using wrong coordinates)
  • problem w/ Anne's disagg code
  • oq doesn't like negative longitudes in slab dist seis file
  • gsim LT
  • test whether the distance and mag disagg is inconsistent with the TRT disagg

FIX: meta data query taking longer than expected

Querying metadata takes 250-350 seconds for 49 hazard IDs. This can be reduced to about 30 s by breaking out of the query loop after receiving N responses, where N is the number of IDs in the query. In both cases the results arrive in about the same amount of time, but the final iteration consumes hundreds of seconds if a break isn't used.

In the test script, the tables are in the PROD database.

Test script:

import time
import ast
import toshi_hazard_store


hazard_ids = [
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxMw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU1MA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTMyMg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU1MQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwNA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM3OA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0Mg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxOQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU5Nw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxMA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTUyOA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0OQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM2MA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTMyMw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwMQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM3Mw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTQ2NQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTMyMQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM2MQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM2Nw==', 
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU5NA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM3MQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM2OA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTY0Mw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwNw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0NQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxMQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwMA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM2Mg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwNg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxMg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU5NQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTQ2Nw==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0OA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU5Ng==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0NA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0MQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwOQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTUyOQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTU0MA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTQ3MQ==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTM3Mg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwOA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTUxOA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYxNg==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTUzMA==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTQ3Ng==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTQ2Ng==',
    'T3BlbnF1YWtlSGF6YXJkU29sdXRpb246MTMyOTYwNQ=='
  ] 

vs30 = 225

def query_asis():
    
    metadata = {}
    tic_0 = time.perf_counter()
    for meta in toshi_hazard_store.query_v3.get_hazard_metadata_v3(hazard_ids, [vs30]):
        tic = time.perf_counter()
        hazard_id = meta.hazard_solution_id
        gsim_lt = ast.literal_eval(meta.gsim_lt)
        metadata[hazard_id] = gsim_lt
        toc = time.perf_counter()
        print(
            f'Elapsed time of query loop: {toc-tic_0:0.5f} seconds. Time to load metadata for {hazard_id}: {toc-tic:0.5f} seconds.'
        )

    return metadata


def query_withbreak():

    metadata = {}
    tic_0 = time.perf_counter()
    for i, meta in enumerate(toshi_hazard_store.query_v3.get_hazard_metadata_v3(hazard_ids, [vs30])):
        tic = time.perf_counter()
        hazard_id = meta.hazard_solution_id
        gsim_lt = ast.literal_eval(meta.gsim_lt)
        metadata[hazard_id] = gsim_lt
        toc = time.perf_counter()
        print(
            f'Elapsed time of query loop: {toc-tic_0:0.5f} seconds. Time to load metadata for {hazard_id}: {toc-tic:0.5f} seconds.'
        )
        if i==len(hazard_ids)-1:
            print('reached all ids, break iteration')
            break

    return metadata

def time_query(query_fn):
     
    tic = time.perf_counter()
    query_fn()
    toc = time.perf_counter()
    print(f'time to run all queries using {query_fn} {toc-tic:0.3f} seconds')

if __name__ == "__main__":

    time_query(query_withbreak)
    print('-'*50)
    time_query(query_asis)


FIX: Meta logic tree exceeds 400KB limit

  • toshi-hazard-store version: 0.3.1

Discussed with Anne; maybe we can move the rlz_lt info to the RLZS objects, so there'll be no need to store this on Meta. However, there are a vast number of rlz curves per solution, so perhaps an intermediate table is needed.

Feature: store disaggregation realizations

We want to store the individual realization disaggregations in the database.

The unique identifiers of a disaggregation are:

  • vs30
  • location
  • hazard_model_id (THS)
  • annual PoE
  • IMT
  • agg

In addition we require realization data (rlz number and gsim, same as for the hazard curves)

This mirrors how hazard realizations are stored, but with additional information (hazard_model_id, annual PoE)

Notes

  • The disaggregation data is multi-dimensional, unlike the hazard curves, so an appropriate storage/organization strategy is necessary
  • The disaggregations are much larger than the hazard curves (~20,000 entries vs 44 for hazard curves)
  • Sample code to extract disagg from hdf5:
# requires: h5py, numpy, and the openquake engine; `find_site_names` is a
# helper defined elsewhere in the codebase.
import h5py
import numpy as np
from openquake.commonlib import datastore


def extract_disagg(filename):

    dstore = datastore.read(filename)
    oqparam = vars(dstore['oqparam'])
    imtls = oqparam['hazard_imtls']
    inv_time = oqparam['investigation_time']
    sites = find_site_names(dstore.read_df('sitecol'), dtol=0.001)
    dstore.close()

    # expect exactly one site, one IMT, and one IMTL in the disagg hdf5
    if len(sites) == 1:
        site = sites.index.values[0]
    else:
        raise NameError('hdf5 includes more than one site location.')

    if len(imtls) == 1:
        imt = list(imtls.keys())[0]
    else:
        raise NameError('hdf5 includes more than one IMT.')

    if len(imtls[imt]) == 1:
        imtl = imtls[imt][0]
    else:
        raise NameError(f'hdf5 includes more than one IMTL for {imt}.')

    with h5py.File(filename) as hf:
        poe = np.squeeze(hf['poe4'][:])

        # full 4-D disaggregation (Mag, Dist, TRT, Eps) and its contribution
        full_disagg = np.squeeze(hf['disagg']['Mag_Dist_TRT_Eps'][:])
        full_disagg_contribution = full_disagg / poe

        mag_dist_disagg = np.squeeze(hf['disagg']['Mag_Dist'][:])
        mag_dist_disagg_contribution = mag_dist_disagg / poe

        dist_bin_edges = hf['disagg-bins']['Dist'][:]
        mag_bin_edges = hf['disagg-bins']['Mag'][:]
        eps_bin_edges = hf['disagg-bins']['Eps'][:]

        # bin centres from bin edges
        trt_bins = [x.decode('UTF-8') for x in hf['disagg-bins']['TRT'][:]]
        dist_bins = (dist_bin_edges[1:] - dist_bin_edges[:-1]) / 2 + dist_bin_edges[:-1]
        eps_bins = (eps_bin_edges[1:] - eps_bin_edges[:-1]) / 2 + eps_bin_edges[:-1]
        mag_bins = (mag_bin_edges[1:] - mag_bin_edges[:-1]) / 2 + mag_bin_edges[:-1]

    disagg = {}
    disagg['site'] = site
    disagg['imt'] = imt
    disagg['imtl'] = imtl
    disagg['poe'] = poe
    disagg['inv_time'] = inv_time
    disagg['disagg_matrix'] = full_disagg
    disagg['bins'] = {'mag_bins': mag_bins,
                      'dist_bins': dist_bins,
                      'trt_bins': trt_bins,
                      'eps_bins': eps_bins}
    disagg['bin_edges'] = {'mag_bin_edges': mag_bin_edges,
                           'dist_bin_edges': dist_bin_edges,
                           'eps_bin_edges': eps_bin_edges}

    return disagg

Update: upgrade pandas

We want the version of pandas used by toshi-hazard-store to be compatible with the latest version of OpenQuake, which uses pandas v2.0.3.

Surprisingly, we're not getting OQ errors with pandas 1.5.3, but it's best to avoid future issues.

EPIC: toshi-hazard-store Revision 4

Improvements to toshi-hazard-store to simplify the hazard pipeline and facilitate 3rd-party use:

  • #53
  • remove ToshiAPI IDs from key values for realizations. They can continue to be stored, but should not be necessary to look up realizations
  • use a hash of the oq config as part of the key value for realizations. This can be used to ensure that all realizations that go into an aggregate (THP) calculation use the same oq config settings and the same oq version (see the sketch after this list)
  • #60
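
A minimal sketch of deriving such a key component; the canonicalisation choices here (sorted JSON, including the oq version, truncated sha256) are assumptions, not the package's scheme:

import hashlib
import json


def config_hash(oq_config: dict, oq_version: str) -> str:
    """Stable digest over the oq job config plus the engine version."""
    canonical = json.dumps({"oq_version": oq_version, **oq_config}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:12]


key = config_hash({"investigation_time": 1.0, "imtls": {"PGA": [0.01, 0.02]}}, "3.17")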

FIX: query.get_hazard_rlz_curves with just toshi_id fails.

  • toshi-hazard-store version: 3.1.0
  • Python version: 3.8+
  • Operating System: Linux

Description

A query with just toshi_id fails.

What I Did

from CDC

for r in query.get_hazard_rlz_curves(m.haz_sol_id):
    print("rlz", r.loc, r.rlz, r.values[0])
    break
pynamodb.exceptions.QueryError: Failed to query items: An error occurred (ValidationException) on request (H9SDI9LD273B7NT2V9H216QJ7NVV4KQNSO5AEMVJF66Q9ASUAAJG) on table (ToshiOpenquakeHazardCurveRlzs-TEST) when calling the Query operation: One or more parameter values are not valid. The AttributeValue for a key attribute cannot contain an empty string value. Key: imt_loc_rlz_rk

Feature: site-specific VS30

We want to store site-specific VS30 values.

The current VS30 is an 'assumed value' and is a key part of the index.

  • when vs30 == 0, the site_vs30 should be used (see the sketch below)
  • site_vs30 is an arbitrary integer value
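
A sketch of the intended lookup rule; the field and function names are illustrative:

from typing import Optional


def effective_vs30(vs30: int, site_vs30: Optional[int]) -> Optional[int]:
    # vs30 == 0 is the sentinel meaning "use the site-specific value"
    return site_vs30 if vs30 == 0 else vs30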

Missing locations

nzshm-common-py@feature/20-backarc-v2

To reproduce (only the first of the 3 locations will be found):

from toshi_hazard_store.query_v3 import get_hazard_curves
from nzshm_common.location.location import LOCATIONS_SRWG214_BY_ID
from nzshm_common.location.code_location import CodedLocation
missing_locations = ['srg_1','srg_142', 'srg_186']
imts = ['PGA']
aggs = ['mean']
vs30s = [150]
locations = [CodedLocation(LOCATIONS_SRWG214_BY_ID[loc]['latitude'], LOCATIONS_SRWG214_BY_ID[loc]['longitude'], 0.001).code for loc in missing_locations]
hazard_ids = ['NSHM_v1.0.2']
for res in get_hazard_curves(locations, vs30s, hazard_ids, imts, aggs):
    print(res)

https://github.com/GNS-Science/toshi-hazard-post/blob/b7c275f4c3288aa14550c39c9f9f5eaed81e43fc/toshi_hazard_post/hazard_aggregation/locations.py#L106
https://github.com/GNS-Science/toshi-hazard-post/blob/b7c275f4c3288aa14550c39c9f9f5eaed81e43fc/toshi_hazard_post/hazard_aggregation/aggregation.py#L415-L420

feature: extend local DB support

Further to #53, we also want these:

Future enhancements:

  • split into a separate project
  • support transactions
  • adapter for redis (K/V) or a mongo document store; pure K/V is not going to perform well on the data retrieval side
  • make it fully compatible with caching adapter

Feature: add component attribute to hazard curves

For aggregate hazard curves and aggregate disaggregations we want an additional field to identify the component of motion. A number of options can be enumerated (we may add more later). The default is used if the field doesn't exist (applies to all previous calculations).

from enum import Enum

class Component(Enum):
    ROTD50 = "median horizontal ground motion"
    LHC = "largest horizontal component"

ROTD50 is the default.
