geoscienceaustralia / anuga_core
AnuGA for the simulation of the shallow water equation
Home Page: https://anuga.anu.edu.au
License: Other
At the moment, when modelling dam breaks or river-bed scour, I can only use the 1_5 solver in sequential mode.
Can the erosion operator be made to work in parallel, as well as with the default solver?
Regards
At present test_all.py will produce an error if the parallel environment has not been installed. We need to exclude the parallel directory from the test_all.py tests. Probably the easiest approach is to try to import pypar and, if that raises an exception, exclude the parallel directory from the tests, as in the sketch below.
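A minimal sketch of the proposed guard, assuming test_all.py builds its list of test directories explicitly (the variable names here are illustrative):

# Illustrative sketch only: include the parallel tests only when pypar imports.
try:
    import pypar          # parallel environment present
    HAVE_PYPAR = True
except ImportError:
    HAVE_PYPAR = False

test_dirs = ['abstract_2d_finite_volumes', 'shallow_water']  # hypothetical list
if HAVE_PYPAR:
    test_dirs.append('parallel')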
There seems to be a mismatch between the orientation of the Z arrays returned by grd2array and dem2array. Make sure the tests in test_grd2array and test_dem2array are consistent.
There is a bug in sww2dem when verbose is set to True. In particular, on line 182 there is an error: fid.starttime[0] should be fid.starttime.
But before fixing this we should set up a unit test for the error. We probably need to run a test with verbose=True, redirect stdout to a file, and check that the output is correct; a sketch follows.
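A sketch of such a test, assuming anuga.sww2dem is importable at the package level; the file names are placeholders:

import sys
import anuga

def test_sww2dem_verbose():
    # Hypothetical test sketch: redirect stdout to a file while running
    # sww2dem with verbose=True, then inspect the captured output.
    saved_stdout = sys.stdout
    sys.stdout = open('sww2dem_verbose.txt', 'w')
    try:
        anuga.sww2dem('test.sww', 'test.asc', quantity='stage', verbose=True)
    finally:
        sys.stdout.close()
        sys.stdout = saved_stdout
    # ... then assert on the contents of sww2dem_verbose.txt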
Test that the new quantity.set_values, which can take dem or asc files, also works for the corresponding domain.set_quantity procedure.
Change distribute so that the first layer incorporates all triangles neighbouring full NODES (not just full triangles).
This should be easy and would allow fast creation of elevation data; see the sketch below.
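A sketch of the node-based expansion, assuming triangles are stored as an (M, 3) array of node ids (names here are illustrative, not the actual distribute internals):

import numpy as np

def expand_by_nodes(triangles, full_tri_ids):
    # Collect every node touched by the 'full' triangles, then keep all
    # triangles that share at least one of those nodes.
    full_nodes = np.unique(triangles[full_tri_ids])
    mask = np.isin(triangles, full_nodes).any(axis=1)
    return np.flatnonzero(mask)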
The distribute_to_vertices_and_edges command should probably be split into distribute_to_edges and distribute_edges_to_vertices, and distribute_edges_to_vertices should only be called when we yield to the outer loop.
This should produce a small improvement in speed.
file_function should be able to take a csv file as well as the current netcdf-formatted files. It would also be sensible to provide a default value if a request is made outside the time domain of the data; a sketch of this follows.
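A sketch of the requested fallback behaviour as a wrapper; the start/end times and the default value are assumptions, not part of the current file_function API:

def with_default(f, t_start, t_end, default):
    # Return f(t) inside the data's time domain, the default outside it.
    def wrapped(t, *args, **kwargs):
        if t < t_start or t > t_end:
            return default
        return f(t, *args, **kwargs)
    return wrapped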
Triangle has the property of producing fairly abrupt changes in mesh size when going from one zone of refinement to another.
This procedure doesn't take the breaklines argument which is available from create_mesh_from_regions.
Also, create_domain_from_regions creates a mesh file, which shouldn't be necessary.
Integrate the make_nearestNeighbour_quantity_function from quantity_setting_functions into quantity.set_values.
Maybe make nearest-neighbour interpolation the default, instead of our old least-squares fitting, for pts and csv files; a minimal sketch follows.
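A minimal nearest-neighbour sketch using scipy (an assumption; make_nearestNeighbour_quantity_function has its own implementation):

import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbour_values(data_xy, data_z, query_xy):
    # For each query point, take the value at the closest data point.
    tree = cKDTree(data_xy)          # data_xy: (N, 2) point locations
    _, idx = tree.query(query_xy)    # index of nearest data point
    return np.asarray(data_z)[idx]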
If file_function reads in an sts file with an x or y coordinate of zero, and the corresponding boundary point is off by 1e-7, then the point is not picked up. A tolerance-based comparison, sketched below, would avoid this.
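A comparison along these lines would tolerate the 1e-7 offset (the 1e-6 tolerance is an assumption):

import numpy as np

def same_point(p, q, atol=1.0e-6):
    # Treat points as identical if they agree to within atol in x and y.
    return np.allclose(p, q, rtol=0.0, atol=atol)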
Looks like we could use AppVeyor to run continuous integration tests for Windows.
Check out numenta/nupic.core-legacy#134 for hints on how to do this.
We would need to use MinGW.
Is there any chance that riverwalls will be implemented for the other solvers, e.g. the 1_5 solver? I want to run the Towradgi case to see how it performs against DE0 and DE1.
Currently, lat_long_UTM_conversion.py can only convert a single point. Update lat_long_UTM_conversion.py so that a list or array of points can be converted directly using LLtoUTM (and UTMtoLL); a sketch follows.
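A sketch of the batch interface, assuming the module path and the existing single-point signature LLtoUTM(lat, lon) returning (zone, easting, northing):

import numpy as np
from anuga.coordinate_transforms.lat_long_UTM_conversion import LLtoUTM

def LLtoUTM_points(points):
    # points: sequence of (latitude, longitude) pairs. Loops the existing
    # single-point converter; a truly vectorised version could follow.
    results = [LLtoUTM(lat, lon) for lat, lon in points]
    zones, eastings, northings = zip(*results)
    return zones, np.array(eastings), np.array(northings)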
mpi4py is more commonly available on supercomputing facilities. Roberto Vidmar has already done this port earlier and provided his code.
Currently, AnuGA outputs stage for every element in the model. Therefore, when you open the results in Crayfish, the whole domain shows as "wet".
Ideally, there should be a cut-off threshold to stop dry areas being output in the SWW file; see the note below.
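Note that a related control already exists and appears in the example script later on this page; a cut-off could build on it:

# Existing control: only store depths above the given threshold in the
# sww file (the 0.01 m value here is just an example).
domain.set_minimum_storable_height(0.01)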
"main repository" link on line 7 of /CONTRIBUTING.rst points to http://github.com/stoiver/anuga_core instead of http://github.com/GeoscienceAustralia/anuga_core
We may set the default as reflective or None. But what should be done if it is None?
We are using an old version of metis. There must be a problem with integers and/or alignment.
Memory seems to grow when using sww2array, probably due to calc_grid_values.
Hi all,
I plan to use ANUGA with CUDA, via this implementation of ANUGA-CUDA: https://github.com/budiaji/anuga-cuda
That implementation has two domains, gpu_domain_basic and gpu_domain_advanced. I can choose which domain to use by modifying the file https://github.com/budiaji/anuga-cuda/blob/master/src/anuga_cuda/__init__.py
I ran the simple Merimbula case here: https://github.com/budiaji/anuga-cuda/blob/master/tests/CUDA/merimbula/merimbula.py
I am using anuga version 1.3.1. Using the basic domain, the simulation ran fine. But using the advanced domain I got this error:
Traceback (most recent call last):
File "test_cuda_shallow_water.py", line 66, in <module>
for t in domain.evolve(yieldstep = yieldstep, finaltime = finaltime):
File "/home/somat/source/anuga-cuda/src/anuga_cuda/gpu_domain_advanced.py", line 2580, in evolve
self.evolve_one_euler_step(yieldstep, self.finaltime)
File "/home/somat/source/anuga-cuda/src/anuga_cuda/gpu_domain_advanced.py", line 2849, in check_point
fn(*args, **kv)
File "/home/somat/source/anuga-cuda/src/anuga_cuda/gpu_domain_advanced.py", line 2191, in evolve_one_euler_step
self.update_timestep(yieldstep, finaltime)
File "/home/somat/source/anuga-cuda/src/anuga_cuda/gpu_domain_advanced.py", line 2851, in check_point
fn.original(self.cotesting_domain, *args, **kv)
File "/home/somat/.virtualenvs/anuga-1.3.1/local/lib/python2.7/site-packages/anuga/abstract_2d_finite_volumes/generic_domain.py", line 2100, in update_timestep
if self.get_time() + timestep > self.yieldtime:
AttributeError: CUDA_advanced_domain instance has no attribute 'yieldtime'
At first I thought the CUDA_advanced_domain class had no yieldtime attribute, but I checked and it does have one. Somehow, though, it is missing at this point.
Any ideas? Thank you.
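Going by the traceback, the failing lookup happens via fn.original(self.cotesting_domain, ...), so it may be the co-testing domain rather than the GPU domain that lacks yieldtime. A quick check (attribute names taken from the traceback above):

print hasattr(domain, 'yieldtime')                   # the GPU domain
print hasattr(domain.cotesting_domain, 'yieldtime')  # the CPU co-testing domain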
It seems that sww2dem can produce weird velocity results when the sww file has been created using the DE algorithms.
We should try to use just the centroid values, via the routines from anuga.utilities.plot_utils; a sketch follows.
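A sketch of the centroid-based route via anuga.utilities.plot_utils (argument names are from memory and should be checked):

from anuga.utilities import plot_utils as util

p = util.get_output('run.sww')                          # read the sww file
pc = util.get_centroids(p, velocity_extrapolation=True)
# pc.xvel, pc.yvel now hold velocities computed from centroid values,
# avoiding the smoothed-vertex artefacts described above.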
Should set up sequential_distribute to be able to run on a system without pypar. This would allow a sequential job on a large-memory machine to perform the initial creation of the domain. It would also allow the experimental simulation code (in the towradgi directory) to run on Windows.
The utilities.polygon module has moved to geometry, but the move is not reflected in the manual.
Gareth has implemented an operator to monitor the flux through the boundary. This has only been implemented for DE0 and DE1 (Euler and RK2 timestepping).
Implement and test the Characteristic_boundary condition. Compare it to the transmissive boundary conditions.
Update quantity.set_values so that it can take a dem file, using set_values_from_utm_grid_file. It should also incorporate some of Gareth's code from quantity_setting_functions.py.
In the past (when the 1_5 solver was the default), we were able to tell ANUGA to filter out very shallow water depths on the domain.
This was very useful when running rainfall on the domain models (for flood-modelling work).
Now that the DE0 solver is the default, this option is not available.
Can you please add this feature back to ANUGA? (See the note below.)
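For reference, the old control appears in the example script later on this page:

# 1_5-era option: depths below the threshold are treated as dry
# (value in metres; 0.05 is just an example).
domain.set_minimum_allowed_height(0.05)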
In particular, we need a test where the sts file has a non-zero starttime, so that we can check that the updating of the domain starttime is correct. We should also ensure that the subsequent gauge creation works consistently.
It would be great to create a ParaView plugin to read in sww files.
We should combine the sequential and parallel structures code; there is replication of code which should be removed. I suggest moving most of the code to the structures directory and making sure the code can run in both sequential and parallel mode.
With such a cleanup it should be easy to add new structures.
We use OpenMP to speed up the least-squares fitting of pts elevation data to the domain, but we could move to using scipy for that calculation, using code in quantity_setting_functions.
The use of OpenMP has been one of the main reasons we have needed to use gcc compilers on Windows and macOS. If we could compile our code with the VC compiler, then we could release pre-compiled wheels for Windows via pip.
We need to update how sww files are written when using the DE algorithms. There are examples with shallow flows where the smoothed vertex values can jump between wet and dry, and indeed can produce weird velocity fields.
Suggest storing smoothed vertex values computed as a simple combination of the centroid values; a sketch follows.
This should speed up anuga by a factor of 1.5 to 2.
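A sketch of one such "simple combination": set each vertex value to the average of the centroid values of the triangles sharing that vertex (the array layouts here are assumptions):

import numpy as np

def vertex_values_from_centroids(triangles, centroid_values, num_nodes):
    # triangles: (M, 3) node ids per triangle; centroid_values: (M,).
    vertex_sum = np.zeros(num_nodes)
    vertex_count = np.zeros(num_nodes)
    for k in range(3):
        np.add.at(vertex_sum, triangles[:, k], centroid_values)
        np.add.at(vertex_count, triangles[:, k], 1.0)
    return vertex_sum / np.maximum(vertex_count, 1.0)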
We have a simple benchmarking setup in the repository GeoscienceAustralia/anuga_benchmark. This needs to be extended and set up to run on a few machines to ensure continuous testing of the speed of the anuga code.
When running the same code with different friction values, the velocity results are the same.
The script I am using to run ANUGA 1.3.10 follows, and below that is a simplified version of the code I use to produce the output:
"""Script for running the flood simulation. Takes 5 command line
arguments corresponding to the volume of water added to the system per second,
the time the simulation will run for, the resolutions of the outside area, the
resolution of the smaller polygon, and the output folder name.
"""
import anuga
import os #does file manipulation
import time
import sys #does command line arguments
import numpy
import shutil #for copying the file
from anuga.structures.inlet_operator import Inlet_operator
try:
in_vol = int(sys.argv[1])
time = int(sys.argv[2])
default_res = int(sys.argv[3])
small_res = int(sys.argv[4])
outfol = sys.argv[5]
except:
print 'Incorrect command arguments for simulation. Arguments should be:\n'
sys.exit()
root = 'moses' # variable will be name of all input and output variables
sww_file = root + '.sww' # name of the sww output file from previous segment
gauge_out = root + '_gauge_output.csv' # name of the file to which depths will be written
gauge_in_file = root + '_gauges.csv'
ascii_file = root + '.asc'
dem_file = root + '.dem'
bnd_file = root+'_bnd.csv'
msh_file = root + '.tsh'
pts_file = root + '.pts'
if not os.path.exists(outfol):
os.makedirs(outfol)
shutil.copyfile(gauge_in_file, '%s/%s' %(outfol, gauge_in_file))
anuga.asc2dem(ascii_file, use_cache=False, verbose=False) #ascii exported from dem in arcgis
anuga.dem2pts(dem_file, use_cache=True, verbose=False)
bounding_polygon = anuga.read_polygon(bnd_file)
inner_poly = anuga.read_polygon('moses_inner_polygon_points.csv') #!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Will have to change this line if want to add multiple regions or change inner region.
domain = anuga.create_domain_from_regions(bounding_polygon, boundary_tags={'s0':
[0], 's1': [1], 'out1': [2], 'out2': [3], 'out3': [4], 's5': [5],
's6': [6], 's7': [7], 's8': [8], 's9': [9], 's10': [10], 's11':
[11], 's12': [12], 's13': [13], 's14': [14], 's15': [15], 'in':
[16], 's17': [17], 's18': [18], 's19': [19], 's20': [20], 's21':
[21], 's22': [22], 's23': [23], 's24': [24], 's25': [25], 's26':
[26], 's27': [27]},
maximum_triangle_area = default_res, mesh_filename = msh_file,
interior_regions = [[inner_poly, small_res]], use_cache = True,
verbose = False) #s0 is name of the segment between the first two points in moses_bg_ply.csv, etc., the numbers assign each segment a number
'''
pol_1 = anuga.read_polygon('moses_pol_1.csv')
pol_2 = anuga.read_polygon('moses_pol_2.csv')
domain = anuga.create_domain_from_regions(bounding_polygon, boundary_tags={'s0':
[0], 's1': [1], 'out1': [2], 'out2': [3], 'out3': [4], 's5': [5],
's6': [6], 's7': [7], 's8': [8], 's9': [9], 's10': [10], 's11':
[11], 's12': [12], 's13': [13], 's14': [14], 's15': [15], 'in':
[16], 's17': [17], 's18': [18], 's19': [19], 's20': [20], 's21':
[21], 's22': [22], 's23': [23], 's24': [24], 's25': [25], 's26':
[26], 's27': [27]},
maximum_triangle_area = default_res, mesh_filename = msh_file,
interior_regions = [[pol_1, small_res],[pol_2, small_res]], use_cache = True,
verbose = False)
'''
print 'Number of triangles = ', len(domain)
print 'The extent is ', domain.get_extent()
print domain.statistics()
domain.set_name(sww_file) # Name of sww file
domain.set_datadir('./%s/' %outfol) # changes directory to output folder
domain.set_minimum_storable_height(1) # writes data to sww file only if depth exceeds value (e.g., 1m);
domain.set_minimum_allowed_height(1) #improves speed; see user manual
domain.set_quantity('stage', 0.0)
domain.set_quantity('friction', 0.035)
domain.set_quantity('elevation', filename=pts_file,
use_cache = True,
verbose =False,
alpha =0.1) #topographic smoothing; see user manual
line_in=[[298175.29404705,5279687.12514713], [300665.44798212,5279673.08820274]]
anuga.Inlet_operator(domain, line_in, in_vol)
Br = anuga.Reflective_boundary(domain)
Bt = anuga.Transmissive_boundary(domain)
Bo = anuga.Dirichlet_boundary([-425, 0, 0]) #outflow. Values of stage, x and y momentum. The boundary will input or output water to attempt to reach the specified height. Negative stage is used to remove water and hopefully avoid the numerical instability associated with a transmissive boundary
domain.set_boundary({'s0':
Br, 's1': Br, 'out1': Bo, 'out2': Bo, 'out3': Bo, 's5': Br, 's6':
Br, 's7': Br, 's8': Br, 's9': Br, 's10': Br, 's11': Br,
's12': Br, 's13': Br, 's14': Br, 's15': Br, 'in': Br,
's17': Br, 's18': Br, 's19': Br, 's20': Br, 's21': Br,
's22': Br, 's23': Br,'s24': Br, 's25': Br, 's26': Br, 's27': Br})
prev_dep=domain.get_quantity('stage').get_integral()
for t in domain.evolve(yieldstep=100, finaltime=time): #yieldstep is where timestep can be specified
print domain.timestepping_statistics() #prints the timestep, time it took to run previous timestep, among other things
cur_dep=domain.get_quantity('stage').get_integral()
change = cur_dep - prev_dep
prev_dep = cur_dep
print 'time: %f, change: %f' %(t,change) #printed change value is change in water volume over 100 seconds
os.chdir('./%s/'%outfol) #changes the directory of the program to the output folder as specified in the command arguments
names=[] #array used to hold the list of gauge sample names
output=[] #array used to hold all the depths over time
b=True #boolean variable for loop manipulation
anuga.sww2csv_gauges(sww_file, gauge_in_file, quantities=['depth'], verbose=True)
with open(gauge_in_file,'r') as f: #opens the file that contains the list of gauge names and locations
f.readline()
for line in f:
line=line.split(',')
oname='gauge_'+line[2]+'.csv'
names.append(oname) #adds each gauge name to the array in order
for name in names: # this loop will run once for each name (and thus each gauge)
if b: #this section will only execute the first run through
b=False
with open(name,'r') as cur_file: #opens the corresponding gauge file
cur_file.readline()
for line in cur_file:
dep = line.split(',')[2].strip()
output.append([float(dep)]) #processes the string into a useful format (a float in this case) and adds it to the array
else: #this section will execute every run through except the first. It functions the same except adding to each element of the array
n=0
with open(name,'r') as cur_file:
cur_file.readline()
for line in cur_file:
dep = line.split(',')[2].strip()
output[n].append(float(dep)) #processes each data point of the gauge file and adds it to the output
n+=1
os.remove(name) #this line deletes the gauge files made by the anuga method. It's not necessary for functionality but makes the file directory cleaner
with open(gauge_out,'w') as out_file: #opens an output file and writes the output to the file. Note the output file is csv (comma separated) and is usable in excel.
for el in output:
out_file.write(str(el).strip('[]'))
out_file.write(' \n')
print 'run complete'
import anuga
anuga.sww2dem('moses.sww', 'friction.asc', quantity='friction', cellsize=30,
reduction=20, verbose=True)
velocity = '(xmomentum**2 + ymomentum**2)**0.5/(stage-elevation+1.e-30)' #Velocity
anuga.sww2dem('moses.sww', 'velocity.asc', quantity=velocity, cellsize=30, reduction=20, verbose=True)
For very large problems it would be very useful to create and partition the mesh in parallel. At present we create a sequential mesh on one processor, then partition that mesh on the same processor, and finally communicate the partitions to the various processors.
It would be great if we could partition the high level description of the mesh and do the subsequent triangulations independently on each of the processors.
Check out http://msl.cs.odu.edu/mediawiki/index.php/Parallel_Constrained_Delaunay_Mesh_%28PCDM%29_Generation
There is a bug when using sts boundary conditions with a non-zero starttime. During the evolve, the times used should be relative times, but they seem to be absolute times. This caused a problem when applying an sts BC, as it assumed relative times.
We should change the evolve to use relative times again (it used to); a sketch follows.
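A sketch of the relative-time convention (the attribute names are assumptions):

# Evaluate boundary/file functions at time relative to the domain start,
# not at absolute time.
t_rel = domain.get_time() - domain.starttime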