The repository contains grid (UGX) and script (LUA) files to initiate a calcium wave in neurons. In addition, job scripts for the local HPC cluster Owlsnest2 at Temple University and GNU R plotting scripts to evaluate measurements on the meshes are provided. SuperLU and Parmetis are required for parallel computation.
Recent versions of ug4, neuro_collection, ConvectionDiffusion, LIMEX, Parmetis, and LUA2C are required (the latter three are currently not publicly available, but can be obtained through the group-internal Quadruped repository).
First, the user needs to enable the plugins above via
cmake -DNEURO_COLLECTION=ON -DLIMEX=ON -DParmetis=ON -DLUA2C=ON -DConvectionDiffusion=ON
and then compile the code via make.
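Assuming a standard out-of-tree ug4 build (the `UG4_ROOT` variable, the `-DPARALLEL`/`-DSuperLU` flag names, and the `-j4` parallelism are assumptions, not stated by this repository), the full configure-and-build sequence might look like:

```shell
# Sketch: enable the required plugins and build ug4 out-of-tree.
# UG4_ROOT, -DPARALLEL=ON, -DSuperLU=ON, and -j4 are assumptions.
cd "$UG4_ROOT/build"
cmake .. -DNEURO_COLLECTION=ON -DLIMEX=ON -DParmetis=ON -DLUA2C=ON \
         -DConvectionDiffusion=ON -DSuperLU=ON -DPARALLEL=ON
make -j4
```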
An example configuration is shown here. Execute in a terminal:
ugshell -ex wave3d_revised.lua -grid zhi_y.ugx -dt 1e-8 -endTime 0.0005 -solver GMG -tol 0.02 -ryrDensity 0 -setting ryr
For cylinder structure:
ugshell -ex wave3d_cylinder.lua -grid cylinder_ex.ugx -dt 1e-6 -endTime 0.05 -solver GMG -tol 0.02 -outName . -ryrDensity 3 -setting ryr -vtk -pstep 0.005
For y-structure:
ugshell -ex wave3d_branching.lua -grid y_structure_ex.ugx -dt 1e-6 -endTime 0.05 -solver GMG -tol 0.02 -outName . -ryrDensity 3 -setting ryr -vtk -pstep 0.005
For full geometries (pstep=0 is necessary to write VTK output data in each step):
ugshell -ex wave3d_revised_sg.lua -grid full_cell_with_soma_and_subsets_assigned.ugx -dt 1e-6 -endTime 0.05 -solver GMG -tol 0.02 -outName . -ryrDensity 3 -setting ryr -vtk -pstep 0.0
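The runs above differ only in the script, grid, and RyR density, so related runs are easy to script. A minimal sketch of a sweep over the -ryrDensity parameter (the chosen density values and output names are assumptions; echo is used as a dry run instead of actually invoking ugshell):

```shell
# Dry-run sweep over RyR channel densities for the cylinder setup.
# The densities (0, 1, 3) and the ryr_<density> output names are assumptions.
count=0
for density in 0 1 3; do
  echo ugshell -ex wave3d_cylinder.lua -grid cylinder_ex.ugx -dt 1e-6 -endTime 0.05 \
       -solver GMG -tol 0.02 -outName "ryr_${density}" -ryrDensity "${density}" \
       -setting ryr -vtk -pstep 0.005
  count=$((count + 1))
done
echo "prepared ${count} runs"
```

Dropping the echo (and pointing -outName at existing directories) turns the dry run into an actual batch of simulations.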
- LIMEX uses C++11 features, so one needs at least Clang 3.3 or GCC 4.8.1.
- When using LIMEX as the time stepping scheme while also outputting VTK data, one should use an older version of the LIMEX plugin, since newer versions do not write the collecting pvd files correctly. The older revision can be obtained via: git checkout 1384a90d0d1f75f582563e71cdf4295f17bfd474
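Rolling the plugin back and rebuilding might look like the following (the plugins/Limex path under the ug4 tree is an assumption and depends on how the plugin was installed):

```shell
# Sketch: pin the LIMEX plugin to the older revision, then rebuild ug4.
# The plugins/Limex location and UG4_ROOT variable are assumptions.
cd "$UG4_ROOT/plugins/Limex"
git checkout 1384a90d0d1f75f582563e71cdf4295f17bfd474
cd "$UG4_ROOT/build"
make
```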
An example PBS job script for Owlsnest2:

#!/bin/bash -l
# Request 8 hours of walltime, 1 node with 8 processes per node, and 10 GB of memory
#PBS -l walltime=8:00:00,nodes=1:ppn=8,mem=10gb
# Send mail when the job begins, ends, or aborts
#PBS -m abe
#PBS -M [email protected]
cd ~/program_directory
module load intel
module load ompi/intel
mpirun -np 8 program_name < inputfile > outputfile
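For the simulations in this repository, the generic mpirun line would be replaced by a ugshell call. A sketch of that last line, reusing the cylinder example from above (the working directory and process count are assumptions):

```shell
# Sketch: run the cylinder example with 8 MPI processes inside the job.
# The working directory below is hypothetical; adjust it to your checkout.
cd ~/program_directory
mpirun -np 8 ugshell -ex wave3d_cylinder.lua -grid cylinder_ex.ugx \
       -dt 1e-6 -endTime 0.05 -solver GMG -tol 0.02 -outName . \
       -ryrDensity 3 -setting ryr -vtk -pstep 0.005
```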