goma's Introduction

Goma

A Full-Newton Finite Element Program for Free and Moving Boundary Problems with Coupled Fluid/Solid Momentum, Energy, Mass, and Chemical Species Transport

For more information see the Goma website

Documentation

Most of the documentation can be found at https://www.gomafem.com/documentation.html

License

See the LICENSE file. Goma is licensed under the GNU General Public License v2.0.

Third party library licenses

CMake modules

Some CMake modules under cmake/ were modified from the Eigen library, as noted at the top of each module.

See licenses at https://gitlab.com/libeigen/eigen

FindMETIS.cmake

  • @copyright (c) 2009-2014 The University of Tennessee and The University
  •                      of Tennessee Research Foundation.
    
  •                      All rights reserved.
    
  • @copyright (c) 2012-2014 Inria. All rights reserved.
  • @copyright (c) 2012-2014 Bordeaux INP, CNRS (LaBRI UMR 5800), Inria, Univ. Bordeaux. All rights reserved.

FindUMFPACK.cmake

See the license notice at the top of cmake/FindUMFPACK.cmake.

nanoflann

nanoflann is included under the BSD license; please see nanoflann.hpp.

Major Changes

See CHANGES.md

Build Instructions

See BUILD.md

Spack package

The Spack package manager (https://spack.io) can be used to install Goma and all of Goma's third-party libraries.

The Goma package is currently available on the develop branch of Spack.

Example for a bash-like shell:

git clone https://github.com/spack/spack.git
. spack/share/spack/setup-env.sh
spack install goma

For more information on build options see:

spack info goma

For more information on using spack see the spack documentation.

Third party libraries

  • Metis 5.1.0 (Optional)
  • SEACAS 2022-01-27 (Required: Exodus and Aprepro)
  • BLAS/LAPACK (Configured through Trilinos)
  • Trilinos matrix solvers 13.0.1 and up (Required: AztecOO, Amesos, Epetra, TPL LAPACK; Optional: Stratimikos [with Teko, Ifpack, Belos, Tpetra])
  • PETSc matrix solvers (KSP, PC)
  • MUMPS 5.4.0 (through Trilinos or PETSc only)
  • Superlu_dist 7.2.0 (through Trilinos or PETSc only, Trilinos requires parmetis build)
  • UMFPACK, SuiteSparse 5.10.1 (Optional)
  • ARPACK/arpack-ng 3.8.0 (Optional)
  • sparse 1.4b (Optional)
  • Catch2 (Optional testing)

Run the tutorial

To get started with Goma, see the tutorial on the Goma website.


goma's Issues

Numerical Jacobian debugger breaks when checking shells of dimension 1

global_h_elem_siz is called, but it uses an empty value that never gets filled, because h_elem_size is not called on elements that have ei->ielem_dim == 1.

global_h_elem_siz is also used for some PG stabilization; changing it to fix this problem alone causes some of the test problems to break.

The recommendation is to implement an element-block-level h_elem_siz (i.e. eb_h_elem_size) and use it in place of global_h_elem_siz in all cases; a rough sketch follows.
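
For illustration, a minimal sketch of what such an element-block-level size could look like; eb_h_elem_size and its arguments are hypothetical names, not existing Goma identifiers, and a real routine would pull element sizes from Goma's per-block data structures rather than plain arrays.

#include <math.h>

/* Hypothetical sketch: a per-element-block mesh size that includes shell
 * elements (the ei->ielem_dim == 1 case in the report above), so the value
 * is never left empty the way global_h_elem_siz can be. */
static double eb_h_elem_size(int eb_num_elems, const double *elem_volume,
                             const int *elem_dim)
{
  double h_sum = 0.0;
  int e;
  for (e = 0; e < eb_num_elems; e++)
    {
      /* crude measure: h ~ (element volume)^(1/dim), even for dim == 1 */
      h_sum += pow(elem_volume[e], 1.0 / (double)elem_dim[e]);
    }
  return (eb_num_elems > 0) ? h_sum / (double)eb_num_elems : 0.0;
}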

PSPG boundary condition not consistent with local sizing

tau_pspg is calculated using only the global formulation in the PSPG boundary condition.

Compare

goma/src/mm_fill_terms.c

Lines 29499 to 29577 in edabf84

if(pspg_global)
  {
    /* Now calculate the element Reynolds number based on a global
     * norm of the velocity and determine tau_pspg discretely from Re
     * The global version has no Jacobian dependencies
     */
    Re = rho * U_norm * h_elem / (2.0 * mu_avg);
    if (Re <= 3.0)
      {
        tau_pspg = PS_scaling * h_elem * h_elem / (12.0 * mu_avg);
      }
    else if (Re > 3.0)
      {
        tau_pspg = PS_scaling * h_elem / (2.0 * rho * U_norm);
      }
  }
else if (pspg_local)
  {
    hh_siz = 0.;
    for ( p=0; p<dim; p++)
      {
        hh_siz += hsquared[p];
      }
    // Average value of h**2 in the element
    hh_siz = hh_siz/ ((double )dim);
    // Average value of v**2 in the element
    vv_speed = 0.0;
    for ( a=0; a<wim; a++)
      {
        vv_speed += v_avg[a]*v_avg[a];
      }
    // Use vv_speed and hh_siz for tau_pspg, note it has a continuous dependence on Re
    tau_pspg1 = rho_avg*rho_avg*vv_speed/hh_siz + (9.0*mu_avg*mu_avg)/(hh_siz*hh_siz);
    if ( pd->TimeIntegration != STEADY)
      {
        tau_pspg1 += 4.0/(dt*dt);
      }
    tau_pspg = PS_scaling/sqrt(tau_pspg1);
    // tau_pspg derivatives wrt v from vv_speed
    if ( d_pspg != NULL && pd->v[VELOCITY1] )
      {
        for ( b=0; b<dim; b++)
          {
            var = VELOCITY1+b;
            if ( pd->v[var] )
              {
                for ( j=0; j<ei->dof[var]; j++)
                  {
                    d_tau_pspg_dv[b][j] = -tau_pspg/tau_pspg1;
                    d_tau_pspg_dv[b][j] *= rho_avg*rho_avg/hh_siz * v_avg[b]*pg_data->dv_dnode[b][j];
                  }
              }
          }
      }
    // tau_pspg derivatives wrt mesh from hh_siz
    if ( d_pspg != NULL && pd->v[MESH_DISPLACEMENT1] )
      {
        for ( b=0; b<dim; b++)
          {
            var = MESH_DISPLACEMENT1+b;
            if ( pd->v[var] )
              {
                for ( j=0; j<ei->dof[var]; j++)
                  {
                    d_tau_pspg_dX[b][j] = tau_pspg/tau_pspg1;
                    d_tau_pspg_dX[b][j] *= (rho_avg*rho_avg*vv_speed + 18.0*mu_avg*mu_avg/hh_siz) / (hh_siz*hh_siz);
                    d_tau_pspg_dX[b][j] *= pg_data->hhv[b][b]*pg_data->dhv_dxnode[b][j]/((double)dim);
                  }
              }
          }
      }
  }

To

goma/src/mm_ns_bc.c

Lines 7901 to 7908 in edabf84

if (Re <= 3.0)
  {
    tau_pspg = -PS_scaling * h_elem * h_elem / (12.0 * mu_avg);
  }
else if (Re > 3.0)
  {
    tau_pspg = -PS_scaling * h_elem / (2.0 * rho * U_norm);
  }
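
For reference, a standalone sketch of the local-sizing formula from mm_fill_terms.c, factored as a helper the boundary condition could share; the function name and argument list are hypothetical, and wiring the element sizes (hsquared) and averaged velocity (v_avg) into mm_ns_bc.c is the real work.

#include <math.h>

/* Sketch only: tau_pspg with the same local element sizing used in
 * mm_fill_terms.c, so the PSPG boundary condition could call one shared
 * helper instead of hard-coding the global form.  All arguments are
 * stand-ins for quantities Goma already has in hand at the BC. */
static double tau_pspg_local(double rho_avg, double mu_avg, double PS_scaling,
                             const double *hsquared, const double *v_avg,
                             int dim, int wim, int transient, double dt)
{
  double hh_siz = 0.0, vv_speed = 0.0, tau_pspg1;
  int p, a;

  for (p = 0; p < dim; p++) hh_siz += hsquared[p];
  hh_siz /= (double)dim;                       /* average value of h**2 */
  for (a = 0; a < wim; a++) vv_speed += v_avg[a] * v_avg[a];

  tau_pspg1 = rho_avg * rho_avg * vv_speed / hh_siz
            + 9.0 * mu_avg * mu_avg / (hh_siz * hh_siz);
  if (transient) tau_pspg1 += 4.0 / (dt * dt);

  return PS_scaling / sqrt(tau_pspg1);
}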

get_nodal_unknown_offset out of bounds access on GUTS problem

function get_nodal_unknown_offset

The index is above bounds on the porous_dimp problem; this also occurs on porous_taper.

https://github.com/goma/goma/blob/master/src/rf_node_vars.c#L263

The index is 7, but the array is of size 7.

The index value comes from Nodal_Offset, and Nodal_Offset seems to correspond to the nodal variables' Num_Unknowns.

I'm unsure what the correct behavior for this function should be.

(gdb) p index
$89 = 7
(gdb) p nv->Num_Var_Desc
$90 = 7
(gdb) p i_match
$91 = 6
(gdb) p nv->Nodal_Offset[i_match]
$92 = 7
(gdb) p subVarType
$93 = 0
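
As a sketch of a possible defensive check (not a verified fix), the offset could be validated before it is used, since the trace above shows it coming back equal to the array size:

/* Hypothetical guard: treat an offset at or past the end of the array as
 * "unknown not present at this node" instead of reading out of bounds. */
static int checked_offset(int index, int array_size)
{
  if (index < 0 || index >= array_size)
    {
      return -1;   /* caller must handle "not found" */
    }
  return index;
}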

Warnings under compile using gcc 5.4.0-ubuntu1~16.04.4 as of May 18th, 2017

The following is an inventory of files that generate compile warnings, as of May 18th, 2017, with the gcc 5.4.0 compiler.

These warnings may not appear under other compilers, but I thought I would point them out for any interested parties to do some cleanup.

bc_colloc.c count : 1
rd_pixel_image2.c count : 6
wr_exo.c count : 1
mm_augc_util.c count : 1
mm_eh.c count : 2
mm_input.c count : 3
mm_input_mp.c count : 4
mm_post_proc.c count : 1
mm_std_models.c count : 1
ac_hunt.c count : 1
ac_particles.c count : 1
brkfix/ppi.c count : 5
brkfix/utils.c count : 1

Possible error in function mass_flux_equil_mtc in mm_fill_species.c

in function mass_flux_equil_mtc in mm_fill_species.c

dv_dw[i][i] += sv[j]/bottom;

This occurs after the following loop, leaving j = pd->Num_Species_Eqn:

for(j=0;j<pd->Num_Species_Eqn;j++)

dv_dw[i][i] += sv[j]/bottom;

Similarly, the following occurs outside another loop with the same values:

dv_dw[i][i] += vol[j]/bottom;

dv_dw[i][i] += vol[j]/bottom;

These look like they either should be inside the loop or should be indexed with something other than j; a sketch of the first reading follows.
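
One possible reading of the intent, shown as an unverified sketch (the identifiers are the ones already used in mass_flux_equil_mtc): the accumulation belongs inside the species loop so that every entry contributes, instead of reading one element past the end.

/* Sketch, not a verified fix: accumulate over all species inside the loop
 * rather than once after it with j == pd->Num_Species_Eqn. */
for (j = 0; j < pd->Num_Species_Eqn; j++)
  {
    dv_dw[i][i] += sv[j] / bottom;
  }
/* ...and likewise for the two vol[j]/bottom accumulations noted above. */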

Undefined gradII_Hside and gradII_Hside_F in assemble_shell_energy

The issue here was fixed when I updated assemble_shell_energy to use calculate_lubq (the q-calculator). Unfortunately, those changes were made in another git repo NOT connected with this one, on a whole other system. Scott, when you've a chance, could you retrieve that update for me, sanitize it sufficiently, and get this fixed?

Right now, to my knowledge, only one program at SNL uses assemble_shell_energy. Still, it would be good to fix the sanitized version. This error will not rear its head in the test suite, as that problem is a single-phase lubrication flow.

Build issues using easy-goma-builder.sh (Trilinos source and rpc/types.h)

When trying to build Goma (master) using the easy-goma-builder.sh script, I encountered two issues.

First, the Trilinos source archive could not be downloaded from the location the script uses (http://trilinos.csbsju.edu/download/files/trilinos-12.10.1-Source.tar.gz).
I found an alternative URL here: https://github.com/alces-software/packager-base/blob/master/libs/trilinos/12.10.1/trilinos-12.10.1-Source.tar.gz.fetch
Downloading the archive from there manually and putting it in the gomaTPLs/tars directory worked.

Second, the rpc/types.h file could not be found on my system. Simply commenting out the two problematic #include lines in sl_matrix_dump.c made Goma build successfully.

My OS is Fedora 28 x64.

General Dirichlet boundary conditions can use unreasonable values for undefined DOFs

This problem arises when applying the GD_LINEAR boundary condition to an internal, double-sided sideset where a DOF (a species concentration in this case) is only defined on one side of the sideset. When looping through the faces on the side that has the DOF properly defined, everything works fine. However, when later looping through the faces attached to the opposing block, an incorrect value is used. I did not verify, but it appears that it is either overflowing memory or, more likely, indexing back into some other set of variables.

The correct action in this case is to use a single-sided sideset, which we are now doing. However, I would expect that when a double-sided sideset is used and invalid variables are called, an error should be thrown.
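
A sketch of the kind of guard being requested; the placement and the exact check are hypothetical, but the idea is to abort through Goma's error handler rather than read whatever memory follows when the variable is not defined in the block being processed.

/* Hypothetical guard inside the GD_* application loop: var is the variable
 * the GD condition references, pd the problem description for the material
 * on the current side of the sideset. */
if (!pd->v[var])
  {
    EH(-1, "GD boundary condition references a variable that is not defined on this block; use a single-sided sideset.");
  }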

History Fix

I was able to fix all the rebase issues dshari had been having.
I have pushed them to my master branch.

Does anyone care to inspect and push them to the goma/goma master branch?
There's now an empty commit due to the conflict resolution; it can probably be deleted in a further rebase.

Unused files in goma

Are the following files still used? They aren't referenced in the Makefiles or other code:

jas_main.c

bc_colloc_rot.c

mm_distng_cond.c

Also the parser files referenced in #30

Some questions about values that are being calculated or calling functions but never checked or used

These are all causing compiler warnings, but I am unsure whether they are just left over from older code or, in the case of variables set by functions, whether their values should be checked.

bc_contact.c

function: apply_contact_bc (line 137)

Is xsurf[] needed, or can it be removed? It is computed but never used.

Similarly, x_rs_dot[] (line 134) is computed but never used.

function: jump_down_to_fluid (line 929)

err is set but not used; it looks like there is a "PRS fix:" note on this.

function: Lagrange_mult_equation (line 1089)

dAdx is also set (calculated) but unused

function: apply_embedded_colloc_bc (line 2060)

base_interp is being set but is not being used

bc_curve.c

function: apply_integrated_curve_bc

iapply is being set as a flag for material but is never being used

xsurf[] is being set again but not used

el_quality.c

function: jacobian_metric

err is being set with load_basis_functions but is not being used

rd_pixel_image.c

function: rd_image_to_mesh

converge is being set with the find_xi method but is never being checked

evaluate_volume_integral issue with subgrid integration and huygens renormalization

The line

if( subgrid_integration_active ) start_tree = create_shape_fcn_tree( ls->Integration_Depth );

tries to create a shape function tree (I'm still unsure how this tree structure is meant to behave), but in some cases the ei structure, which create_shape_fcn_tree uses to malloc structures inside the tree, is not set correctly.

When ei is not set correctly, the arrays in the tree are not malloc'd, and Goma then reads and writes to the unallocated space, causing segfaults.
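
A sketch of a defensive guard; ei_is_initialized() is a hypothetical predicate standing in for whatever check confirms the ei element information has been loaded for the current element.

/* Fail loudly instead of letting create_shape_fcn_tree() size its mallocs
 * from an uninitialized ei and segfault later. */
if (subgrid_integration_active)
  {
    if (!ei_is_initialized(ei))
      {
        EH(-1, "evaluate_volume_integral: ei not set up before create_shape_fcn_tree");
      }
    start_tree = create_shape_fcn_tree(ls->Integration_Depth);
  }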

README.md and scripts/README.md require small corrections

  1. Line in scripts/README.md:

export PATH="/[path to gomalibs]/trilinos-12.6.3-Build/bin:$PATH"

needs to be changed to:

export PATH="/[path to gomalibs]/trilinos-12.10.1-Built/bin:$PATH"

  2. Line in README.md:

${gomadir}/TPLs/trilinos-12.6.3/bin/aprepro -v

needs to be changed to:

${gomadir}/TPLs/trilinos-12.10.1-Built/bin/aprepro -v

Debugging GOMA with TotalView

When debugging GOMA with TotalView, I get the error:

Fatal Error: extract_block block too long

This occurs when pausing the run and clicking on a GOMA file in the stack.

Default supported compiler, mpi

I would like to have one supported version of GCC and OpenMPI identified for GOMA. These would be the versions referenced in the Makefile.

Minimum Resolved Timestep

I'm not sure that I'm using this input deck card correctly.
The Goma manual states,
"Its role is to set a lower bound for the time step with respect to the Time step error
tolerance. When a converged time step is obtained by GOMA, the difference between
the predicted solution and final solution for that time step is compared to the Time step
error tolerance. If the difference exceeds this tolerance the step fails and the time step
is cut (usually by a factor of 2), UNLESS the time step falls below the Minimum
Resolved Time Step size. In this case the step is accepted, even if this error tolerance is
not achieved. This provides a mechanism for the modeler to control what phenomena is
resolved and what phenomena is ignored."

The behavior I observe: I set the Minimum Resolved Timestep to some value, x. Even when a time step fails to converge in the Newton iteration, the time step does not get cut below x. The simulation retries the same calculation with the same step that failed Newton convergence previously, repeating the same timestep over and over until, I assume, it runs out of timesteps.

Is this the expected behavior, or can we fix it? I think the card should allow further timesteps with the same dt only if Newton convergence succeeds, and should cut the timestep whenever Newton convergence fails, no matter what.
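
A sketch of the control flow being proposed, using hypothetical names rather than Goma's actual time-stepper variables: a Newton failure always cuts dt, while the Minimum Resolved Time Step only overrides cuts driven by the time step error tolerance.

/* Returns the dt to attempt next; *accept_step says whether the current
 * converged step should be accepted.  Hypothetical sketch only. */
static double next_dt(double dt, int newton_converged, double step_error,
                      double error_tolerance, double min_resolved_dt,
                      int *accept_step)
{
  *accept_step = 0;
  if (!newton_converged)
    {
      return 0.5 * dt;             /* always cut when Newton fails */
    }
  if (step_error > error_tolerance && 0.5 * dt >= min_resolved_dt)
    {
      return 0.5 * dt;             /* cut to meet the error tolerance */
    }
  *accept_step = 1;                /* within tolerance, or already below the
                                      minimum resolved scale: accept */
  return dt;
}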

LSA - eigenvectors not added to most recent solution

When performing linear stability analysis (LSA), the resulting eigenvectors (displacements) are not added to the most recent solution (deformed mesh) but instead are added to the FEM file. This limitation can be overcome if the LSA is performed only for a single base flow by annealing the mesh and using it as the FEM file and then doing LSA. But this is not typically the case; one is usually performing a continuation in a parameter (or several) and at each solution along the path performing the LSA.

Combine all makefiles

We currently have separate makefiles for standard (optimized), debug, and testing builds. There are two improvements that are needed:

  1. Testing (guts) should be done on the exact same build as the standard optimized run.
  2. All makefiles should be combined into a single one, with different targets.
    Some of this may be moot if we move to autotools.

Test suite problem, slider_shell, fails with optimization

This problem fails to run with both -O2 and -O3 optimization; it is the only problem that fails the test suite with advanced optimization.

The problem does pass with the default -O1.

Exits with: Failed bulk/shell node match consistency check!

Git Guide Wiki

Hi all,

I created a wiki page on here for using git. It has some of the basic stuff, but most importantly, I tried to show a simplified set of commands for getting your branch ready for a pull request.

If anybody is waiting to make changes to Goma but isn't sure what to do (Robert), have a try at the last part and let me know how it goes. I would say if it takes you more than 10 minutes or so to get to the point where you are clicking pull request, let me know where you are getting stuck.

Also, if any of the more seasoned git users wants to make edits, feel free to do so.

Daniel

Remove compiler warnings and errors in GCC 4.8.2

On the default Sandia build, we have a number of compiler warnings. It's bad to keep these around, as they may mask new problems introduced into the code.

I would like to see all of these warnings resolved. They can either be fixed, or if we don't care, the warning can be turned off (for example, you may not care about -Wcomment warnings).

GOMA run not parallel consistent

In running a continuum-shell problem, I get different behavior on different numbers of processors during the very first assembly. I have a hard time believing that this is expected behavior; during the first assembly, all equations should be built using the initial conditions. When running on 4 processors, at least one Newton iteration works. However, at 8 or 12 processors, NaNs appear during assembly, within the mesh equations. I am currently doing a serial run just to check.

array indexing issue in load_MandE_flux mm_fill_porous.c

GCC reports array indexing above bounds in load_MandE_flux on pmv->d_rel_mass_flux_dmesh

It looks like it's defined as

dbl d_diff_flux_dmesh[MAX_PMV][DIM][DIM][MDE];

but is indexed in many places as

dbl d_diff_flux_dmesh[DIM][MAX_PMV][DIM][MDE];

butler_volmer_heat_source array bound issue

in mm_std_models.c

The function butler_volmer_heat_source has an array issue where a struct array element (d_h->C) is accessed far above its bounds, because MAX_VARIABLE_TYPES is used as part of the index while the array in the struct is only [MAX_CONC][MDE] big.

I'm unsure what the correct behaviour should be.

double
butler_volmer_heat_source(HEAT_SOURCE_DEPENDENCE_STRUCT *d_h, dbl *a)

https://github.com/goma/goma/blob/master/mm_std_models.c#L1338-L1352

        for (j = 0; j < ei->dof[var]; j++)
          {
           phi_j = bf[var]->phi[j];
           for (w = 0; w < pd->Num_Species_Eqn; w++ )
             {
               d_h->C[MAX_VARIABLE_TYPES +  w][j] = 0.0;  /* no dependency other than wspec */

             }

           d_h->C[MAX_VARIABLE_TYPES + wspec][j] = dhdc*phi_j;

          }
struct heat_source_dependence
{
  double v[DIM][MDE];      /* velocity dependence. */
  double X[DIM][MDE];      /* mesh dependence. */
  double T[MDE];           /* temperature dependence. */
  double C[MAX_CONC][MDE]; /* conc dependence. */
  double V[MDE];           /* voltage dependence. */
  double S[MAX_MODES][DIM][DIM][MDE]; /* stress mode dependence. */
  double F[MDE];           /* level set field dependence */
  double P[MDE];           /* acoustic pressure dependence  */
};
typedef struct heat_source_dependence HEAT_SOURCE_DEPENDENCE_STRUCT;
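
One possible correction, shown only as an unverified sketch using the identifiers from the excerpt above: since the dependence array in the struct is laid out as C[MAX_CONC][MDE], it would be indexed directly by species number, whereas the MAX_VARIABLE_TYPES + w convention belongs to the flat d_species_source-style arrays.

/* Sketch, not a verified fix: index d_h->C by species number only. */
for (j = 0; j < ei->dof[var]; j++)
  {
    phi_j = bf[var]->phi[j];
    for (w = 0; w < pd->Num_Species_Eqn; w++)
      {
        d_h->C[w][j] = 0.0;          /* no dependency other than wspec */
      }
    d_h->C[wspec][j] = dhdc * phi_j;
  }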

Possible array bound error correct_stream_fcn (mm_post_proc.c)

mm_post_proc.c: In function 'correct_stream_fcn':

iiii= nsideq[nstart + ii +1];

The array, defined as int nsideq[7], is indexed by:

iii = nsideq[nstart + ii];
iiii= nsideq[nstart + ii +1];

where nstart is set to i in a previous for loop:

for (i = 0; i < ei->num_sides; i++) {
...
  nstart = i;

and ii is set in the current for loop:

for (ii = 0; ii < ei->num_sides - 1; ii++) {

If num_sides is 6, nstart can be at most 5 and ii at most 4, which makes the maximum indices into nsideq 9 and 10, both of which are out of bounds.

The compiler, however, thinks that nsideq needs to be of size 14 for there to be no array bounds error, and I'm not sure how it concluded that the iiii line indexes up to 13.

I'm unsure of what this function is doing, so I'm not sure how to rearrange the code to see if this warning can be corrected.

Restructure code into appropriate directories

Currently, all GOMA source code is located in a single flat directory. Migrating files into appropriate /src and /include directories will improve readability and navigation. Additionally, build output should be automatically placed in a /build (or other name) subdirectory. This will also integrate better when brk and fix are added.

Consistency Issues between side set associations in serial and parallel

ss_to_blks is used to associate a side set with a block id for applying boundary conditions. The association is not always the same in parallel, causing BCs to be applied on multiple blocks.

SS_Internal_Boundary is used to check whether a side set is internal. The logic for this is written only for serial meshes.
In parallel, it only takes into account the mesh on that processor when checking whether a side set is internal.
This causes SS_Internal_Boundary to be wrong, depending on how the mesh was split by brk, and then causes boundary conditions to be applied when they should not be (e.g. KINEMATIC). A parallel-consistent check is sketched below.
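
A sketch of how the internal-sideset test could be made parallel consistent; ss_is_internal_global and the block bitmask are hypothetical, and this assumes fewer than 32 element blocks. Each processor reports which blocks it sees touching the side set, and a global OR reduction merges them so the answer no longer depends on how brk split the mesh.

#include <mpi.h>

/* Hypothetical helper: local_block_mask has one bit set per element block
 * that touches the side set on this processor. */
static int ss_is_internal_global(unsigned int local_block_mask, MPI_Comm comm)
{
  unsigned int global_mask = 0;
  int nblocks = 0;

  MPI_Allreduce(&local_block_mask, &global_mask, 1, MPI_UNSIGNED, MPI_BOR, comm);

  for (; global_mask != 0; global_mask &= global_mask - 1)
    {
      nblocks++;                  /* count blocks touching the side set */
    }
  return nblocks > 1;             /* more than one block => internal */
}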

zero_lec does not always zero all of the lec

in mm_fill.c

This came up when I was trying to figure out why p_C_T_U had differing iteration histories.

p_C_T_U and p_dry.ml exhibit this problem, where they have nonzeros in both lec->R and lec->J while p_susp correctly has all elements zero'd. I haven't tested other problems.

The function comment explains why they tried to avoid setting the whole array to zero manually, but I'm not sure how much it would actually cost. It might be significant with a larger MAX_CONC and large meshes, but I haven't done any profiling:

      *  It uses the same algorithm as the fill routine to minimize the 
      *  the amount of zeroing. This is necessary since the local element 
      *  Jacobian is so large.
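
For profiling against the selective zeroing, the straightforward alternative is just to clear the whole local element contribution; this assumes lec->R and lec->J are fixed-size arrays inside the struct (so sizeof gives the full extent), which should be checked against the actual declaration used by mm_fill.c.

#include <string.h>

/* Sketch: unconditionally zero the full residual and Jacobian contributions
 * rather than replaying the fill loop's pattern. */
memset(lec->R, 0, sizeof(lec->R));
memset(lec->J, 0, sizeof(lec->J));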

VELO_SLIP BC N_cl is not correctly communicated for parallel

VELO_SLIP expects a single-node node set (ns) id to calculate a relative distance, but when the mesh is broken this node set is only located on one of the processors, causing either incorrect node distances or a segfault (depending on the run), since the other processors do not have access to that node.

It seems like this problem may exist in some of the other VELO_SLIP boundary conditions as well.

A possible fix discussed was communicating the node position to all processors and updating it accordingly when needed; a sketch follows.
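
A sketch of that idea; the helper and the choice of owning rank are hypothetical. The processor that owns the node-set node broadcasts its coordinates so every processor can compute the relative distance.

#include <mpi.h>

/* Hypothetical helper: owner_rank must be agreed on beforehand (for example,
 * the lowest rank that owns the node after brk splits the mesh). */
static void broadcast_ns_node_position(double xyz[3], int owner_rank,
                                       MPI_Comm comm)
{
  MPI_Bcast(xyz, 3, MPI_DOUBLE, owner_rank, comm);
}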

load_MandE_flux issue with array bounds

in mm_fill_porous.c

load_MandE_flux line 8598

There is a problem where, if cr->PorousFluxModel is DARCY_FICKIAN, w can be set to MAX_PMV (4) because of the for loop (w = 0; w < MAX_PMV; w++), but pmv->liq_Xvol_solvents is an array of size MAX_PMV, so line 8598 accesses outside the array bounds:

 t3 = pmv->d_liq_darcy_velocity_dSM[a][j] *
              pmv->liq_Xvol_solvents[w] * mp->density;
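
One way to keep the access in bounds, shown as an unverified sketch using the identifiers from the excerpt: limit the solvent index to the porous equations actually defined instead of letting w reach MAX_PMV.

/* Sketch, not a verified fix: bound w by the number of porous equations. */
for (w = 0; w < pd->Num_Porous_Eqn && w < MAX_PMV; w++)
  {
    t3 = pmv->d_liq_darcy_velocity_dSM[a][j] *
         pmv->liq_Xvol_solvents[w] * mp->density;
    /* ... */
  }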

How to resolve MAX_CONC dependency in functions?

Quite a few functions (mostly in mm_std_models and mm_fill_species) expect MAX_CONC to be greater than 2, and many expect MAX_CONC to be >= 7 or 8. This causes a large number of array-index-out-of-bounds warnings with the default makefile setting of MAX_CONC=2, and many remain with the GUTS makefile setting of MAX_CONC=4.

Should these functions exit with an EH message specifying what MAX_CONC is expected for that function, or should they be looked at in more detail by someone else?
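
A sketch of the first option: guard at compile time (a run-time EH call inside the routine would work similarly) so a build whose MAX_CONC is too small fails loudly instead of silently indexing past array bounds.

/* Hypothetical guard near the top of a file that assumes a large MAX_CONC;
 * the threshold 8 is just the largest expectation mentioned above. */
#if MAX_CONC < 8
#error "This routine assumes MAX_CONC >= 8; rebuild Goma with a larger MAX_CONC."
#endif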

Brk fails on some files

I am running a mesh that has a volume block and an adjoining shell block. I cannot share the geometry, but it is similar to the tri_shell_FSI problem.

Fails within brk with error:

wr_dpi.c:1078: nc_put_var_int() varid=4332943

MAX_CONC is expected to be 4 or more by some functions, which causes array-subscript-above-bounds warnings.

MAX_CONC = 2 in default Makefile

mm_fill_species.c expects MAX_CONC = 4

d_mass_flux[store][MAX_VARIABLE_TYPES + 2] += dQdx;

d_mass_flux[store][MAX_VARIABLE_TYPES + 2] += dQdx;

Similarly the same expectation in mm_std_models.c

goma/mm_std_models.c

Lines 5800 to 5923 in efb8a20

      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = dQ3dx3;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = dQ3dx4;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = 0.;
    }
  break;
case 1:
  mp->species_source[species_no] = Q2;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = dQ2dx1;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = dQ2dx2;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = dQ2dx5;
    }
  break;
case 2:
  mp->species_source[species_no] = Q1 + Q2;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = dQ2dx1;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = dQ1dx2 + dQ2dx2;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = dQ1dx3;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = dQ2dx5;
    }
  break;
case 3:
  mp->species_source[species_no] = Q1 + Q3;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = dQ3dx0;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = dQ1dx2;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = dQ1dx3 + dQ3dx3;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = dQ3dx4;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = 0.;
    }
  break;
case 4:
  mp->species_source[species_no] = -Q3;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = -dQ3dx0;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = -dQ3dx3;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = -dQ3dx4;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = 0.;
    }
  break;
case 5:
  mp->species_source[species_no] = -Q2;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = -dQ2dx1;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = -dQ2dx2;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = -dQ2dx5;
    }
  break;
case 6:
  mp->species_source[species_no] = 0.;
  var = TEMPERATURE;
  if (pd->v[var])
    {
      mp->d_species_source[var] = 0.;
    }
  var = MASS_FRACTION;
  if (pd->v[var])
    {
      mp->d_species_source[MAX_VARIABLE_TYPES + 0] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 1] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 2] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + 3] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + four] = 0.;
      mp->d_species_source[MAX_VARIABLE_TYPES + five] = 0.;

mp->d_species_source[MAX_VARIABLE_TYPES+2] = (0.7*dr1_dT - dr2_dT);

but mm_std_models also expects MAX_CONC = 8

https://github.com/goma/goma/blob/efb8a20b7d0807f8c7c91f16f0bc40f5140b5f19/mm_std_models.c#L5491-5496

Question on unused values in the Makefiles

  1. Is CHEMKIN still used? I see that the changelog says that it has been untested for 10 years. (goma_c target)
  2. Is Purify still used? I see it is commented out by default.
  3. Are the parser files still in use? I see that mm_parser.h is not referenced anywhere but the parser files, and it doesn't look like they are being compiled anymore.
  4. Is linting still being done? (goma_lint target)

Makefile requires specialization for specific platforms

Currently, each GOMA developer has to modify the Makefile to point to library and compiler locations on their specific platform. It would be much more useful to use standard autoconf tools to enable a configure script to interrogate the environment for the proper locations and generate an appropriate Makefile.

set_mp_to_unity causing aggressive loop optimizations warning

in mm_input.c

This looks like a similar issue to the butler_volmer_heat_source array problem, but it is causing a -Waggressive-loop-optimizations warning (perhaps because it ends up writing memory inside the struct but outside the array bounds).

The function set_mp_to_unity seems to be accessing values outside the inner arrays of the structs as they are defined. I'm unsure whether the loop should be initializing all the values of d_porous_diffusivity and d_porous_vapor_pressure in mp_glob[mn] to 0, or whether this was meant to do something else.

set_mp_to_unity(const int mn)

goma/mm_input.c

Lines 10890 to 10905 in 413d19c

for ( w=0; w<pd_glob[mn]->Num_Porous_Eqn; w++)
  {
    mp_glob[mn]->PorousDiffusivityModel[w]=CONSTANT;
    mp_glob[mn]->porous_diffusivity[w] = 1.;
    mp_glob[mn]->PorousLatentHeatVapModel[w] = CONSTANT;
    mp_glob[mn]->porous_latent_heat_vap[w] = 1.;
    mp_glob[mn]->PorousLatentHeatFusionModel[w] = CONSTANT;
    mp_glob[mn]->porous_latent_heat_fusion[w] = 1.;
    mp_glob[mn]->PorousVaporPressureModel[w] = CONSTANT;
    mp_glob[mn]->porous_vapor_pressure[w] = 1.;
    for ( v=0; v<MAX_PMV + MAX_CONC + MAX_VARIABLE_TYPES; v++)
      {
        mp_glob[mn]->d_porous_diffusivity[w][v] = 0.;
        mp_glob[mn]->d_porous_vapor_pressure[w][v] = 0.;
      }
  }

 for ( w=0; w<pd_glob[mn]->Num_Species_Eqn; w++)    
...
      for ( v=0; v<MAX_PMV + MAX_CONC + MAX_VARIABLE_TYPES; v++)
    {
      mp_glob[mn]->d_porous_diffusivity[w][v] = 0.;
      mp_glob[mn]->d_porous_vapor_pressure[w][v] = 0.;
    }
extern struct Material_Properties *mp, **mp_glob, *mp_old;

  dbl d_porous_diffusivity[MAX_PMV][MAX_VARIABLE_TYPES + MAX_CONC]; 
...
  dbl d_porous_vapor_pressure[MAX_PMV][MAX_VARIABLE_TYPES + MAX_CONC];
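
An unverified sketch of the bound that would match those declarations: the inner dimension of d_porous_diffusivity and d_porous_vapor_pressure is MAX_VARIABLE_TYPES + MAX_CONC, so the zeroing loop would stop there rather than at MAX_PMV + MAX_CONC + MAX_VARIABLE_TYPES.

/* Sketch, not a verified fix: loop bound matches the declared inner size. */
for ( w=0; w<pd_glob[mn]->Num_Porous_Eqn; w++)
  {
    for ( v=0; v<MAX_VARIABLE_TYPES + MAX_CONC; v++)
      {
        mp_glob[mn]->d_porous_diffusivity[w][v] = 0.;
        mp_glob[mn]->d_porous_vapor_pressure[w][v] = 0.;
      }
  }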
