nasa / cape

Computational Aerosciences Productivity & Execution

License: Other

Python 96.24% C 1.59% Shell 2.12% MAXScript 0.02% Assembly 0.02%

cape's Introduction

Description

CAPE is software to improve Computational Aerosciences Productivity & Execution.

Here is a link to a recording of a NASA Advanced Modeling & Simulation seminar that announced the release of CAPE 1.0.0:

https://www.nas.nasa.gov/pubs/ams/2023/03-09-23.html

CAPE has two main purposes:

1. To execute and post-process multiple Computational Fluid Dynamics (CFD) solvers:

  1. Cart3D
  2. FUN3D
  3. OVERFLOW
  4. Kestrel (CAPE 1.1+)

2. To create and use "datakits," a combination of database and toolkit. These are data structures particularly well-suited to databases created from aerosciences data.

This package has been used by NASA teams to create large databases (in some instances including over 10,000 CFD solutions) for flight programs such as Space Launch System.

The main benefit of using CAPE for CFD run matrices is having a single tool to do many of the steps:

  • create separate folders for each CFD case,
  • copy files and make any modifications such as setting flight conditions in input files,
  • submit jobs to a high-performance computing scheduling system such as PBS,
  • monitor the status of those jobs,
  • create PDF reports of results from one or more solutions,
  • extract data and conduct other post-processing,
  • archive (and unarchive, if necessary) solution files.

This software is released under the NASA Open Source Agreement Version 1.3 (see LICENSE.rst). While the software is freely available to everyone, user registration is requested by emailing the author(s).

A collection of tutorials and examples can be found at

https://github.com/nasa-ddalle/

Full documentation can be found at

https://nasa.github.io/cape-doc

Installation

The simplest method to install CAPE is to use pip:

Linux/MacOS
$ python3 -m pip install git+https://github.com/nasa/cape.git#egg=cape
Windows
$ py -m pip install git+https://github.com/nasa/cape.git#egg=cape

Notes:

  1. On Windows (where CAPE might be of limited use), replace python3 with py
  2. It's also possible to install from one of the provided .whl files.
  3. In many cases, adding --user to the above command is appropriate; on systems where you are not a root/admin user, it is likely required.
  4. If installing a new version of CAPE where one might already be present, add the option --upgrade.

Notices

Copyright © 2022 United States Government as represented by the Administrator of the National Aeronautics and Space Administration. All Rights Reserved.

Disclaimers

No Warranty: THE SUBJECT SOFTWARE IS PROVIDED "AS IS" WITHOUT ANY WARRANTY OF ANY KIND, EITHER EXPRESSED, IMPLIED, OR STATUTORY, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY THAT THE SUBJECT SOFTWARE WILL CONFORM TO SPECIFICATIONS, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR FREEDOM FROM INFRINGEMENT, ANY WARRANTY THAT THE SUBJECT SOFTWARE WILL BE ERROR FREE, OR ANY WARRANTY THAT DOCUMENTATION, IF PROVIDED, WILL CONFORM TO THE SUBJECT SOFTWARE. THIS AGREEMENT DOES NOT, IN ANY MANNER, CONSTITUTE AN ENDORSEMENT BY GOVERNMENT AGENCY OR ANY PRIOR RECIPIENT OF ANY RESULTS, RESULTING DESIGNS, HARDWARE, SOFTWARE PRODUCTS OR ANY OTHER APPLICATIONS RESULTING FROM USE OF THE SUBJECT SOFTWARE. FURTHER, GOVERNMENT AGENCY DISCLAIMS ALL WARRANTIES AND LIABILITIES REGARDING THIRD-PARTY SOFTWARE, IF PRESENT IN THE ORIGINAL SOFTWARE, AND DISTRIBUTES IT "AS IS."

Waiver and Indemnity: RECIPIENT AGREES TO WAIVE ANY AND ALL CLAIMS AGAINST THE UNITED STATES GOVERNMENT, ITS CONTRACTORS AND SUBCONTRACTORS, AS WELL AS ANY PRIOR RECIPIENT. IF RECIPIENT'S USE OF THE SUBJECT SOFTWARE RESULTS IN ANY LIABILITIES, DEMANDS, DAMAGES, EXPENSES OR LOSSES ARISING FROM SUCH USE, INCLUDING ANY DAMAGES FROM PRODUCTS BASED ON, OR RESULTING FROM, RECIPIENT'S USE OF THE SUBJECT SOFTWARE, RECIPIENT SHALL INDEMNIFY AND HOLD HARMLESS THE UNITED STATES GOVERNMENT, ITS CONTRACTORS AND SUBCONTRACTORS, AS WELL AS ANY PRIOR RECIPIENT, TO THE EXTENT PERMITTED BY LAW. RECIPIENT'S SOLE REMEDY FOR ANY SUCH MATTER SHALL BE THE IMMEDIATE, UNILATERAL TERMINATION OF THIS AGREEMENT.

cape's People

Contributors

derek-dalle, dvickernasa, nasa-ddalle, vcoralic

cape's Issues

Bug in set_nMin method of the DataBook module

In v1.0.0rc2, there's a bug in the set_nMin method of the DataBook module. set_nMin has nMin as an input, but uses nStats instead, which is not in the namespace of the method and leads to NameError: name 'nStats' is not defined. I expect it should be self['nMin'] = nMin instead of self['nMin'] = nStats on line 427.
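The pattern is easy to reproduce in isolation. A minimal sketch (using a hypothetical options class, not the real DataBook code) showing the buggy line and the fix:

```python
class DataBookOpts(dict):
    """Toy stand-in for CAPE's DataBook options container."""

    def set_nMin(self, nMin):
        # Buggy version referenced an undefined name:
        #   self['nMin'] = nStats   # NameError: name 'nStats' is not defined
        # Fixed version uses the actual argument:
        self['nMin'] = nMin


opts = DataBookOpts()
opts.set_nMin(100)
print(opts['nMin'])  # 100
```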

Bug in set_Slurm_N method of slurm module

In v1.0.0rc2, there's a bug in the set_Slurm_N method of the slurm module. The method expects N as an input but actually uses n, which results in NameError: name 'n' is not defined.

No Symmetry Volume Mesh Option in AutoInputs

When using CART3D without CAPE, I can specify autoInputs -halfbody -symmY to generate a volume mesh on only one side of a half body model, but no option for this exists in CAPE.

Bugfix: Databook Write

I noticed that when I have custom columns in my run matrix that use strings with a comma, CAPE cannot write databook files correctly. In cape/cfdx/dataBook.py/write(), the run matrix cells are correctly read and interpreted, but on the write command the comma inside the string is recognized as a delimiter. The result is that the strings are split into multiple columns. This breaks anything that needs to process databooks.

I wonder if there's a way to make sure that strings are recognized and not split up? Another approach could be to use pandas to read/write csv files, but that's probably a longer term solution. I'm sure it would allow you to eliminate some code.
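A quick illustration of the failure mode using only the standard library (not CAPE's actual writer): a naive comma join splits the comma-containing cell on re-read, while csv-style quoting round-trips it intact.

```python
import csv
import io

# A run-matrix row whose last cell contains a comma:
row = ["m0.90a0.0", "['hvanleer', 'hvanleer']"]

# Naive join: the embedded comma is indistinguishable from a delimiter,
# so re-splitting yields three columns instead of two.
naive = ",".join(row)
assert len(naive.split(",")) == 3

# csv with minimal quoting wraps the offending cell in quotes,
# so the round trip preserves the original two columns.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
back = next(csv.reader(io.StringIO(buf.getvalue())))
assert back == row
```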

sampling_parameters not Writing Correctly

I noticed an issue that’s causing a sampling_parameters namelist read error in FUN3D as of the latest CAPE release (1.0.2). Previously, I would specify a three-component tuple for a plane using a "hook". In Python, the dictionary defining the fifth plane’s normal vector would be:
{"plane_normal(1:3, 5)": (1.0, 0.0, 0.0)}

With the sampling_parameters key of my namelist dictionary populated with planes in this manner, I would then use the ApplyDict method (i.e., cntl.Namelist.ApplyDict()) to populate the CAPE namelist class. The resultant namelist entry would be written as:

plane_normal(1:3, 5) = 1.0 0.0 0.0

With CAPE 1.0.2, the same approach now results in:

plane_normal(1:3, 5)(1) = 1.0
plane_normal(1:3, 5)(2) = 0.0
plane_normal(1:3, 5)(3) = 0.0

This results in an error in FUN3D:

Invalid input in namelist:     plane_normal(1:3, 5)(1) = 1.0
  Probable incomplete read of namelist: &sampling_parameters iostat2=        5010

Any thoughts on the best way to move forward? With the current framework, I think I’d now have to express my dictionary as:
{"plane_normal(1, 5)": 1.0, "plane_normal(2, 5)": 0.0, "plane_normal(3, 5)": 0.0}
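In the meantime, the expansion can be automated. The helper below is hypothetical (not part of CAPE) and simply rewrites a tuple-valued "name(1:n, k)" key into per-index scalar keys:

```python
import re


def expand_plane_keys(d):
    """Expand 'name(1:n, k)' tuple entries into per-index scalar keys."""
    out = {}
    for key, val in d.items():
        m = re.match(r"(\w+)\(1:(\d+), *(\d+)\)$", key)
        if m and isinstance(val, (tuple, list)):
            name, n, k = m.group(1), int(m.group(2)), m.group(3)
            for i in range(1, n + 1):
                out[f"{name}({i}, {k})"] = val[i - 1]
        else:
            # Pass scalar (or non-matching) entries through unchanged
            out[key] = val
    return out


print(expand_plane_keys({"plane_normal(1:3, 5)": (1.0, 0.0, 0.0)}))
# {'plane_normal(1, 5)': 1.0, 'plane_normal(2, 5)': 0.0, 'plane_normal(3, 5)': 0.0}
```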

Overwriting component parameters

When specifying component parameters for tracking forces and moments in FUN3D, pyfun does not pick up on existing component parameters that have already been specified in the template nml. For example, if there are component parameters to track flow-through forces (shown below), the existing component parameters are overwritten by the databook components:

number_of_components = 1

component_count(1) = -1
component_input(1) = '0'! component not a boundary
component_type(1) = 'circle'
circle_center(1:3,1) = 0, 0, 0
circle_normal(1:3,1) = 1.0, 0.0, 0.0
circle_radius(1) = 0.2
component_name(1) = 'nozzleExit' ! component label

Is there a better way to specify component parameters, or is there a way to make CAPE more robust such that it checks for existing component parameters prior to setting up the databook components?

Add Arbitrary Slurm Instructions

In CAPE, we can add Slurm options via opts["Slurm"]["other"], but it has to be a dictionary. So, for instance, opts["Slurm"]["other"] = {"export": "ALL"} writes out #SBATCH --export=ALL in the submission script.

But what if we want something without the equals sign? Like #SBATCH --exclusive?
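One possible convention, sketched below (a proposal, not current CAPE behavior), is to treat a value of True or None as a bare flag:

```python
def sbatch_lines(other):
    """Render a Slurm 'other' options dict as #SBATCH directives.

    True/None values become bare flags; everything else uses key=value.
    """
    lines = []
    for key, val in other.items():
        if val is True or val is None:
            lines.append(f"#SBATCH --{key}")
        else:
            lines.append(f"#SBATCH --{key}={val}")
    return lines


print(sbatch_lines({"export": "ALL", "exclusive": True}))
# ['#SBATCH --export=ALL', '#SBATCH --exclusive']
```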

ApplyDict() not working as expected

When populating a Config dictionary in pyFun.json with a Components list and an Inputs dictionary, the resulting fun3d.*.nml file lists all components in &component_parameters twice. This does not seem to impact the execution of FUN3D, but it can result in a longer than necessary namelist file.

New Report Options

In CAPE 1.1.0 or greater, some subfigure options have been disabled, such as:

  • Grid
  • nMaxStats
  • nStats

Do these live somewhere else now?

CAPE resubmits case even when the case has failed with nan*.dat

When running FUN3D cases with two phases, if the solver diverges on the second phase, CAPE will try to resubmit the case. This creates an endless loop in which it keeps resubmitting until the case is manually killed. Does CAPE not look for nan*.dat files?

Bug: FUN3D Line Load Generation Fails When `boundary_animation_freq = -1`

CAPE does not correctly treat the corner case where boundary variables are only outputted at the end of a simulation. Specifically, when boundary_animation_freq = -1, FUN3D outputs the following boundary file name pattern, fglb = '%s_tec_boundary.plt' % proj, but CAPE still looks for fglb = '%s_tec_boundary_timestep[1-9]*.plt' % proj (see line 326 of pyfun/lineLoad.py).

On a related note, the failure mode when FUN3D is unable to find the relevant *.plt file is somewhat cryptic: “Not enough iterations (None) for analysis”. It might be a good idea to catch this particular failure mode more explicitly, i.e., by printing to screen that the required file(s) could not be found.

Let Slurm determine the number of cores for mpiexec

If the MPI library used to build FUN3D has proper Slurm integration then it should be possible to execute the solver without specifying the number of cores/procs in the mpiexec launch command. We typically launch jobs with just the -N and --ntasks-per-node options to Slurm, which avoids having to do the math for how many total cores are being requested.

Maybe an option in pyFun.json where setting "nProc" to -1 removes the -n {nProc} part of the mpiexec command?

Not sure if that would work with the other solvers but it should since it's dealing with the MPI library.
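The proposed convention might look like the following sketch (a hypothetical helper, not CAPE's actual command builder; nodet_mpi is FUN3D's MPI executable):

```python
def mpiexec_cmd(exe, nProc):
    """Build an mpiexec command list.

    nProc < 0 (or None) omits the -n option entirely, deferring the
    core count to the Slurm-integrated MPI library.
    """
    cmd = ["mpiexec"]
    if nProc is not None and nProc > 0:
        cmd += ["-n", str(nProc)]
    cmd.append(exe)
    return cmd


print(mpiexec_cmd("nodet_mpi", 128))  # ['mpiexec', '-n', '128', 'nodet_mpi']
print(mpiexec_cmd("nodet_mpi", -1))   # ['mpiexec', 'nodet_mpi']
```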

Unfavorable behaviour with cape and component_parameters

Using cape v1.1.1.1:

I'm finding some issues with CAPE-generated namelist files for different phases. My framework builds from an existing template nml in order to define component_parameters for exit planes (circle components).

A few issues come up when doing this:

  • Circle components are indexed properly in the template but rearranged incorrectly in the output file.

    • E.g., components 1-214 are boundary components and the last 38 components are circles (215-253).
    • The circles are improperly re-indexed with indices (1-38) for components that don't exist for boundaries (i.e. circle_center, circle_normal, circle_radius, calculate_thrust_ratio).
  • For component properties with xyz index inputs, the order of indexing is switched from (xyz_index, component_index) to (component_index, xyz_index).

    • e.g. for circle_center, circle_normal, etc.. FUN3D user manual definition implies these should be defined as (xyz_index,
      component_index).

Drag Coefficient

I'm using pyCart to create carpet plots of CAPE runs. The CN and CA data all look good and follow expected trends. Plotting the drag polar in Excel using the axes transformations found here works well and yields a proper drag curve with alpha. However, when plotting the sweep using pycart's "CD" coefficient, the CD doesn't match up. It seems as if the CN term in the CD formulation is negative when it should be positive. Is there a way I can check the formulation of CD and make sure it's correct for my axes definition?

Varying plt output behavior in pyfun

I'm also thinking that this can be used to drive a simulation to a certain number of steps, and then setting up a new folder where we can dump out plt files for every boundary_animation_freq steps so that we can average those plt files for visualizations. This would be a nice ability because FUN3D otherwise periodically generates plt files every boundary_animation_freq steps for the entire iteration history (rather than say for steps > N), which can be cumbersome.

Originally posted by @rbreslavsky in #6 (comment)

Bug: Coefficient Negations in Databooks

When specifying a component transformation like the one below, it seems that databooks negate CLL/CLN regardless of user intention:

"component": { "Type": "FM", "Transformations": [ { "Type": "ScaleCoeffs", "CLL": 1, "CLN": 1 } ] },

In this case, Databooks seem to perform two scaling transformations. The first action is to multiply each of CLL and CLN by "1". The next action is to subsequently multiply each by "-1". The end result is that databook always multiplies CLL and CLN by "-1" regardless of whether the scaling value is 1, -1, or not specified (defaults to -1).

This is not true for reports, which seem to faithfully reproduce the desired scaling if a value is specified. If a scaling value is not specified, reports do not seem to modify CLL and CLN.
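The reported behavior can be pinned down with a small sketch (hypothetical functions; the default of -1 for CLL/CLN when unspecified is an assumption based on this report, and the input dicts below contain only the affected coefficients):

```python
def scale_coeffs_expected(fm, scales, default=-1.0):
    """Apply ScaleCoeffs once: the user's value if given, else the default."""
    return {c: v * scales.get(c, default) for c, v in fm.items()}


def scale_coeffs_buggy(fm, scales, default=-1.0):
    """Reported databook behavior: the user scale is applied, then the
    default is applied again, so CLL/CLN always end up negated."""
    out = {c: v * scales.get(c, default) for c, v in fm.items()}
    return {c: v * default for c, v in out.items()}


fm = {"CLL": 0.5, "CLN": 0.2}
scales = {"CLL": 1, "CLN": 1}
print(scale_coeffs_expected(fm, scales))  # {'CLL': 0.5, 'CLN': 0.2}
print(scale_coeffs_buggy(fm, scales))     # {'CLL': -0.5, 'CLN': -0.2}
```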

DataBook read issue

Follow-on to #18

From @rbreslavsky:

One more comment on this. The databooks are all generated correctly now. However, when they are read back in (for carpet plots or other operations), lists of strings get split up into multiple parts. So for instance, if I have "['hvanleer', 'hvanleer']" in a cell, that gets split into two parts by util.split_line().

Reporting a bug within cape.plt.Plt()

There appears to be an issue within cape.plt.Plt() when using read/write ".dat" files from CART3D ".plt" files.

As an example, I've selected a test case using a tutorial ".plt" file from CART3D.
A python script reproducing this error is attached, along with the sample ".plt" file.

The pseudo-code is as follows: starting with a ".plt" file, convert it to a Plt class using plt_cls = Plt(fname), then generate a .dat file from the class with plt_cls.WriteDat(dat). The issue arises when trying to generate a new Plt class instance from the newly generated .dat file, i.e., plt_dat = Plt(dat=dat).

Given the use of only Plt() utilities, I expected to be able to go back and forth between ".plt" and ".dat". This issue seems isolated to CART3D files.
cape_plt_bug.zip

Bug in Subfigure Format

Unsure if this is a bug or intended behavior, but when indicating "Format": png for a subfigure, subfigure png files are created in addition to (not instead of) pdf files.

The resultant report is made of vector images rather than raster images.

Feature Request: Reports

Would like to request some features regarding reports:

  1. Command line option to clear a case report folder
  2. Command line option to force an update to a case report folder, even if images are assumed to be current
  3. Command line option to update case report folders without propagating their contents into a compiled final report

pycart continually restarting case

When running Cart3D with several adaptations, if the final iteration number is not exactly reached, pycart re-runs aero.csh with a restart, which starts the case from adapt00 again. This seems to corrupt the data and causes pycart to get stuck in a continual restart loop.

I have specified restart false in the RunControl section

"RunControl": {
    "MPI":false,
    "PhaseSequence": [0],
    "PhaseIters": [1600],
    "sbatch": true,
    "PBS":false,
    "Resubmit": false,
    "Continue": false,
    "nProc":96,
    "Adaptive": true,
    "Verbose": true,
    "intersect": {
        "run": true,
        "triged": false
    }
}

Is there a way to force pycart to not submit restarts?

CAPE Moment Line Loads

CAPE moment line loads, if manually integrated, do not correspond to their whole coefficient counterparts.

The questions I have are:

  • What do the line load moment columns represent?
  • What is the best way to accurately compute line load moments?

Modifying MAPBC Files

I'm attempting to modify a case's mapbc from the default value specified in pyfun.json.

In trying to do this via a case function, modifying the value of Mesh / BCFile does not result in the copying of the desired mapbc to the case folder. This seems to work fine for grids (Mesh / MeshFile). Is this a bug, or am I doing something wrong?

Project Name Issue When Using Hooks

When using CAPE and pyfun, I prefer not to use a template fun3d.nml. This actually works quite well, as I can use Python "hooks" to populate the entire FUN3D namelist file for a given simulation / case. However, I've had some difficulty pertaining to the project_rootname field in the &project namelist. I've tried to define it in my namelist Python hook, but it keeps getting overwritten such that project_rootname = "pyfun". Is this a bug?

Bug in set_ulimit method of ulimit module

In v1.0.0rc2, there's a bug in the set_ulimit method of the ulimit module. The method uses setel, which is not in the namespace of the ulimit module (it should be imported at the top of the module, just like getel).

Ability to generate lineloads along different axis

Currently, line loads default to being computed along the x direction. It would be beneficial to make the axis an input, for example to extract line loads along the spanwise direction of a wing.

Feature Request: Warm Start

I'd like to request warm-start capability as a feature for pyFun. Right now, this is possible by using shell commands to copy over previously run simulation files from other "seed" runs, but it requires diligent tracking of phases and phase iteration counts.

Perhaps the feature could work by indicating one or more seed conditions (alpha, Mach, etc.) in the run matrix, or a seed run case number. It would also be nice to have an option to drop the seed run's iteration history (restart = 'on_nohistorykept').

Bug: FUN3D Line Load Generation Fails when Ref. Dimensions Are Only Defined in NML

When the reference length and area are defined in the FUN3D namelist file and not in the Config section of the master JSON, line loads fail to generate. The issue seems to be that CAPE simply does not look in the namelist file when it fails to grab those values from Config, and so populates those variables as None. This leads to a silent failure mode (which should probably be caught explicitly) that actually suggests the line load generation operation was carried out correctly, e.g.,

Importing module 'aero_00232_cape_srb_cfd'
Updating LineLoad component 'll_total' ...
    extracoarse/m00.900a+000.00r2.04e+07
      Adding new databook entry at iteration 10000.
    triloadCmd < triload.ll_total.i > triload.ll_total.o

However, when the user looks in the databook line load directory, it is empty.

cape.plt.Plt() bugs. StrandID and Zone Name handling discrepancies.

Discovered a few bugs when using cape.plt.Plt() 's Read() and ReadDat() functions.

  1. Zone Name parsing does not handle Multi-Core zone name outputs:

    • i.e. A Multi-Core FUN3D output produces zone names with the following formatting:
      • ["Boundary"_0, "Boundary"_1, "Boundary"_2, ...] where each zone is attributed to some core
      • The resulting parsed zone names from cape are as follows: ["Boundary", "Boundary", "Boundary", ... ]
    • Duplicate zone names cause significant disparities downstream when trying to define new class instances
    • Suspect this is a result of how the zone names are parsed in plt.Plt.Read() Line 260
      • zone = capeio.read_lb4_s(f).strip('"')
  2. StrandID are defined differently between reading from .dat files and .plt.

    • When reading from .plts (Plt.Read()) the StrandIDs are parsed in lines 266-268
      • i, = np.fromfile(f, count=1, dtype='i4')
    • When writing .dats (Plt.WriteDat() ), StrandIDs are auto generated in lines 775-780
      • Hence, when using Plt.ReadDat(), the StrandIDs from the .dat do not match those from the .plt
    • This results in discrepancy between plt classes read in from .dat and .plt files as the StrandIDs differ.
  3. Another, minor bug when reading zone names in Plt.ReadDat() lines 632-640

    • Current implementation assumes there is always only one "=" within the string; however, this breaks when zones are named with = signs (i.e. boundary 0 (X=0))
    • Suggested Fix: change k, v = s.split("=") to k, v = s.split("=", maxsplit = 1)
    • This ensures only two outputs from the split method, and it ensures the full zone name is parsed out.
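A quick demonstration of the suggested fix on a zone name containing an embedded equals sign:

```python
s = 'ZONE T = "boundary 0 (X=0)"'

# One-split version breaks on the embedded "=":
#   k, v = s.split("=")  # ValueError: too many values to unpack
# Limiting the split preserves the full zone name:
k, v = s.split("=", maxsplit=1)
print(k.strip(), "->", v.strip())  # ZONE T -> "boundary 0 (X=0)"
```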
