
reactionmechanismgenerator / rmg-tests


Continuous Integration Testing Platform for RMG-Py


rmg-tests's Introduction

This git repository tracks changes in RMG model generation by retaining a history of RMG-generated models in CHEMKIN format.

Normally, RMG-tests automatically runs all the examples registered in the examples folder on the Travis platform. For convenience when debugging or running customized jobs (which sometimes exceed Travis's limits on memory usage and CPU time), we added the ability to run RMG-tests locally. To run locally, please follow the instructions below.

Instructions for simultaneous updates to RMG-Py and RMG-database

If you need an RMG-Py pull request to be tested on a certain branch of the RMG-database, add a commit that modifies the file .github/workflows/CI.yml in RMG-Py by changing the line RMG_DB_BRANCH: main to instead state the desired database branch.

Likewise, if you wish to test an RMG-database update using a specific branch of RMG-Py, then add a commit that modifies the file .github/workflows/CI.yml in your RMG-database branch, changing the line RMG_PY_BRANCH: main to instead state the desired RMG-Py branch.
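For example, the edit for the first case would change one line in RMG-Py's .github/workflows/CI.yml (my_database_branch is a placeholder for your actual branch name; see the warning below about hyphens):

    # Before:
    RMG_DB_BRANCH: main
    # After -- my_database_branch is a placeholder:
    RMG_DB_BRANCH: my_database_branch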

After the tests pass, remove the commits before merging the pull request to main.

Warning

Avoid hyphens in your branch names, on either RMG-database or RMG-Py.

Instructions for Running RMG-tests Locally

  1. Clone the repo down to your local machine: run git clone https://github.com/ReactionMechanismGenerator/RMG-tests.git

  2. Make sure you have installed Anaconda (skip this step if you already have). If you are working on a server, you are probably using a global Anaconda installation, which means you usually don't have permission to write into /path/to/anaconda/envs. In this case, install your own local Anaconda (Miniconda) by following the commands below:

    # Set up anaconda
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
    chmod +x miniconda.sh
    ./miniconda.sh -b -p $HOME/miniconda
    export PATH=$HOME/miniconda/bin:$PATH
    
    # Update conda itself
    conda update --yes conda
  3. Modify local_tests/submit_serial.sl by specifying the branch names of RMG-Py and RMG-database. Note that these branches must already be pushed to github.com/ReactionMechanismGenerator. Also change the job name to the one you want to run. If you need to test customized jobs, add a new folder containing the RMG input file you want to run to examples/rmg/.
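     As an illustration, the settings might look like the lines below; the variable names here are hypothetical, so check the actual script for the real ones:

    # Hypothetical sketch of the settings in local_tests/submit_serial.sl;
    # the real variable names in the script may differ.
    export RMG_PY_BRANCH=my_feature_branch  # branch pushed to ReactionMechanismGenerator/RMG-Py
    export RMG_DB_BRANCH=main               # branch pushed to ReactionMechanismGenerator/RMG-database
    export JOB_NAME=methane                 # must match a folder under examples/rmg/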

  4. Run RMG-tests by

    cd local_tests
    bash submit_serial.sl

If you are running on a server that uses SLURM for submitting jobs, we've provided two example submission scripts (serial and parallel):

    # Serial mode of testing
    sbatch submit_serial.sl

    # Parallel mode of testing
    sbatch submit_parallel.sl $(pwd)

  5. You'll find the test log in the folder RMG-tests/tests/check/${your job name}, and two versions of the RMG-generated CHEMKIN models in the folders RMG-tests/tests/benchmark/${your job name} and RMG-tests/tests/testmodel/${your job name} for detailed analysis.

  6. We're also testing the recording of test results in a central MongoDB database. Please contact [email protected] to get an authentication file (in config.cfg format) to access the database. Currently, this is required to run the thermochemistry validation test.

rmg-tests's People

Contributors

aelong, alongd, amarkpayne, calvinp0, connie, goldmanm, hwpang, kehang, mjohnson541, mliu49, nickvandewiele, nyee, pengzhang13, pique0822, rgillis8, rwest, sevyharris, yunsiechung


rmg-tests's Issues

Complete Migration to GitHub Actions on RMG-Py

TL;DR:

Migrate RMG-tests into RMG-Py and deprecate this repo.
(1) if CI passes on RMG-Py, run the regression testing there by extending the GitHub action CI.yml
(2) we can create our 'benchmark' model on a nightly basis and use GitHub artifacts to reuse it as the comparison throughout the day, without having to recreate it every time we want to run the regression
(3) speed up the build by only building environment once
(4) enable forked repos to work

The long version:

The unit testing has been moved into GitHub Actions in the RMG-Py repository, but the regression testing remains here. We should migrate the regression testing there as well and deprecate RMG-tests.

The biggest limitation of the current workflow is that the regression testing will always fail on forked repos. This means that if we want anyone outside of our groups to contribute, they need to be added to the RMG user group and create a branch on RMG-Py itself. This basically means we cannot get other people to contribute.

Moving will also decrease the build time. Right now, we (1) build the environment and install RMG to run the unit tests and (2) repeat all of this to run the regression tests (also using the slower conda instead of mamba).

We can achieve all the functionality of this repo in GitHub actions:

  • with conditional steps, we can perform the regression testing only on PRs that pass the unit tests, while preserving the run environment
  • with GitHub artifacts, we can avoid having to run the main branch of RMG every time we want a baseline for comparison
  • we can avoid manually cloning repositories and switching branches by using GitHub Actions' checkout action

This would be a huge boost to maintainability -- all of the testing (unit tests + regression) would be consolidated in one actions file.
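A minimal sketch of what the consolidated workflow could look like in RMG-Py's CI.yml; the job names, artifact name, and script invocation are assumptions for illustration, not the actual file:

    # Hypothetical excerpt of .github/workflows/CI.yml in RMG-Py.
    regression-tests:
      needs: unit-tests               # only runs if the unit tests pass
      runs-on: ubuntu-latest
      steps:
        - uses: actions/checkout@v3   # replaces manual cloning and branch switching
        - name: Fetch nightly benchmark model
          uses: actions/download-artifact@v3   # sharing artifacts across runs needs extra setup
          with:
            name: benchmark-chemkin-model
        - name: Compare against benchmark
          run: python scripts/checkModels.py   # comparison script arguments omitted here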

Request for Input

I am requesting some input on feasibility as well as my understanding of what this repo does. Please let me know if there is any part of the RMG-tests workflow I have missed or misunderstood. If there's something special about doing the regression testing here rather than in RMG-Py, I would like to discuss it.
Tagging some of you who show up most often on the git blame for the relevant scripts:
@sevyharris
@KEHANG
@Laxzal
@rwest
@nickvandewiele

Methane example polymerizes nitrogen

The methane example has nitrogen as a reactive species, so it ends up creating species like N(=NN=[N])N=NN=NN=[N] and N([N]N=NN=[N])(N=NN=NN=NN=[N])N=[N].

If nitrogen was intentionally set as a reactive species, then we probably need to set a species constraint on the maximum number of nitrogen atoms.
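If a cap is the right fix, a minimal sketch of the constraint in the example's RMG input file might be (the limit of 2 is an arbitrary illustration):

    # In the methane example's input file; the limit value is illustrative.
    generatedSpeciesConstraints(
        maximumNitrogenAtoms=2,  # blocks the long nitrogen-chain species above
    )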

MCH example times out

The MCH example is timing out most runs right now:

The command "./run.sh examples/eg1 no" exited with 0.
571.08s$ ./run.sh examples/MCH no
Running examples/MCH test case
Benchmark code directory: /home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py
Running benchmark job
/home/travis/miniconda/envs/benchmark/lib/python2.7/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.
  warnings.warn(message, mplDeprecation, stacklevel=1)
Benchmark job timed out
The command "./run.sh examples/MCH no" exited with 1.
cache.2
store build cache
0.01s
31.45s
change detected (content changed, file is created, or file is deleted):
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-database/.git/ORIG_HEAD
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/cclib2biopython.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/cclib2openbabel.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/bridge/__init__.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/__init__.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/calculationmethod.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/cda.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/cspa.pyc
/home/travis/build/ReactionMechanismGenerator/RMG-tests/code/benchmark/RMG-Py/external/cclib/method/dens
...
changes detected, packing new archive
.
.
.
.
uploading archive
after_failure
2.77s$ . ./after_failure.sh
Executing after script tasks...
GIT_NAME: Travis Deploy
GIT_EMAIL: [email protected]
To https://github.com/ReactionMechanismGenerator/RMG-tests.git
 - [deleted]         rmgdb-rulestotraining
Done. Your build exited with 1.

We should probably fix it or remove it.

need an authentication file (in config.cfg format) to access to database

https://github.com/ReactionMechanismGenerator/DataDrivenEstimator.git
When I tried to reproduce the code here, I ran into a problem that looked like a missing authentication file (in config.cfg format) for accessing the database. The error message is as follows:
pymongo.errors.ServerSelectionTimeoutError: rmg.mit.edu:27018: [Errno -2] Name or service not known, Timeout: 30s, Topology Description: <TopologyDescription id: 6388a7abbf21b29a7bc4e82e, topology_type: Single, servers: [<ServerDescription ('rmg.mit.edu', 27018) server_type: Unknown, rtt: None, error=AutoReconnect('rmg.mit.edu:27018: [Errno -2] Name or service not known')>]>

duplicate reactions wrongly identified as different

Recently, make eg6 identified the following differences between the benchmark model (RMG-Py v1.0.1 with RMG-database v1.0.2) and RMG-Py v1.0.3 for the edge reactions:

Non-identical kinetics!
tested: C=O(17) + C=O(17) => C1COO1(42)
original:   C=O(17) + C=O(17) => C1COO1(42)
k(1bar)|300K   |400K   |500K   |600K   |800K   |1000K  |1500K  |2000K  
k(T):  | -81.03| -59.91| -47.17| -38.62| -27.86| -21.32| -12.43|  -7.87
k(T):  | -43.29| -31.26| -24.02| -19.17| -13.03|  -9.30|  -4.27|  -1.74
kinetics: 
kinetics: 
Non-identical kinetics!
tested: C=O(17) + C=O(17) => C1COO1(42)
original:   C=O(17) + C=O(17) => C1COO1(42)
k(1bar)|300K   |400K   |500K   |600K   |800K   |1000K  |1500K  |2000K  
k(T):  | -43.29| -31.26| -24.02| -19.17| -13.03|  -9.30|  -4.27|  -1.74
k(T):  | -81.03| -59.91| -47.17| -38.62| -27.86| -21.32| -12.43|  -7.87
kinetics: 
kinetics: 

Looking back at the CHEMKIN model, the two reactions are identified as duplicate reactions:

! Reaction index: Chemkin #96; RMG #152
! PDep reaction: PDepNetwork #25
CH2O(17)+CH2O(17)(+M)=>C1COO1(42)(+M)               1.000e+00 0.000     0.000    
    TCHEB/ 300.000   3000.000 /
    PCHEB/ 0.001     98.692   /
    CHEB/ 6 4/
    CHEB/ -1.543e+01   -1.034e-02   -7.707e-03   -4.698e-03  /
    CHEB/ 2.202e+01    8.282e-03    6.162e-03    3.744e-03   /
    CHEB/ 1.758e-01    2.189e-04    1.716e-04    1.125e-04   /
    CHEB/ 4.127e-02    1.905e-04    1.424e-04    8.714e-05   /
    CHEB/ -4.568e-03   1.202e-04    8.996e-05    5.515e-05   /
    CHEB/ -1.255e-02   6.705e-05    5.020e-05    3.079e-05   /
DUPLICATE

! Reaction index: Chemkin #100; RMG #158
! PDep reaction: PDepNetwork #26
CH2O(17)+CH2O(17)(+M)=>C1COO1(42)(+M)               1.000e+00 0.000     0.000    
    TCHEB/ 300.000   3000.000 /
    PCHEB/ 0.001     98.692   /
    CHEB/ 6 4/
    CHEB/ -3.656e+01   -9.303e-03   -6.942e-03   -4.235e-03  /
    CHEB/ 3.877e+01    6.608e-03    4.921e-03    2.993e-03   /
    CHEB/ 3.886e-01    3.283e-04    2.503e-04    1.577e-04   /
    CHEB/ 1.302e-01    2.733e-04    2.043e-04    1.250e-04   /
    CHEB/ 3.979e-02    1.548e-04    1.159e-04    7.106e-05   /
    CHEB/ 1.223e-02    8.269e-05    6.191e-05    3.796e-05   /
DUPLICATE

Hence, the identified differences are simply the result of the two reactions being swapped in order, not of a real discrepancy in the chemistry.

We should improve the way differences are detected, to avoid false positives like this one.
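One possible approach, sketched below, is to compare duplicate reactions as unordered groups keyed by their equation rather than pairing them by position in the model; the function names and the identical_kinetics callback are hypothetical, not RMG-Py's actual API:

    from collections import defaultdict

    def group_by_equation(reactions):
        """Group reactions by their equation string so duplicates land together."""
        groups = defaultdict(list)
        for rxn in reactions:
            groups[str(rxn)].append(rxn)
        return groups

    def duplicates_match(test_rxns, orig_rxns, identical_kinetics):
        """Greedily pair each tested reaction with an original whose kinetics
        match, so duplicate sets compare equal regardless of ordering."""
        if len(test_rxns) != len(orig_rxns):
            return False
        remaining = list(orig_rxns)
        for t in test_rxns:
            match = next((o for o in remaining if identical_kinetics(t, o)), None)
            if match is None:
                return False
            remaining.remove(match)
        return True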

CI tests timeout due to Julia error

The CI tests currently fail with the following error message:

Running benchmark job
Traceback (most recent call last):
  File "/usr/share/miniconda/envs/benchmark/lib/python3.7/site-packages/julia/pseudo_python_cli.py", line 308, in main
    python(**vars(ns))
  File "/usr/share/miniconda/envs/benchmark/lib/python3.7/site-packages/julia/pseudo_python_cli.py", line 59, in python
    scope = runpy.run_path(script, run_name="__main__")
  File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "/usr/share/miniconda/envs/benchmark/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmg.py", line 118, in <module>
    main()
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmg.py", line 112, in main
    rmg.execute(**kwargs)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/main.py", line 707, in execute
    self.initialize(**kwargs)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/main.py", line 519, in initialize
    self.reaction_model.add_species_to_edge(spec)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/model.py", line 1146, in add_species_to_edge
    self.edge.phase_system.phases["Default"].add_species(spec)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/reactors.py", line 272, in add_species
    spec = to_rms(spc)
  File "/home/runner/work/RMG-tests/RMG-tests/code/benchmark/RMG-Py/rmgpy/rmg/reactors.py", line 516, in to_rms
    return rms.Species(obj.label, obj.index, "", "", "", thermo, atomnums, bondnum, diff, rad, obj.molecule[0].multiplicity-1, obj.molecular_weight.value_si)
RuntimeError: <PyCall.jlwrap (in a Julia function called from Python)
JULIA: MethodError: no method matching ReactionMechanismSimulator.Species(::String, ::Int64, ::String, ::String, ::String, ::ReactionMechanismSimulator.NASA{ReactionMechanismSimulator.EmptyThermoUncertainty}, ::Dict{Any, Any}, ::Int64, ::ReactionMechanismSimulator.StokesDiffusivity{Float64}, ::Float64, ::Int64, ::Float64)
Closest candidates are:
  ReactionMechanismSimulator.Species(::Any, ::Any, ::Any, ::Any, ::Any, ::T, ::Any, ::Any, ::N, ::Any, ::Any, ::Any, !Matched::N1, !Matched::N2) where {T<:ReactionMechanismSimulator.AbstractThermo, N<:ReactionMechanismSimulator.AbstractDiffusivity, N1<:ReactionMechanismSimulator.AbstractHenryLawConstant, N2<:ReactionMechanismSimulator.AbstractLiquidVolumetricMassTransferCoefficient} at /usr/share/miniconda/envs/benchmark/share/julia/site/packages/Parameters/MK0O4/src/Parameters.jl:526
Stacktrace:
  [1] invokelatest(::Any, ::Any, ::Vararg{Any, N} where N; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Base ./essentials.jl:708
  [2] invokelatest(::Any, ::Any, ::Vararg{Any, N} where N)
    @ Base ./essentials.jl:706
  [3] _pyjlwrap_call(f::Type, args_::Ptr{PyCall.PyObject_struct}, kw_::Ptr{PyCall.PyObject_struct})
    @ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/callback.jl:28
  [4] pyjlwrap_call(self_::Ptr{PyCall.PyObject_struct}, args_::Ptr{PyCall.PyObject_struct}, kw_::Ptr{PyCall.PyObject_struct})
    @ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/callback.jl:44
  [5] macro expansion
    @ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/exception.jl:95 [inlined]
  [6] #107
    @ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:43 [inlined]
  [7] disable_sigint
    @ ./c.jl:458 [inlined]
  [8] __pycall!
    @ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:42 [inlined]
  [9] _pycall!(ret::PyObject, o::PyObject, args::Tuple{Vector{String}}, nargs::Int64, kw::Ptr{Nothing})
    @ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:29
 [10] _pycall!
    @ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:11 [inlined]
 [11] #_#114
    @ /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:86 [inlined]
 [12] (::PyObject)(args::Vector{String})
    @ PyCall /usr/share/miniconda/envs/benchmark/share/julia/site/packages/PyCall/7a7w0/src/pyfncall.jl:86
 [13] top-level scope
    @ none:4
 [14] eval
    @ ./boot.jl:360 [inlined]
 [15] exec_options(opts::Base.JLOptions)
    @ Base ./client.jl:261
 [16] _start()
    @ Base ./client.jl:485>
Benchmark job timed out

This occurs for every test case (aromatics, nitrogen, superminimal, etc.)

track model core and edge changes between each iteration

I think maybe one lesson from this is that the Travis model diff may need to check (1) the time to convergence and (2) the model core and edge changes between each iteration, extracted from RMG.log into some sort of plot or analysis, so that we don't stall like this again.
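A rough sketch of extracting those statistics from RMG.log; the log-line format assumed here may not match the real file exactly:

    import re

    # Assumed log line format, e.g. "The model core has 42 species and 310 reactions"
    PATTERN = re.compile(r"The model (core|edge) has (\d+) species and (\d+) reactions")

    def iteration_sizes(log_path):
        """Yield (region, n_species, n_reactions) per enlargement step."""
        with open(log_path) as f:
            for line in f:
                m = PATTERN.search(line)
                if m:
                    yield m.group(1), int(m.group(2)), int(m.group(3))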

RMG-tests creates write-protected files

After running RMG-tests, if I try to delete the repository with the dangerous rm -r *, I get an error that some files are write-protected. I can get rid of them with the more dangerous rm -rf *. Ideally, it should be possible to clean up the output safely without force-removing protected files.

Maybe having a script for cleaning the directory safely would be helpful in avoiding removal of important files.
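A minimal sketch of such a script, assuming the output lands in the tests/ folders mentioned in the README above:

    #!/bin/bash
    # clean.sh -- restore write permission on generated files, then remove them.
    # Paths are assumptions based on the output folders described above.
    chmod -R u+w tests/ 2>/dev/null
    rm -r tests/check tests/benchmark tests/testmodel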

P-dep simulations fail with "AttributeError: 'PDepReaction' object has no attribute 'family'"

All the P-dep simulation cases (e.g. eg5) run during the Travis CI build suffer from the same type of error:

Test model folder: /home/travis/build/ReactionMechanismGenerator/RMG-tests/testing/testmodel/eg5
Traceback (most recent call last):
  File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 288, in <module>
    main()
  File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 80, in main
    check(name, benchChemkin, benchSpeciesDict, testChemkin, testSpeciesDict)
  File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 98, in check
    errorReactions = checkReactions(commonReactions, uniqueReactionsTest, uniqueReactionsOrig)
  File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 186, in checkReactions
    [printReaction(rxn) for rxn in uniqueReactionsTest]
  File "/home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py/scripts/checkModels.py", line 263, in printReaction
    logger.error('rxn: {}\t\tfamily: {}'.format(rxn, rxn.family))
AttributeError: 'PDepReaction' object has no attribute 'family'
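A minimal defensive fix in printReaction would be to fall back when the attribute is missing; this is a sketch, not necessarily how the script was actually patched:

    import logging
    logger = logging.getLogger(__name__)

    def printReaction(rxn):
        # PDepReaction objects have no `family` attribute; fall back gracefully
        # instead of raising AttributeError.
        family = getattr(rxn, 'family', 'PDepNetwork')
        logger.error('rxn: {}\t\tfamily: {}'.format(rxn, family))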

travis nightly builds?

While we are asleep, we should use that time to trigger more extensive builds and tests:

  • profiling
  • more elaborate simulations

RMG tests fails upon one error

I think the intended purpose of RMG-tests may be similar to unit tests, where each one runs independently. Currently, if one example fails in RMG-tests, it is impossible to see the results of the tests after it, which can hide multiple errors. Ideally, in my opinion, RMG-tests should report an error message and move on to the next example instead of quitting.
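A sketch of a runner loop that records failures instead of exiting on the first one (run.sh and the examples folder are from this repo; the loop itself is illustrative):

    #!/bin/bash
    # Run every example, collecting failures rather than stopping at the first.
    failed=()
    for eg in examples/*; do
        if ! ./run.sh "$eg" no; then
            failed+=("$eg")
        fi
    done
    if [ ${#failed[@]} -gt 0 ]; then
        echo "Failed examples: ${failed[*]}"
        exit 1
    fi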

Messages detailing differences refers to wrong model?

It looks like the messages from RMG-tests refer to the wrong model. In build 952, where I fixed the R_Addition_COm family, the Travis build states that the "original model" has new species and reactions associated with the R_Addition_COm family, instead of the test model.

RMG-tests Travis build crashes due to psutil dependency issue.

HEAD is now at 9d3c4df... Deprecate $CONDA_ENV_PATH in .travis.yml
Fetching package metadata .........
Solving package specifications: ..........
Fetching packages ...
libgcc-5.2.0-0 100% |################################| Time: 0:00:00  36.15 MB/s
glib-2.43.0-0. 100% |################################| Time: 0:00:00  48.87 MB/s
pandas-0.18.1- 100% |################################| Time: 0:00:00  50.73 MB/s
harfbuzz-0.9.3 100% |################################| Time: 0:00:00  22.74 MB/s
pango-1.39.0-0 100% |################################| Time: 0:00:00  30.07 MB/s
coolprop-6.0.1 100% |################################| Time: 0:00:00  39.94 MB/s
graphviz-2.38. 100% |################################| Time: 0:00:00  36.49 MB/s
Extracting packages ...
[      COMPLETE      ]|###################################################| 100%
Linking packages ...
[      COMPLETE      ]|###################################################| 100%
No psutil available.
To proceed, please conda install psutil#
# To activate this environment, use:
# $ source activate rmg_env
#
# To deactivate this environment, use:
# $ source deactivate
#
test version of RMG: /home/travis/build/ReactionMechanismGenerator/code/tested/RMG-Py
could not find environment: tested

I believe the following happened:
In the install.sh of RMG-tests, we create an Anaconda environment named tested from scratch by fetching the dependencies defined within the RMG-Py repository.

For some unknown reason, the psutil dependency could not be fetched, resulting in an incomplete creation of the tested environment, thereby crashing the Travis build.

Feature coverage of examples

It is not clear what all the examples we have in RMG-tests are supposed to test. We should probably review what features we want each job to test. Here is a list of the ones we are currently running on Travis:

  • eg1: ethane pyrolysis, no p-dep
  • eg3: liquid octane oxidation, octane as solvent, very light pruning
  • eg5: n-heptane pyrolysis with p-dep
  • eg6: ethane oxidation with p-dep
  • eg7: ethane pyrolysis, no pdep, tighter addToCore tolerance but looser finish criteria than eg1
  • MCH: methyl cyclohexane oxidation, no pdep
  • NC: methyl amine oxidation, no pdep
  • solvent_hexane: liquid hexane oxidation, hexane as solvent
  • methane: methane oxidation, no pdep

Although I don't know the story behind all of these, it seems the pairs eg1/eg7, eg5/eg6, eg3/solvent_hexane, and methane/eg6 have some degree of unnecessary redundancy.

Features currently tested:

  • liquid reactors, solvent thermo (eg3, solvent_hexane)
  • p-dep (eg5, eg6)
  • ring corrections (MCH)
  • nitrogen chemistry (NC)

Features not currently tested sufficiently:

  • resonance/aromaticity
  • polycyclics thermo corrections
  • pruning
  • QMTP (not sure if it's a good idea to test, considering limited Travis time)
  • Pretty much every feature that is not finished yet: isotopes, benzene reactivity, sulfur chemistry, long-distance interactions

Tests fail if branch name contains hyphen

Currently we parse the RMG-tests branch name in order to determine the correct RMG-Py/database branch to test. The parsing method currently relies on splitting the name on hyphens, which causes failures if the RMG-Py/database branch name contains hyphens.
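To illustrate the failure mode (the actual parsing code lives in this repo's scripts; this is just a demonstration of naive hyphen splitting):

    branch="rmgpy-my-feature-branch"
    # Naive splitting on hyphens keeps only the first token after the prefix:
    echo "$branch" | cut -d- -f2    # prints "my", losing "feature-branch"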

rmg-database build fails with KeyError: 'pydqed-1.0.1-py27_0.tar.bz2'

The following error occurs in a Travis CI build of a branch of RMG-database, after it seemingly successfully installed the benchmark conda environment.

SHA1: fd954f1528cf010e97fd56c4970601e571bc883d
Fetching package metadata: ....
Fetching package metadata: ....
src_prefix: '/home/travis/miniconda/envs/benchmark'
dst_prefix: '/home/travis/miniconda/envs/tested'
Packages: 67
Files: 0
An unexpected error has occurred, please consider sending the
following traceback to the conda GitHub issue tracker at:
    https://github.com/conda/conda/issues
Include the output of the command 'conda info' in your report.
Traceback (most recent call last):
  File "/home/travis/miniconda/bin/conda", line 6, in <module>
    sys.exit(main())
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main.py", line 139, in main
    args_func(args, p)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main.py", line 146, in args_func
    args.func(args, p)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/main_create.py", line 49, in execute
    install.install(args, parser, 'create')
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/install.py", line 247, in install
    clone(args.clone, prefix, json=args.json, quiet=args.quiet, index=index)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/cli/install.py", line 84, in clone
    quiet=quiet, index=index)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/misc.py", line 246, in clone_env
    sorted_dists = r.dependency_sort(dists)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 729, in dependency_sort
    depends = lookup(value)
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 724, in lookup
    return set(ms.name for ms in self.ms_depends(value + '.tar.bz2'))
  File "/home/travis/miniconda/lib/python2.7/site-packages/conda/resolve.py", line 573, in ms_depends
    deps = [MatchSpec(d) for d in self.index[fn].get('depends', [])]
KeyError: 'pydqed-1.0.1-py27_0.tar.bz2'

Better Logging of error causes

I ran a local job and I think it had an error, since it didn't create all the files in data_dir. However, the slurm.out file didn't state where the error occurred. It would be helpful for the SLURM output to indicate where the error likely occurred; increased logging may achieve this. If there is already a way to get logging at the source of the error, knowing about it would also help.
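One lightweight option, sketched here, is shell-level error tracing in the submission scripts so slurm.out records the failing command (this assumes bash scripts; the real scripts may need different handling):

    #!/bin/bash
    # Illustrative only: enable strict mode and report the failing command,
    # so slurm.out shows where the error occurred.
    set -euo pipefail
    trap 'echo "ERROR at line $LINENO: $BASH_COMMAND" >&2' ERR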
