cambridgenuclear / scone

Stochastic Calculator Of Neutron transport Equation

Home Page: https://scone.readthedocs.io

License: Other

Fortran 96.90% CMake 1.34% Python 1.62% Shell 0.14%
monte-carlo neutronics radiation-transport scone


scone's Issues

MaterialMap accepts non-existent materials

When using materialMap, it is possible to feed in non-existent materials and SCONE will run without complaining. If, for example, one had a typo in the input, it would be difficult to detect until the results were observed. Hence, it seems desirable to throw an error if a non-existent material is fed into the map.
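The proposed check could look something like this (a Python sketch for illustration — SCONE itself is Fortran, and the names here are hypothetical): the map constructor compares the requested material names against the set of defined materials and fails loudly on an unknown name.

```python
def build_material_map(requested, defined):
    """Map material name -> bin index, raising on any unknown material."""
    unknown = [name for name in requested if name not in defined]
    if unknown:
        raise ValueError(f"materialMap refers to undefined materials: {unknown}")
    return {name: i for i, name in enumerate(requested)}
```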

Make the input parser more informative

First issue of the new repository!

I write a gargantuan input file, only to crash when I run it with the error:
"There is unparsed content. Missing a token ';' or '{...}'"

That's certainly something, but it would be nice to pin down a line number, or to print some of the input around the point of failure, to speed up error hunting. Otherwise the search may be quite lengthy. I'm sure everyone would appreciate this!

I suppose this would involve editing the input parser and adding a few small functions. Hopefully not too much effort would be required for a big benefit.

Reuse of random numbers after source generation

Currently, calculations suffer from the reuse of the random numbers that were used to generate the source during the first few events of the transport simulation. Sadly, it seems the simplest way to remove this is just to advance the RNG by stride * totalPop after the source generation. The appropriate code needs to be added to the physics packages.
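The advance can be done in logarithmic time rather than by drawing stride * totalPop numbers one by one. A Python sketch for a generic 64-bit LCG (the constants below are illustrative, not SCONE's actual generator):

```python
def lcg_step(x, a, c, m):
    """One step of the LCG x' = (a*x + c) mod m."""
    return (a * x + c) % m

def lcg_skip(x, k, a, c, m):
    """Jump the LCG forward k steps in O(log k) by square-and-multiply on
    the affine transform, as needed for skipping stride * totalPop."""
    A, C = 1, 0                  # accumulated transform: x -> A*x + C
    while k > 0:
        if k & 1:                # fold the current base transform into result
            A = (A * a) % m
            C = (C * a + c) % m
        c = (c * (a + 1)) % m    # double the base transform (uses old a)
        a = (a * a) % m
        k >>= 1
    return (A * x + C) % m
```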

Small fix needed in input file reading

When the input file contains, for example, an energy boundary written as 1E-11 rather than 1.0E-11, SCONE reads it as 1E-111. We should either fix this or at least warn when energy values are unreasonable.
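The proposed warning could look like this (a Python sketch; the bounds are illustrative assumptions, not SCONE's actual limits):

```python
import warnings

def check_energy(E, e_min=1.0e-11, e_max=20.0):
    """Warn if a parsed energy (MeV) falls outside a plausible range, so a
    mis-read like 1E-11 becoming 1E-111 is caught immediately."""
    if not (e_min <= E <= e_max):
        warnings.warn(f"energy {E} MeV outside [{e_min}, {e_max}] MeV "
                      "- possible input mis-read")
    return E
```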

Material colourmaps

It is well known by this point that the Knuth hash maps index 1 (usually water or other coolant in the reactor) to a very unappealing shade of brown in 24-bit colour space. Perhaps, to increase the visual appeal of the plots, we should introduce a way of specifying a colourmap.

The main question is whether the user-defined 24-bit colours should be given in the materials dictionary or in a separate dictionary in the bmp visualisation definition. Personally, I am more inclined towards the latter option.

Burn-up

This has been a long time coming - an issue seems a good way to maybe get some movement.

We should add burn-up. A few bits of this should be straightforward to add in small pieces, a few will take more effort and could maybe benefit from some discussion.

The easy parts:

  1. Add a CRAM solver and tests. Ideally this should be relatively quick for a large matrix size (~3000). Although not immediately important, it would be ideal if it was a sparse solver. We would also prefer not to have any additional dependencies as a result.
  2. Microscopic reaction rate tallies. We already have a microscopic XS tally, but is this one optimal? It requires creating a new material - can this be simplified for depletion, e.g., under the assumption that the XS data has been independently created?
  3. Additionally, it would be ideal to have the choice to use a fine-group flux tally and condense down to one group afterwards. The tally itself already exists - I think I need to do some reading to understand how to conveniently do that condensation.
  4. Power tally - I think we already have this, albeit in quite an approximate form.
  5. Fission yield tally. This one is slightly more complicated because we need to have access to fission yield data, but the mechanics should otherwise be straightforward.
  6. Tally flushing/resetting. This would be necessary for enforcing xenon equilibrium.
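For reference, the system the CRAM solver (item 1) would advance is the depletion (Bateman) system; in the standard CRAM form this is a generic statement, not SCONE-specific code:

\[
\frac{d\mathbf{n}}{dt} = \mathbf{A}\,\mathbf{n}, \qquad
\mathbf{n}(t) = e^{\mathbf{A}t}\,\mathbf{n}_0
\approx \alpha_0\,\mathbf{n}_0 + 2\,\mathrm{Re}\sum_{k=1}^{K/2} \alpha_k \left(\mathbf{A}t - \theta_k \mathbf{I}\right)^{-1} \mathbf{n}_0,
\]

where the poles θ_k and residues α_k are tabulated constants for the chosen order K. Each term costs one linear solve with the (shifted) burnup matrix, which is why a sparse solver matters at matrix size ~3000.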

The hard parts:

  1. Reading/processing data. I think we already discussed that it would be ideal to have an external script to do all of the data processing from the fission yield and decay data files. I suppose this would ideally have an input of the (hopefully customisable) chains we would like to include and an output of straightforwardly SCONE-readable decay constants and energy-dependent fission yields. On chains, we might want to allow the option of a lumped fission product?
  2. Expanding nuclear data to hold this information. This is the most daunting bit for me anyway, I don't have a good view as to how this should look or where it should sit.
  3. Creating a new physics package. I think this is probably the optimal way to handle burnup. Annoying because it will duplicate a significant chunk of the eigenvalue package, but it would also add quite a lot to the package otherwise (time-stepping algorithms, calling tally processing, enforcing xenon equilibrium). Perhaps there are other ways to do this that are neater?

These are just the things I can think of at the moment. I can edit this to include others as we suggest them. Hopefully this gives a bit of direction and helps us make a start.

Test different compilers in CI

Currently the build-and-test workflow only compiles SCONE with the default Fortran compiler of Ubuntu 20.04 LTS (gfortran-11). However, we would probably wish to test with more versions to see if there are any problems. In particular:

  • gfortran-8 (which will soon become the lowest supported version)
  • gfortran-12 (or whichever is the latest version at the moment)

For good measure we could just cover every version >8.

Basically, I think we should aim for a solution similar to the one we had on BitBucket, with containers that have the right Python for pfUnit, the compiler, and pfUnit itself preinstalled (to save build time).
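A sketch of what the compiler matrix could look like in the workflow (everything here is an assumption for illustration - action versions, package names, and the CMake invocation are not taken from the actual workflow file):

```yaml
jobs:
  build-and-test:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        compiler: [gfortran-8, gfortran-9, gfortran-10, gfortran-11, gfortran-12]
    steps:
      - uses: actions/checkout@v3
      - run: sudo apt-get update && sudo apt-get install -y ${{ matrix.compiler }}
      - run: FC=${{ matrix.compiler }} cmake -B build
      - run: cmake --build build -j
      - run: cd build && ctest
```

With preinstalled containers, the `apt-get` step would disappear and the matrix would select container images instead.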

I will try to set it up soon.

Multi-thread OpenMP reproducibility

At the moment, SCONE gives reproducible results (the same for the same seed) as long as it is run with a single thread.
With multiple threads, however, reproducibility is lost due to a race condition when adding new particles to the dungeon for the next cycle (the order, and hence the index, of particles is not deterministic).

This is a well known problem in MC with OpenMP and can be solved by making sure the order of particles does not depend on the number of threads.
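The fix can be sketched like this (Python for illustration; SCONE is Fortran, and the particle fields here are hypothetical): tag each secondary with its parent history index and its birth order within that history, then sort the merged bank by that tag, so the ordering no longer depends on thread scheduling.

```python
def reproducible_bank(secondaries):
    """Return the bank sorted by (parent history, birth order) - an order
    that is identical for any number of threads."""
    return sorted(secondaries, key=lambda p: (p["parent"], p["birth"]))
```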

Note:
The method is described in https://www.osti.gov/biblio/1417155
Pages 341 - 348

NaNs in xs generation

@ChasingNeutrons got some NaNs in the standard deviation of the out-scatter transport cross section calculation when generating cross sections for a reactor problem. We're still not sure of where they come from, so needs checking!

Check on presence of S(alpha,beta) nuclide

I was running a calculation with a moderating nuclide given S(alpha,beta) data. It would run for one iteration and crash mysteriously on the second iteration, reporting neutrons having zero energy.

It turns out that the problem was that I had forgotten to include the density of the moderating nuclide! That is, I could specify 'moder {1001.03 filename;}' without actually having 1001.03 present in the material.

Hence I propose adding a check when applying Sab data to ensure that the given nuclide is indeed present in the material.
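The check itself is simple; a Python sketch for illustration (names hypothetical; SCONE is Fortran):

```python
def check_sab(material_nuclides, sab_assignments):
    """Refuse to apply S(alpha,beta) data to a nuclide that is absent from
    the material's composition, e.g. 'moder {1001.03 file;}' with no
    1001.03 density defined."""
    for zaid in sab_assignments:
        if zaid not in material_nuclides:
            raise ValueError(f"S(alpha,beta) assigned to {zaid}, "
                             "which is not present in the material")
```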

Add a cookie to `coord`

As stated in the online documentation in the Geometry section, the coord class should contain some memory which universes could use to store some extra, universe-specific information between calls. It could be used, e.g., to avoid re-computing the integer co-ordinates in a lattice from a localID, to store the state of a seed in a procedurally, on-the-fly generated realisation, etc.

The cookie should probably be an array of integers, which would be sufficient in most cases. It would still be possible to store arbitrary data with transfer, provided there is enough space.

Thus coord should be extended by a member like

  integer(shortInt), dimension(COOKIE_SIZE) :: cookie 

While I am certain that the cookie should find its way into coord, I still have doubts about what the best name and type for it would be.

Energy printing in MGxsClerk

Right now, results from MGxsClerk are printed fast -> thermal in energy, but the energy map is not, which could be confusing.

I would flip the order of the printed energy map for this clerk, and write in the user manual which order to expect.

Initialise problem

I got an initialisation error when running the involute reactor: 'Infinite loop in sampling of fission sites. Please check that defined volume contains fissile material.' The (i > 200) limit might be too small. We could use a larger i, or make it an option for the user to change.

Surface tracking crashing

I have a model where a cell boundary coincides with the domain boundary. When a particle crosses that boundary, in geometryStd % move this is considered a movement between local cells, rather than a movement outside of the domain. This means that p % matIdx is assigned to OUTSIDE_MAT, and when ST tries to get the xss to calculate the next distance I get a Segmentation Fault.

I'm not sure what the best solution would be. I suppose that inside self % closestDist(dist, surfIdx, level, coords) we could include a check for whether there is more than one surface within tolerance, and whether any of them is the outside boundary.

What do you think?

PS: this happens regardless of cache or no cache.

Colors for uniqueID plots

Could we change the colours for the cellID plots too, not just the materials? They look so much better now!

Updating inputs

Some of the example inputs we have in the repository are showing their age a bit.

First, it would be good to have some that are better commented and more standard, rather than the current mix.

Second, in the benchmarks, the S(alpha,beta) format is an older style that we used to have, so these should be updated.

Finally, the inputs should have their aceLibrary variable converted to the environmental variable $SCONE_ACE (rather than pointing to somewhere on my/Mikolaj's/Valeria's computer!)

Nuclide cache a bit broken

A recent bug was found in the NuclideCache logic: it could happen that nucCache % E_tot and nucCache % E_tail were different. When looking up microscopic xss, the code would occasionally check only nucCache % E_tail, which was correct, and as a consequence get a wrong total xs that had been updated earlier.

For now, checks on E_tot have been added wherever E_tail is checked. In the future, we could separate the caches: one for the total xs and one for all the other xss, so as to avoid confusion about what E_tot and E_tail refer to.

Assertion module

When I was programming in SCONE, I remember that I very often hesitated to perform range checks on physical quantities for fear of hurting performance. Often I would decide against the range check, which in hindsight was a bad idea, as proven by the bug in the calculation of μ in the LAB frame which haunted SCONE for a long time...

Thus I was thinking that we should try to have our scone and eat it, by replicating something similar to C++'s cassert, though of course it comes with some Fortran caveats.

Basically, we can have a module of assertion subroutines whose active code is compiled out by the preprocessor unless a debug flag is set:

module asserts
  implicit none
  public

contains
  subroutine assertRange(val, min, max)
    real, intent(in) :: val
    real, intent(in) :: min
    real, intent(in) :: max
#ifdef DEBUG
    if (val < min .or. val > max) then
      error stop "Assert Failed."
    end if
#endif
  end subroutine
end module asserts

Which of course can be used in programmes

program test
  use asserts
  implicit none
  real :: a,b,c

  call random_number(a)
  a = a * 17.0
  b = 5.0
  c = 17.0
  call assertRange(a, b, c)
end program test

Now, as long as no -DDEBUG flag is given to the compiler and link-time optimisation works OK, the assert function should be inlined and, since it is empty, disappear altogether. However, LTO is key here: if it is disabled, a call to an empty function will remain. This should not be a problem though, since SCONE depends on LTO for performance as it is.

An example of the test project: error_test.zip. After compiling, one can inspect the disassembly to see whether the assertion was called/removed/inlined via:

$ gdb ./build/test
(gdb) layout asm

Basically the questions are:

  • Is this a feature we wish to have?
  • When and how do we want to add it, and how do we propagate the asserts to the existing code?
  • What would be the most intuitive signatures for the assert subroutines, and how can we minimise errors? (e.g., should the value be the middle argument in assertRange?)

Checks on URR and Sab

For now there is no check on overlaps between the URR and S(alpha,beta) energy ranges. Honestly, that should never happen, but you never know with newer (or faulty) nuclear data.

We should implement a check and give priority to one or the other (I think Sab) if this ever happens.

Access `errors_mod` directly

Following #35, the fatalError subroutine has been moved to a separate module, errors_mod, which is intended to handle all error and warning messages.

However, to avoid too many git conflicts, fatalError is currently imported into genericProcedures, and all other files get it via the genericProcedures module as before.
We should import fatalError from errors_mod directly, though. I think once the most outstanding PRs are merged we can do it in a single PR for all files.

Volumetric calculation issue

Problems with rayVolPP

  • vacuum boundary conditions have to be added to trackRay
  • the robust check should perhaps be changed to the collision point (?)
  • NUDGE in cellUniverse is a bit dodgy and might cause trouble.

Lattice distance/location error

I was running random ray on a problem featuring latUniverses and got some NaN results. On increasing the pitch of the lattice elements by a very small amount (from 5.0 to 5.001) the error was gone. I suspect something may be up with the distance calculation when a particle (or ray) ends up in the padMat of a lattice.

Precomputing the majorant + unionised grids with S(alpha,beta)

At the moment, by default, SCONE does not precompute the majorant when using delta tracking. This hamstrings delta-tracking performance in problems with many materials/nuclides. One way of precomputing it is to use unionised grids (through aceNeutronDatabaseUni). This has two problems: for problems with many nuclides present it also becomes inefficient, and it currently does not work when S(alpha,beta) is turned on.

I suppose the ideal fix would be having a nice option for general majorant precomputation. I'd like to try and work on this when I get some time but I'm happy for anyone else to dive in.

Memory reference error with JSON output

I'm trying to run an example with JSON output rather than Matlab output, but including the line

outputFormat asciiJSON;

in my input file results in the error message:

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0 0x100baf187
#1 0x100bae163
#2 0x19ef782a3
#3 0x100867f4f
zsh: segmentation fault ./Build/scone.out InputFiles/c5g7_mox

I'm running on a Mac, and it runs fine with either no output format specified in the input file, or with Matlab output.

Reducing output file memory

At the moment, when writing the output file, all the tallies print their results into a single charTape instance, and this gets written to the output file at once.

This approach is memory-expensive when there are a lot of tallies, since the memory taken by the tallies is duplicated when the results are written into the charTape. It would be more memory-efficient to write the charTape to the output file after each tally is printed, and then clear the charTape. This would avoid long, memory-intensive tapes.

Upgrade to pfUnit >=4.0

I have run into a problem where our good old pfUnit Python preprocessor fails with Python 3.10 [but it should fail for any version >=3.8].
The reason seems to be that it cannot find Iterable in the collections module, which has been deprecated. As a result we have no choice but to update to pfUnit 4.0. Due to the requirements of the latter, this will mean the definite end of support for the older gfortran versions. Hopefully we will be able to preserve 8; at worst we will need to switch to 9.

I will start looking into that and try to keep this thread updated.

Installation Mac

Hi, I am trying to install SCONE on macOS 13, and when running the automated tests I get failures in plane_test_suite.testHalfspace and plane_test_suite.testDistance.
My compiler is GNU Fortran (GCC) 12.2.0.

If you have any insights or advice on how to proceed, please let me know.

Thanks

Code crashes if 'dist' in Surface tracking becomes NaN

Hello!

If any modification to the surface-tracking transport operator leads to a NaN flight distance, the code crashes without printing the root cause of the problem (dist = NaN). Perhaps a suitable error message would help users debug the code?

Cheers
Anuj

Current crash message ->
Fatal has occurred in:
decreaseLevel (coord_class.f90)

Because:
New nesting: 0 is invalid. Current nesting: 3
<><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><><>
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
ERROR STOP

Error termination. Backtrace:
#0 0x7f64eee86d11 in ???
#1 0x7f64eee87859 in ???
#2 0x7f64eee88f87 in ???
#3 0x5587d6256cab in __errors_mod_MOD_fatalerror
at /home/ad2221/SCONE_ORIGINAL/SCONE/SharedModules/errors_mod.f90:66
#4 0x5587d622400b in __coord_class_MOD_decreaselevel
at /home/ad2221/SCONE_ORIGINAL/SCONE/Geometry/coord_class.f90:318
#5 0x5587d61e7037 in __geometrystd_class_MOD_move_nocache
at /home/ad2221/SCONE_ORIGINAL/SCONE/Geometry/geometryStd_class.f90:238
#6 0x5587d621d73a in __transportoperatorst_class_MOD_surfacetracking
at /home/ad2221/SCONE_ORIGINAL/SCONE/TransportOperator/transportOperatorST_class.f90:199
#7 0x5587d618d4b6 in __transportoperator_inter_MOD_transport.constprop.0
at /home/ad2221/SCONE_ORIGINAL/SCONE/TransportOperator/transportOperator_inter.f90:111
#8 0x5587d6256f9a in __eigenphysicspackage_class_MOD_cycles._omp_fn.1
at /home/ad2221/SCONE_ORIGINAL/SCONE/PhysicsPackages/eigenPhysicsPackage_class.f90:209
#9 0x7f64eead28e5 in ???
#10 0x5587d6257700 in __eigenphysicspackage_class_MOD_cycles
at /home/ad2221/SCONE_ORIGINAL/SCONE/PhysicsPackages/eigenPhysicsPackage_class.f90:198
#11 0x5587d62537fc in __eigenphysicspackage_class_MOD_run
at /home/ad2221/SCONE_ORIGINAL/SCONE/PhysicsPackages/eigenPhysicsPackage_class.f90:130
#12 0x5587d6183954 in scone
at /home/ad2221/SCONE_ORIGINAL/SCONE/Apps/scone.f90:56
#13 0x5587d618352e in main
at /home/ad2221/SCONE_ORIGINAL/SCONE/Apps/scone.f90:3

Clean up Factories

Some factories (like TallyMap1DFactory) have code repetition in the init function that could be made prettier (e.g., move the new % init outside of the select).

Optimise unionised energy grid construction

Optimise the creation of the unionised energy grid used to precompute the majorant cross section for DT.

One could use a dynamic array, as in the dynArray data structure (which only exists for integers so far), to make the allocation and deallocation of the energy grid array less expensive.

This would also require adding new procedures, equivalent to quickSort and removeDuplicates, for the new data type.

Fatal Error format

Right now the maximum length of a fatal error message is 100 characters, and everything beyond that gets cut off. We should allow longer messages and print them across multiple lines.

Pre calculating majorant xs with URR

At the moment, the majorant xs is not precalculated when probability tables are used.

One way to possibly precalculate it is to sample from the tables many times and use the largest value found, assuming it will be close enough to the maximum.
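That approach can be sketched like this (Python for illustration; the sample count, safety margin, and names are assumptions, not an actual SCONE implementation):

```python
import random

def estimate_majorant(sample_xs, n_samples=10000, margin=1.05, seed=1):
    """Draw from the probability tables many times, keep the largest cross
    section seen, and pad it with a small safety margin. `sample_xs(rng)`
    draws one value from the tables."""
    rng = random.Random(seed)
    return margin * max(sample_xs(rng) for _ in range(n_samples))
```

The margin guards against the sampled maximum falling slightly below the true table maximum.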

Unionised grid DBs yay or nay?

I propose getting rid of the unionised grid databases.

This is because:

  • a lot of physics is not implemented there (Sab and URR, and soon DBRC), because it would require some inelegant book-keeping that would be ugly and confusing right now
  • having a separate database class is probably not the nicest way to incorporate unionised grids
  • it's annoying to maintain

I think a nice alternative would be to remove them (they are stored in a past version of the repository anyway) and, in the future, implement hash tables and make those the default method.

Opinions? NOTE that I need answers before I can merge DBRC.

Out-of-bounds array access in invertInelastic

I was running in debug mode (so array bounds checking switched on) and got a crash on line 198 of aceNeutronNuclide_class.f90.

Namely, inside the invertInelastic when calling
bottomXS = self % MTdata(i) % xs(idxT)

This presumably happens because of a check one or two lines earlier,
if (idxT < 0) cycle
which obviously misses the possibility of 0.

Nothing seems to go wrong when debug mode is disabled, but this should certainly be sorted out!
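For illustration, the suspected off-by-one and its fix, sketched in Python (1-based indexing mimicking the Fortran; names hypothetical):

```python
def bottom_xs(xs_table, idxT):
    """A 'not found' index of 0 slips past a strictly-negative check;
    excluding zero as well avoids the out-of-bounds access."""
    if idxT < 1:          # was: idxT < 0, which let idxT == 0 through
        return None
    return xs_table[idxT - 1]   # xs_table is 1-based, as in the Fortran
```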

Incorrect units for precursor decay constants

ACE files contain the precursor decay constants in [shakes^-1], but we read them (in fissionCE) and state that the value is in units of [s^-1]. We should apply a unit conversion of 10^8 shakes/s to obtain the correct values in our chosen units.
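The conversion itself is one line; a Python sketch for clarity (function name hypothetical):

```python
SHAKES_PER_SECOND = 1.0e8   # 1 shake = 1e-8 s

def decay_const_per_second(lambda_per_shake):
    """Convert a precursor decay constant from [1/shake] (as stored in ACE
    files) to [1/s] by multiplying by 10^8 shakes per second."""
    return lambda_per_shake * SHAKES_PER_SECOND
```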

Deferred length character arguments in output files

As the documentation notes in: https://github.com/CambridgeNuclear/SCONE/blob/62a7615321a30010eee10298cc740c51e6af8843/docs/Output.rst?plain=1#L83C1-L88C1

It is probably time that the 'near future' arrives.
Basically, the arguments to outputFile procedures being character(nameLen) require a bit of an awkward interface:

character(nameLen) :: name
type(fileOutput)   :: out

... Some Initialisation code ... 

name = "name"
call out % startBlock(name)

If this was used instead:

call out % startBlock("name") 

Since the input character is not of length nameLen, the trailing characters would be filled with random bits.

To avoid this issue, the interface of the fileOutput procedures must be changed to take names as assumed-length characters (character(*), intent(in) :: smt). Some extra care needs to be taken, though, to make sure that the names are not too long and fit on the stack of previously defined blocks.

Random ray in a SCONEy way

The current PR on random ray is dead for good reason - it doesn't look like SCONEy code and it isn't easy to extend to the many variations on RR we want to make.

I'd like to have some discussion on how a generic RR code that can handle all these variations can look. These variations include eigenvalue/fixed-source/noise and flat source/linear source and isotropic/anisotropic.

How a given RR solver looks is as follows:

  1. Initialise all of the necessary structures (fluxes, sources)
  2. Generate a source (RHS of the transport equation)
  3. Perform the transport sweep (sample rays, move rays, attenuate fluxes, accumulate to the flux vector)
  4. Accumulate quantities (fluxes, k)
  5. Check for some measure of convergence (iterative or statistical)
  6. Finalise statistics
  7. Output quantities of interest

I have a handful of ideas for how to make this a bit more general. I would appreciate any feedback on this.

First, we make source and fluxes objects. By default these would contain the current flux, previous flux, and accumulated flux (for the flux object) and the current source (for the source object). However, these can allocate additional components as necessary, such as linear components, fixed sources, etc. A given physics package will tell these objects to allocate the necessary components. These can have procedures for accumulation, finalisation, and output.

We should probably do similar for geometry estimates. For example, this would contain the estimated volume (which will be common to everything) but could optionally allocate spatial moment matrices also.

Data should also be packaged more neatly... However, it will essentially need to be a flat array of each XS in the end. The question is whether to do this internally in nuclear data or whether to make a separate object that is constructed from nuclear data.

Second, we make the transport sweep and source update both into objects. Each of these varies dramatically, depending on the calculation performed, but there will be some overlap (flat isotropic versions will be identical for fixed source and eigenvalue). Additionally, the call to movement and its settings will be the same across all physics packages. Each of these objects would be fed pointers to the relevant flux/source/data objects. Likewise they can be constructed depending on the exact scheme being used. For initialising rays, we could reuse the standard fixed source particle sampling with uniformity in space and angle? I think that's what OpenMC have gone for.

Third, we can define a convergence metric object. This could be more or less sophisticated, e.g., Shannon entropy or simply the number of iterations. This is only to allow for code reuse across all the physics packages.

Fourth, it might also make sense to define a k object, since it will be common across eigenvalue and noise. I don't have strong feelings about this - it has relatively little to do other than accumulate flux estimates, which is only a few lines.

Thoughts? I don't think this should be so hard to do and I suspect this will preserve performance while making things tidier and more easily modifiable in future.

Empty XS library when running on HPC

When compiling and running SCONE (in my case the time-dependent version, but it should be the same for other versions) on the HPC (login node and Slurm job alike), there was an issue where SCONE tried to find the dictionary "delayed" for a material that did not have any delayed neutron data. It turned out that the problem occurred because the dictionary "delayed" was present in a previous material's XS data but not in the material for which the error occurred. The problem was resolved by editing "baseMgNeutronDatabase_class.f90", adding "call tempDict % kill()" after initialising tempDict.

This issue did not occur when running on Lise and Fermiac. gfortran version on the HPC: 11.2.0; gfortran version on Fermiac: 9.4.0

SCONE input files: build and run on Ubuntu 20.04?

How can I build and run one of the SCONE input file samples? Which command lines do I need?
emil@DESKTOP-:/SCONE/InputFiles$ l
BenchmarksMCNP/ JEZ POPSY SCONE_Inf SCONE_Slab XS/ c5g7_mox mox_vol sphere_with_DT volCalc
emil@DESKTOPU-:/SCONE/InputFiles$ ?
