becquerel's Introduction

becquerel


Becquerel is a Python package for analyzing nuclear spectroscopic measurements. Its core functionality includes reading and writing different spectrum file types, fitting spectral features, performing detector calibrations, and interpreting measurement results. It also includes tools for plotting radiation spectra and convenient access to tabulated nuclear data. It relies heavily on the standard scientific Python stack of numpy, scipy, matplotlib, and pandas, and it is intended to be general-purpose enough to be useful to anyone from an undergraduate taking a laboratory course to an advanced researcher.

Installation

pip install becquerel

Features in development (contributions welcome!)

  • Reading additional Spectrum file types (N42, CHN, CSV)
  • Writing Spectrum objects to various standard formats
  • Fitting spectral features with Poisson likelihood

If you are interested in contributing or want to install the package from source, please see the instructions in CONTRIBUTING.md.

Reporting issues

When reporting issues with becquerel, please provide a minimum working example to help identify the problem and tag the issue as a bug.

Feature requests

For a feature request, please create an issue and label it as a new feature.

Dependencies

External dependencies are listed in requirements.txt and will be installed automatically with the standard pip installation. They can also be installed manually with the package manager of your choice (pip, conda, etc.). The dependencies beautifulsoup4, lxml, and html5lib are required by pandas.
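For example, to install the runtime dependencies manually with pip:

pip install -r requirements.txt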

Developers need the additional requirements listed in requirements-dev.txt. We use pytest for unit testing and ruff for code formatting and linting, and we plan to eventually support numpydoc docstrings.

Copyright Notice

becquerel (bq) Copyright (c) 2017-2021, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy) and University of California, Berkeley. All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Intellectual Property Office at [email protected].

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.

becquerel's People

Contributors

alihanks, bplimley, chunhochow, cosama, dhellfeld, jccurtis, jvavrek, markbandstra, micahfolsom, pre-commit-ci[bot], tannerdalen, thjoshi


becquerel's Issues

Rebinning: cast input as float instead of assert float?

_check_ndim_and_dtype(in_spectra, 2, np.float, 'in_spectrum')
_check_ndim_and_dtype(in_edges, 2, np.float, 'in_edges')
_check_ndim_and_dtype(out_edges, 1, np.float, 'out_edges')

Should we cast the inputs with .astype(np.float) instead of checking with assertions?
If so, I'll also split the function _check_ndim_and_dtype() into a check_ndim() and check_dtype() and have _check_ndim_and_dtype() call both of them.
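A minimal sketch of the cast-instead-of-assert idea (helper names hypothetical; np.float is just an alias of the builtin float):

import numpy as np

def check_ndim(arr, ndim, name):
    if arr.ndim != ndim:
        raise ValueError('{} must be {}-dimensional'.format(name, ndim))

def as_float_array(arr, ndim, name):
    """Cast the input to float instead of asserting its dtype."""
    arr = np.asarray(arr, dtype=float)
    check_ndim(arr, ndim, name)
    return arr

in_spectra = as_float_array([[1, 2, 3]], 2, 'in_spectra')  # int input is fine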

Electron range calculations and lookups

I have code that uses Tabata's analytical formulae for electron CSDA and extrapolated ranges. It's old code, so it needs cleanup. I'd like to add web queries for NIST ESTAR (electron stopping powers and ranges). ESTAR doesn't give extrapolated ranges, and only gives CSDA ranges for their default grid energies, which is why Tabata is useful.

I'm making this issue to remind myself to work on this later.

Unit conventions

Following up on our discussion about using pint, I looked a bit more into its usability in our ecosystem. It seems very tightly coupled with numpy and uncertainties to the point where using it is nearly transparent, but its integration with pandas is poor. Quantities with both uncertainties and units can be stored in pandas data structures, but they need to be handled one-by-one instead of as an entire DataFrame or Series. (This is a known issue with pandas not supporting different methods of incorporating units.)

This could be a deal-breaker if we decide to rely on pandas heavily in this project.

I, for one, am not a big pandas user, so for me the cost-benefit of using pint weighs heavily toward benefit. For example, is this spectrum in units of counts, or counts per second, or counts per second per keV? Is this branching ratio a percentage or a dimensionless number? I have an activity in becquerels; how do I do the conversion to mCi again?
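For illustration, a short pint sketch of exactly these ambiguities (assuming pint's default unit registry, which defines Bq, Ci, and count):

import pint

ureg = pint.UnitRegistry()

activity = 3.7e4 * ureg.Bq
print(activity.to('mCi'))    # 0.001 millicurie

count_rate = (120 * ureg.count) / (300 * ureg.s) / (2.0 * ureg.keV)
print(count_rate)            # 0.2 count / (keV * s)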

materials_test.py test_element_data failing

I'm getting this test failure. It is a webtest, so Travis doesn't run it.

The traceback shows code in site-packages/bs4/element.py. This suggests that materials.py is loading the wrong element module somehow. Can others reproduce this? Is it something wonky in my installation?

_________________________________________________ test_element_data _________________________________________________

    @pytest.mark.webtest
    def test_element_data():
        """Test fetch_element_data........................................."""
>       materials.fetch_element_data()

tests/materials_test.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
becquerel/tools/materials.py:81: in fetch_element_data
    tables = pd.read_html(text, header=0, skiprows=[1, 2])
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:874: in read_html
    parse_dates, tupleize_cols, thousands, attrs, encoding)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:739: in _parse
    for table in tables:
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:197: in <genexpr>
    return (self._build_table(table) for table in tables)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:349: in _build_table
    header = self._parse_raw_thead(table)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:355: in _parse_raw_thead
    thead = self._parse_thead(table)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:413: in _parse_thead
    return table.find_all('thead')
/usr/local/anaconda2/lib/python2.7/site-packages/bs4/element.py:1299: in find_all
    return self._find_all(name, attrs, text, limit, generator, **kwargs)
/usr/local/anaconda2/lib/python2.7/site-packages/bs4/element.py:541: in _find_all
    return ResultSet(strainer, result)
/usr/local/anaconda2/lib/python2.7/site-packages/bs4/element.py:1754: in __init__
    super(ResultSet, self).__init__(result)
/usr/local/anaconda2/lib/python2.7/site-packages/bs4/element.py:538: in <genexpr>
    result = (element for element in generator
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <table border="0" cellpadding="5" cellspacing="0">\n<tbody><tr>\n<th colspan="...d>\n<td>0.38651</td>\n<td>890.0</td>\n<td>1.895E+01</td></tr>\n</tbody></table>

    @property
    def descendants(self):
        if not len(self.contents):
            return
        stopNode = self._last_descendant().next_element
        current = self.contents[0]
        while current is not stopNode:
            yield current
>           current = current.next_element
E           AttributeError: 'NoneType' object has no attribute 'next_element'

/usr/local/anaconda2/lib/python2.7/site-packages/bs4/element.py:1317: AttributeError
====================================== 1 failed, 925 passed in 3791.09 seconds ======================================

Addition and subtraction of calibrated spectra

Currently the magic methods in Spectrum for addition and subtraction refuse to operate on calibrated spectra, since in general the binning will be different. To do so requires a rebinning method (#29).

`Spectrum` rebinning

  • Rebinning according to an input bin edges vector
  • rebin_like(spec) to use the binning from another spectrum

Please expand on this; I'm not very familiar with rebinning methods or needs. (One possible approach is sketched below.)
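A minimal sketch of edge-based rebinning, assuming counts are distributed uniformly within each input bin: accumulate the counts, interpolate the cumulative counts at the new edges, and difference (out_edges outside the input range are clamped by np.interp):

import numpy as np

def rebin(counts, in_edges, out_edges):
    """Rebin histogram counts onto new bin edges."""
    cum = np.concatenate(([0.0], np.cumsum(counts)))
    return np.diff(np.interp(out_edges, in_edges, cum))

new_counts = rebin([4.0, 6.0, 2.0], [0, 1, 2, 3], [0, 1.5, 3])
# -> [7.0, 5.0]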

Saving to file

Either in external formats or internal formats.

External formats will probably not be able to contain all the information (e.g. uncertainties in a subtracted spectrum, calibration points in addition to coefficients). So an internal format (based on XML or JSON or something) would be helpful.

XCOM attenuation calculator

Unless I'm mistaken, there is not currently a tool to calculate the attenuation of a gamma ray given energy, element or compound, density, and thickness. I'm pretty sure @markbandstra has developed this for his gamma spectrum emission and detection simulator. I propose we include the calculator inside the XCOM tool as another function, which could use the material definitions in #51.
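For reference, the calculation itself is just the narrow-beam exponential attenuation law, I = I0 * exp(-(mu/rho) * rho * x); a minimal sketch (function name hypothetical, with mu/rho as would come from an XCOM query):

import numpy as np

def transmitted_fraction(mass_atten_cm2_g, density_g_cm3, thickness_cm):
    """Fraction of photons transmitted through a slab (narrow-beam)."""
    return np.exp(-mass_atten_cm2_g * density_g_cm3 * thickness_cm)

# e.g., 662 keV through 5 cm of lead (mu/rho ~ 0.11 cm^2/g, rho = 11.35 g/cm^3):
print(transmitted_fraction(0.11, 11.35, 5.0))  # ~0.002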

Isotope objects

Context

@markbandstra's Isotope class currently handles parsing of element, mass, and isomers from a string.

In addition, the nndc module allows lookup of an isotope's wallet card (half life, decay mode(s) and branching, etc.) as well as decay radiation.

My goals

I'm thinking the Isotope class should be able to contain and use useful data from NNDC or elsewhere, including

  • half-life (in seconds, as well as in a human-readable unit)
  • gamma emissions (energies, branching ratios)
  • natural abundance if any
  • thermal neutron activation cross section (from ENSDF or so...)

This data could be fetched from the web, or preferably from cached data (another issue).

These properties could be optional (you can instantiate Isotope without them), but if defined, they would allow a variety of useful operations. Specifically, if we make an IsotopeQuantity class that derives from Isotope and contains a quantity (activity/mass/atoms) and a date for that quantity, then you can ask questions like

  • How much of this will be left at time t in the future?
  • How much of this existed at time t in the past (assuming it has been decaying undisturbed since then)?
  • What will my detector measure in interval (t_start, t_stop) given an efficiency calibration?
  • How much will be activated if I put this in a given neutron flux?

And if you couldn't already tell, this enables a lot of useful NAA calculations. (Which, incidentally, RadWatch needs, because we've been measuring irradiated fish for the last 2 weeks.)
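For concreteness, a sketch of the first and third questions under simple exponential decay (function names hypothetical):

import numpy as np

def activity_at(a0, half_life, t):
    """Activity at time t, given activity a0 at t = 0 (t and half_life in the same units)."""
    lam = np.log(2.0) / half_life
    return a0 * np.exp(-lam * t)

def expected_counts(a0, half_life, t_start, t_stop, efficiency):
    """Expected counts in (t_start, t_stop): efficiency times the number of
    decays, i.e. the time integral of the activity."""
    lam = np.log(2.0) / half_life
    decays = (a0 / lam) * (np.exp(-lam * t_start) - np.exp(-lam * t_stop))
    return efficiency * decays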

Discussion

Thoughts or suggestions on what I've described?

How to present example code

Clearly, an important part of usability is having example code for people to see how Becquerel works.

There are several possible approaches to this, at least:

  • code snippets in docstrings (as in spectrum.py and energycal.py)
  • scripts in examples directory (as in #16)
  • Jupyter notebooks
  • code as part of HTML documentation (like Sphinx; see #18)

Comments:

  • Docstrings: preferably not more than 2-4 lines of code
  • Scripts in examples: this works for longer, standalone examples
  • Jupyter notebooks: maybe "nicer" than example scripts, allowing comments with markdown, inline plots, etc. @jccurtis suggested this a while back.
  • HTML documentation: maybe this would be more along the lines of docstring examples, as part of explaining how the code works. But more extensive examples might not belong in HTML docs.

The examples can always be moved into different formats later. But what should our near-term approach be?

Make repo public

We need to select a proper license (or none at all) before we make the repo public. There is a good discussion of licensing terms here and an explanation of the primary license types here:

  • MIT License - Most open and my initial choice
  • Apache License 2.0
  • GNU GPLv3

What do you think? Take note that this issue will be tracked and publicly viewable once we are done. 😄

Use py.test instead of nose

The nose documentation says the project has been in maintenance mode and updates may cease in the future. It suggests switching to nose2 (based on unittest2) or py.test.

I'm not very familiar or picky, but I know @jccurtis has talked about py.test already.

Linting standards

Which linters should we use to standardize our code?

I have become more of a fan of pylint recently. It can give a lot of messages at first, but I find them actually pretty helpful in the long run once a couple of the worst offenders are muted.

Maybe we can consider a minimum linting requirement, like pyflakes or flake8 or pep8, and a recommended standard like pylint with some errors ignored. It sounds like we can integrate these when we set up continuous integration.

Any thoughts?

Spectrum bin convention

We have an unresolved convention to decide on for our Spectrum classes -- should we use bin centers or edges?

I am personally in favor of using bin edges, since that is the more fundamental quantity for histograms (e.g., counts and bin edges are returned by numpy.histogram). Sometimes you may want to use uneven bin sizes, for example. Bin centers can always be calculated from the bin edges. Using bin edges is less convenient when plotting, but I think that plotting should really be done using the bin edges and drawing bars anyway.
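For illustration, numpy's own histogramming is edge-based, and centers are derived:

import numpy as np

samples = np.random.default_rng(0).normal(661.7, 1.0, size=10000)
counts, edges = np.histogram(samples, bins=256)   # len(edges) == len(counts) + 1
centers = 0.5 * (edges[:-1] + edges[1:])          # centers computed from edges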

If we go this way, we need to decide how to handle integer channels in RawSpectrum objects. Are the bin edges for a channel [x-0.5, x+0.5] or are they [x, x+1]?

More example spectra

Once we start working on actual analysis modules, it will become important to have a variety of spectra for testing. Let's brainstorm our needs here, and we can all contribute spectra as we have a chance.

File types (for parsers that haven't been written yet):

  • *.CHN
  • *.N42

Detector types:

  • NaI
  • CdZnTe
  • CsI

Use cases / sources:

  • check sources for calibrations
  • environmental samples for gamma counting
  • Th, U ores
  • neutron activation spectra with lots of peaks

Test function names

@markbandstra I notice that your XCOM tests are in two named classes but all the methods are numbered (XCOMQueryTests.test_01(), etc.).

At first I thought this was less readable, but I realized that 1) the description is in the docstring, which otherwise is essentially redundant with the method name, and 2) the numbering ensures that unittest/nose/whatever runs them in the desired order, since they are processed alphabetically.

Any other thoughts or objections? Otherwise I'm going to start numbering my tests.

Add XCOM Table2 materials to XCOM query tool

There is a group of materials built into XCOM for quick reference in the X-Ray Mass Attenuation Coefficient Tables 2 and 4. I end up using these tables often for quick attenuation estimates through common materials like air and concrete.

I want to add another argument to the XCOM query called material, which would take values like 'Concrete, Ordinary' that the user could look up in a global called MATERIALS. I'm not sure if this should go into fetch_xcom_data or _XCOMQuery or another function... Let's discuss.
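A hypothetical sketch of the proposed interface (the material argument and the MATERIALS global are the proposal, not existing API, and energies_kev is an assumed keyword):

from becquerel.tools import xcom

print(xcom.MATERIALS)  # proposed: list of built-in XCOM material names
data = xcom.fetch_xcom_data(material='Concrete, Ordinary',
                            energies_kev=[500.0, 1000.0])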

html5lib dependency

When I run pytest -m "" to include webtests, I get two failures in test_materials.py on test_element_data and test_compound_data.

If html5lib is required for these should it go in requirements.txt?

===================================================== FAILURES ======================================================
_________________________________________________ test_element_data _________________________________________________

    @pytest.mark.webtest
    def test_element_data():
        """Test fetch_element_data........................................."""
>       materials.fetch_element_data()

tests/materials_test.py:47: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
becquerel/tools/materials.py:81: in fetch_element_data
    tables = pd.read_html(text, header=0, skiprows=[1, 2])
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:874: in read_html
    parse_dates, tupleize_cols, thousands, attrs, encoding)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:726: in _parse
    parser = _parser_dispatch(flav)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

flavor = 'bs4'

    def _parser_dispatch(flavor):
        """Choose the parser based on the input flavor.
    
        Parameters
        ----------
        flavor : str
            The type of parser to use. This must be a valid backend.
    
        Returns
        -------
        cls : _HtmlFrameParser subclass
            The parser class based on the requested input flavor.
    
        Raises
        ------
        ValueError
            * If `flavor` is not a valid backend.
        ImportError
            * If you do not have the requested `flavor`
        """
        valid_parsers = list(_valid_parsers.keys())
        if flavor not in valid_parsers:
            raise ValueError('%r is not a valid flavor, valid flavors are %s' %
                             (flavor, valid_parsers))
    
        if flavor in ('bs4', 'html5lib'):
            if not _HAS_HTML5LIB:
>               raise ImportError("html5lib not found, please install it")
E               ImportError: html5lib not found, please install it

/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:670: ImportError
________________________________________________ test_compound_data _________________________________________________

    @pytest.mark.webtest
    def test_compound_data():
        """Test fetch_compound_data........................................"""
>       materials.fetch_compound_data()

tests/materials_test.py:53: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
becquerel/tools/materials.py:155: in fetch_compound_data
    tables = pd.read_html(text, header=0, skiprows=[1, 2])
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:874: in read_html
    parse_dates, tupleize_cols, thousands, attrs, encoding)
/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:726: in _parse
    parser = _parser_dispatch(flav)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

flavor = 'bs4'

    def _parser_dispatch(flavor):
        """Choose the parser based on the input flavor.
    
        Parameters
        ----------
        flavor : str
            The type of parser to use. This must be a valid backend.
    
        Returns
        -------
        cls : _HtmlFrameParser subclass
            The parser class based on the requested input flavor.
    
        Raises
        ------
        ValueError
            * If `flavor` is not a valid backend.
        ImportError
            * If you do not have the requested `flavor`
        """
        valid_parsers = list(_valid_parsers.keys())
        if flavor not in valid_parsers:
            raise ValueError('%r is not a valid flavor, valid flavors are %s' %
                             (flavor, valid_parsers))
    
        if flavor in ('bs4', 'html5lib'):
            if not _HAS_HTML5LIB:
>               raise ImportError("html5lib not found, please install it")
E               ImportError: html5lib not found, please install it

/usr/local/anaconda2/lib/python2.7/site-packages/pandas/io/html.py:670: ImportError
====================================== 2 failed, 924 passed in 487.88 seconds =======================================

Spectrum plotting

Add methods to Spectrum for plotting. Visualization is not a main focus, but we should support at least basic plotting with some common options.

Some ideas:

  • use drawstyle='steps-mid' for "stairs"-type plot
  • log y-scale by default
  • automatically label axes
  • color, legend label, title
  • allow plotting into an existing axes
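A minimal matplotlib sketch combining several of these ideas (helper name hypothetical):

import matplotlib.pyplot as plt

def plot_spectrum(centers, counts, ax=None, **kwargs):
    """Stairs-type spectrum plot with log y-scale and labeled axes."""
    if ax is None:
        _, ax = plt.subplots()
    ax.plot(centers, counts, drawstyle='steps-mid', **kwargs)
    ax.set_yscale('log')
    ax.set_xlabel('Energy [keV]')
    ax.set_ylabel('Counts')
    return ax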

Streamline test suite

Currently, the test suite includes plots (from the file parsers) as well as web queries (for xcom). The plots require clicking to close the window before the tests continue; the web queries just take several seconds and require internet.

How about a --quick option that skips some of these? I haven't thought about how to do this from setup.py test and setup.py nosetests, but in the test suites you can define a module-level flag and use @unittest.skipIf or @unittest.skipUnless decorators.

Normalized spectrum subtraction

Add a method in Spectrum that takes another Spectrum instance as input, scales it by livetime, and subtracts it from the base object, returning a new Spectrum object.

This could be a standalone function that takes two spectra, or a Spectrum method, or both.

See also #28.
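A minimal sketch of the scaling on bare arrays (a Spectrum method would wrap this with attribute handling):

import numpy as np

def norm_subtract(counts, livetime, bg_counts, bg_livetime):
    """Subtract a background spectrum scaled by the ratio of livetimes."""
    scale = livetime / bg_livetime
    return np.asarray(counts) - scale * np.asarray(bg_counts)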

Efficiency calibration

Energy-dependent detector efficiency calibration.

The basis for this (having peaks with a net area and assigning a calibration value) should be in my upcoming pull request for energycal and peaks. But it will take some expanding, probably along the lines of the (upcoming) energycal design.

Floating-point precision in exponential decays

IsotopeQuantity.decays_from and IsotopeQuantity.from_decays can suffer a loss of precision if the half-life is much longer than the time interval. This is because the calculations involve an expression of the form 1 - np.exp(-decay_const * time), and if decay_const * time is very small, the exponential is very close to 1, so the subtraction loses precision to floating-point cancellation. See np.spacing.

Better precision could be achieved in some cases by a Taylor series approximation:

1 - exp(-lambda * t)
~= 1 - (1 - lambda * t)
~= lambda * t

The code could use np.spacing to check whether the series approximation or floating point has better precision.

This is relevant for long-lived isotopes like K-40. (I'm getting ~0.1% loss of precision for an hour's interval with K-40, which was causing tests to fail.)
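An alternative to a hand-rolled Taylor series is numpy's expm1, which evaluates exp(x) - 1 accurately for small x (a standard remedy for this cancellation, named here as a swap-in rather than what the issue proposes):

import numpy as np

lam = np.log(2.0) / (1.248e9 * 3.156e7)  # K-40 decay constant [1/s]
t = 3600.0                               # one hour

naive = 1.0 - np.exp(-lam * t)   # loses several digits to cancellation
better = -np.expm1(-lam * t)     # accurate even for tiny lam * t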

Uncertainties in energy calibration points

This is split off from #23 (PR #43) because it has gotten complicated, and is not considered urgent. My work so far on it is on branch feature-energycal-uncertainties. (And some comments I made are in the "Show outdated" line comments on #43.)

One suggestion @markbandstra had was to see if lmfit knows how to handle x-uncertainties in a least squares fit, in order to avoid inverting the calibration equation (since the uncertainty is on channel, not keV).

Set up continuous integration

There are many continuous integration services that integrate with GitHub. A very popular one is travis-ci, which is free as long as the project is open source (#1), although there is a trial period we could take advantage of.

The purpose of continuous integration is to run the unit tests on each commit to make sure that the code doesn't regress. For this to be effective we have to be sure to write unit tests that cover our code well.

It looks like unit tests and linting can be run automatically, so our decisions on #3 and #4 would inform setting up our .travis.yml file.

PyNE as a dependency?

PyNE is a "nuclear engineering toolkit" which a variety of Berkeley people have put together (Rachel Slaybaugh, students, and other volunteers). Their focus is reactors (in other words, "real" nuclear engineering). So, MCNP, cross sections, ORIGEN, and other things I'm unfamiliar with.

They have functionality that overlaps with some of our tools modules, including material and nuclear data lookup. I like ours better, but we might leverage their tools for cross sections at least, and be aware of what else is available.

Their install instructions suggest using conda install but do not talk about pip, although there does seem to be an older version of PyNE on pip. So it may be messy to have PyNE as a dependency of becquerel.

Fitting module

For peak fitting, energy cal fitting.
To use lmfit.
@jccurtis to take code from GRPP as a start.

(see also #22)

@jccurtis TODO:

  • Update erf naming
  • Add warning when fit errors unknown

Organization of tools

The tools module is growing, with xcom, nndc, materials, and estar in pull requests or issues. This is great; what's the best structure for organizing these tools?

  • As per @markbandstra's comment in #51, xcom, materials, and estar could all go in a nist submodule.
  • I see several similar classes that @markbandstra has created for HTML requests, and I wonder if their structure could be made more explicitly standard, for example with each type deriving from a base class and overriding the methods that do the parsing. This would be a long-term goal for code organization.

How do channel numbers relate to bins? (external files)

Related to #35.

If I'm loading a Spe file from GammaVision (for example), and the file provides polynomial coefficients for a calibration curve, and I put N (channel value) into the equation, is the result the energy of the center of that bin? Or the left edge (or right edge)? It affects how we load calibrated spectra from external files, and whether we reproduce the calibration correctly. This is an aspect of GammaVision (and other software) behavior that we must learn or infer.

Another aspect of this is whether GammaVision's (for example) bin indexing is 0-indexed or 1-indexed.

I think this could be settled quickly by an HPGe acquisition with, say, only 256 bins. Or it might be in the manual somewhere... but it's a big manual.

Poisson uncertainty convention

For counts N, we commonly use sqrt(N) as the 1-sigma uncertainty, which is valid for Gaussian statistics (N >> 0). However, in spectra we often deal with counts that are low or zero. Specifically, sqrt(N) gives an uncertainty of 0 in the case of 0 counts.

Personally, in these situations I have used a suggestion I found in this Fermilab article and adopted ±0.5 + sqrt(N + 0.25), i.e., an upper error of sqrt(N + 0.25) + 0.5 and a lower error of sqrt(N + 0.25) - 0.5. Another solution could be to use sqrt(N) for N > 0, but specify an uncertainty of 1 in the case of N = 0.

How do others handle this?
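For reference, a vectorized sketch of the Fermilab-style convention (the formula follows from solving (N - mu)**2 = mu for mu, giving mu = N + 0.5 ± sqrt(N + 0.25)):

import numpy as np

def poisson_sigma(n):
    """Asymmetric 1-sigma errors: sqrt(N + 0.25) +/- 0.5.
    At N = 0 this gives (+1, -0) instead of sqrt(N) = 0."""
    half_width = np.sqrt(np.asarray(n, dtype=float) + 0.25)
    return half_width + 0.5, half_width - 0.5  # (upper, lower)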

Re-examine RawSpectrum and CalSpectrum design

In our early design process we decided to have classes RawSpectrum and CalSpectrum with CalSpectrum inheriting from RawSpectrum. I'm starting to dislike that design choice as I think about use cases and energy calibration. Two reasons come to mind.

  1. When you load an uncalibrated spectrum and then calibrate it, it transforms from a RawSpectrum into a CalSpectrum. That means that you have to create a new object for the CalSpectrum. Yes, this can be done with a @classmethod but it seems clunky for an object to change identity as you apply a calibration. And then what if you don't like the calibration and want to remove it, but aren't ready with a new calibration yet?
  2. If you load a spectrum from file, you may not know whether it includes a calibration. You can either try: s = CalSpectrum(file); except: s = RawSpectrum(file), or have some helper function that returns the appropriate object, but it seems clunky to not know what object you're getting.

Instead I would suggest:

  • One Spectrum class
  • A calibrated property which can be True or False
  • An UncalibratedError that gets raised if you try to perform any energy-related operation on an uncalibrated spectrum
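A minimal sketch of that design (class and property names as proposed above; the callable calibration and attribute names are assumptions):

import numpy as np

class UncalibratedError(Exception):
    """Raised on energy-related operations without a calibration."""

class Spectrum:
    def __init__(self, counts, energy_cal=None):
        self.counts = np.asarray(counts)
        self.energy_cal = energy_cal   # e.g., a channels -> keV callable

    @property
    def calibrated(self):
        return self.energy_cal is not None

    @property
    def energies_kev(self):
        if not self.calibrated:
            raise UncalibratedError('spectrum has no energy calibration')
        return self.energy_cal(np.arange(len(self.counts)))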

Thoughts?

Unstable Isotopes in NeutronIrradiation

We are using the Isotope class for RadWatch analysis, specifically for NAA. We found a discrepancy between what NNDC and nucleardata.lu say regarding the stability of Eu-151; NNDC says Eu-151 is not stable. This complicates our attempts to use this class to calculate the concentration of Eu-151 in our samples, because the NeutronIrradiation class currently does not allow an unstable isotope as the initial isotope.

How should channel numbers relate to bins? (bq convention)

This came up at some point previously.

If I'm calibrating energy and I say that channel N corresponds to a certain energy, what does that mean? Is that the energy of the center of the bin indexed by N? Or the energy of the left edge (or right edge)? It affects how we generate a calibration curve from fits in the spectrum. This is a convention we can decide.

I don't feel strongly, since I haven't dealt with binning/rebinning issues in my work much.

Purge internal items before releasing to public

We should purge at least internal meeting notes (both the normal meetings and IPO discussions) before making the repo public. I can't think of anything else that needs to be purged right now.

Counts, CPS, CPS/keV

See also discussion in #27.

Various units are useful in different situations, like normalizing and rebinning. Arithmetic operations on spectra can produce a result that has no meaningful time duration and is only sensible in CPS.

How should these different count units be implemented?
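Whatever the API, the conversions themselves are simple (a sketch on bare arrays):

import numpy as np

counts = np.array([120.0, 95.0, 80.0])     # counts per bin
livetime = 300.0                           # seconds
bin_widths = np.array([1.0, 1.0, 2.0])     # keV

cps = counts / livetime                    # counts per second
cps_per_kev = cps / bin_widths             # counts per second per keV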

package import structure

Currently, becquerel/__init__.py only imports core, parsers, and tools. becquerel/core/__init__.py then imports useful things like the Spectrum class and LinearEnergyCal. Thus:

import becquerel as bq

spec = bq.core.Spectrum.from_file('foo.spe')
...
cal = bq.core.LinearEnergyCal.from_coeffs({'b': 0.37, 'c': -4})
...
bq.core.plot_spectrum(spec)

I suggest importing more core classes and functions at the top level:

import becquerel as bq

spec = bq.Spectrum.from_file('foo.spe')
...
cal = bq.LinearEnergyCal.from_coeffs({'b': 0.37, 'c': -4})
...
bq.plot_spectrum(spec)

Of course, one can always from becquerel.core import Spectrum, ... but this is not always the preferred approach.

This could be done for the main API definitions from tools and/or parsers as well, as long as everything is clearly named. But I think at least core should be in the top-level namespace. Thoughts?
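The proposal amounts to re-exporting in the top-level becquerel/__init__.py, e.g. (a sketch, using the names from the examples above):

# becquerel/__init__.py
from .core import Spectrum, LinearEnergyCal, plot_spectrum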
