GSAS_USE
Anton Gagin and Igor Levin
Please open README.pdf to see the formulas

This is GSAS_USE (Bayesian Statistics Approach to Accounting for Unknown Systematic Errors), an extension to the GSAS-II Rietveld package, written and maintained by Anton Gagin ([email protected], [email protected]).

GSAS_USE addresses the effects of systematic errors in Rietveld refinements. The errors are categorized into multiplicative, additive, and peak-shape types. Corrections for these errors are incorporated into the refinement using a Bayesian statistics approach, with the corrections themselves treated as nuisance parameters and marginalized out of the analysis. Structural parameters refined using the proposed method represent probability-weighted averages over all possible error corrections. See Gagin, A. & Levin, I. (2015). Accounting for Unknown Systematic Errors in Rietveld Refinements: A Bayesian Statistics Approach. J. Appl. Cryst. 48, 1201-1211 for details.

The current version has been tested with GSAS-II version 1.0.0, revision 3420.
For details of the GSAS-II package, refer to Toby, B. H. & Von Dreele, R. B. (2013). J. Appl. Cryst. 46, 544-549, or visit their website.


Table of Contents
1. Installation
2. Usage
3. Description
4. Example
5. Bugs


Installation

To apply this patch, place the patchSystErrors folder in your GSAS-II local folder and run __apply_patch__.py, or type

execfile('__apply_patch__.py')

in a Python command-line interpreter started from the GSAS-II local folder. If everything works correctly, the following message will be displayed:

### based on Diff, Match and Patch Library
###          http://code.google.com/p/google-diff-match-patch/
###          by Neil Fraser
###          Copyright 2006 Google Inc.


--------------------------
This script will patch your current version of the GSAS-II package

Begin [y/n]?

Type y and follow the instructions.
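
Note that execfile exists only in Python 2, which is what GSAS-II used at the revisions this patch targets. If your installation happens to run under Python 3 (an assumption, not covered by the original instructions), the equivalent command would be

exec(open('__apply_patch__.py').read())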

Folder originalOld contains some of the original GSAS-II source files under revision 1970. Folder modifiedOld contains our modifications of these files. The script copies the source files from your current revision of GSAS-II into the originalNew folder. Before applying the patch, please ensure that the local GSAS-II folder contains the original GSAS-II files and not modified versions! __apply_patch__.py calculates the patch from the difference between the files in the originalOld and modifiedOld folders, applies this patch to the files in the originalNew folder, and writes the results to the modifiedNew folder (as well as to the GSAS-II local folder).
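
Conceptually, the patching step resembles the sketch below, built on the Diff, Match and Patch Python port mentioned in the script's banner. The file name is only an example; the real script loops over many files and prompts the user.

# Sketch of the three-way patch idea behind __apply_patch__.py, using the
# diff-match-patch Python port (pip install diff-match-patch).
# The file name is illustrative only.
from diff_match_patch import diff_match_patch

dmp = diff_match_patch()

old_original = open('originalOld/GSASIIstrMath.py').read()   # old GSAS-II file
old_modified = open('modifiedOld/GSASIIstrMath.py').read()   # our modified copy
new_original = open('originalNew/GSASIIstrMath.py').read()   # current GSAS-II file

# Patch = difference between the old original file and our modification of it
patches = dmp.patch_make(old_original, old_modified)

# Re-apply that difference on top of the current revision of the file
new_modified, applied_ok = dmp.patch_apply(patches, new_original)
if all(applied_ok):
    open('modifiedNew/GSASIIstrMath.py', 'w').write(new_modified)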

To restore the original GSAS-II files, run __restore_original__.py.

To update the patch, run __update_patch__.py.

Usage

After the patch has been applied, start GSAS-II normally. In the Controls menu, specify the correction parameters. If several histograms are refined simultaneously, list these parameters, separated by commas, in the order corresponding to the order of the histograms (which may not correspond to their order in the data tree). If you wish to use the same value of a parameter for all histograms, enter a single number. Set $E_{\mu}$, $E_{\beta}$, or $s$ to zero if you do not want to apply a particular correction (multiplicative, additive, or peak-shape, respectively).

If you select Estimate optimal k_mu?, the Prior factor k_mu field will be set to optimal. The same is true for the Estimate optimal k_beta? and Prior factor k_beta fields. Deselecting Estimate optimal k? will restore the previous value in Prior factor k.

If you click on the Correlation length l_delta field, the estimate it as FWHM / field will be set to none, and vice versa. The same is true for the Stdev sigma_delta and estimate it as l_delta / fields.

To start a Bayesian-corrected refinement, select Calculate/Refine in the GSAS-II data tree window. To see refinement results, select Data/Open .lst file or Data/Compare standard and Bayesian fits.

Description

  • The multiplicative correction $\mu(x)$ is approximated by a set of $E_{\mu}$ cubic spline functions $\phi_j^{(\mu)}(x)$ $$ \mu(x) = \sum_{j=1}^{E_{\mu}} \left( 1+c_j^{(\mu)}\right) \phi_j^{(\mu)}(x), $$ where $c_j^{(\mu)}$ are the spline coefficients. Spline-knot positions are selected equidistantly.
    The scaling parameter $k_{\mu}$ reflects the strength of the restriction on closeness of the multiplicative correction to unity. It can be estimated by the program from the residual of a standard fit (no corrections), if Estimate optimal k_mu? is selected.

  • The additive correction is approximated using a set of $E_{\beta}$ cubic spline functions $\phi_j^{(\beta)}(x)$ $$ \beta(x) = \sum_{j=1}^{E_{\beta}} c_j^{(\beta)} \phi_j^{(\beta)}(x). $$ The scaling parameter $k_{\beta}$ reflects the strength of the smoothness restriction on the additive correction.

  • A diffraction profile is corrected by varying the x-coordinates of the individual points of a diffraction curve. The probability of each 'move' $\delta x$ is calculated as $$ p(\delta x) \propto \exp \left( -\frac{1}{2}\delta x^T \Sigma_{\delta}^{-1} \delta x \right), $$ where the covariance matrix $\Sigma_{\delta}$ is defined as $$ \Sigma_{ij}^{(\delta)} = \sigma_{\delta}^2 \exp \left( -\frac{1}{2} \left( \frac{x_i-x_j}{l_{\delta}} \right)^2 \right). $$ The scaling parameters $\sigma_{\delta}$ and $l_{\delta}$ describe the standard deviation of the correction and the correlation length for the point coordinates, respectively. $l_{\delta}$ can be estimated from characteristic FWHM values for the diffraction peaks (which depend on x) as $FWHM /p1$, where $p1$ can be any real number. For a multi-phase refinement, if estimated from FWHM, $l_{\delta}$ is calculated as an average over all phases, weighted by the number of peaks in each phase. Fig. 1 provides a hint on how to select $p1$ for $l_{\delta}$.

$\sigma_{\delta}$ can be estimated from the $l_{\delta}$ value(s) as $l_{\delta}/p2$, where $p2$ can be any real number; normally $p2 \approx 1.5-2$. To reduce the computational complexity (for example, one may get an out-of-memory error for extremely large histograms) and to speed up the calculations, the fitted x-range is divided into $s$ independent segments. A rough numerical sketch of these corrections is given below.
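
This sketch assembles the correction ingredients with numpy/scipy. It only illustrates the formulas above, not the patch's actual code, and all numbers (x grid, knot counts, FWHM, p1, p2) are arbitrary placeholders.

# Numerical sketch of the correction models described above.
# All values (x grid, E_mu, E_beta, FWHM, p1, p2) are illustrative placeholders.
import numpy as np
from scipy.interpolate import BSpline

x = np.linspace(10.0, 120.0, 1000)                  # 2-theta grid (arbitrary)

def spline_basis(x, n_splines, degree=3):
    """Design matrix with one equidistant cubic B-spline phi_j(x) per column."""
    step = (x[-1] - x[0]) / (n_splines - degree)
    knots = x[0] + step * np.arange(-degree, n_splines + 1)
    cols = [BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)(x)
            for j in range(n_splines)]
    return np.nan_to_num(np.column_stack(cols))     # zero outside each support

E_mu, E_beta = 15, 20                               # numbers of spline knots
phi_mu = spline_basis(x, E_mu)
phi_beta = spline_basis(x, E_beta)
c_mu = np.zeros(E_mu)                               # spline coefficients c_j^(mu)
c_beta = np.zeros(E_beta)                           # spline coefficients c_j^(beta)
mu = phi_mu @ (1.0 + c_mu)                          # mu(x)   = sum_j (1+c_j) phi_j(x)
beta = phi_beta @ c_beta                            # beta(x) = sum_j c_j phi_j(x)

# Peak-shape correction: Gaussian covariance for the point shifts delta x
FWHM, p1, p2 = 0.10, 1.5, 2.0                       # placeholder values
l_delta = FWHM / p1                                 # correlation length
sigma_delta = l_delta / p2                          # standard deviation of the shifts
dx = x[:, None] - x[None, :]
Sigma_delta = sigma_delta**2 * np.exp(-0.5 * (dx / l_delta)**2)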

  • The iterative procedure works as follows:

    • a standard fit is performed
    • a Bayesian-corrected fit is performed
    • the optimal corrections are calculated and applied to the experimental data
    • a Bayesian-corrected fit is repeated

The second Bayesian-corrected fit is prone to overfitting because it uses the same correction parameters that have already been applied to the data. We therefore advise limiting the iterative option to cases with large systematic errors.

  • If you select run sampler for MCMC?, the patch will do the following:
    • perform a standard fit
    • call the emcee library and run the Goodman & Weare's Affine Invariant MCMC sampler
    • perform a Bayesian-corrected fit to obtain the final estimates

Results of the MCMC sampler will be saved in a text file and as a plot in the project folder. Prior to using this feature, make sure that the emcee and triangle_plot libraries are installed. A minimal illustration of the sampler interface is given below.
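
For orientation only, this is roughly how the Goodman & Weare affine-invariant sampler is invoked through emcee; the log-probability function and dimensions are placeholders, not the patch's actual posterior.

# Bare-bones emcee usage (Goodman & Weare affine-invariant ensemble sampler).
# The log-probability is a stand-in; the patch supplies its own posterior.
import numpy as np
import emcee

def log_prob(theta):
    return -0.5 * np.sum(theta**2)          # placeholder: standard-normal posterior

ndim, nwalkers, nsteps = 5, 32, 1000
p0 = 1e-3 * np.random.randn(nwalkers, ndim) # initial positions of the walkers

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps)
samples = sampler.get_chain(flat=True)      # (nwalkers * nsteps, ndim) array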

Example

  • Download the example files for a 'Combined X-ray/CW-neutron refinement of PbSO4' from the GSAS-II tutorial. Perform the refinements as described in the tutorial.
  • Deselect all the refinable parameters except for the structural variables, which include 3 lattice parameters, 11 sets of atomic coordinates, and 5 isotropic atomic displacement parameters. MAKE SURE to deselect Background and Histogram scale factor!
  • For this example we want to correct all three types of errors. Set the Number of knots E_mu to
15, 20

(more splines are selected for the XRD data because it exhibits the worse residual). These numbers of knots can be increased up to

30, 45

but this will take longer to calculate. Set Prior factor k_mu to

1, 1
  • Set Number of knots E_beta to
15, 20

and select Estimate optimal k_beta?

  • Set Number of blocks s to
8, 8

To estimate the correlation lengths $l_{\delta}$ and standard deviations $\sigma_{\delta}$, type

1.5

in the estimate it as FWHM / and

2.0

in the estimate it as l_delta / fields, respectively.
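
As an illustration with made-up numbers, if a characteristic peak FWHM for a histogram were 0.12 (in the units of the x-axis), these settings would give $l_{\delta} \approx 0.12/1.5 = 0.08$ and $\sigma_{\delta} \approx 0.08/2.0 = 0.04$.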

  • Select Calculate/Refine in the GSAS-II data tree window. The program will perform a standard least-squares fit followed by a Bayesian-corrected fit. The results will be saved in the projectName.lst file. The details of the Bayesian fit will be stored in the projectName_cor_iHist.txt files, where iHist is the histogram number.

Select Data/Open .lst file to see the GSAS-II .lst project file. The residuals are summarized in the table titled

********************************************************
*
* == SUMMARIZING REFINEMENT RESULTS: ==

The residuals for the Bayesian approach, calculated as a sum of squared residuals, are expected to be larger than those obtained using the standard LS technique, whereas the residuals calculated with the optimal corrections applied are expected to be smaller.

Select Data/Compare standard and Bayesian fits to see fit results. The notation for the parameters is the following:

i::Name:j

Here $i$ and $j$ indicate the histogram and atom numbers, respectively, and $Name$ indicates the parameter name. Note that GSAS-II refines changes in the atomic coordinates rather than their absolute values; these changes are calculated relative to the starting values. Absolute values of the atomic coordinates are given in the .lst project file.

Bugs

To report a bug or ask a question, send an e-mail to both of us ([email protected] and [email protected]). For a bug report, please include the error message and traceback from the console window [text beginning with "Traceback (most recent call..."].

Please cite Gagin, A. & Levin, I. (2015). Accounting for Unknown Systematic Errors in Rietveld Refinements: A Bayesian Statistics Approach. J. Appl. Cryst. 48, 1201-1211 in publications that use this method.
