lammps / lammps

Public development project of the LAMMPS MD software package

Home Page: https://www.lammps.org

License: GNU General Public License v2.0

Python 2.10% Shell 0.39% Tcl 5.93% C 0.97% C++ 86.00% Fortran 0.60% Makefile 0.24% Gnuplot 0.01% Cuda 1.90% Perl 0.24% CMake 0.95% Jupyter Notebook 0.06% Roff 0.29% xBase 0.01% Emacs Lisp 0.01% Arc 0.10% MATLAB 0.04% Awk 0.01% Rich Text Format 0.16% Metal 0.01%
molecular-dynamics lammps simulation kokkos

lammps's Introduction

This is the LAMMPS software package.

LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel
Simulator.

Copyright (2003) Sandia Corporation.  Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software.  This software is distributed under
the GNU General Public License.

----------------------------------------------------------------------

LAMMPS is a classical molecular dynamics simulation code designed to
run efficiently on parallel computers.  It was developed at Sandia
National Laboratories, a US Department of Energy facility, with
funding from the DOE.  It is an open-source code, distributed freely
under the terms of the GNU General Public License (GPL) version 2.

The code is maintained by the LAMMPS development team who can be emailed
at [email protected].  The LAMMPS WWW Site at www.lammps.org has
more information about the code and its uses.

The LAMMPS distribution includes the following files and directories:

README                     this file
LICENSE                    the GNU General Public License (GPL)
bench                      benchmark problems
cmake                      CMake build files
doc                        documentation
examples                   simple test problems
fortran                    Fortran wrapper for LAMMPS
lib                        additional provided or external libraries
potentials                 interatomic potential files
python                     Python wrappers for LAMMPS
src                        source files
tools                      pre- and post-processing tools

Point your browser at any of these files to get started:

https://docs.lammps.org/Manual.html         LAMMPS manual
https://docs.lammps.org/Intro.html          high-level introduction
https://docs.lammps.org/Build.html          how to build LAMMPS
https://docs.lammps.org/Run_head.html       how to run LAMMPS
https://docs.lammps.org/Commands_all.html   Table of available commands
https://docs.lammps.org/Library.html        LAMMPS library interfaces
https://docs.lammps.org/Modify.html         how to modify and extend LAMMPS
https://docs.lammps.org/Developer.html      LAMMPS developer info

You can also create these doc pages locally:

% cd doc
% make html                # creates HTML pages in doc/html
% make pdf                 # creates Manual.pdf


lammps's Issues

comm_modify cutoff/multi option

Adds a cutoff/multi option to comm_modify to specify type-dependent communication cutoffs when in mode multi. Also adds an explanation to the docs of the default behaviour of the cutoff option when in mode single.

Compilation fails with package USER-CUDA on r14624

../verlet_cuda.cpp: In member function ‘virtual void LAMMPS_NS::VerletCuda::setup()’:
../verlet_cuda.cpp:126:17: error: ‘class LAMMPS_NS::Update’ has no member named ‘max_wall’
if (update->max_wall > 0) {
^
../verlet_cuda.cpp:128:35: error: ‘class LAMMPS_NS::Update’ has no member named ‘max_wall’
double totalclock = update->max_wall;
^
../verlet_cuda.cpp: In member function ‘virtual void LAMMPS_NS::VerletCuda::run(int)’:
../verlet_cuda.cpp:658:17: error: ‘class LAMMPS_NS::Update’ has no member named ‘time_expired’
if (update->time_expired()) {
^

implement generic logger class to replace "logfile" and "screen"

We should have a logger class with semantics similar to C++ iostreams, which would make logging simpler and reduce programming effort, since all choices of whether output is sent to "screen" or "logfile" would be delegated to the logger. Using iostream semantics would also allow us to get rid of temporary fixed-size buffers. VMD uses something similar. It would be nice to also have an option to choose verbosity, i.e. assign "urgency" levels, so that output is either very terse or more verbose.
The Error class already serves some of the same purposes, but suffers from requiring a char * argument. Perhaps the two can be combined. Using a little preprocessor trickery, it may be possible to also hide the FLERR macro.
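
A minimal C++ sketch of what such a logger could look like (this is not an existing LAMMPS class; Logger, LogLevel and all member names here are hypothetical):

#include <cstdio>
#include <sstream>
#include <string>

enum LogLevel { TERSE = 0, NORMAL = 1, VERBOSE = 2 };

class Logger {
 public:
  Logger(FILE *screen, FILE *logfile, LogLevel threshold)
    : screen(screen), logfile(logfile), threshold(threshold) {}

  // iostream-style insertion: collect output in an internal buffer
  template <typename T> Logger &operator<<(const T &val) {
    buffer << val;
    return *this;
  }

  // write the buffered message to screen and/or logfile if it is
  // urgent enough, then clear the buffer
  void flush(LogLevel level = NORMAL) {
    if (level <= threshold) {
      const std::string msg = buffer.str();
      if (screen)  fputs(msg.c_str(), screen);
      if (logfile) fputs(msg.c_str(), logfile);
    }
    buffer.str("");
  }

 private:
  FILE *screen, *logfile;
  LogLevel threshold;
  std::ostringstream buffer;
};

A call site would then look like log << "Setting up run ...\n"; log.flush(VERBOSE); with no fixed-size character buffers involved.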

Random crashes in Python wrapper

I've written a script using the Python wrapper version of LAMMPS. This is supposed to integrate a mobile wall, which is responding to the constant external pressure plus the forces coming from the LJ particles (which are self-propelled). While the code produces sensible results, it is plagued by random crashes occurring after anything between 1 second and 24 hours, without any warning or error message. Each crash seems to be preceded by a random drop or increase in the measured force on the wall a few (10-100) timesteps before crashing, which is definitely unphysical and seems to be indicating a communication fault and/or memory error. The errors are much more frequent using higher CPU numbers, but will always occur sooner or later as soon as the CPU number is > 1. The error is furthermore the same when using mpi4py instead of PyPar, so it is unlikely to be related to the MPI wrapper.

Attached is a slightly reduced version of the Python script together with an input file that crashes within a few thousand timesteps on at least two machines when running on 8 CPUs. Version information is as follows:

Lammps-10Aug2015 compiled with icc and openMPI 1.8.4
PyPar 2.1.4.94
Python 2.7.9

Run script (test.py): https://dl.dropboxusercontent.com/u/12407652/Lammps-debug/test.py
Lammps input file (in.test): https://dl.dropboxusercontent.com/u/12407652/Lammps-debug/in.test
Starting config (conf.test): https://dl.dropboxusercontent.com/u/12407652/Lammps-debug/conf.test

It is simply executed by running python with mpirun, i.e., "mpirun -np 8 python test.py"

Kind regards,

Joakim

Fix qeq_fire fails to compile with Intel (16)

I got the following failure to compile:

mpicxx -g -O3 -restrict   -DLAMMPS_GZIP -DLMP_USER_OMP -DLMP_PYTHON -DLMP_MPIIO -DLMP_KOKKOS  -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1    -I/projects/install/rhel6-x86_64/sems/compiler/python/2.7.9/include/python2.7 -I/projects/install/rhel6-x86_64/sems/compiler/python/2.7.9/include/python2.7 -I./ -I../../lib/kokkos/core/src -I../../lib/kokkos/containers/src -I../../lib/kokkos/algorithms/src  --std=c++11   -c ../fix_qeq_fire.cpp
../fix_qeq_fire.cpp(228): warning #3494: a user-provided literal suffix must begin with "_"
        sprintf(str,"Charges did not converge at step "BIGINT_FORMAT
                    ^

../fix_qeq_fire.cpp(228): error: user-defined literal operator not found
        sprintf(str,"Charges did not converge at step "BIGINT_FORMAT
                    ^

compilation aborted for ../fix_qeq_fire.cpp (code 2)

This is with the following modules loaded (which have the compiler versions and so on in the name):

  1) gcc/4.7.2/base                3) intel/16.0.1/openmpi/1.10.1   5) gdb/7.9.1
  2) intel/16.0.1/base             4) git/2.1.3                     6) python/2.7.9

This is the list of enabled packages:

Installed YES: package ASPHERE
Installed YES: package BODY
Installed YES: package CLASS2
Installed YES: package COLLOID
Installed YES: package COMPRESS
Installed YES: package CORESHELL
Installed YES: package DIPOLE
Installed  NO: package GPU
Installed YES: package GRANULAR
Installed  NO: package KIM
Installed YES: package KOKKOS
Installed YES: package KSPACE
Installed YES: package MANYBODY
Installed YES: package MC
Installed  NO: package MEAM
Installed YES: package MISC
Installed YES: package MOLECULE
Installed YES: package MPIIO
Installed YES: package OPT
Installed YES: package PERI
Installed  NO: package POEMS
Installed YES: package PYTHON
Installed YES: package QEQ
Installed  NO: package REAX
Installed YES: package REPLICA
Installed YES: package RIGID
Installed YES: package SHOCK
Installed YES: package SNAP
Installed YES: package SRD
Installed  NO: package VORONOI
Installed YES: package XTC

Installed  NO: package USER-ATC
Installed  NO: package USER-AWPMD
Installed YES: package USER-CG-CMM
Installed  NO: package USER-COLVARS
Installed  NO: package USER-CUDA
Installed YES: package USER-DIFFRACTION
Installed YES: package USER-DPD
Installed YES: package USER-DRUDE
Installed YES: package USER-EFF
Installed YES: package USER-FEP
Installed  NO: package USER-H5MD
Installed  NO: package USER-INTEL
Installed YES: package USER-LB
Installed YES: package USER-MGPT
Installed YES: package USER-MISC
Installed YES: package USER-MOLFILE
Installed YES: package USER-OMP
Installed YES: package USER-PHONON
Installed  NO: package USER-QMMM
Installed YES: package USER-QTB
Installed  NO: package USER-QUIP
Installed YES: package USER-REAXC
Installed  NO: package USER-SMD
Installed YES: package USER-SMTBQ
Installed YES: package USER-SPH
Installed YES: package USER-TALLY
Installed  NO: package USER-VTK

If you need more info let me know.

Enhancements for USER-HADRESS package

  • support for write_data: binary restart files are not very portable between LAMMPS versions and platforms, so people may need a data file to move a simulation from one platform to another, or to have a portable restart after equilibration that can be used independent of later changes. Since settings in the data file are required, write_data should output them.
  • the pow() function is very slow compared to MathSpecial::square(), MathSpecial::cube() and MathSpecial::powint(), which compute the square, the cube, or any integer power of their argument (see the short example after this list).
  • we are in the process of converting all calls that use the polynomial approximation to erfc(), which has only single-precision accuracy, to a full-accuracy double-precision implementation; this also affects coul/dsf and thus your custom pair styles. Please see, port over, and carefully test the changes in https://github.com/akohlmey/lammps/tree/coulomb-analytic-double vs. the lammps-icms branch. Note that this is work in progress.
  • the documentation contains loads of explicit HTML typesetting and special character entities. This creates all kinds of problems when building the HTML/PDF documentation; you can see for yourself by going to the doc folder and typing "make html". It is best to avoid super/subscripts, mixed uppercase/lowercase and special characters, and instead write them out literally and use all lowercase. If you absolutely need to typeset math expressions, try using MathJax instead.
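
For reference, the MathSpecial helpers mentioned above (from src/math_special.h) are used roughly like this; the snippet is only an illustration:

#include "math_special.h"            // provides square(), cube(), powint()
using namespace LAMMPS_NS;
using namespace MathSpecial;

static void example(double x) {
  double x2  = square(x);            // instead of pow(x,2.0)
  double x3  = cube(x);              // instead of pow(x,3.0)
  double x12 = powint(x, 12);        // instead of pow(x,12.0); repeated squaring
  (void)x2; (void)x3; (void)x12;
}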

pair srp is not fully compatible with 64-bit tags

pair/fix srp use a double array to store tags, but use plain casts rather than the ubuf() union for this. That will fail for tags larger than 2^52 (~4.5 quadrillion). This should be done with ubuf(), or a test and warning needs to be added.
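
A sketch of the ubuf() pattern (ubuf is the union defined in src/lmptype.h; the pack/unpack function names here are just for illustration):

#include "lmptype.h"   // defines tagint and the ubuf union
using namespace LAMMPS_NS;

// store tags in a double buffer without loss by copying the bit pattern
static void pack_tags(double *buf, const tagint *tag, int n) {
  for (int i = 0; i < n; ++i)
    buf[i] = ubuf(tag[i]).d;             // not a value cast to double
}

// recover the exact 64-bit tag from the stored bit pattern
static void unpack_tags(const double *buf, tagint *tag, int n) {
  for (int i = 0; i < n; ++i)
    tag[i] = (tagint) ubuf(buf[i]).i;
}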

"clear" command doesn't free all memory with reax/c

From lammps-users:

Hello
I've encountered a problem with the "clear" command. It doesn't free memory when used along with pair_style reax/c and two 'run' commands in a loop. I'm attaching a simple input script which shows the problem. I'm using LAMMPS from 6 Nov 2015.
Cheers Michal

label                   loopa
clear
units                   real
atom_style              charge
region                  universe block -10 10 -10 10 -10 10 units box
lattice                 sc 3
create_box              1 universe
mass                    1 12.09
create_atoms                1 box
pair_style              reax/c NULL checkqeq no
pair_coeff              * * ffield.reax.cho C
run                     0 
run                     0
jump                    SELF loopa

Initialize pointers to zero before any errors can be thrown

When LAMMPS is used as a library, it can make sense to keep the calling process running after a LAMMPS error (e.g. an invalid command). As of now, several of the classes with arrays have pointers that aren't set to NULL before the first possible error.

One example is fix_ave_chunk.cpp which can throw an error in the first line in the constructor:
if (narg < 7) error->all(FLERR,"Illegal fix ave/chunk command");

where the pointer count_list isn't set to NULL until the end of the constructor.

What can happen is that an error is thrown before the pointers were NULLed, so that when the LAMMPS destructor is called it tries to free memory that was never allocated, because the pointer value is non-zero garbage.

What to do

In all classes, NULL all pointers before any error can be thrown.
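
A generic C++ sketch of the pattern (not actual LAMMPS code; the throw stands in for error->all(), and the member names are illustrative):

#include <stdexcept>

class ExampleFix {
 public:
  ExampleFix(int narg)
    : count_list(nullptr), data(nullptr)   // NULL everything up front
  {
    // an error thrown here is now safe: the destructor only sees NULL pointers
    if (narg < 7) throw std::invalid_argument("Illegal fix ave/chunk command");
    count_list = new int[narg];
    data = new double[narg];
  }
  ~ExampleFix() {
    delete[] count_list;    // delete[] on a null pointer is a no-op
    delete[] data;
  }
 private:
  int *count_list;
  double *data;
};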

List of remaining directories:

  • src
  • src/ASPHERE
  • src/BODY
  • src/CLASS2
  • src/COLLOID
  • src/COMPRESS
  • src/CORESHELL
  • src/DIPOLE
  • src/GPU
  • src/GRANULAR
  • src/KIM
  • src/KOKKOS
  • src/KSPACE
  • src/MANYBODY
  • src/MC
  • src/MEAM
  • src/MISC
  • src/MOLECULE
  • src/MPIIO
  • src/OPT
  • src/PERI
  • src/POEMS
  • src/PYTHON
  • src/QEQ
  • src/REPLICA
  • src/RIGID
  • src/SHOCK
  • src/SNAP
  • src/SRD
  • src/VORONOI
  • src/USER-ATC
  • src/USER-AWPMD
  • src/USER-CG
  • src/USER-COLVARS
  • src/USER-DIFFRACTION
  • src/USER-DPD
  • src/USER-DRUDE
  • src/USER-EFF
  • src/USER-FEP
  • src/USER-H5MD
  • src/USER-HADRESS
  • src/USER-INTEL
  • src/USER-LB
  • src/USER-MANIFOLD
  • src/USER-MGPT
  • src/USER-MISC
  • src/USER-MOLFILE
  • src/USER-OMP
  • src/USER-PHONON
  • src/USER-QMMM
  • src/USER-QTB
  • src/USER-QUIP
  • src/USER-REAXC
  • src/USER-SMD
  • src/USER-SMTBQ
  • src/USER-SPH
  • src/USER-TALLY
  • src/USER-VTK

[Feature Request] Long-range point-dipole solver

LAMMPS has an Ewald-based long-range dipole solver, but not a more efficient PPPM version. With increasing support for more complex force fields and interest in point-dipole polarizable force fields, this option would be needed for simulating larger systems efficiently.

problem with hybrid sw and lj with GPU package

From lammps-users:

I want to simulate a system that contains mW water with the SW potential, hybridized
with an LJ potential for another particle type. Everything is OK when I just run this
system with MPI, but when I try to use the GPU to accelerate it, thermo output such
as the pressure becomes NaN and the system eventually crashes.
(The neighbor list does not use GPU acceleration; I also checked that GPU acceleration
works with the example input files.)

1. the version I use most often is 15May2015, but I also tried the 3May2016 version
    and had the same problem
2. (a) Linux version 2.6.32-573.12.1.el6.x86_64 ([email protected])
         (gcc version 4.4.7 20120313 (Red   Hat 4.4.7-16) (GCC) )
    (b) CPU : Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz  ( 8 core)
    (c) GPU : Tesla C2075
                   Nvidia-smi 352.68
    (d) the package is gpu on pair force ( sw/gpu and lj/gpu)
    (e) at first, I use the double precision, and also single give me the same problem

The files are all attached.

Thanks for all your assistance

Yi-Xian 

mw.sw.txt
data.input.txt
in.nanobubble.txt
forcefield.input.txt

Possible data corruption / incomplete initialization for compute voronoi/atom with dynamic groups

Report from lammps-users:

Hi all:
When I use the command: compute 1 all voronoi/atom occupation, I get very large occupation numbers like this:

4005 1 28.5721 24.6568 19.6988 1.74533e+07 2
4006 1 16.1782 29.0215 34.2943 0 2
4007 1 26.6656 12.3828 15.4321 1.75695e+07 2
4008 1 0.702219 6.15419 15.3856 0 2
4009 1 29.9978 26.4226 19.2562 0 2
4010 1 19.0529 20.7435 7.05381 0 2
4011 1 10.4844 7.46817 9.03348 0 2
4012 1 21.0488 2.33835 0.101724 0 2
4013 1 26.6171 12.219 13.1767 48 2
4014 1 0.689011 5.05151 17.792 0 2
4015 1 31.3881 28.2242 18.8385 65552 2

In this case, I just randomly create atoms in the cell and then test this command. Is this a bug, or can this command not be used when the number of atoms changes? I used the latest version of LAMMPS.
Thanks.
Shijun Zhao

Include dihedral style spherical

From lammps-users by Andrew Jewett:

I've attached code, docs, and a test system for a dihedral_style which is very handy for the coarse-grained polymer simulations I am running.

"For this dihedral style, the energy can be any function that combines the 4-body dihedral-angle (phi) and the two 3-body bond-angles (theta1, theta2)."

I have not benchmarked it, but it should be reasonably efficient. Adding terms to the series should not increase the computation time by that much. I tried to minimize the number of redundant trig function calls (although there are probably ways to optimize this further).
Cheers

Andrew

Allow setting at which respa_level a fix is run

Most fixes that support run style respa and add/modify forces are run at the outermost respa level. We could make this configurable via an extension to the fix_modify command. The following steps would be needed:

  • add a respa_level flag to fix.h with a default value of -1, indicating that the outermost level is used.
  • in a fix that supports this extension, add a Fix::modify_param() method that responds to the "respa_level" keyword. Allowed values: -1 and 1 to nlevels_respa. Store either -1 or the value minus 1 in respa_level.
  • rather than testing ilevel against nlevels_respa-1, test it against respa_level, so that the _respa() method is run at the desired level.

Example use:

run_style respa 2 5 bond 1 pair 2
fix tether all spring/self 10.0
fix_modify tether respa_level 1

This would run fix spring/self at respa level 1 together with the bond forces, instead of level 2 together with pair and kspace.
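
A rough C++ sketch of what the corresponding code change could look like (this is a proposal, not existing LAMMPS code; respa_level and nlevels_respa are the members suggested above, and fix spring/self is used as the example):

#include <cstring>
#include <cstdlib>

// respond to "fix_modify <ID> respa_level <n>"
int FixSpringSelf::modify_param(int narg, char **arg)
{
  if (strcmp(arg[0],"respa_level") == 0) {
    if (narg < 2) error->all(FLERR,"Illegal fix_modify command");
    int level = atoi(arg[1]);
    if (level == -1) respa_level = -1;                         // outermost (default)
    else if (level >= 1 && level <= nlevels_respa) respa_level = level - 1;
    else error->all(FLERR,"Illegal fix_modify respa_level value");
    return 2;
  }
  return 0;
}

// run the fix at the requested level instead of always at the outermost one
void FixSpringSelf::post_force_respa(int vflag, int ilevel, int /*iloop*/)
{
  int target = (respa_level < 0) ? nlevels_respa - 1 : respa_level;
  if (ilevel == target) post_force(vflag);
}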

Discontinuity of energy in reaxFF

Reported by Michał Kański on lammps-users:

Hello all,

I encountered a discontinuity of energy when using ReaxFF. Namely, there is a rapid change of energy (a few kcal/mol) when a bond in a diatomic molecule is broken. The issue also occurs when an atom is detached from a molecule. I checked the code and I believe that the change occurs because the undercoordination term is not calculated when an atom does not have any bond. The simplest resolution I have found is removing the if-statements from reaxc_multibody.cpp (lines 190, 205, 211) which restrict calculation of the undercoordination term to bonded atoms.

Could someone confirm the presence of the issue?

I’m attaching a simple input script which shows the problem. I used the 31 May 2016 version of LAMMPS.

Cheers,
Michal

discontinuity_reaxFF.in.txt

pair_comb_omp fails to compile with NVCC

Using nvcc, this pair style looks like it is failing to compile. I am 99% convinced this is an issue with NVCC, not with the code, though. For now I'd like to document it here. I will also file a bug report with NVIDIA. That said, we could potentially use a workaround for the use case where this hurts (i.e. Kokkos and OMP are enabled, and Kokkos is compiled for Cuda+OpenMP). That workaround would be to replace the OpenMP atomic with a Kokkos atomic (which has a simple pointer interface and doesn't rely on Kokkos being initialized or anything else) if KOKKOS_HAVE_CUDA is defined. Not sure if that is acceptable. The only other thing I can do right now is to remove that pair style from the compilation.

mpicxx -g -O3   -DLAMMPS_GZIP -DLMP_USER_OMP -DLMP_PYTHON -DLMP_MPIIO -DLMP_KOKKOS  -DMPICH_SKIP_MPICXX -DOMPI_SKIP_MPICXX=1    -I/projects/install/rhel6-x86_64/sems/compiler/python/2.7.9/include/python2.7 -I/projects/install/rhel6-x86_64/sems/compiler/python/2.7.9/include/python2.7 -I./ -I../../lib/kokkos/core/src -I../../lib/kokkos/containers/src -I../../lib/kokkos/algorithms/src  --std=c++11 -Xcompiler -fopenmp   -c ../pair_comb_omp.cpp
../domain_kokkos.h(27): warning: overloaded virtual function "LAMMPS_NS::Domain::x2lamda" is only partially overridden in class "LAMMPS_NS::DomainKokkos"

../domain_kokkos.h(27): warning: overloaded virtual function "LAMMPS_NS::Domain::lamda2x" is only partially overridden in class "LAMMPS_NS::DomainKokkos"

../pair_comb_omp.cpp(419): warning: variable "fqj" was set but never used

../pair_comb_omp.cpp(88): warning: variable "ecoul" was set but never used

../domain_kokkos.h(27): warning: overloaded virtual function "LAMMPS_NS::Domain::x2lamda" is only partially overridden in class "LAMMPS_NS::DomainKokkos"

../domain_kokkos.h(27): warning: overloaded virtual function "LAMMPS_NS::Domain::lamda2x" is only partially overridden in class "LAMMPS_NS::DomainKokkos"

../pair_comb_omp.cpp(88): warning: variable "ecoul" was set but never used

../pair_comb_omp.cpp(419): warning: variable "fqj" was set but never used

../pair_comb_omp.cpp: In member function ‘virtual double LAMMPS_NS::PairCombOMP::yasu_char(double*, int&)’:
../pair_comb_omp.cpp:522:1: error: expected primary-expression before ‘}’ token
 #endif

Bug in fix rigid integrator with nose-hoover thermostat when using fix_modify temp

Reported by Reese Jones in personal e-mail:

Hi Steve and Axel,
it looks like fix_rigid_nh.cpp has a cut-and-paste error compounded by char pointers not being initialized:

rjones@vikramarka:src$ diff fix_rigid_nh.cpp RIGID/
151,153d150
< 
<   id_temp = NULL;
<   id_press = NULL;
1271c1268
<     if (!tstat_flag) error->all(FLERR,"Illegal fix_modify command");

---
>     if (!pstat_flag) error->all(FLERR,"Illegal fix_modify command");
1276c1273
<     if (id_temp) delete [] id_temp;

---
>     delete [] id_temp;
1310c1307
<     if (id_press) delete [] id_press;

---
>     delete [] id_press;

basically it looks like the flag to indicate the presence of a barostat was used where the flag for a thermostat should have been. The effect was to invalidate changing the temperature definition with fix_modify.

What I did was pretty cursory, but it is an improvement: I can run the fix in nvt mode and change the temp definition. If you'd like, I can go through the code and do a more thorough job; there seem to be other errors in the parsing logic.

thanks,
Reese

[Feature Request] Convert analytical approximation for erfc() to use double precision version

Apply the following change to all coul/long pair styles; it will significantly improve the accuracy and precision of computing the analytical Coulomb potential for use with long-range solvers.

diff --git a/src/KSPACE/pair_lj_charmm_coul_long.cpp b/src/KSPACE/pair_lj_charmm_coul_long.cpp
index 2c709a3..f4695d9 100644
--- a/src/KSPACE/pair_lj_charmm_coul_long.cpp
+++ b/src/KSPACE/pair_lj_charmm_coul_long.cpp
@@ -30,18 +30,14 @@
 #include "neighbor.h"
 #include "neigh_list.h"
 #include "neigh_request.h"
+#include "math_special.h"
+#include "math_const.h"
 #include "memory.h"
 #include "error.h"

 using namespace LAMMPS_NS;
-
-#define EWALD_F   1.12837917
-#define EWALD_P   0.3275911
-#define A1        0.254829592
-#define A2       -0.284496736
-#define A3        1.421413741
-#define A4       -1.453152027
-#define A5        1.061405429
+using namespace MathSpecial;
+using namespace MathConst;

 /* ---------------------------------------------------------------------- */

@@ -144,11 +140,10 @@ void PairLJCharmmCoulLong::compute(int eflag, int vflag)
           if (!ncoultablebits || rsq <= tabinnersq) {
             r = sqrt(rsq);
             grij = g_ewald * r;
-            expm2 = exp(-grij*grij);
-            t = 1.0 / (1.0 + EWALD_P*grij);
-            erfc = t * (A1+t*(A2+t*(A3+t*(A4+t*A5)))) * expm2;
+            expm2 = expmsq(grij);
+            erfc = my_erfcx(grij) * expm2;
             prefactor = qqrd2e * qtmp*q[j]/r;
-            forcecoul = prefactor * (erfc + EWALD_F*grij*expm2);
+            forcecoul = prefactor * (erfc + MY_ISPI4*grij*expm2);
             if (factor_coul < 1.0) forcecoul -= (1.0-factor_coul)*prefactor;
           } else {
             union_int_float_t rsq_lookup;
@@ -479,11 +474,10 @@ void PairLJCharmmCoulLong::compute_outer(int eflag, int vflag)
           if (!ncoultablebits || rsq <= tabinnersq) {
             r = sqrt(rsq);
             grij = g_ewald * r;
-            expm2 = exp(-grij*grij);
-            t = 1.0 / (1.0 + EWALD_P*grij);
-            erfc = t * (A1+t*(A2+t*(A3+t*(A4+t*A5)))) * expm2;
+            expm2 = expmsq(grij);
+            erfc = my_erfcx(grij) * expm2;
             prefactor = qqrd2e * qtmp*q[j]/r;
-            forcecoul = prefactor * (erfc + EWALD_F*grij*expm2 - 1.0);
+            forcecoul = prefactor * (erfc + MY_ISPI4*grij*expm2 - 1.0);
             if (rsq > cut_in_off_sq) {
               if (rsq < cut_in_on_sq) {
                 rsw = (r - cut_in_off)/cut_in_diff;
@@ -572,7 +566,7 @@ void PairLJCharmmCoulLong::compute_outer(int eflag, int vflag)
         if (vflag) {
           if (rsq < cut_coulsq) {
             if (!ncoultablebits || rsq <= tabinnersq) {
-              forcecoul = prefactor * (erfc + EWALD_F*grij*expm2);
+              forcecoul = prefactor * (erfc + MY_ISPI4*grij*expm2);
               if (factor_coul < 1.0) forcecoul -= (1.0-factor_coul)*prefactor;
             } else {
               table = vtable[itable] + fraction*dvtable[itable];
@@ -966,11 +960,10 @@ double PairLJCharmmCoulLong::single(int i, int j, int itype, int jtype,
     if (!ncoultablebits || rsq <= tabinnersq) {
       r = sqrt(rsq);
       grij = g_ewald * r;
-      expm2 = exp(-grij*grij);
-      t = 1.0 / (1.0 + EWALD_P*grij);
-      erfc = t * (A1+t*(A2+t*(A3+t*(A4+t*A5)))) * expm2;
+      expm2 = expmsq(grij);
+      erfc = my_erfcx(grij) * expm2;
       prefactor = force->qqrd2e * atom->q[i]*atom->q[j]/r;
-      forcecoul = prefactor * (erfc + EWALD_F*grij*expm2);
+      forcecoul = prefactor * (erfc + MY_ISPI4*grij*expm2);
       if (factor_coul < 1.0) forcecoul -= (1.0-factor_coul)*prefactor;
     } else {
       union_int_float_t rsq_lookup;

Similar for pppm styles:

diff --git a/src/KSPACE/pppm.cpp b/src/KSPACE/pppm.cpp
index 9b18ad8..4ff1bd8 100644
--- a/src/KSPACE/pppm.cpp
+++ b/src/KSPACE/pppm.cpp
@@ -1200,23 +1200,24 @@ double PPPM::compute_qopt()
           sum2 = 0.0;
           sum3 = 0.0;
           sum4 = 0.0;
+          const double inv_gew = 1.0/g_ewald;
           for (nx = -2; nx <= 2; nx++) {
             qx = unitkx*(kper+nx_pppm*nx);
-            sx = exp(-0.25*square(qx/g_ewald));
+            sx = expmsq(0.5*qx*inv_gew);
             argx = 0.5*qx*xprd/nx_pppm;
             wx = powsinxx(argx,twoorder);
             qx *= qx;

             for (ny = -2; ny <= 2; ny++) {
               qy = unitky*(lper+ny_pppm*ny);
-              sy = exp(-0.25*square(qy/g_ewald));
+              sy = expmsq(0.5*qy*inv_gew);
               argy = 0.5*qy*yprd/ny_pppm;
               wy = powsinxx(argy,twoorder);
               qy *= qy;

               for (nz = -2; nz <= 2; nz++) {
                 qz = unitkz*(mper+nz_pppm*nz);
-                sz = exp(-0.25*square(qz/g_ewald));
+                sz = expmsq(0.5*qz*inv_gew);
                 argz = 0.5*qz*zprd_slab/nz_pppm;
                 wz = powsinxx(argz,twoorder);
                 qz *= qz;
@@ -1288,7 +1289,7 @@ double PPPM::newton_raphson_f()
   double zprd = domain->zprd;
   bigint natoms = atom->natoms;

-  double df_rspace = 2.0*q2*exp(-g_ewald*g_ewald*cutoff*cutoff) /
+  double df_rspace = 2.0*q2*expmsq(g_ewald*cutoff) /
        sqrt(natoms*cutoff*xprd*yprd*zprd);

   double df_kspace = compute_df_kspace();
@@ -1329,7 +1330,7 @@ double PPPM::final_accuracy()

   double df_kspace = compute_df_kspace();
   double q2_over_sqrt = q2 / sqrt(natoms*cutoff*xprd*yprd*zprd);
-  double df_rspace = 2.0 * q2_over_sqrt * exp(-g_ewald*g_ewald*cutoff*cutoff);
+  double df_rspace = 2.0 * q2_over_sqrt * expmsq(g_ewald*cutoff);
   double df_table = estimate_table_accuracy(q2_over_sqrt,df_rspace);
   double estimated_accuracy = sqrt(df_kspace*df_kspace + df_rspace*df_rspace +
                                    df_table*df_table);
@@ -1570,22 +1571,23 @@ void PPPM::compute_gf_ik()
           numerator = 12.5663706/sqk;
           denominator = gf_denom(snx,sny,snz);
           sum1 = 0.0;
+          const double inv_gew = 1.0/g_ewald;

           for (nx = -nbx; nx <= nbx; nx++) {
             qx = unitkx*(kper+nx_pppm*nx);
-            sx = exp(-0.25*square(qx/g_ewald));
+            sx = expmsq(0.5*qx*inv_gew);
             argx = 0.5*qx*xprd/nx_pppm;
             wx = powsinxx(argx,twoorder);

             for (ny = -nby; ny <= nby; ny++) {
               qy = unitky*(lper+ny_pppm*ny);
-              sy = exp(-0.25*square(qy/g_ewald));
+              sy = expmsq(0.5*qy*inv_gew);
               argy = 0.5*qy*yprd/ny_pppm;
               wy = powsinxx(argy,twoorder);

               for (nz = -nbz; nz <= nbz; nz++) {
                 qz = unitkz*(mper+nz_pppm*nz);
-                sz = exp(-0.25*square(qz/g_ewald));
+                sz = expmsq(0.5*qz*inv_gew);
                 argz = 0.5*qz*zprd_slab/nz_pppm;
                 wz = powsinxx(argz,twoorder);

@@ -1653,6 +1655,7 @@ void PPPM::compute_gf_ik_triclinic()
           numerator = 12.5663706/sqk;
           denominator = gf_denom(snx,sny,snz);
           sum1 = 0.0;
+          const double inv_gew = 1.0/g_ewald;

           for (nx = -nbx; nx <= nbx; nx++) {
             argx = MY_PI*kper/nx_pppm + MY_PI*nx;
@@ -1673,13 +1676,13 @@ void PPPM::compute_gf_ik_triclinic()
                 x2lamdaT(&b[0],&b[0]);

                 qx = unitk_lamda[0]+b[0];
-                sx = exp(-0.25*square(qx/g_ewald));
+                sx = expmsq(0.5*qx*inv_gew);

                 qy = unitk_lamda[1]+b[1];
-                sy = exp(-0.25*square(qy/g_ewald));
+                sy = expmsq(0.5*qy*inv_gew);

                 qz = unitk_lamda[2]+b[2];
-                sz = exp(-0.25*square(qz/g_ewald));
+                sz = expmsq(0.5*qz*inv_gew);

                 dot1 = unitk_lamda[0]*qx + unitk_lamda[1]*qy + unitk_lamda[2]*qz;
                 dot2 = qx*qx+qy*qy+qz*qz;
@@ -1716,6 +1719,7 @@ void PPPM::compute_gf_ad()
   int k,l,m,n,kper,lper,mper;

   const int twoorder = 2*order;
+  const double inv_gew = 1.0/g_ewald;

   for (int i = 0; i < 6; i++) sf_coeff[i] = 0.0;

@@ -1724,7 +1728,7 @@ void PPPM::compute_gf_ad()
     mper = m - nz_pppm*(2*m/nz_pppm);
     qz = unitkz*mper;
     snz = square(sin(0.5*qz*zprd_slab/nz_pppm));
-    sz = exp(-0.25*square(qz/g_ewald));
+    sz = expmsq(0.5*qz*inv_gew);
     argz = 0.5*qz*zprd_slab/nz_pppm;
     wz = powsinxx(argz,twoorder);

@@ -1732,7 +1736,7 @@ void PPPM::compute_gf_ad()
       lper = l - ny_pppm*(2*l/ny_pppm);
       qy = unitky*lper;
       sny = square(sin(0.5*qy*yprd/ny_pppm));
-      sy = exp(-0.25*square(qy/g_ewald));
+      sy = expmsq(0.5*qy*inv_gew);
       argy = 0.5*qy*yprd/ny_pppm;
       wy = powsinxx(argy,twoorder);

@@ -1740,7 +1744,7 @@ void PPPM::compute_gf_ad()
         kper = k - nx_pppm*(2*k/nx_pppm);
         qx = unitkx*kper;
         snx = square(sin(0.5*qx*xprd/nx_pppm));
-        sx = exp(-0.25*square(qx/g_ewald));
+        sx = expmsq(0.5*qx*inv_gew);
         argx = 0.5*qx*xprd/nx_pppm;
         wx = powsinxx(argx,twoorder);

Apparent Bug in Fix GCMC

Reported on lammps-users by Karl Hammond:

There seems to be a difference in how fix gcmc operates when hybrid vs.
non-hybrid potentials are used. In particular, the use of pair_style
hybrid causes a segmentation fault at the end of a run, and the number
of insertions and (especially) deletions doesn't match between otherwise
identical runs.

The following minimum working example seems to reproduce the problem;
uncomment the four lines in the input file to "fix" the problem:

** FILE mymol.txt **:


3 atoms

Coords

1 0 0 -0.549
2 0 0 0.549
3 0 0 0

Types

1 1
2 1
3 2

** FILE in.test **

units metal
atom_style bond
region myRegion block 0 10 0 10 0 10
create_box 2 myRegion
mass * 1
molecule nitrogens mymol.txt
pair_style hybrid lj/cut 12
pair_coeff 1 1 lj/cut 0.003140 3.32
pair_coeff 1 2 lj/cut 0 3.3
pair_coeff 2 2 lj/cut 0 3.3

#pair_style lj/cut 12
#pair_coeff 1 1 0.003140 3.32
#pair_coeff 1 2 0 3.3
#pair_coeff 2 2 0 3.3

fix 1 all gcmc 5000 40 500 0 1234 770 123 1 pressure 200 mol nitrogens
thermo_style custom step atoms press temp f_1[4] f_1[6] f_1[1] f_1[2]
run 100000

Replace Makefile.omp with non-MPI version

Right now, the Makefile.omp looks like

CC = mpicxx
CCFLAGS = -g -O3 -restrict -fopenmp

which doesn't work without MPI installed. I suggest changing the current Makefile.omp to

CC = g++
CCFLAGS = -g -O3 -restrict -fopenmp

and adding -fopenmp as a default flag in e.g. Makefile.g++_openmpi.

Make python wrapper examples python3 compatible

After the lammps module itself has been checked and improved for Python 3 compatibility, we should also upgrade the various input examples to run with both Python 2 (say, from 2.6 (2.5?) onward) and Python 3.

charmm2lammps counting one atom type less

Dear All;

I have a problem with charmm2lammps. Hope someone can help. In short, charmm2lammps is counting one atom type less in the data files.

I want to use charmm2lammps to convert PSF/PDB files into LAMMPS DATA/IN format. So, I decided to test it with a TIP3P water box. First, I created a solvated box with VMD (solvate.psf, solvate.pdb). Second, I created topology and parameter files for TIP3P (top_TIP3P.rtf, par_TIP3P.prm); these were taken from the CHARMM force field and are provided below. Finally, I ran charmm2lammps as follows:

>> perl charmm2lammps.pl TIP3P solvate

During the execution, I got the following warning:

>> Warning: 1 atom types present, but only 2 pair coeffs found

then, when trying to run lammps, I got the following error:

>> ERROR: Unknown identifier in data file: 2    15.9994 (../read_data.cpp:654)

So far, the problem seems to be that charmm2lammps writes one atom type too few; if you open the data file, you read:

>>  1  atom types

But all other information about the two atom types is written in the DATA file, i.e. masses, parameters and so on. If I open the data file and correct that line to:

>>  2  atom types

the minimization/simulation proceeds.

I got a similar error when I built a protein/water system, which has many more atom types. In that case, charmm2lammps still counted one atom type too few.

So, my questions: is this "atom type" counting a minor bug? Am I doing something wrong? Are my topology/parameter files wrong? Although simulations can run now, I got a bit apprehensive after this error, and I am wondering if charmm2lammps is still doing OK, as the last update was in 2005.

Regards;

Eduardo

#### par_TIP3P.prm
* Toplogy and parameter information for water and ions.
*

BONDS
!
!V(bond) = Kb(b - b0)**2
!
!Kb: kcal/mole/A**2
!b0: A
!
!atom type Kb          b0
!
HT    HT      0.0       1.5139  ! from TIPS3P geometry (for SHAKE w/PARAM)
HT    OT    450.0       0.9572  ! from TIPS3P geometry


ANGLES
!
!V(angle) = Ktheta(Theta - Theta0)**2
!
!V(Urey-Bradley) = Kub(S - S0)**2
!
!Ktheta: kcal/mole/rad**2
!Theta0: degrees
!Kub: kcal/mole/A**2 (Urey-Bradley)
!S0: A
!
!atom types     Ktheta    Theta0   Kub     S0
!
HT   OT   HT     55.0      104.52   ! FROM TIPS3P GEOMETRY

NONBONDED nbxmod  5 atom cdiel fshift vatom vdistance vfswitch -
!TIP3P LJ parameters
HT       0.0       -0.046     0.2245
OT       0.0       -0.1521    1.7682

END
##### top_TIP3P.rtf
31 1

MASS  1   HT    1.00800 H  ! TIPS3P WATER HYDROGEN
MASS  2   OT   15.99940 O  ! TIPS3P WATER OXYGEN

AUTOGENERATE ANGLES DIHE

RESI TIP3         0.000 ! tip3p water model, generate using noangle nodihedral
GROUP
ATOM OH2  OT     -0.834
ATOM H1   HT      0.417
ATOM H2   HT      0.417
BOND OH2 H1 OH2 H2 H1 H2    ! the last bond is needed for shake
ANGLE H1 OH2 H2             ! required
DONOR H1 OH2
DONOR H2 OH2
ACCEPTOR OH2
PATCHING FIRS NONE LAST NONE

END

solvate.psf.txt
solvate.pdb.txt

[Feature Request] USER-DPD rx should not hardcode indices into atom->dname and atom->dvector arrays

The various "rx" components in USER-DPD has an arbitrary requirement, that no other feature in LAMMPS may use fix property/atom. This is against the modular design of LAMMPS and not really needed. For the same reason, there is no need to define two fix property/atom instances; one should be enough. all that would be needed to achieve the same features, would be generating a table (or map) that identifies which specific index in the atom->dname[] array matches with the desired property and then access them accordingly.

Extend output in finish.cpp to correctly reflect threads used by KOKKOS

Currently the run summary output will report OpenMP threads only in relation to their use in the USER-OMP package. In cases where the package is not installed, but KOKKOS is used with OpenMP threads, the output claims that no threads are used. This needs to be amended to report KOKKOS threads as well.

improper_style hybrid crashes with Intel

From lammps-users:

Dear all,

there seems to be an incompatibility between improper_style hybrid and cvff/intel.
With this combination I get a reproducible, system-independent segmentation fault.
Either one on its own works without problems.

Sebastian

Disabling integration in chosen directions for particles in a region

I am simulating polymers near an implicit wall with the Steele 10-4-3 potential. Say the wall spans x and y at a constant z. I would like to restrict movement in the x,y directions for particles within some distance of the wall, preventing free sliding along the wall.

  1. Is this already possible through existing lammps features I've missed?
  2. If not, and I implement, say, a fix nve/regionfreeze, would it be worth adding to core LAMMPS? I.e., should I put in the effort to make this more general than a one-time personal hack?

Thank you for your time.

Remove dependency of all pair styles on accelerate_kokkos.h

Currently all pair styles, and styles depending on pair.h, are recompiled when the KOKKOS package is installed or uninstalled. This is substantial overhead with no benefit, since the cause is Kokkos-specific functions in pair.cpp. Those functions should be moved into a PairKokkos class or similar, so that only KOKKOS-based classes reference them and fewer files get recompiled.

Bug in Compute temp/chunk

Posted on lammps-users:

Dear Developers,

I believe there is a bug in the remove_bias() and restore_bias() functions in ComputeTempChunk. This issue will cause a seg fault when using temp/chunk with the keyword "com yes" as the temperature compute in fix nvt. I think it's just an off-by-one error:

L692 and L729 of compute_temp_chunk.cpp (May 14 2016 version):

int index = cchunk->ichunk[i];
should be changed to:
int index = cchunk->ichunk[i]-1;

The same correction may also be needed for remove_bias_all() and restore_bias_all() on L714 and L752. Also, I'm a little confused about whether these two functions are correct: as far as I can tell, vbias is never set in temp/chunk. I may be mistaken, but I think L716, for example, should be changed to:

v[i][0] -= vcmall[index][0];

Thanks,
-David
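
For clarity, a hedged sketch of what the corrected single-atom bias removal would look like (ichunk[] holds 1-based chunk IDs; this illustrates the proposed fix and is not verbatim LAMMPS code):

void ComputeTempChunk::remove_bias(int i, double *v)
{
  int index = cchunk->ichunk[i] - 1;    // chunk IDs run 1..nchunk, arrays are 0-based
  if (index < 0) return;                // atom is not assigned to any chunk
  v[0] -= vcmall[index][0];
  v[1] -= vcmall[index][1];
  v[2] -= vcmall[index][2];
}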

PyLammps: Better error output while using the library interface

Currently, any error in LAMMPS will trigger an exit() or MPI_Abort() and kill off the entire process. In many cases, this is not the ideal exit strategy. E.g., in IPython notebooks this leads to a dead kernel. The error message is only dumped to stderr; since the parent process is also killed, there is no way to properly react and display that message.
Other codes such as GUIs like Atomify(@andeplane) have a similar problem and have worked around it by patching LAMMPS.

A cleaner way of reacting is to use exceptions. In the Python case, this would mean throwing a C++ exception instead of an exit or MPI_Abort, detecting the error condition and rethrowing it as Python exception. In many cases, this will still require the kernel to be restarted, but at least now we would know what is going on.

The main argument against using C++ exceptions is the performance implication caused by bad older compilers. AFAIK most modern compilers now implement a zero-overhead strategy using tables; see the Technical Report on C++ Performance (5.4.1.2, page 39). This basically means the compiler generates a lookup table for catch locations which is only accessed during a throw. The capability of unwinding the stack is enabled by default and has not been disabled by the LAMMPS Makefiles, so we shouldn't see a change to the status quo.
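
A self-contained C++ sketch of the idea (names like LAMMPSException and the wrapper function are illustrative, not the final API):

#include <cstdio>
#include <exception>
#include <string>

class LAMMPSException : public std::exception {
 public:
  explicit LAMMPSException(const std::string &msg) : message(msg) {}
  const char *what() const noexcept override { return message.c_str(); }
 private:
  std::string message;
};

// the error path throws instead of calling exit()/MPI_Abort()
static void error_all(const std::string &msg) { throw LAMMPSException(msg); }

// the library interface catches the exception and returns control to the caller,
// so a Python wrapper can inspect the message and rethrow it as a Python exception
static void lammps_command_safe(const char *cmd)
{
  try {
    error_all(std::string("Unknown command: ") + cmd);   // simulated failure
  } catch (const LAMMPSException &e) {
    std::fprintf(stderr, "LAMMPS error: %s\n", e.what());
  }
}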

Status

  • So far I've got the ok from Steve to try this by adding a compile option and using #ifdef regions. That means modifying error.cpp and adding catch blocks to both main and the library interface methods.
  • Update August 23, 2016: Initial implementation ( 6c154bb)

Agenda

  • Implement the proposed changes, making them optional using the preprocessor
  • Benchmark the performance impact, if there is any

Related Issues:

adjust fix poems to detect other cell changing fixes

fix poems explicitly looks for fix npt and fix nph and requires being run after them. This probably needs to be investigated and extended to apply to all cell-shape-changing fixes.
Explore whether Fix::box_change_size is correctly used across all fixes.

examples/KAPPA/log.heat.1Feb14 & /log.heatflux.1Feb14

As far as I can tell, these two log files come from the same calculation, run on separate occasions. I noticed the problem from the header of log.heat: it is supposed to relate to "add/subtract energy to 2 regions via fix heat", not the Green-Kubo method of log.heatflux. As such, the output of running in.heat does not match up with the provided log.heat.

TL;DR, log.heat in examples/KAPPA does not match up with in.heat.

A bug in the improper class2 angle-angle virial update call

As reported on the lammps-users mailing list:

Hello. I've noticed a bug in the pressure computation when angle-angle
interactions ('angleangle' function in improper_class2.cpp) are enabled.
Currently (lammps-16Feb16) the energy/virial update call is

     if (evflag)
       ev_tally(i1,i2,i3,i4,nlocal,newton_bond,eimproper,
                fabcd[0],fabcd[2],fabcd[3],
                delxAB,delyAB,delzAB,delxBC,delyBC,delzBC,delxBD,delyBD,delzBD);

It should be called in the same way as in its main function 'compute':

     if (evflag)
       ev_tally(i1,i2,i3,i4,nlocal,newton_bond,eimproper,
                fabcd[0],fabcd[2],fabcd[3],
                delxAB,delyAB,delzAB,delxBC,delyBC,delzBC,delxBD-delxBC,delyBD-delyBC,delzBD-delzBC);

Best regards, Ivan A. Strelnikov, ICP RAS.

EAM Files missing

Hi,
The EAM files (src/pair_eam_*.cpp) seem to be missing from the latest releases.
Have the EAM potentials been included elsewhere?

Thanks!

pair srp + fix deform triclinic flip yes

Pair srp fails with an error when used with the combination of a triclinic box + fix deform + flip yes + large strain. This happens due to the order in which fix deform and fix srp occur in the pre_exchange step. Currently fix srp is automatically invoked and runs ~last in pre_exchange. This patch allows the user to (optionally) define the order of fix srp in the input script. The doc page is also updated.

patch.26Oct15.txt

Possible bug for Voronoi analysis after box is flipped upon shearing

From lammps-users:

Dear Steve,

There might be a bug in the Voronoi analysis after the simulation box is flipped upon shearing. I noticed there is an earlier post discussing the Voronoi analysis of a triclinic box:

http://lammps.sandia.gov/threads/msg60447.html

I used the latest version (14May16). There is no problem with normal shearing in this version. However, when I do a severe shearing simulation that allows box flipping, I find that after the flip only a part of the atoms have values from the Voronoi analysis, such as atomic volume and number of faces; the rest are all zero. So it seems that there is still a bug in dealing with the triclinic box. I am trying to debug it but have not got it all right yet...

Best regards,
Suzhi

Elastic script

Would it be possible to change the ELASTIC example script (displacement.mod) to include the compliance tensor and calculate the polycrystalline averages of the bulk, Young's, and shear moduli? If the compliance tensor can be calculated, we can easily compute the Voigt, Reuss, and Hill averages of K, E, and G.
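
A C++ sketch of the requested post-processing step (this is not part of the current ELASTIC example; it assumes the 6x6 stiffness matrix C in Voigt notation and its inverse, the compliance matrix S, are already available):

#include <cstdio>

// Voigt, Reuss, and Hill polycrystalline averages of the bulk (K) and shear (G)
// moduli from the stiffness matrix C and compliance matrix S (Voigt notation),
// plus Young's modulus E from the isotropic relation E = 9KG/(3K+G).
static void vrh_averages(const double C[6][6], const double S[6][6])
{
  double KV = (C[0][0]+C[1][1]+C[2][2] + 2.0*(C[0][1]+C[1][2]+C[0][2])) / 9.0;
  double GV = (C[0][0]+C[1][1]+C[2][2] - (C[0][1]+C[1][2]+C[0][2])
               + 3.0*(C[3][3]+C[4][4]+C[5][5])) / 15.0;
  double KR = 1.0 / (S[0][0]+S[1][1]+S[2][2] + 2.0*(S[0][1]+S[1][2]+S[0][2]));
  double GR = 15.0 / (4.0*(S[0][0]+S[1][1]+S[2][2])
                      - 4.0*(S[0][1]+S[1][2]+S[0][2])
                      + 3.0*(S[3][3]+S[4][4]+S[5][5]));
  double KH = 0.5*(KV+KR), GH = 0.5*(GV+GR);
  double EH = 9.0*KH*GH / (3.0*KH + GH);
  std::printf("K (same units as C): Voigt %g  Reuss %g  Hill %g\n", KV, KR, KH);
  std::printf("G (same units as C): Voigt %g  Reuss %g  Hill %g\n", GV, GR, GH);
  std::printf("E (Hill): %g\n", EH);
}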
