brucefan1983 / gpumd
425 stars · 28 watchers · 110 forks · 189.74 MB

Graphics Processing Units Molecular Dynamics

Home Page: https://gpumd.org/dev

License: GNU General Public License v3.0

Cuda 92.65% Makefile 0.15% MATLAB 0.60% C++ 2.13% Python 3.76% Batchfile 0.03% Objective-C 0.03% Shell 0.52% Vim Script 0.09% Dockerfile 0.03%
molecular-dynamics-simulation heat-transport cuda molecular-dynamics gpumd phonon gpu machine-learning physics-simulation simulation

gpumd's People

Contributors

alexgabourie, ambrosewong, bigd4, brucefan1983, dragonpara, elindgren, erhart1, grtheaory, hailan2005, hityingph, initqp, jonsnow-willow, kick-h, liangzhixin-202169, mushroomfire, nicklasosterbacka, psn417, shdchen, tamaswells, tingliangstu, xix-yang, yanzhou-wang, zhyan0603

gpumd's Issues

Add MPI support

For larger systems, it may be useful to run on multiple GPUs. MPI libraries can be built with CUDA awareness so that data can be copied directly to and from GPU memory addresses.
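A minimal sketch of what a CUDA-aware MPI build enables (assuming, e.g., OpenMPI built with CUDA support; the function and variable names are illustrative, not part of GPUMD):

    #include <mpi.h>

    // With a CUDA-aware MPI library, a device pointer can be passed directly to
    // MPI calls; the library moves the data between GPUs without an explicit
    // cudaMemcpy to a host staging buffer.
    void exchange_with_peer(double* d_buffer, int n, int peer)
    {
        MPI_Sendrecv_replace(d_buffer, n, MPI_DOUBLE, peer, 0,
                             peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }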

implementing the ADP potential

After finishing the general spline-based EAM potential, I will work on implementing the ADP (angular-dependent potential), which combines the EAM potential with an extra angular part.
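For reference, the ADP total energy is commonly written as the EAM terms plus dipole and quadrupole contributions (a schematic form, species indices omitted):

    E = \sum_i F(\rho_i) + \frac{1}{2}\sum_{i \neq j} \phi(r_{ij})
        + \frac{1}{2}\sum_{i,\alpha} (\mu_i^{\alpha})^2
        + \frac{1}{2}\sum_{i,\alpha,\beta} (\lambda_i^{\alpha\beta})^2
        - \frac{1}{6}\sum_i \nu_i^2

    \mu_i^{\alpha} = \sum_j u(r_{ij})\, r_{ij}^{\alpha}, \qquad
    \lambda_i^{\alpha\beta} = \sum_j w(r_{ij})\, r_{ij}^{\alpha} r_{ij}^{\beta}, \qquad
    \nu_i = \sum_{\alpha} \lambda_i^{\alpha\alpha}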

Error when compiling GPUMD with netCDF

When trying to compile GPUMD with netCDF support, the following error message appears:

measure/dump_netcdf.cuh(71): error: identifier "Atom" is undefined

1 error detected in the compilation of "/scratch/tmpxft_000b7c80_00000000-6_measure.cpp1.ii".
makefile:100: recipe for target 'measure/measure.o' failed
make: *** [measure/measure.o] Error 1
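One possible fix, assuming the header only refers to Atom through pointers or references (the included header name below is an assumption):

    // Near the top of measure/dump_netcdf.cuh:
    class Atom;            // forward declaration makes the identifier known to the compiler
    // or, if the complete type is required in this header:
    // #include "atom.cuh" // hypothetical header providing the Atom class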

Potential energy in small cells

Hi,

I've been trying out GPUMD lately, and it has worked very smoothly. However, I have come across one pitfall that may be a bug, or at least a somewhat unexpected behavior, that I couldn't find any mention of in the user guide. The potential energy per atom in very small cells differs from the corresponding quantity in large cells. More long-ranged models seem to converge more slowly than short-ranged ones as the cell size is increased, so my guess is that it has to do with the handling of periodic images. The behavior appears to be the same regardless of the type of potential, the cell shape (cubic vs. non-cubic), and the choice of cutoff in xyz.in.

Is this a known behavior and can I avoid it without repeating my cell sufficiently?

Below is the potential energy per atom for three different Si potentials, calculated for cells with different numbers of atoms (the cell simply repeated along the periodic directions):

tersoff       2 atoms, potential energy: -1.571410841
tersoff      16 atoms, potential energy: -4.6296117249375
tersoff      54 atoms, potential energy: -4.629611725
tersoff     128 atoms, potential energy: -4.629611724921875
tersoff     250 atoms, potential energy: -4.6296117248
tersoff     432 atoms, potential energy: -4.629611725

sw            2 atoms, potential energy: -1.0841295637
sw           16 atoms, potential energy: -4.336518254875
sw           54 atoms, potential energy: -4.336518254814815
sw          128 atoms, potential energy: -4.33651825484375
sw          250 atoms, potential energy: -4.3365182548000005
sw          432 atoms, potential energy: -4.336518254861111

nep           2 atoms, potential energy: -2.7376508713
nep          16 atoms, potential energy: -2.5197944939375
nep          54 atoms, potential energy: -2.2679282692592593
nep         128 atoms, potential energy: -2.267928259453125
nep         250 atoms, potential energy: -2.2679282560400003
nep         432 atoms, potential energy: -2.2679282542824075

Representative run.in:

potential      ../potentials/tersoff/Si_Fan_2019.txt 0
velocity       0.001
dump_thermo    1
time_step      1e-6
run            1

Representative xyz.in (repeat=1):

2 1024 10 1 0 0 
1 1 1 0.0 2.715 2.715 2.715 0.0 2.715 2.715 2.715 0.0 
0 0.0 0.0 0.0 28.085
0 1.3575 1.3575 1.3575 28.085

Everything needed to reproduce this:
debug.zip

I've been using the dev version at bbd724d

Best,
Magnus Rahm

Harmonic lattice dynamics

Harmonic lattice dynamics calculations based on the force-constant matrix computed using finite differences.
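One standard central-difference form for the second-order force constants is (with F the force, u a displacement, and δ a small step):

    \Phi_{i\alpha,\,j\beta}
      = \frac{\partial^2 E}{\partial u_{i\alpha}\,\partial u_{j\beta}}
      \approx -\frac{F_{i\alpha}(u_{j\beta} = +\delta) - F_{i\alpha}(u_{j\beta} = -\delta)}{2\delta}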

high-order force constants

Calculate the high-order (e.g., 3rd- and 4th-order) force constants, which are useful for anharmonic lattice dynamics calculations.
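For example, the third-order force constants can be obtained from a central difference of the forces with respect to two displacements:

    \Psi_{i\alpha,\,j\beta,\,k\gamma}
      \approx -\frac{ F_{i\alpha}(+\delta_{j\beta},+\delta_{k\gamma})
                    - F_{i\alpha}(+\delta_{j\beta},-\delta_{k\gamma})
                    - F_{i\alpha}(-\delta_{j\beta},+\delta_{k\gamma})
                    + F_{i\alpha}(-\delta_{j\beta},-\delta_{k\gamma}) }{4\delta^2}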

Suggestion: Allow compression of netCDF (movie.nc) files.

Currently, the netCDF files created when supplying the dump_netcdf keyword are not compressed, even though compression is supported in the recommended netCDF version (4.6.3). Since it is more convenient to perform the compression when the files are created, rather than leaving it to the user as a post-processing step, I would suggest introducing a "compress" option, which might include additional parameters such as the compression (deflate) level and chunk size. Additional information can be found here.
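A minimal sketch of what such a "compress" option could do with the netCDF-C API (the wrapper function and the way the deflate level and chunk sizes are passed in are assumptions):

    #include <netcdf.h>

    // Enable deflate compression on one variable; compressed variables are stored
    // in chunks, so setting the chunk sizes explicitly controls the I/O granularity.
    int enable_compression(int ncid, int varid, int deflate_level, const size_t* chunk_sizes)
    {
        int status = nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunk_sizes);
        if (status != NC_NOERR) return status;
        return nc_def_var_deflate(ncid, varid, /*shuffle=*/1, /*deflate=*/1, deflate_level);
    }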

EXPLORE: Make it possible to compute the SDC for multiple groups.

Background?

Currently, it is only possible to compute the SDC for the entire system or for a single group. In many cases it is interesting to study the SDC for multiple groups of atoms, for instance all of them, e.g. for the purpose of calculating mean-square displacements.

What is the current behaviour?

The compute_sdc command only accepts a single group as an optional argument.

What is the desired behaviour?

It is possible to calculate the SDC for all groups of atoms defined for a specific grouping method, by providing group group_method as an optional argument. This means that the following command should be accepted:

compute_sdc 8 500 group 1

The sdc.out file would thus contain one column with correlation times followed by 6N columns with VAC and SDC data, where N is the number of groups in the specified grouping method.

implementing a more general Tersoff potential

The current Tersoff potential in GPUMD is not of the most general form. I will make a more general version that can deal with most of the Tersoff-like potentials proposed in the literature. I have also realized that there is still some room for improving the performance of the Tersoff and SW potentials. Therefore, there is hope of having faster and more general Tersoff and SW potentials in GPUMD in the near future.

Thermal conductivity from BTE

Calculating the lattice thermal conductivity based on the Boltzmann transport equation (BTE) and lattice dynamics (harmonic and anharmonic).

No neighbor list building by default?

From reading the docs, it seems that if I don't specify the neighbor command in the run.in script, the neighbor list is not rebuilt during the simulation. The performance numbers also seem consistent with this.

Isn't this a huge pitfall for users? Shouldn't the default be to rebuild the list whenever it is needed?

EXPLORE: Make it possible to compute the DOS for multiple groups.

Background?

Currently, it is only possible to compute the DOS for the entire system or a single group. In many cases it is interesting to study the partial DOS, e.g. the contributions from all atoms of each type.

What is the current behaviour?

The compute_dos command only accepts a single group as an optional argument.

What is the desired behaviour?

It is possible to calculate the DOS for all groups of atoms defined for a specific grouping method, by providing group group_method as an optional argument. This means that the following command should be accepted:

compute_dos 8 500 55.0 group 1

The dos.out and mvac.out files would thus contain one column with frequencies followed by 3N columns with DOS and VAC data, respectively, where N is the number of groups in the specified grouping method.

BUG: CUDA (cudaMemset) error

What happens?

When running GPUMD, the following error sometimes occurs:

    CUDA Error:
    File:       neighbor_ON1.cu
    Line:       222
    Error code: 1
    Error text: invalid argument

What is the reason?

As is indicated by the error message, it is related to line 222 in neighbor_ON1.cu:

CHECK(cudaMemset(cell_count, 0, sizeof(int)*N_cells));

What is the expected behaviour?

It is possible to run GPUMD on systems with multiple GPUs without receiving any error messages.
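A diagnostic sketch that could help pin down the cause (hypothetical; the actual reason for the invalid-argument error, e.g. a zero cell count or a pointer allocated on a different device, has not been confirmed):

    // Print the arguments just before the failing call on line 222 of neighbor_ON1.cu:
    printf("cudaMemset args: cell_count=%p, N_cells=%d\n", (void*)cell_count, N_cells);
    CHECK(cudaMemset(cell_count, 0, sizeof(int) * N_cells));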

BUG: Error is raised when trying to compile the gpumd binary.

BACKGROUND

It is currently not possible to compile the gpumd binary after cloning the GitHub repository.

WHAT HAPPENS

The following error message is raised when trying to compile the gpumd binary:

force.cu(633): error: class "Measure" has no member "hnema"

1 error detected in the compilation of "/tmp/tmpxft_00002304_00000000-6_force.cpp1.ii".
make: *** [obj/force.o] Error 1
make: *** No rule to make target `obj_phonon/main_phonon.o', needed by `phonon'.  Stop.

implementing the general SW potential

@AlexGabourie
Do you want to make a general SW potential applicable to an arbitrary number of atom types, similar to your Tersoff1988? We can also remove the special constraint for SW13 and SW16, keeping only the original SW potential.

Changing the code from C to C++

This has been done partly. Now all the potential models are in classes inherited from an abstract base class. In this way, it is easier to add new potential models in the future. The class hierarchy related to the potential models is:

-- class Force points to class Potentials
-- class Potentials is inherited by
-- class Pair (This will contain all the pair potentials)
-- class SW (The SW potential)
-- class Tersoff (The Tersoff potential)
-- class EAM_analytical (This will contain all the analytical EAM-type potentials)
-- class Vashishta (The Vashishta potential)
-- class REBO_MOS (A REBO potential for Mo-S systems only)

I am working on changing the other parts to C++. What I have in mind now is to design the following classes:
-- class Neighbor (everything about the neighbor list)
-- class Integrate (everything about the integration)
-- class Measure (everything related to measurement)

removing single precision option?

Single precision is usually about 2x faster than double precision, but it is not always safe to use. I feel that there is no need to keep the option to use single precision. By using double precision only, the readability of the code can be improved. I want to minimize (if not eliminate) the use of #ifdef.
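An illustrative example (the macro and type names are hypothetical, not the actual GPUMD ones) of the kind of #ifdef block that keeping only double precision would remove:

    // Before: precision selected at compile time.
    #ifdef USE_SINGLE_PRECISION
    typedef float  real;
    #else
    typedef double real;
    #endif

    // After removing the single-precision option, this collapses to:
    typedef double real;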

What do you think? @AlexGabourie

Add vashishta potential

Hi,

I'm working on smaller systems using the Vashishta potential (http://lammps.sandia.gov/doc/pair_vashishta.html), but the GPU implementations in LAMMPS (both the GPU package and KOKKOS) have an upper limit on timesteps per second due to kernel execution overhead. A pure GPU code could therefore outperform LAMMPS here.

Since Vashishta is so similar to Stillinger-Weber, maybe it would be easy for you to add it? Another trick for Vashishta is that the 3-body forces usually have a shorter cutoff than the 2-body ones, so the 3-body loop should make use of this (cache only the neighbors within the 3-body cutoff); a rough sketch of this idea follows below.
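A rough sketch of that idea (all names are illustrative; rc3 < rc2 denotes the shorter three-body cutoff):

    // Per atom i, cache only the neighbors within the three-body cutoff before
    // entering the angular double loop, so the quadratic part runs over a shorter list.
    int n3 = 0;
    int neighbors3[MAX_NEIGHBORS_3B];
    for (int k = 0; k < num_neighbors2[i]; ++k)
    {
        int j = neighbors2[i * max_neighbors2 + k];
        if (distance_squared(i, j) < rc3 * rc3)   // hypothetical helper
        {
            neighbors3[n3++] = j;
        }
    }
    // The three-body (angular) loop then iterates over neighbors3 only.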

Great work btw!

The input xyz.in about the FCP

Dear Prof.Fan,

I noticed that ex5 uses the FCP from hiphive to run GPUMD, but I am confused about the xyz.in input file: M = 1 and cutoff = 0.1, while in generate_files.py the cutoff distance is 5 angstrom (and M > 1).

Even though the cutoff parameter is only the initial cutoff distance, why do we not set it to a suitable value, for example 5 angstrom?

Best,

Zezhu

dump the neighbor list

write a command to dump the neighbor list, such as:
dump_neighbor interval # using a default cutoff based on the radial distribution function
dump_neighbor interval cutoff # using a user-specified cutoff

Add hybrid potentials

In some cases, it is desirable to use different potentials for different parts of the system. Quite a few users have mentioned this to me. To achieve this, the force evaluation part and the neighbor list construction part should be re-designed. This will take some time, and I hope to finish it in the near future.

implementing the general spline-based EAM potential

Currently, there are only two EAM-type potentials based on analytical functions. However, it is conventional to use the general tabulated EAM potential in the literature. So I plan to implement the general spline-based EAM potential. I have finished a single-atom-type version and will publish the code after I generalize it to the multi-atom-type version.
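For reference, the tabulated EAM energy has the standard form (with the embedding function F, the pair term φ, and the density function ψ given as splined tables):

    E = \sum_i F(\rho_i) + \frac{1}{2}\sum_{i \neq j} \phi(r_{ij}), \qquad
    \rho_i = \sum_{j \neq i} \psi(r_{ij})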

nvt_nhc always gives nan

I'm trying to run in the NVT ensemble, but it seems that nvt_nhc does not work properly. All positions and most of the thermo output are nan.

I've tried different TDamp and temperatures.

Add support for binary dump file

When simulating either large systems or long time scales, parsing a text file (xyz.out) can be very slow. I suggest at least supporting a binary format that is easily loaded in numpy (Python) or similar.
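A minimal sketch of the idea (not a proposed file format; names are illustrative): write each frame as a flat block of doubles, which numpy can read back directly.

    #include <cstdio>

    // Append one frame of positions as raw doubles: x[0..N-1], y[0..N-1], z[0..N-1].
    void dump_positions_binary(std::FILE* fid, const double* x, const double* y,
                               const double* z, int N)
    {
        std::fwrite(x, sizeof(double), N, fid);
        std::fwrite(y, sizeof(double), N, fid);
        std::fwrite(z, sizeof(double), N, fid);
    }

    // Python side, assuming N is known:
    //   data = np.fromfile("xyz.bin", dtype=np.float64).reshape(-1, 3, N)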

implementing triclinic box

The current version only supports an orthogonal box. I aim to add an option for a triclinic box and gradually enrich the related features.

Implementing the MTP potential

This is also on our TODO list.

Reference:
Moment Tensor Potentials: a class of systematically improvable interatomic potentials
Alexander V. Shapeev

removing all warp-synchronous code

Warp-synchronous code is unsafe and is already incorrect for Volta (compute capability 7.0) and newer GPU architectures. So we must change this as soon as possible.
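An illustrative example of the required change: an implicitly warp-synchronous reduction replaced by the explicitly synchronized shuffle intrinsics that Volta and newer architectures require (CUDA 9+):

    // Sum a value across the 32 lanes of a warp using the *_sync intrinsics.
    __inline__ __device__ double warp_reduce_sum(double val)
    {
        for (int offset = 16; offset > 0; offset >>= 1)
        {
            val += __shfl_down_sync(0xffffffff, val, offset);
        }
        return val;
    }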
