espressopp / espressopp

Main ESPResSo++ repository

Home Page: http://www.espresso-pp.de/

License: GNU General Public License v3.0

CMake 1.41% Tcl 0.15% Python 34.47% C++ 63.96% Shell 0.01%

espressopp's Introduction

ESPResSo++


ESPResSo++ is an extensible, flexible, fast and parallel simulation software for soft matter research. It is a highly versatile software package for the scientific simulation and analysis of coarse-grained atomistic or bead-spring models as they are used in soft matter research. ESPResSo and ESPResSo++ have common roots and share parts of the developer/user community; however, their development is independent and they are different software packages. ESPResSo++ is free, open-source software published under the GNU General Public License (GPL).

Quick start:

To get a copy of the developer version (most recent version) of ESPResSo++, you can use git or docker. Using docker will give you a binary release (nothing to compile, but performance may not be optimal). If you use git clone or download a tarball, you will have to compile ESPResSo++ yourself, which might lead to better performance.

Using docker:

$ docker pull espressopp/espressopp
$ docker run -it espressopp/espressopp /bin/bash

Using git:

$ git clone https://github.com/espressopp/espressopp.git

Alternatively, you can download a tarball or zip file of previous release versions of ESPResSo++.

Dependencies

C++ Dependencies

  • Boost ( >= 1.69.0)
  • MPI
  • FFTW3
  • GROMACS (required when WITH_XTC flag is enabled, GROMACS needs to be built with GMX_INSTALL_LEGACY_API)
  • HDF5

Python Dependencies

ESPResSo++ requires Python 3.7 or newer. All required Python packages are listed in requirements.txt. You can install them via: pip3 install -r requirements.txt
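As a quick sanity check before building, you can verify that the required Python packages are importable. This is a hypothetical helper, not part of ESPResSo++; the package names are illustrative and requirements.txt remains the authoritative list:

```python
import importlib.util

# Hypothetical helper: report which required Python packages are missing
# before building ESPResSo++ (names illustrative; see requirements.txt).
def missing_packages(names):
    return [n for n in names if importlib.util.find_spec(n) is None]

# e.g. missing_packages(["numpy", "mpi4py"]) lists what still needs
# to be installed with pip3 before proceeding
```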

Quick install:

$ cd espressopp
$ cmake -B builddir -DCMAKE_INSTALL_PREFIX=/where/to/install/espressopp .
$ cmake --build builddir
$ cmake --install builddir
$ export PYTHONPATH=/where/to/install/espressopp/lib/python3*/site-packages:${PYTHONPATH}

After building, go to the examples directory and have a look at the Python scripts.

You can also use Pipenv; after compilation, simply call in the root directory

$ pipenv install
$ pipenv shell

then you can go to examples and have a look at the Python scripts.

CMake options

You can customize the build process by applying the following CMake flags:

  • WITH_XTC - build E++ with support of dumping trajectory to GROMACS xtc files (default: OFF).
  • CMAKE_INSTALL_PREFIX - where the E++ should be installed.
  • CMAKE_CXX_FLAGS - put specific compilation flags.

These flags can then be passed to cmake:

$ cmake -B builddir -DWITH_XTC=ON -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_CXX_FLAGS=-O3 .
$ cmake --build builddir

How to install E++ in some Linux distributions

Ubuntu

$ apt-get -qq install -y build-essential openmpi-bin libfftw3-dev python3-dev libboost-all-dev git python3-mpi4py cmake wget python3-numpy ipython3 clang llvm ccache python3-pip doxygen sphinx-common python3-matplotlib graphviz texlive-latex-base texlive-latex-extra texlive-latex-recommended ghostscript libgromacs-dev clang-format curl latexmk libhdf5-dev python3-h5py sudo

$ cd espressopp
$ cmake -B builddir .
$ cmake --build builddir

Fedora

$ dnf install -y make cmake wget git gcc-c++ doxygen python-devel openmpi-devel environment-modules python-pip clang llvm compiler-rt ccache findutils boost-devel boost-python3-devel python-sphinx fftw-devel python-matplotlib texlive-latex-bin graphviz boost-openmpi-devel ghostscript python3-mpi4py-openmpi texlive-hyphen-base texlive-cm texlive-cmap texlive-ucs texlive-ec gromacs-devel hwloc-devel lmfit-devel ocl-icd-devel hdf5-devel python-h5py atlas hdf5 liblzf python-six python-nose python-numpy
$ cd espressopp
$ cmake -B builddir .
$ cmake --build builddir

Documentation

http://espressopp.github.io

Reporting issues

Report bugs on the GitHub issues site

espressopp's People

Contributors

acfogarty, brandest, espressopp-bot, gdeichmann, govarguz, jkrajniak, jnvance, jsmrek, junghans, niktre, pdebuyl, pgemuende, ppkk, songbin6280, stuehn, tbereau, vitst, xzhh, xzzx


espressopp's Issues

LatticeBoltzmann.cpp doesn't compile with boost-1.54

Compile error:

[ 1817s] /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/integrator/LatticeBoltzmann.cpp: In member function 'void espressopp::integrator::LatticeBoltzmann::copyForcesFromHalo()':
[ 1817s] /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/integrator/LatticeBoltzmann.cpp:1443:21: error: no matching function for call to 'boost::mpi::environment::environment()'
[ 1817s]     mpi::environment env;
[ 1817s]                      ^
[ 1817s] /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/integrator/LatticeBoltzmann.cpp:1443:21: note: candidates are:
[ 1817s] In file included from /usr/include/boost/mpi/collectives/gather.hpp:18:0,
[ 1817s]                  from /usr/include/boost/mpi/collectives/all_gather.hpp:18,
[ 1817s]                  from /usr/include/boost/mpi/collectives.hpp:536,
[ 1817s]                  from /usr/include/boost/mpi.hpp:23,
[ 1817s]                  from /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/include/types.hpp:29,
[ 1817s]                  from /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/include/python.hpp:28,
[ 1817s]                  from /home/abuild/rpmbuild/BUILD/espressopp-1.9.4/src/integrator/LatticeBoltzmann.cpp:21:
[ 1817s] /usr/include/boost/mpi/environment.hpp:81:3: note: boost::mpi::environment::environment(int&, char**&, bool)
[ 1817s]    environment(int& argc, char** &argv, bool abort_on_exception = true);
[ 1817s]    ^
[ 1817s] /usr/include/boost/mpi/environment.hpp:81:3: note:   candidate expects 3 arguments, 0 provided
[ 1817s] /usr/include/boost/mpi/environment.hpp:48:22: note: boost::mpi::environment::environment(const boost::mpi::environment&)
[ 1817s]  class BOOST_MPI_DECL environment : noncopyable {
[ 1817s]                       ^
[ 1817s] /usr/include/boost/mpi/environment.hpp:48:22: note:   candidate expects 1 argument, 0 provided

Details here (from https://build.opensuse.org/package/show/science/python-espressopp)

getting error on installation of espresso++ software

Hello,
I am installing the ESPResSo++ software locally on a cluster (I do not have the root password), and I got the following error at 76% of the make step:

/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: expected type-specifier before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: expected ‘>’ before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: expected ‘(’ before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: ‘fftw_complex’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: expected primary-expression before ‘>’ token
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:510: error: expected ‘)’ before ‘;’ token
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:512: error: ‘plan’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:512: error: ‘fftw_execute’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp: In member function ‘bool espressopp::interaction::CoulombKSpaceP3M::_computeForce(espressopp::CellList)’:
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: ‘in_array’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: expected type-specifier before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: expected ‘>’ before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: expected ‘(’ before ‘fftw_complex’
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: ‘fftw_complex’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: expected primary-expression before ‘>’ token
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:534: error: expected ‘)’ before ‘;’ token
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:536: error: ‘plan’ was not declared in this scope
/UHOME/nabia.dcy2014/espresso/espressopp-1.9.4.1/src/interaction/CoulombKSpaceP3M.hpp:536: error: ‘fftw_execute’ was not declared in this scope
make[2]: *** [src/CMakeFiles/_espressopp.dir/interaction/CoulombKSpaceP3M.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/_espressopp.dir/all] Error 2
make: *** [all] Error 2

What do I have to do? Can anyone please help?
Thanks

Add support for writing trajectories in H5MD file format (HDF)

Two solutions are on the table: one that uses PyBuffer, h5py and pyh5md (#52) to write the data, and one that uses the HDF5 API directly (#60).
Perhaps we should decide which way to go. Personally, I like the pure approach with the HDF5 API, although it is a bit of a pain to use.
Colleagues from espressomd use the h5xx library, so maybe that could be a solution: some sort of abstraction above the HDF5 API.

.so file part of installation?

Similar to #161, I was putting E++ into EasyBuild and noticed that the generated _espressopp.so file isn't part of the installation step. Sorry if this is a silly question, but wouldn't you have to ship this library with your installation?

In general I don't see an installation step outlined in your documentation, and I wonder if I am missing something. I can successfully import the installed Python module; I just wonder if there is a runtime error waiting to pop up (and I don't have a test case to try it out).

Clean up dirty code base

There is a lot of abandoned, commented-out code in the code base, e.g. MDIntegrator or VerletList have whole blocks of it, along with leftovers from print debugging that are not necessarily correct if you uncomment them.
This makes it quite painful for newcomers to read the code and to spot any bug.

Number of idle/sleeping threads?

Hi,
a general question.

Today I was running some single-node experiments (here a node has 64 cores).
I called the examples/polymer_melt/polymer_melt.py script via:

mpirun -np 64 python polymer_melt.py

where the replica factors were set to xdim=ydim=zdim=5, which "generates" 5M particles.

Looking at the report from the job scheduler I noticed a high number of threads were generated:

Resource usage summary:
.........
Max Threads : 4102
........

Monitoring the job showed the same using the "ps" command.
It looks like 64 threads are created per MPI rank; one is running and doing work while the others are sleeping and seem to do nothing at the moments I sampled.

I then tried to change the number of cores used to see how things were changing.
Running with 24 cores, for example, showed a number of threads for python polymer_melt.py of 24 x 64, i.e. again 64 threads per MPI rank, with only one per rank doing work and the others sleeping.

Is this a known fact? Has this been seen by someone else before, or am I the only one seeing this? This is just a question I had, since I never looked into it before.
Although it's not a problem, I was a little surprised by the high number, and by why it is 64.

Attached the brief sample from "ps" on the node running the program.
psoutfile2.txt

Change of module name

Hi,

Following the discussion on the mailing-list http://theweb.mpip-mainz.mpg.de/pipermail/espp-users/2014-June/000047.html about espresso, espressomd and espressopp I suggested that http://espressomd.org/ be espressomd and http://www.espresso-pp.de/ be espressopp for the module names. espressomd did the move some time ago espressomd/espresso@2eace0a

I have applied a blind sed -i -e 's/espresso/espressopp/g' to all files and there were very few fixes to make. Should espressopp move to its unambiguous name?

Potential LJcos

Hi,

I would like to use the potential LJcos and there is some parameter there which I do not understand. In LJcos.hpp there is the following explanation:
real auxCoef; // This is temporary solution for empty potential. This problem
// appears when there is a potential for, for example, 00 and
// 01, but no potential for 11. The loop in interaction
// template will call potential(1, 1) anyway if these particles are
// within the range. Thus one have to set coefficient to 0,
// in order to get zero forces.

What are "00", "01" and "11"? Is this still up to date?
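For context, here is a minimal sketch of the type-pair lookup that the comment seems to describe (illustrative Python, not the actual C++ implementation): "00", "01" and "11" denote the (0,0), (0,1) and (1,1) particle-type pairs. If only (0,0) and (0,1) have registered potentials, the interaction loop still queries (1,1), and the auxCoef trick forces that contribution to zero.

```python
# Illustrative only: not the actual espressopp C++ code.
# Potentials are registered per (type1, type2) pair.
potentials = {(0, 0): 1.0, (0, 1): 0.5}  # e.g. epsilon values

def force_coefficient(t1, t2):
    # auxCoef-style behavior: a pair with no registered potential
    # contributes zero force instead of garbage values.
    pair = (min(t1, t2), max(t1, t2))
    return potentials.get(pair, 0.0)
```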

installing espresso++ software error

Hello,
I am installing the ESPResSo++ software on a cluster and got the error given below during installation:

CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:97 (MESSAGE):
Could NOT find FFTW3 (missing: FFTW3_LIBRARIES FFTW3_INCLUDES)
Call Stack (most recent call first):
/usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:288 (_FPHSA_FAILURE_MESSAGE)
cmake/FindFFTW3.cmake:20 (find_package_handle_standard_args)
CMakeLists.txt:71 (find_package)

-- Configuring incomplete, errors occurred!
What do I have to do? Can anyone please help?
Thanks

Unit testing

We have some unit testing in ESPResSo++, but it looks like no one except @MrTheodor is really adding new tests. Recently I was trying to write some tests for a big bunch of new code I want to push. I just wanted to check: what should we think about when writing a unit test in ESPResSo++?

I was thinking:

  • test for backward compatibility when someone else pushes new code
  • test should be fast
  • not have big input or output files
  • cover as much of the class as possible, including different conditional pathways

Anything else?

Release control strategy

Hi everybody.
I just want to sum up some things happening in the last time here. Starting from 1st of November I will be a full-time ESPResSo++ developer, who should take care of the project, mainly in these areas:

  • Management (release control, how-to practices, etc.)
  • Software development (new and standard methods, new architectures)
  • Visibility (docs, website, publications)

I want that we start a bit earlier and develop a proper (not perfect) release control and communication strategy. In the last couple of days I was playing with the tab "Projects", with Milestones, Labels, etc. that may have annoyed you. Of course, I was also trying to talk to some of you to share ideas. So here I would like to present and DISCUSS the strategy that at the moment seems to be the most promising one.

Communication between developers about features we're working on

What's the best way to let each other know about features we're working on that we haven't pushed to the main repository yet?

At any given time, I'm working on various features that other people might want to use (e.g. recently implemented the RATTLE algorithm) but there's usually a long delay between implementation in my local repository and making a pull request to the main repository. It would be stupid if someone else went to the trouble of implementing the same features in the meantime.

@niktre made a roadmap in the Wiki, but I'm not sure if anyone else is consulting it, regularly or ever?

Computing the LJ potential causes NaN

It is somehow related to the discussion under #31 about Array2D. Essentially, the piece of code below will fail because LJ._computeForceRaw will get distSqr = 0 and return a NaN force.
A very simple solution is to have a flag in Potential.hpp (initialized) that defaults to false and is set to true only if the potential object is constructed with proper parameters. Force and energy would be computed only if this flag is true. Alternatively, Array2D could be changed so that a non-defined potential uses the Zero potential.

import espressopp  # pylint:disable=F0401

try:
    import MPI  # pylint: disable=F0401
except ImportError:
    from mpi4py import MPI

box = (10.0, 10.0, 10.0)
skin = 0.3
rc = 2.5
dt = 0.0025

system = espressopp.System()
system.rng = espressopp.esutil.RNG()
system.bc = espressopp.bc.OrthorhombicBC(system.rng, box)
system.skin = skin
nodeGrid = espressopp.tools.decomp.nodeGrid(MPI.COMM_WORLD.size)
cellGrid = espressopp.tools.decomp.cellGrid(box, nodeGrid, rc, skin)
print(nodeGrid, cellGrid)
system.storage = espressopp.storage.DomainDecomposition(system, nodeGrid, cellGrid)
integrator = espressopp.integrator.VelocityVerlet(system)
integrator.dt = dt

particles = [
 (0, espressopp.Real3D(2, 2, 2), 0),
 (1, espressopp.Real3D(2, 2, 2), 1),  # this overlap
 (2, espressopp.Real3D(3, 2, 2), 0)
 ]
system.storage.addParticles(particles, 'id', 'pos', 'type')

system.storage.decompose()
verletlist = espressopp.VerletList(system, cutoff=1.16)
lj = espressopp.interaction.VerletListLennardJones(verletlist)
# Only interaction between types: 0 and 0
lj.setPotential(type1=0, type2=0, potential=espressopp.interaction.LennardJones(sigma=1.0, epsilon=1.0, cutoff=1.16))
system.addInteraction(lj)

for i in range(2):
    print(system.storage.getParticle(1).f)
    espressopp.tools.analyse.info(system, integrator)
    integrator.run(10)
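A minimal Python sketch of the proposed initialized flag (hypothetical: the real fix would live in Potential.hpp, and the class and parameter names here are made up for illustration):

```python
# Hypothetical sketch of the proposed fix, not the actual Potential.hpp.
class SafeLennardJones:
    def __init__(self):
        self.initialized = False  # default: empty potential

    def set_parameters(self, epsilon, sigma):
        self.epsilon, self.sigma = epsilon, sigma
        self.initialized = True

    def compute_force(self, dist_sqr):
        # An empty potential, or overlapping particles (dist_sqr == 0),
        # returns zero force instead of NaN.
        if not self.initialized or dist_sqr == 0.0:
            return 0.0
        sr2 = self.sigma ** 2 / dist_sqr
        sr6 = sr2 ** 3
        # standard LJ force coefficient: 48*eps*[(s/r)^12 - 0.5*(s/r)^6]/r^2
        return 48.0 * self.epsilon * sr6 * (sr6 - 0.5) / dist_sqr
```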

forking and pull requesting

I am just wondering whether there will be some systematic information on why we changed to the new development strategy of forking and pull requesting (the argument "everybody does it this way" does not matter, since we are not an enterprise oriented toward selling a product, are we?). It would actually be very useful to know, among other things, that code reviewing is important, and which pull requests should be assigned to specific developers and which to anyone.

Is there any solid understanding of how everything is supposed to work now, or is it implicitly assumed that non-master developers are aware of such things by default/education/common_sense/Force/magical_power?

Problems when running ConstrainCOM in parallel

I had a try at the new ConstrainCOM feature. However, running in parallel seems to be problematic. With me, the code runs into a deadlock every time I initialize a new FixedLocalTupleListConstrainCOM object in python using multiple nodes.
I ran a short printf-debugging of the c++ constructor and it seems that num_of_subchain has different values on each node before being passed to boost::mpi::all_reduce in line 127 in FixedLocalTupleComListInteractionTemplate.hpp .
@hidekb

v1.9.4 tag is missing

Make a tag on the version from which the 1.9.4 tarball was made and push it to github.

Espresso++ in Spack: report of first experiences

A while ago, I wrote the E++ recipe for the Spack package manager; it has been committed to the main repo at spack/spack#2602 and addresses #130.

Briefly, I wanted to report the first real experience with it, which some people found pretty cool and easy to just run, since there have sometimes been build issues with dependencies.

Once one git clones Spack and puts the bin dir into the PATH to get autocompletion, a couple of configuration lines are enough to be basically good to go and just do:

spack install espressopp

and espressopp will be installed and ALL dependencies will be taken care of.
Notice that you can specify basically ANY compiler version, espressopp version (only recent ones are in; you can check the code), boost, mpi, python, cmake, etc. on the command line, as long as they satisfy the version requirements.
In this case it will install espressopp 1.9.4.1, as it's the newest stable version (one can also use spack install espressopp@develop to install the master branch version), and create module files, which are commonly used in many Linux HPC clusters to provide software to users.
Once one is done with all dependencies and the main package, one can just do

module load espressopp/1.9.4.1-gcc-4.4.7-i4fik4h
and automagically all the right dependencies used for building the version are autoloaded:

Autoloading python/2.7.13-gcc-4.4.7-xv7nrjp
Autoloading bzip2/1.0.6-gcc-4.4.7-lhojphl
Autoloading ncurses/6.0-gcc-4.4.7-omk5gcj
Autoloading zlib/1.2.11-gcc-4.4.7-sjoltdl
Autoloading openssl/1.1.0e-gcc-4.4.7-qj3ldci
Autoloading sqlite/3.8.5-gcc-4.4.7-xdbttkj
Autoloading readline/6.3-gcc-4.4.7-lqzgyaq
Autoloading openmpi/2.0.2-gcc-4.4.7-tfjhl2r
Autoloading hwloc/1.11.5-gcc-4.4.7-nzvbmqd
Autoloading libpciaccess/0.13.4-gcc-4.4.7-pu3ftqd
Autoloading fftw/3.3.5-gcc-4.4.7-hwt4dbn
Autoloading boost/1.63.0-gcc-4.4.7-r64dsvs
Autoloading py-mpi4py/2.0.0-gcc-4.4.7-je5mjdv

Then on the shell to quickly check that the espressopp module is seen:
[padua@boron ~]$ python
Python 2.7.13 (default, Feb 23 2017, 00:48:12)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

import espressopp
Warning: numpy module not available
exit()

I did runs with Lennard-Jones and they ran fine to completion.

P.S.: I didn't know how to label this as there was no "report" label, so I just picked two... ;)

MPI4PY issue on macbook

cmake went normally with -DEXTERNAL_BOOST=OFF -DEXTERNAL_MPI4PY=OFF

Then, at the very end of "make install", I get this:
Linking CXX shared library ../_espressopp.so
[100%] Built target _espressopp
[100%] Building C object contrib/mpi4py/CMakeFiles/MPI.dir/mpi4py-1.3/src/MPI.c.o
Linking C shared library MPI.so
Undefined symbols for architecture x86_64:
"_MPI_Type_create_f90_complex", referenced from:
___pyx_pf_6mpi4py_3MPI_8Datatype_18Create_f90_complex in MPI.c.o
"_MPI_Type_create_f90_integer", referenced from:
___pyx_pf_6mpi4py_3MPI_8Datatype_16Create_f90_integer in MPI.c.o
"_MPI_Type_create_f90_real", referenced from:
___pyx_pf_6mpi4py_3MPI_8Datatype_17Create_f90_real in MPI.c.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [contrib/mpi4py/MPI.so] Error 1
make[1]: *** [contrib/mpi4py/CMakeFiles/MPI.dir/all] Error 2
make: *** [all] Error 2

Is this my laptop's problem, or is something wrong with mpi4py?

import espressopp

Hi,
When I import espressopp in Python, I get the message given below:

import espressopp
--------------------------------------------------------------------------
[[59756,1],0]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: pd

Another transport will be used instead, although this may result in
lower performance.

I want to know whether this is an error/issue or not. My code is working fine; let me know what I have to do.
Our cluster has 1+5 nodes, each with two octa-core processors.
Thank you

src/io classes: append flag ignored in loops?

Hi, I was running some tests for a prototype for writing H5MD and noticed one thing that I'm sure everyone is aware of.
If one wants to save a given configuration only at the very end of the simulation, everything is fine with the flag append set to True.
But if one wants to save a configuration every X timesteps (more than once, let's say twice) and decides to append the various timesteps to the same file, the flag appears to be ignored: one will see only the last timestep, not the previous ones, since those were all overwritten by the last one.
If instead one decides not to append, everything is OK of course, since a new file with another name will be created by the FileBackup() routine.

Is it normal that the append flag is basically "ignored" in the loop case?
What do you expect?

Probably this is well known to the core developers and the daily users, but I don't think this is the correct behavior.

Of course, probably everyone wants one file per timestep, so everyone sets append to false or changes the filename according to the timestep themselves; still, it should be documented that appending does not seem to work when used this way within loops.

This seems to be the case for all the classes (DumpGRO, DumpXYZ and DumpGROAdress).

Has someone else noticed this? Or is this normal?
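To make the expectation concrete, here is a plain-file illustration (ordinary Python file I/O, not the actual DumpXYZ/DumpGRO code) of the semantics one would expect from the append flag: with append=True, every frame written in the loop should survive, not only the last one.

```python
import os
import tempfile

# Plain-file stand-in for a dump routine (illustrative, not espressopp code).
def dump(path, step, append):
    # append=True should add a frame; append=False starts the file over
    with open(path, "a" if append else "w") as f:
        f.write("frame %d\n" % step)

path = os.path.join(tempfile.mkdtemp(), "traj.xyz")
for step in (0, 10, 20):
    dump(path, step, append=True)

with open(path) as f:
    frames = f.read().splitlines()
# with append=True, all three frames are present in the file
```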

SystemMonitor: define measuring frequency per observable

Current state:

  • every observable is computed with the frequency of ExtAnalyze extension
    • data are stored in the csv file

TODO:

  • the frequency should be defined per observable
  • the freq in ExtAnalyze should be a min(observable_1, observable_2..) or SystemMonitor should work independent of ExtAnalyze and connect directly to signal in the integrator.
    • HDF5 output
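A minimal sketch of the per-observable frequency idea (hypothetical Python; the observable names and intervals are illustrative): the extension runs at the smallest interval, and each observable fires only when the step is a multiple of its own interval.

```python
# Illustrative only; not the actual SystemMonitor/ExtAnalyze code.
frequencies = {"temperature": 10, "pressure": 50}

# The extension itself would run at min(frequencies.values()) == 10;
# each observable is computed only on multiples of its own interval.
def due_observables(step):
    return [name for name, f in frequencies.items() if step % f == 0]
```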

Header file including itself?

I might be tired, and this might have a reason to be this way, but why in

src/include/types.hpp, line 33

does the header include itself? Any reasons for that?
In most cases it shouldn't be this way, unless coded properly, with care, and for a serious reason.
Please confirm if this should not happen and I will provide the fix.
Otherwise, I would like to hear the reasons.

Support for changing periodicity of the system

Currently, only fully periodic systems can be simulated within ESPP, and it is not possible to switch off periodicity in the x, y, z directions.
Implementation in other packages:

  • LAMMPS: boundary command
  • Espresso: setmd periodic 1 1 1

The SlabBC class allows a non-periodic system in a given direction; however, this does not influence the DomainDecomposition module.

Migrate deprecated boost signals template

Currently the signals are defined with the templates signal0, signal2, etc. This is required because template support in old compilers does not allow a variable number of arguments.
Here from boost library:

Version 1.40 also introduces a variadic templates implementation of Signals2, which is used when Boost detects compiler support for variadic templates (variadic templates are a new feature of C++11). This change is mostly transparent to the user, however it does introduce a few visible tweaks to the interface as described in the following.

The following library features are deprecated, and are only available if your compiler is NOT using variadic templates (i.e. BOOST_NO_CXX11_VARIADIC_TEMPLATES is defined by Boost.Config).

The "portable syntax" signal and slot classes, i.e. signals2::signal0, signal1, etc.

The arg1_type, arg2_type, etc. member typedefs in the signals2::signal and signals2::slot classes. They are replaced by the template member classes signals2::signal::arg and signals2::slot::arg.

The change will be very easy, e.g. from
boost::signals2::signal2<void, ParticleList&, class OutBuffer&>
to
boost::signals2::signal<void (ParticleList&, class OutBuffer&)>

This change will require a compiler that supports the C++11 standard.

http://www.boost.org/doc/libs/1_61_0/doc/html/signals2.html

Travis building fails

Hi,
I am not sure what is happening but all my travis compilations fail with these two messages:

Coverity Scan analysis selected for branch master.
Coverity Scan API access denied. Check $PROJECT_NAME and $COVERITY_SCAN_TOKEN.

[https://travis-ci.org/niktre/espressopp/jobs/117255851]

What does it want from me? :-)

running ESPP in Garching

Hi,
I've managed to compile ESPP in Garching on their HPC. To do so, I did:

module load git/2.1
module load cmake/3.2
module load fftw/gcc/3.3.4   
module load gcc/4.9

cmake -DEXTERNAL_BOOST=OFF -DEXTERNAL_MPI4PY=OFF -DFFTW3_INCLUDES=$FFTW_HOME/include -DFFTW3_LIBRARIES=$FFTW_HOME/lib/libfftw3.a .

The problem is that when I start the script, upon importing espressopp, I get:

SyntaxError: invalid syntax
Traceback (most recent call last):
  File "start_lb.py", line 3, in <module>
    import espressopp
  File "/ptmp/nikt/parall_espressopp/espressopp/__init__.py", line 65, in <module>
    from espressopp import esutil, bc, storage, integrator, interaction, analysis, tools, standard_system, external, check, io
  File "/ptmp/nikt/parall_espressopp/espressopp/storage/__init__.py", line 26, in <module>
    from espressopp.storage.DomainDecomposition import *
  File "/ptmp/nikt/parall_espressopp/espressopp/storage/DomainDecomposition.py", line 49, in <module>
    from espressopp.tools import decomp
  File "/ptmp/nikt/parall_espressopp/espressopp/tools/__init__.py", line 31, in <module>
    from espressopp.tools.convert import *
  File "/ptmp/nikt/parall_espressopp/espressopp/tools/convert/__init__.py", line 23, in <module>
    import gromacs
  File "/ptmp/nikt/parall_espressopp/espressopp/tools/convert/gromacs.py", line 25, in <module>
    from topology_helper import *
  File "/ptmp/nikt/parall_espressopp/espressopp/tools/convert/topology_helper.py", line 177
    pot = espressopp.interaction.DihedralRB(**{k: v for k, v in self.parameters.iteritems()})

Does anyone have an idea if this syntax in topology_helper.py is fine?

memory leak in fastwritexyz?

I've noticed a memory leak when I use fastwritexyz to dump an MD configuration. For my super-small system of 2000 particles, the memory grows by about 0.4 MB every time I use it.
Can anybody confirm this behaviour?

Python 2 and Python 3. No xrange() function in Python 3

Related to #97: I know that this change was probably made for speed reasons, especially in the case of repeated loops or very many iterations (e.g. for idx in xrange(num_particles)).
But I wanted to point out that this change is only fine as long as we keep running with Python 2.
If we ever plan, in the far future, to run with Python 3, this would need to be changed.
The reason, which many might be familiar with, is that there is no xrange() in Python 3: its place has been taken by range() (Python 2's xrange(), in contrast to Python 2's range(), behaves like Python 3's range()), and more features have been added.
So an almost similar behavior and speed (cum grano salis, because the extra features can add overhead; xrange still looks faster in repeated loops on my system, though for ~10000 iterations the speed difference seems very small) is achieved in Python 3 by just using range().
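A common compatibility shim (not from the espressopp code base) keeps the lazy xrange() behavior on Python 2 while working unchanged on Python 3:

```python
# Compatibility shim: lazy integer ranges on both Python 2 and Python 3.
try:
    range_ = xrange  # Python 2: xrange() is the lazy variant
except NameError:
    range_ = range   # Python 3: range() is already lazy

# Use range_ wherever xrange was used before:
total = 0
for i in range_(10000):
    total += i
# total == sum of 0..9999 == 49995000
```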

Just for reference, I link to the source code for the implementation of the range() functions in Python 2/3; you can see in the Python 3 version there's no mention of xrange.
https://hg.python.org/cpython/file/2.7/Objects/rangeobject.c
https://hg.python.org/cpython/file/3.3/Objects/rangeobject.c

And to these nice posts on SO:
http://stackoverflow.com/questions/94935/what-is-the-difference-between-range-and-xrange-functions-in-python-2-x
http://stackoverflow.com/questions/15014310/why-is-there-no-xrange-function-in-python3

E++ package for Spack package manager for HPC

Hi all,

yesterday I wrote the E++ package recipe for the Spack package manager for HPC (https://github.com/LLNL/spack).
Actually, I'm almost done: I can build on OSX and Linux (Debian), but I still want to test it when other options (like docs, or specific versions of the libraries and compilers) are requested by the user, and to try to run one example; I still have to understand a couple of things.

I was thinking about getting it into the main tree. Thoughts?
Otherwise it will be available from my fork.

The not-yet-updated version of what I put together yesterday can be found here:
https://github.com/fedepad/spack/tree/features/espressopp

but it still needs some work.
Since many scientific (and other) packages are there or are getting there, I think it would be a nice addition.

Problem with running espressopp.tools.vmd

Hello,
when I run the ESPResSo++ command
espressopp.tools.vmd.connect(system)
I get the error given below:
The new particle position is:
Traceback (most recent call last):
File "espresso.py", line 50, in <module>
espressopp.tools.vmd.connect(system)
File "/export/apps/espressopp/espressopp-1.9.4.1/espressopp/tools/vmd.py", line 144, in connect
subprocess.Popen([vmd_path, '-e', 'vmd.tcl'])
File "/share/apps/python-2.7/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/share/apps/python-2.7/lib/python2.7/subprocess.py", line 1024, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory

Does it need a clean VMD install?
What do I have to do? Can anyone please help?
Thanks
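For what it's worth, OSError: [Errno 2] from subprocess.Popen usually means the executable (here, vmd) could not be found, so a working VMD install visible on PATH is indeed needed. A defensive sketch that reports this clearly (using Python 3's shutil.which; the vmd_path and script names are assumptions based on the traceback, not the actual ESPResSo++ code):

```python
import shutil
import subprocess

def launch_vmd(vmd_path="vmd", script="vmd.tcl"):
    # shutil.which returns None when the executable is not on PATH;
    # without this check, Popen raises OSError: [Errno 2].
    resolved = shutil.which(vmd_path)
    if resolved is None:
        raise RuntimeError("executable %r not found on PATH" % vmd_path)
    return subprocess.Popen([resolved, "-e", script])

# A missing binary resolves to None instead of a bare OSError:
print(shutil.which("surely-not-a-real-binary"))  # -> None
```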

Bonded interactions between CG particles

In AdResS, we have particles on CG and AT level. FixedTupleListAdress maps from the CG particle to the AT particles it contains. Currently, we can have bonds between two CG particles using FixedPairList, and bonds between two AT particles in the same CG particle using FixedPairListAdress. But we can't have bonds between two AT particles in two different CG particles. Same remarks apply for Triples and Quadruples.

I'm working on some new Fixed Lists now to solve this. It needs some changes to Storage/DomainDecomposition to be able to look up ghost AT particles.

(@niktre This is for the roadmap!)

Drop some coverage builds

It seems ccache doesn't play well with coverage, and hence these builds take 26 min instead of the usual 7 min.

We are currently doing 8 builds (2 compilers × 2 MPI variants × 2 distributions), but 2 builds might be enough, as MPI and distribution won't change the coverage rate anyhow.

codecov in docker

@junghans
I've found in the codecov docs that it is possible to run tests inside a Docker container (see here). Could you explain the difference from our current codecov strategy?

Regarding espresso++on nodes

Hi,
I want to know how to run/compile/configure the ESPResSo++ code on compute nodes; my code works well on the master node. Do I have to install everything related to ESPResSo++ (the same as on the master node), or what else do I have to do? Please let me know.
Thank you

Build broken with gcc-5.1

FAILED: /usr/lib64/ccache/bin/x86_64-pc-linux-gnu-g++  -D_espressopp_EXPORTS  -DNDEBUG -O2 -pipe -march=native -fomit-frame-pointer  -fPIC -I/usr/include/python2.7 -I/var/tmp/portage/sci-phy
sics/espresso++-9999/work/espresso++-9999/src -I/usr/lib64/python2.7/site-packages/mpi4py/include -Isrc -I/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/include -MMD -
MT src/CMakeFiles/_espressopp.dir/storage/bindings.cpp.o -MF "src/CMakeFiles/_espressopp.dir/storage/bindings.cpp.o.d" -o src/CMakeFiles/_espressopp.dir/storage/bindings.cpp.o -c /var/tmp/po
rtage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/bindings.cpp
In file included from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/Storage.hpp:34:0,
                 from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/DomainDecomposition.hpp:26,
                 from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/DomainDecompositionNonBlocking.hpp:26,
                 from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/bindings.cpp:23:
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/FixedTupleListAdress.hpp:60:30: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
             boost::signals2::signal2 <void, std::vector<longint>&, class OutBuffer&>
                              ^
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/FixedTupleListAdress.hpp:62:30: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
             boost::signals2::signal2 <void, ParticleList&, class InBuffer&>
                              ^
In file included from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/DomainDecomposition.hpp:26:0,
                 from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/DomainDecompositionNonBlocking.hpp:26,
                 from /var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/bindings.cpp:23:
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/Storage.hpp:235:24: error: ‘signal0’ in namespace ‘boost::signals2’ does not name a template type
       boost::signals2::signal0 <void> onParticlesChanged;
                        ^
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/Storage.hpp:236:24: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
       boost::signals2::signal2 <void, ParticleList&, class OutBuffer&> 
                        ^
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/Storage.hpp:238:24: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
       boost::signals2::signal2 <void, ParticleList&, class InBuffer&> 
                        ^
/var/tmp/portage/sci-physics/espresso++-9999/work/espresso++-9999/src/storage/Storage.hpp:244:24: error: ‘signal0’ in namespace ‘boost::signals2’ does not name a template type
       boost::signals2::signal0 <void> onTuplesChanged;

For the whole log see: https://gist.github.com/b9e7968cf51fa75e6410

TotalVelocity class in analysis

I think it is confusing that the TotalVelocity class computes the center-of-mass velocity and not a net velocity. My suggestions:

  1. Rename TotalVelocity to CMVelocity (CM for center-of-mass) or to SystemVelocity.

  2. The compute method should return Real3D and not void.
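For reference, under either name the quantity computed is the mass-weighted average velocity, v_cm = Σ m_i v_i / Σ m_i. A plain-Python sketch, independent of the actual ESPResSo++ API (a plain tuple stands in for Real3D here):

```python
def cm_velocity(masses, velocities):
    """Center-of-mass velocity: sum(m_i * v_i) / sum(m_i).

    velocities are 3-component sequences; returns a plain
    3-tuple in place of Real3D.
    """
    total_mass = float(sum(masses))
    vcm = [0.0, 0.0, 0.0]
    for m, v in zip(masses, velocities):
        for k in range(3):
            vcm[k] += m * v[k]
    return tuple(c / total_mass for c in vcm)

# Two particles with opposite velocities but unequal masses:
print(cm_velocity([1.0, 3.0], [(2.0, 0.0, 0.0), (-2.0, 0.0, 0.0)]))
# -> (-1.0, 0.0, 0.0)
```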

ERROR on Ubuntu 16.10

Dear all,
When installing ESPResSo++ on Ubuntu 16.10, an error happens in the "make" step; the "cmake ." step is OK:
-- Found MPI_C: /usr/lib/openmpi/lib/libmpi.so
-- Found MPI_CXX: /usr/lib/openmpi/lib/libmpi_cxx.so;/usr/lib/openmpi/lib/libmpi.so
-- Found FFTW3: /usr/lib/x86_64-linux-gnu/libfftw3.so
-- Found PythonInterp: /usr/bin/python2 (found suitable version "2.7.12", minimum required is "2")
-- PYTHON_INCLUDE_PATH = /usr/include/python2.7
-- PYTHON_VERSION = 2.7
-- PYTHON_LIBDIR = /usr/lib
-- PYTHON_LIBRARIES = /usr/lib/x86_64-linux-gnu/libpython2.7.so
-- Looking for 3 include files sys/time.h, ..., unistd.h
-- Looking for 3 include files sys/time.h, ..., unistd.h - found
-- Boost version: 1.61.0
-- Found the following Boost libraries:
-- mpi
-- python
-- serialization
-- system
-- filesystem
-- mpi4py version: 2.0.0
-- Found MPI4PY: /usr/local/lib/python2.7/dist-packages/mpi4py/MPI.so (found suitable version "2.0.0", minimum required is "2.0")
CMake Warning at CMakeLists.txt:211 (message):
Not building system documentation because Doxygen not found.

CMake Warning at CMakeLists.txt:246 (message):
Not building user documentation because Doxygen not found.

-- Configuring done
-- Generating done
-- Build files have been written to: /data/interpoco

ERROR message:
Scanning dependencies of target symlink
[ 0%] Built target symlink
-- Found Git: /usr/bin/git (found version "2.9.3")
Scanning dependencies of target Array4DTest
[ 0%] Building CXX object testsuite/Array4DTest/CMakeFiles/Array4DTest.dir/Array4DTest.cpp.o
Current revision is a0e5cee
[ 0%] Built target gitversion
Scanning dependencies of target _espressopp
[ 1%] Building CXX object src/CMakeFiles/_espressopp.dir/FixedListComm.cpp.o
In file included from /data/interpoco/src/storage/Storage.hpp:34:0,
from /data/interpoco/src/FixedListComm.cpp:23:
/data/interpoco/src/FixedTupleListAdress.hpp:60:30: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal2 <void, std::vector<longint>&, class OutBuffer&>
^~~~~~~
/data/interpoco/src/FixedTupleListAdress.hpp:62:30: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal2 <void, ParticleList&, class InBuffer&>
^~~~~~~
In file included from /data/interpoco/src/FixedListComm.cpp:23:0:
/data/interpoco/src/storage/Storage.hpp:235:24: error: ‘signal0’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal0 <void> onParticlesChanged;
^~~~~~~
/data/interpoco/src/storage/Storage.hpp:236:24: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal2 <void, ParticleList&, class OutBuffer&>
^~~~~~~
/data/interpoco/src/storage/Storage.hpp:238:24: error: ‘signal2’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal2 <void, ParticleList&, class InBuffer&>
^~~~~~~
/data/interpoco/src/storage/Storage.hpp:244:24: error: ‘signal0’ in namespace ‘boost::signals2’ does not name a template type
boost::signals2::signal0 <void> onTuplesChanged;
^~~~~~~
/data/interpoco/src/FixedListComm.cpp: In constructor ‘espressopp::FixedListComm::FixedListComm(boost::shared_ptr<espressopp::storage::Storage>)’:
/data/interpoco/src/FixedListComm.cpp:40:25: error: ‘class espressopp::storage::Storage’ has no member named ‘beforeSendParticles’; did you mean ‘removeAllParticles’?
con1 = storage->beforeSendParticles.connect
^~~~~~~~~~~~~~~~~~~
/data/interpoco/src/FixedListComm.cpp:42:25: error: ‘class espressopp::storage::Storage’ has no member named ‘afterRecvParticles’; did you mean ‘recvParticles’?
con2 = storage->afterRecvParticles.connect
^~~~~~~~~~~~~~~~~~
/data/interpoco/src/FixedListComm.cpp:44:25: error: ‘class espressopp::storage::Storage’ has no member named ‘onParticlesChanged’; did you mean ‘IdParticleMap’?
con3 = storage->onParticlesChanged.connect
^~~~~~~~~~~~~~~~~~
src/CMakeFiles/_espressopp.dir/build.make:62: recipe for target 'src/CMakeFiles/_espressopp.dir/FixedListComm.cpp.o' failed
make[2]: *** [src/CMakeFiles/_espressopp.dir/FixedListComm.cpp.o] Error 1
CMakeFiles/Makefile2:123: recipe for target 'src/CMakeFiles/_espressopp.dir/all' failed
make[1]: *** [src/CMakeFiles/_espressopp.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 1%] Linking CXX executable Array4DTest
[ 1%] Built target Array4DTest
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

How can I deal with it?
Thank you.
Best regards,
Zidan

Issues with different combinations of boost, gcc, apple clang, etc

Hi! I wasn't sure whether to re-open #8 or start a new issue.
I have the pleasure of using a MacBook and therefore have additional fun compiling ESPResSo++, but this is interesting NOT only for Mac users:
Boost: external 1.59.0
MPI4PY: internal
c / c++: from Apple (clang 7.0.0)
c / c++ flags: -std=c99 / -std=c++03

The error I get when compiling:
Linking CXX shared library ../_espressopp.so
Undefined symbols for architecture x86_64:
"boost::system::system_category()", referenced from:
__GLOBAL__sub_I_bindings.cpp in bindings.cpp.o
__GLOBAL__sub_I_DumpGRO.cpp in DumpGRO.cpp.o
__GLOBAL__sub_I_DumpGROAdress.cpp in DumpGROAdress.cpp.o
__GLOBAL__sub_I_DumpXYZ.cpp in DumpXYZ.cpp.o
"boost::system::generic_category()", referenced from:
__GLOBAL__sub_I_bindings.cpp in bindings.cpp.o
__GLOBAL__sub_I_DumpGRO.cpp in DumpGRO.cpp.o
__GLOBAL__sub_I_DumpGROAdress.cpp in DumpGROAdress.cpp.o
__GLOBAL__sub_I_DumpXYZ.cpp in DumpXYZ.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [_espressopp.so] Error 1
make[1]: *** [src/CMakeFiles/_espressopp.dir/all] Error 2
make: *** [all] Error 2

If I switch to internal Boost, everything is fine!

Some tests take too long

     Start  4: pi_water
 4/24 Test  #4: pi_water .........................   Passed  228.97 sec
      Start  5: polymer_melt_tabulated
 5/24 Test  #5: polymer_melt_tabulated ...........   Passed  107.37 sec

Tests should take a second or so.

Deprecating usage of python routines: espressopp.tools.writexyz() and espressopp.tools.fastwritexyz()

Dear all,

around a couple of months ago I ran benchmarks of LJ and PM simulations on an HPC cluster at different scales, using around 2.56M particles. Attached below are two images, which refer to the PM simulation with 2.56M particles.

What I've seen is that dumping (on the GPFS file system used) with routines implemented only at the Python level is slower (as expected) than in the case where both the C++ core and the Python interface are involved.
What was really odd was seeing espressopp.tools.writexyz() take hours to dump the file, as you can see in the graph
(and this is only one time step, with the configuration saved only at the end of the simulation).

For this reason I suggest warning about and deprecating the usage of:

espressopp.tools.writexyz() and maybe also espressopp.tools.fastwritexyz()

since that time, during which the computation doesn't go forward, is idle or inefficient. Multiply that time by the number of timesteps you want to save and you see the problem.
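One lightweight way such a deprecation could be implemented (a sketch only; the decorator and the writexyz stub are illustrative assumptions, not the actual ESPResSo++ code) is with Python's warnings module:

```python
import functools
import warnings

def deprecated(reason):
    """Emit a DeprecationWarning whenever the wrapped function is called."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated: %s" % (func.__name__, reason),
                          DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@deprecated("pure-Python dumping is very slow; prefer a C++-backed writer")
def writexyz(filename):
    # Stub standing in for the real writer.
    return filename

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    writexyz("conf.xyz")
print(caught[0].category.__name__)  # -> DeprecationWarning
```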

Also attached is a figure showing the order of magnitude of some of the built-in routines for writing in ESPResSo++ on that cluster (the run is on one node, 64 cores). Take the numbers as a good representation, not as final, but they are still very interesting considering the orders of magnitude involved.

(Figure: espressopp_tests_pythonwritexyz)

(Figure: cpp_and_pythonlevelroutines)

Readme file

Isn't it confusing that in #143 the # sign stays before the opening bracket (why do we need brackets there?) and not before "cores"?

http://www.espresso-pp.de/ does not work

The web page does not work (404). Also, ping on espresso-pp.de returns ping: unknown host espresso-pp.de.

Perhaps the manual could be hosted on https://readthedocs.org/ and the home page on github.io?

Hard-coded timer array size causes random seg fault

In VelocityVerlet.hpp, the array timeForceComp is hard-coded to a size of 100. If a system has more interactions than the size of timeForceComp, random segmentation faults will occur.

Temporary hack solution: increase the size of the array so it's less likely that the problem will occur.

Long-term solution: apparently @govarguz has a new and better timer setup that will someday be integrated into VelocityVerlet class.

The timers in VelocityVerlet are hard-coded for the bead-spring system, containing only LJ+FENE+angle interactions, so they need to be rewritten someday anyway.

@stuehn

See PR #155
