
openmm / openmm

OpenMM is a toolkit for molecular simulation using high performance GPU code.

CMake 0.73% Shell 0.12% Python 9.44% HTML 0.01% Makefile 0.05% TeX 0.11% C++ 69.84% C 11.31% Cuda 0.90% Batchfile 0.10% PowerShell 0.01% Rich Text Format 7.04% SWIG 0.24% Jinja 0.01% CSS 0.02% Cython 0.07%
Topics: simulation, molecular-dynamics

openmm's Introduction


OpenMM: A High Performance Molecular Dynamics Library

Introduction

OpenMM is a toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations, or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it truly unique among simulation codes.

Getting Help

Need Help? Check out the documentation and discussion forums.

openmm's People

Contributors

andysim, bdenhollander, chayast, craabreu, frabjous5, jaimergp, jchodera, jing-huang, jlmaccal, joaorodrigues, kyleabeauchamp, leeping, leucinw, mark-mb, mikemhenry, mj-harvey, mjschnie, olllom, peastman, philipturner, proteneer, rafwiewiora, rmcgibbo, saurabhbelsare, sherm1, smikes, sunhwan, swails, thtrummer, z-gong


openmm's Issues

Fixing problems in PDB files

I'm starting to work on a tool for fixing problems in PDB files. The goal is to have something that can load a file downloaded straight from RCSB, fix any problems with it in a completely automated way, and produce a new file that's ready to load into a simulation.

I need to identify all the types of problems we should look for, and figure out how to deal with each one. Here's what I've thought of so far:

  • Nonstandard atom or residue names. We already deal with this pretty well. Also, this mainly comes up in files created by other programs, not ones downloaded from RCSB.
  • Missing hydrogen atoms. We can already deal with this.
  • Missing heavy atoms (usually the ends of flexible sidechains). I think this should be pretty easy to handle.
  • Missing residues (usually in flexible loops). This is a bigger challenge, but I'll see what I can do.
  • Residues or molecules for which we don't have force field parameters. This is beyond the scope of what I have in mind, and this isn't really a problem with the PDB file. It just means you need to find force field parameters for the thing you want to simulate.

What else should I look for?
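For reference, the missing-hydrogen case can already be handled through the Modeller class. A minimal sketch (file names here are placeholders; it assumes the standard amber99sb/tip3p force field files shipped with OpenMM):

import simtk.openmm.app as app

# Load a structure and a force field that defines hydrogen templates.
pdb = app.PDBFile('input.pdb')
forcefield = app.ForceField('amber99sb.xml', 'tip3p.xml')

# Add any missing hydrogens; the pH influences protonation-state choices.
modeller = app.Modeller(pdb.topology, pdb.positions)
modeller.addHydrogens(forcefield, pH=7.0)

# Write the repaired structure back out.
app.PDBFile.writeFile(modeller.topology, modeller.positions, open('with_hydrogens.pdb', 'w'))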

Better interoperability with AMBER

Currently, OpenMM can create System objects by reading AMBER prmtop/inpcrd or parameterizing proteins with AMBER forcefields from PDB files, but going the other direction (writing AMBER prmtop/inpcrd files or writing PDB files that AmberTools LEaP can read) is not possible. I am opening this issue to discuss the best way to support bidirectional exchange of systems with AMBER and AmberTools.

One simple way to allow systems set up in OpenMM to be imported into AMBER would be to ensure that OpenMM can write PDB files that AMBER can read. The current PDBFile scheme actually does support residue and atom name translation tables, but hard-codes a standard PDB output schema. This schema eliminates the protonation-state-sensitive residue naming and uses hydrogen atom names unknown to AMBER. Perhaps the user could request that the schema from the AMBER forcefield XML files be used instead? These PDB files could then be read into LEaP and reparameterized with the same forcefield. One may also have to be cautious of atom ordering: LEaP may be insensitive to this, but there might be good reasons one wants the atom ordering within residues to be the same as in AMBER.

Allowing PDBFile objects to be created from OpenMM Topology objects and position arrays (or State objects) would also be extremely useful. Currently, the only way to create a PDBFile object is to read a PDB file in the constructor.

Another means of interoperability, in addition to writing PDB files AMBER can read, would be to extend the PrmtopFile and InpcrdFile classes to allow the writing of prmtop/inpcrd files, and the creation of PrmtopFile and InpcrdFile objects from Topology and System objects (for prmtop) and State objects or position/velocity arrays (for inpcrd).

What do you think? Does this sound like a reasonable way to extend functionality?
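A rough sketch of the Topology/positions case I have in mind, assuming a static PDBFile.writeFile(topology, positions, file) helper and an existing Simulation object (names here are illustrative):

import simtk.openmm.app as app

# Starting from an existing Topology and a set of positions (e.g. from a State)...
state = simulation.context.getState(getPositions=True)
positions = state.getPositions()

# ...write a PDB file directly, without constructing a PDBFile from a file on disk.
with open('for_leap.pdb', 'w') as out:
    app.PDBFile.writeFile(simulation.topology, positions, out)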

Truncated octahedron

I had a discussion with Leeping about this. Tinker can use a truncated octahedron box, which gives roughly a 2x speedup (since the volume of the truncated octahedron is half that of the smallest enclosing cube). It actually wouldn't be too hard to implement from a purely algorithmic point of view. We wouldn't be able to support anisotropic barostats (but otherwise isotropic NPT, NVT, and NVE should still all be valid), and this wouldn't be ideal for simulations that are intrinsically rectangular (e.g. membrane simulations).

The hairy part is probably adding the API so that it's supported in all of our kernels.
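To make the geometry concrete, here is a sketch of the reduced (triclinic) box vectors that describe a truncated octahedron, in the same form GROMACS uses. It assumes a build whose System accepts general triclinic vectors through setDefaultPeriodicBoxVectors; values are in nanometers, OpenMM's default length unit.

import math
import simtk.openmm as mm

d = 5.0  # image distance in nanometers (placeholder value)

# Reduced triclinic representation of a truncated octahedron (GROMACS-style):
# all three vectors have length d, and the box volume is (4*sqrt(3)/9)*d^3.
a = mm.Vec3(d, 0.0, 0.0)
b = mm.Vec3(d / 3.0, 2.0 * math.sqrt(2.0) / 3.0 * d, 0.0)
c = mm.Vec3(-d / 3.0, math.sqrt(2.0) / 3.0 * d, math.sqrt(6.0) / 3.0 * d)

system = mm.System()
system.setDefaultPeriodicBoxVectors(a, b, c)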

Nonbonded force between atom groups

This is a commonly requested feature, so I'd like to open a discussion on how it should work. The idea is to create a nonbonded force that, instead of letting every atom interact with every other atom, only considers the interaction between the atoms in one group and the atoms in another group. There are lots of uses for this. Here are some examples:

  • For analysis. For example, you might want to compute the interaction energy between solute and solvent, but exclude all interactions between two solute atoms or two solvent atoms.
  • For efficiency. Suppose you're running a simulation where everything is held fixed except a small number of atoms around the active site. Most of the interactions (anything between two immobile atoms) will never change, so recomputing them every time step is inefficient. It might be faster if you could compute only the interactions that involve at least one atom that can move.
  • Certain types of restraints, such as for NMR refinement, take this form. You have two small clusters of atoms, and you want to compute an interaction between every atom in the first cluster and every atom in the second cluster.

What other uses would it have?

An obvious solution is to create a version of CustomNonbondedForce that lets you specify pairs of atom groups. For each pair of groups you specified, the interaction would be computed between every atom in the first group and every atom in the second group. (But if an atom appeared in both groups, presumably the self interaction should be skipped?) This could be a new custom force, or just a new feature added to CustomNonbondedForce so it could work in either mode (all pair interactions, or only interactions between groups).
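As a concrete sketch of the second option, assuming a hypothetical addInteractionGroup(set1, set2) method on CustomNonbondedForce (not part of the current API) and an existing System; the index ranges are illustrative:

import simtk.openmm as mm

# Lennard-Jones-style interaction evaluated only between two groups of atoms.
force = mm.CustomNonbondedForce('4*epsilon*((sigma/r)^12-(sigma/r)^6)')
force.addGlobalParameter('epsilon', 1.0)
force.addGlobalParameter('sigma', 0.3)
for i in range(system.getNumParticles()):
    force.addParticle([])

solute = set(range(0, 100))                                 # illustrative index sets
solvent = set(range(100, system.getNumParticles()))
force.addInteractionGroup(solute, solvent)                  # only solute-solvent pairs are computed

system.addForce(force)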

Warn if unused flags are present.

So currently, users can specify arbitrary flags when creating a system:

system = forcefield.createSystem(pdb.topology, stuff=1.0, typo=10.0)

Perhaps we should print a warning (or raise an exception) if there are flags that are unused? I think this might help users identify issues that arise from misnamed arguments. I suspect this might be harder to implement than I imagine, as we have to somehow keep track of which flags eventually get used.

cc @tjlane
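A minimal sketch of the kind of check I mean (illustrative only; it wraps createSystem rather than modifying it, and the list of accepted names is abbreviated):

import warnings

_KNOWN_ARGS = set(['nonbondedMethod', 'nonbondedCutoff', 'constraints', 'rigidWater',
                   'removeCMMotion', 'ewaldErrorTolerance'])  # abbreviated

def create_system_checked(forcefield, topology, **kwargs):
    """Warn about keyword arguments that createSystem would silently ignore."""
    unknown = set(kwargs) - _KNOWN_ARGS
    if unknown:
        warnings.warn('Unrecognized createSystem arguments: %s' % ', '.join(sorted(unknown)))
    return forcefield.createSystem(topology, **kwargs)

# create_system_checked(forcefield, pdb.topology, stuff=1.0, typo=10.0)  # -> warning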

add removeAtom, removeResidue, removeChain, removeBond to app.Topology

Would other people use these methods? I find myself needing to implement these methods right now for my project -- if this is functionality that other people would use, it might be good to put it into the python app.

@tjlane: you were implementing this for your evaporated water droplets too, if I remember correctly?
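Until something like removeAtom exists, the workaround I have in mind is to rebuild a new Topology that simply skips the unwanted atoms. A rough sketch (the Topology methods used here already exist; the helper itself is illustrative and leaves empty residues in place):

import simtk.openmm.app as app

def topology_without_atoms(topology, atoms_to_remove):
    """Return a copy of 'topology' that omits the given set of Atom objects."""
    new_top = app.Topology()
    atom_map = {}
    for chain in topology.chains():
        new_chain = new_top.addChain()
        for residue in chain.residues():
            new_residue = new_top.addResidue(residue.name, new_chain)
            for atom in residue.atoms():
                if atom not in atoms_to_remove:
                    atom_map[atom] = new_top.addAtom(atom.name, atom.element, new_residue)
    # Keep only bonds whose endpoints both survived.
    for a1, a2 in topology.bonds():
        if a1 in atom_map and a2 in atom_map:
            new_top.addBond(atom_map[a1], atom_map[a2])
    return new_top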

Implement AMD as CustomForce

For some applications (namely Hamiltonian Replica Exchange), it would be nice to have an AMD implementation that operates as a separate force on the system.

The advantage of this approach is that you have a nice way to include / exclude the AMD force. Furthermore, it makes it easier to monitor the AMD force using the existing Reporters.

PS: I think this is a low priority. It could even be something we hand off to someone else to implement.

Constraints involving massless particles

Dear all,

I would like to constrain an entire molecule to a fixed position in space during an MD simulation. However, if that molecule is rigid, the default approach of setting the atoms' masses to zero yields an exception (see the attached example).

Is it possible to implement a check such that all particles within a constraint group may have zero mass?

Thank you all in advance!

Christoph

Example:

import simtk.openmm.app as app
import simtk.openmm as openmm
import simtk.unit as unit

temperature = 300.0 * unit.kelvin
timestep = 2.0 * unit.femtosecond
collision = 1.0 / unit.picosecond
tolerance = 1.0e-8

pdb = app.PDBFile( 'tip3p.pdb' ) # taken from ...python2.7/site-packages/simtk/openmm/app/data/
ff = app.ForceField( 'tip3p.xml' )
system = ff.createSystem( pdb.getTopology(), nonbondedMethod=app.PME, nonbondedCutoff=9*unit.angstrom )
integrator = openmm.LangevinIntegrator( temperature, collision, timestep )
integrator.setConstraintTolerance( tolerance )

### set the masses of the first water molecule to zero
system.setParticleMass( 0, 0.0 )
system.setParticleMass( 1, 0.0 )
system.setParticleMass( 2, 0.0 )

context = openmm.Context( system, integrator )
context.setPositions( pdb.getPositions() )

### enforce the constraints
context.applyConstraints( tolerance )

Output:

Traceback (most recent call last):
  File "constraint-test.py", line 25, in <module>
    context = openmm.Context( system, integrator )
  File "/home/mi/schiffm/local/opt/python/openmm-git/lib/python2.7/site-packages/simtk/openmm/openmm.py", line 3505, in __init__
    this = _openmm.new_Context(*args)
Exception: A constraint cannot involve a massless particle

Atom subsets for Reporters

For a lot of situations, it might be nice to be able to report only a subset of atoms. For example, I might want protein coordinates at a higher frequency than solvent coordinates.

To achieve this, it might be nice for reporters to take a list of atoms to select during output.

PS: @rmcgibbo and I have a lot of Python trajectory code here: https://github.com/rmcgibbo/mdtraj This could help you guys avoid re-writing parsers that we already have.
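A rough sketch of what a subset-aware reporter could look like, using the standard reporter interface (describeNextReport/report); the plain-text output format here is just a placeholder:

import simtk.unit as unit

class SubsetCoordinateReporter(object):
    """Write the coordinates of a selected set of atoms at a fixed interval."""

    def __init__(self, filename, reportInterval, atomIndices):
        self._out = open(filename, 'w')
        self._interval = reportInterval
        self._indices = list(atomIndices)

    def describeNextReport(self, simulation):
        steps = self._interval - simulation.currentStep % self._interval
        # (steps until next report, positions, velocities, forces, energies)
        return (steps, True, False, False, False)

    def report(self, simulation, state):
        positions = state.getPositions()
        for i in self._indices:
            x, y, z = positions[i].value_in_unit(unit.nanometer)
            self._out.write('%d %.4f %.4f %.4f\n' % (i, x, y, z))
        self._out.write('END\n')
        self._out.flush()

# simulation.reporters.append(SubsetCoordinateReporter('protein.txt', 100, protein_indices))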

TestCpuPme: Illegal instruction (core dumped)

So this is on my Ubuntu 12.04 machine.

Via print statements, I've found that this is happening here:

    double energy = pme.finishComputation(io);

I'm currently using the package repository version of FFTW. I wonder if the issue could be on the FFTW side of things. Not sure at this point.

Parameter names are shared across custom force instances

I just ran into the following problem, which took me a long time to track down.

I added two different instances of CustomExternalForce to my system, each with a different energy expression. However, these two forces both had a global parameter with the same name, in this case k. This led to a huge amount of confusion, as the value of k set for the first force is used in both forces, due to the string substitution magic in the internals of this class.

This is very surprising behavior. It is, of course, easy to work around, but it might be worth mangling the parameter names in some way or at least documenting this behavior clearly (unless it is already, in which case I should RTFM).

I don't know if this behavior is also present in the other custom force classes.
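A minimal reproduction of the setup that triggers the behavior described above (illustrative; a single-particle system):

import simtk.openmm as mm

system = mm.System()
system.addParticle(1.0)

# Two separate restraint forces, each defining a global parameter named 'k'.
force1 = mm.CustomExternalForce('0.5*k*(x-1)^2')
force1.addGlobalParameter('k', 10.0)
force1.addParticle(0, [])

force2 = mm.CustomExternalForce('0.5*k*(y-1)^2')
force2.addGlobalParameter('k', 1000.0)
force2.addParticle(0, [])

system.addForce(force1)
system.addForce(force2)

# Because global parameters live in the Context by name, both expressions end up
# reading the same 'k'; the per-force defaults above do not stay independent.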

Is CUDA 5.5 supported on Mac OS X yet?

When compiling on Mac OS X 10.8.4 (using clang) with CUDA 5.5, the build terminates with the following error:

[ 32%] Building CXX object platforms/cuda/sharedTarget/CMakeFiles/OpenMMCUDA.dir/__/src/CudaKernelSources.cpp.o
Linking CXX shared library ../../../libOpenMMCUDA.dylib
ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libOpenMMCUDA.dylib] Error 1
make[1]: *** [platforms/cuda/sharedTarget/CMakeFiles/OpenMMCUDA.dir/all] Error 2
make: *** [all] Error 2

nvcc info:

[LSKI1497:~/code/openmm-git/openmm] choderaj% nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2012 NVIDIA Corporation
Built on Thu_May_16_15:24:25_PDT_2013
Cuda compilation tools, release 5.5, V5.5.0

OpenMM CUDA on Blue Waters produces moderate force errors

I just installed OpenMM on Blue Waters, and the testInstallation.py script produced the following output:

leeping@nid10319:~/src/OpenMM/examples$ python testInstallation.py 
[unset]:_pmi_alps_get_apid:alps_app_lli_put_request failed
[unset]:_pmi_alps_get_appLayout:pmi_alps_get_apid returned with error: Bad file descriptor
There are 2 Platforms available:

1 Reference - Successfully computed forces
2 CUDA - Successfully computed forces

Median difference in forces between platforms:

Reference vs. CUDA: 0.237024

I'm not sure about the warnings or the force differences. It seems like a difference of 0.237 is a nontrivial amount. What do you think?

Compared to the same version running on my local box:

leeping@not0rious:~/src/OpenMM/examples$ python testInstallation.py 
There are 3 Platforms available:

1 Reference - Successfully computed forces
2 OpenCL - Successfully computed forces
3 CUDA - Successfully computed forces

Median difference in forces between platforms:

Reference vs. OpenCL: 2.18784e-05
Reference vs. CUDA: 2.1868e-05
OpenCL vs. CUDA: 5.30477e-07

I also tested the AMOEBA plugin and double precision. On Blue Waters, it always produces an error on the order of 0.1. Have you seen something like this before?

mm.X_swigregister

In the python modules, there are lots of entries like the following:

import simtk.openmm as mm
mm.AmoebaBondForce_swigregister

Is there some way to hide these to simplify the namespace?

Setting precision from python when platform is determined at runtime

Currently, it's kind of tricky to set the precision when the platform is determined at runtime. If you call the Simulation constructor without the platform supplied, no platform properties are parsed.

So, the only way from python to do this is to call getSpeed on all of the platforms, e.g.

[...]
precision = 'mixed'

fastestPlatform, fastestPlatformSpeed = None, -1
for i in range(mm.Platform.getNumPlatforms()):
    p = mm.Platform.getPlatform(i)
    s = p.getSpeed()
    if s > fastestPlatformSpeed:
        fastestPlatform = p
        fastestPlatformSpeed = s

if fastestPlatform.getName() == 'Reference' and precision != 'double':
    raise ValueError('Reference platform always does calculations in double.')
properties = {'%sPrecision' % fastestPlatform.getName().title(): precision}

simulation = Simulation(topology, system, integrator, fastestPlatform, properties)
[...]

Is there a cleaner way? Perhaps the simulation constructor could take a new precision keyword arg?

Barostat

When the RPMD integrator and the MC barostat are used simultaneously, the simulation blows up. It would be great if this problem could be fixed. Thanks

Sincerely,
Frank

Creation of multiple CUDA Contexts can lead to crash

In my MD script, sometimes I need to run a short sequence of simulations on different systems. Specifically, the first one is a condensed phase simulation, and the second one is a gas phase simulation (just a single molecule). This requires creating multiple Contexts, and I found that creation of the second Context on the CUDA platform can cause the script to crash.

I have seen this problem on a few machines, but not all; the latest such error occurred on Blue Waters. So far I have been able to work around the problem by using the reference platform for the gas phase simulation, but it is probably worth getting to the bottom of this.

My script is here: https://simtk.org/websvn/wsvn/forcebalance/src/data/npt.py

Traceback (most recent call last):
  File "npt.py", line 1200, in <module>
    main()
  File "npt.py", line 976, in main
    mData, mXyzs, _trash, _crap, mPotentials, mKinetics, _nah, _dontneed, mSim, mEDA = run_simulation(mpdb, mSettings, pbc=False, Trajectory=False)
  File "npt.py", line 461, in run_simulation
    simulation, system = create_simulation_object(pdb, settings, pbc, "mixed")
  File "npt.py", line 444, in create_simulation_object
    simulation = Simulation(mod.topology, system, integrator, platform)
  File "/u/sciteam/leeping/local/lib/python2.7/site-packages/simtk/openmm/app/simulation.py", line 77, in __init__
    self.context = mm.Context(system, integrator, platform)
  File "/u/sciteam/leeping/local/lib/python2.7/site-packages/simtk/openmm/openmm.py", line 10903, in __init__
    this = _openmm.new_Context(*args)
Exception: Error initializing Context: CUDA_ERROR_INVALID_DEVICE (101) at /u/sciteam/leeping/src/OpenMM/platforms/cuda/src/CudaContext.cpp:136

Organize Issues into Milestones

I think it would be a great idea to organize the various issues listed here into Milestones for upcoming releases, so that we have a more up-to-date sense of which features and fixes are on track for which releases.

Register OpenMM with the Python package index (PyPI)

It would be great to register OpenMM with PyPI so that it (and its dependencies, if properly reflected in setup.py) can be installed automatically via easy_install or pip install.

Registration is simple, and involves

  1. Create an account with PyPI: https://pypi.python.org/pypi?%3Aaction=register_form
  2. Use python setup.py register to register the package: http://docs.python.org/2/distutils/packageindex.html
  3. Upload the package with python setup.py sdist upload (more complex stuff is needed for Windows)

Has anyone successfully compiled on Titan?

I'm not having much luck. Everything dies with an Illegal Instruction. This is the gdb backtrace for TestReferenceHarmonicBondForce. The line that it's dying at, periodicBoxVectors[0] = Vec3(2, 0, 0);, seems pretty innocuous to me. :)

$ gdb TestReferenceHarmonicBondForce 
GNU gdb (GDB) SUSE (7.0-0.4.16)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /autofs/na3_home1/tjlane/robert/build_openmm/TestReferenceHarmonicBondForce...done.
(gdb) run
Starting program: /autofs/na3_home1/tjlane/robert/build_openmm/TestReferenceHarmonicBondForce 
Missing separate debuginfo for /lib64/ld-linux-x86-64.so.2
Try: zypper install -C "debuginfo(build-id)=c1807b5762068e6c5f4a6a0ed48d9d4469965351"
Missing separate debuginfo for /lib64/libdl.so.2
Try: zypper install -C "debuginfo(build-id)=f607b21f9a513c99bba9539050c01236d19bf22b"
Missing separate debuginfo for /opt/cray/pmi/default/lib64/libpmi.so.0
Try: zypper install -C "debuginfo(build-id)=e324e49cc7681e4e073964dab58547a20152bcc9"
Missing separate debuginfo for /lib64/libpthread.so.0
Try: zypper install -C "debuginfo(build-id)=f69d3b06516c61cfab7d00c9ef86c41936dfc017"
[Thread debugging using libthread_db enabled]
Missing separate debuginfo for /lib64/libm.so.6
Try: zypper install -C "debuginfo(build-id)=4e9fa1a2c1141fc0123a142783efd044c40bdaaf"
Missing separate debuginfo for /lib64/libc.so.6
Try: zypper install -C "debuginfo(build-id)=f7b8fc2bc1d68899a2cb561ac8e16092228223e3"
Missing separate debuginfo for /lib64/libz.so.1
Try: zypper install -C "debuginfo(build-id)=4c05d1eb180f9c02b81a0c559c813dada91e0ca4"
Missing separate debuginfo for /lib64/librt.so.1
Try: zypper install -C "debuginfo(build-id)=d44cbcbbcbdc9ed66abdcd82fa04fb4140bc155e"
 LIBDMAPP WARNING: Unable to open kgni version file /sys/class/gemini/kgni0/version errno 2
Running test...

Program received signal SIGILL, Illegal instruction.
OpenMM::System::System (this=0x7fffffff8630) at /ccs/home/tjlane/robert/source/OpenMM5.1-Source/openmmapi/src/System.cpp:41
41      periodicBoxVectors[0] = Vec3(2, 0, 0);
(gdb) bt
#0  OpenMM::System::System (this=0x7fffffff8630) at /ccs/home/tjlane/robert/source/OpenMM5.1-Source/openmmapi/src/System.cpp:41
#1  0x0000000000407e30 in testBonds () at /ccs/home/tjlane/robert/source/OpenMM5.1-Source/platforms/reference/tests/TestReferenceHarmonicBondForce.cpp:53
#2  0x000000000040a7e2 in main () at /ccs/home/tjlane/robert/source/OpenMM5.1-Source/platforms/reference/tests/TestReferenceHarmonicBondForce.cpp:95

CustomNonbondedForce doesn't have updateParametersInContext() in Python API

Hi,

it seems like updateParametersInContext() is not ported by SWIG to CustomNonbondedForce. Applying dir() onto an instance of a CustomNonbondedForce gives the following list:

['CutoffNonPeriodic', 'CutoffPeriodic', 'NoCutoff', '__class__', '__copy__', '__deepcopy__', '__del__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattr__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__swig_destroy__', '__swig_getmethods__', '__swig_setmethods__', '__weakref__', '_s', 'addExclusion', 'addFunction', 'addGlobalParameter', 'addParticle', 'addPerParticleParameter', 'getCutoffDistance', 'getEnergyFunction', 'getExclusionParticles', 'getForceGroup', 'getFunctionParameters', 'getGlobalParameterDefaultValue', 'getGlobalParameterName', 'getNonbondedMethod', 'getNumExclusions', 'getNumFunctions', 'getNumGlobalParameters', 'getNumParticles', 'getNumPerParticleParameters', 'getParticleParameters', 'getPerParticleParameterName', 'setCutoffDistance', 'setEnergyFunction', 'setExclusionParticles', 'setForceGroup', 'setFunctionParameters', 'setGlobalParameterDefaultValue', 'setGlobalParameterName', 'setNonbondedMethod', 'setParticleParameters', 'setPerParticleParameterName', 'this']

Thanks,
Anton.

segfault

The current GitHub head appears to segfault in certain cases.

A simple example, which appears below, segfaults on both OS X and Linux with CUDA 5.5.

import simtk.openmm as openmm
import simtk.unit as units

import numpy

def WaterBox(constrain=True, mm=None, nonbonded_method=None, filename=None, charges=True, box_edge=None, cutoff=None):
    """
    Create a test system containing a periodic box of TIP3P water.

    Flexible bonds and angles are always added, and constraints are optional (but on by default).
    Addition of flexible bond and angle terms doesn't affect constrained dynamics, but allows for minimization to work properly.

    OPTIONAL ARGUMENTS

    filename (string) - name of file containing water coordinates (default: 'watbox216.pdb')
    mm (OpenMM implementation) - name of simtk.openmm implementation to use (default: simtk.openmm)
    flexible (boolean) - if True, will add harmonic OH bond and HOH angle terms
    constrain (boolean) - if True, will also constrain OH and HH bonds in water (default: True)
    nonbonded_method
    box_edge (simtk.unit.Quantity with units compatible with nanometers) - edge length for cubic box [should be greater than 2*cutoff] (default: 2.3 nm)
    cutoff  (simtk.unit.Quantity with units compatible with nanometers) - nonbonded cutoff (default: 0.9 * units.nanometers)

    RETURNS

    system (System)
    coordinates (numpy array)

    EXAMPLES

    Create a 216-water system.

    >>> [system, coordinates] = WaterBox()

    TODO

    * Allow size of box (either dimensions or number of waters) to be specified, replicating equilibrated waterbox to fill these dimensions.

    """
    import simtk.openmm.app as app

    if not box_edge:
        box_edge = 2.3 * units.nanometers

    if not cutoff:
        cutoff = 0.9*units.nanometers

    # Load forcefield for solvent model.
    ff =  app.ForceField('tip3p.xml')

    # Create empty topology and coordinates.
    top = app.Topology()
    pos = units.Quantity((), units.angstroms)

    # Create new Modeller instance.
    m = app.Modeller(top, pos)

    # Add solvent to specified box dimensions.
    boxSize = units.Quantity(numpy.ones([3]) * box_edge/box_edge.unit, box_edge.unit)
    m.addSolvent(ff, boxSize=boxSize)

    # Get new topology and coordinates.
    newtop = m.getTopology()
    newpos = m.getPositions()

    # Convert positions to numpy.
    positions = units.Quantity(numpy.array(newpos / newpos.unit), newpos.unit)

    # Create OpenMM System.

    if nonbonded_method:
        nonbondedMethod  = nonbonded_method
    else:
        # Use periodic system.
        nonbondedMethod = app.CutoffPeriodic

    if constrain:
        constraints = app.HBonds
    else:
        constraints = None

    system = ff.createSystem(newtop, nonbondedMethod=nonbondedMethod, nonbondedCutoff=cutoff, constraints=constraints, rigidWater=True, removeCMMotion=False)

    return [system, positions]

# MAIN

[reference_system, coordinates] = WaterBox()

timestep = 1.0 * units.femtosecond
reference_integrator = openmm.VerletIntegrator(timestep)
reference_context = openmm.Context(reference_system, reference_integrator)
print "Computing potential energy..."
reference_state = reference_context.getState(getEnergy=True)
reference_potential = reference_state.getPotentialEnergy()
print reference_potential
print "Done."

Output:

[LSKI1497:yank/yank.svn/src] choderaj% python segfault_example.py
Computing potential energy...
Segmentation fault

Type specificity of Context.getIntegrator()

Context.getIntegrator() returns instances of mm.Integrator, not the appropriate integrator subclass. This makes introspection of the system difficult. I'm not actually sure that this can be fixed -- it depends on some tricky swig typemap stuff. I'm looking at it now.

Here's an example.

import simtk.openmm as mm

system = mm.System()
system.addParticle(0.0)
integrator = mm.VerletIntegrator(0.002)

context = mm.Context(system, integrator)

print isinstance(context.getIntegrator(), mm.VerletIntegrator)
print type(context.getIntegrator())

produces

False
<class 'simtk.openmm.openmm.Integrator'>

Nonblocking version of Simulation.step?

I'm thinking about adding a nonblocking version of step() that would use threading under the hood and return a Future, with methods like wait() and isFinished(). The use case is setting up a bunch of short simulations on different GPUs on the same box, all from a single python app.

I think this is pretty reasonable. Does anyone know if there are any particular thread-safety issues that I should watch out for? Assuming that I created a context in thread A, would there be any problems calling integrator.step() from thread B?
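As a quick illustration of the idea, a pure-Python sketch using concurrent.futures (a backport package on Python 2.7); it assumes a list called simulations of already constructed Simulation objects, each bound to its own GPU, and doesn't address any of the thread-safety questions above:

from concurrent.futures import ThreadPoolExecutor

# One worker thread per simulation; each Simulation owns its own Context/GPU.
executor = ThreadPoolExecutor(max_workers=len(simulations))

# Kick off the steps without blocking the main thread.
futures = [executor.submit(sim.step, 10000) for sim in simulations]

# ... do other work in the main thread ...

# Wait for everything to finish (equivalent to the proposed wait()).
for future in futures:
    future.result()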

Add support for constant pH simulation

I am opening a new issue to start a discussion of how we can have OpenMM support constant-pH simulation, which would allow protonation states of molecules in the system (both proteins and small molecules) to change during the simulation.

I propose we adopt a modification of the Monte Carlo scheme of Mongan and Case described in Reference [1]. In this scheme, a topology containing all possible protons is created, and groups of charges on titratable groups are modified during the course of the run to reflect changes in protonation state as protons are added or removed.

For implicit solvent simulations, the scheme goes like this:

  1. Select the number $m$ of titratable groups that will be modified with probability $P(m)$.
  2. Select $m$ titratable groups to be modified with uniform probability.
  3. For each titratable group to be modified, select a new protonation state with uniform probability.
  4. Compute the initial potential energy.
  5. Modify the protonation states to their new states by changing charges.
  6. Compute the final potential energy.
  7. Accept or reject with a modified Metropolis criterion that incorporates the reference pKas for the titratable groups and the simulation pH.

For explicit solvent simulations, this scheme must be modified such that Steps 4-6 are split up over $N$ steps of velocity Verlet dynamics, and the total work $W$ is accumulated instead of the potential energy difference:

  1. Select the number $m$ of titratable groups that will be modified with probability $P(m)$.
  2. Select $m$ titratable groups to be modified with uniform probability.
  3. For each titratable group to be modified, select a new protonation state with uniform probability.
  4. Run $N$ steps of velocity Verlet dynamics where we alternate between:
  • updating the charges by $1/N$ of the total charge change, accumulating the potential energy change as work $W$
  • running one step of velocity Verlet dynamics
  5. Accept or reject with a modified Metropolis criterion that incorporates the reference pKas for the titratable groups and the simulation pH.

This scheme is essentially a modification of the scheme proposed in Reference [2] using NCMC (Reference [3]). Some minor details are skipped in this overview sketch.

The implicit solvent scheme is a special case of the explicit solvent scheme where $N$ is set to $0$ steps of velocity Verlet. Therefore, both schemes can be handled by the same interface.

Some additional tools are required to set up these simulations:

  • A tool to create a Topology object with desired titratable amino acids converted to titratable forms of these residues, along with a list of the titratable groups and associated possible charge states and reference pKas.
  • A tool to calibrate the reference free energy differences for terminally-blocked versions of these titratable amino acids given the specific simulation details (implicit solvent model or PME/RF parameters, cutoff), since reference values are dependent on the precise simulation details.
  • A way to analyze a simulation trajectory to extract information about the protonation state history.

I had started to implement this functionality through a pure Python implementation (using Force::updateParametersInContext()), and have a very basic working version. We essentially have to decide whether this functionality should be at the pure Python level, or might be something we want to implement at one of the C++ API layers, and what degree of functionality we want at the C++ level.

My current Python implementation requires the system be set up for simulation by the AMBER tools, generating corresponding prmtop and cpin files:

# Load and build the AMBER system.
import simtk.openmm as openmm
import simtk.openmm.app as app
import simtk.unit as unit
inpcrd = app.AmberInpcrdFile(inpcrd_filename)
prmtop = app.AmberPrmtopFile(prmtop_filename)
system = prmtop.createSystem(implicitSolvent=app.OBC2, nonbondedMethod=app.NoCutoff, constraints=app.HBonds)

# Create the integrator and context (omitted in the original sketch).
integrator = openmm.LangevinIntegrator(temperature, 1.0 / unit.picosecond, 2.0 * unit.femtosecond)
context = openmm.Context(system, integrator)
context.setPositions(inpcrd.getPositions())

# Initialize Monte Carlo titration.
mc_titration = MonteCarloTitration(system, temperature, pH, prmtop, cpin_filename)

# Run constant pH dynamics.
for iteration in range(niterations):
    # Run some dynamics.
    integrator.step(nsteps)

    # Attempt protonation state changes.
    mc_titration.update(context)

One simple way we could make this efficient and minimally complex would be to create a C++ MonteCarloTitrationForce object that handles the updating of protonation states on the fly, avoiding the need for external calls to mc_titration.update(context). A Python tool could still help configure the MonteCarloTitrationForce object from AMBER prmtop files or OpenMM Topology objects, but the protonation state changes would be handled automatically during integration.

This same functionality could also be extended to handle Monte Carlo switching between tautomeric states of small molecules, though this might require allowing Lennard-Jones parameters to change as well.

References

[1] Mongan J, Case DA, and McCammon JA. Constant pH molecular dynamics in generalized Born implicit solvent. J Comput Chem 25:2038, 2004. DOI

[2] Stern HA. Molecular simulation with variable protonation states at constant pH. JCP 126:164112, 2007. DOI

[3] Nilmeier JP, Crooks GE, Minh DDL, and Chodera JD. Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation. PNAS 108:E1009, 2011. DOI

PDB Reporter that takes care of all state details

Suppose I want to equilibrate my system in the NPT ensemble. Is there currently a good way to create a new PDB that has the correct box? To me, the only way right now is for the user to manually extract the box volume and manually set it in the pdb topology, then write the PDB.

Overall, this is kind of a pain.

One solution to this:

  1. A "last" PDB reporter that automatically creates and updates the XYZ coordinates and box vectors according to the specified frequency. The PDB would be stored in memory and optionally dump to disk--overwriting the previously stored conformation.

To me, this is useful in several situations:

  1. You want to have a handle, in memory or on disk, on the last conformation that was simulated.
  2. You want to extract the equilibrated PDB and box vectors.

Let me know if I'm not seeing some better way of doing this task.
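For comparison, here is the manual workaround spelled out as a sketch. It assumes an existing Simulation and a Topology that exposes a box setter (setPeriodicBoxVectors is assumed here):

import simtk.openmm.app as app

# Pull the final coordinates and box from the Context.
state = simulation.context.getState(getPositions=True, enforcePeriodicBox=True)
positions = state.getPositions()
box = state.getPeriodicBoxVectors()

# Push the current box into the Topology so the PDB CRYST1 record is right,
# then write the structure out.
simulation.topology.setPeriodicBoxVectors(box)
with open('equilibrated.pdb', 'w') as out:
    app.PDBFile.writeFile(simulation.topology, positions, out)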

Slow RESPA Performance

I've got a simple test case (BPTI) where RESPA seems to be 3X slower than Verlet. Is this expected? I was expecting only a modest (~10%) slowdown.

import numpy as np 
import simtk.openmm as mm
import simtk.openmm.app as app
import simtk.unit as u

pdb = app.PDBFile('native.pdb')
forcefield = app.ForceField('amber10.xml',"tip3p.xml")
system = forcefield.createSystem(pdb.topology, nonbondedMethod=app.PME,nonbondedCutoff=0.90 * u.nanometer, constraints=app.HAngles)

grps = ((0,1),)

integrator = mm.VerletIntegrator(0.002 * u.picoseconds)

simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

%time simulation.step(2000)
import numpy as np 
import simtk.openmm as mm
import simtk.openmm.app as app
import simtk.unit as u
import respa

pdb = app.PDBFile('native.pdb')
forcefield = app.ForceField('amber10.xml',"tip3p.xml")
system = forcefield.createSystem(pdb.topology, nonbondedMethod=app.PME,nonbondedCutoff=0.90 * u.nanometer, constraints=app.HAngles)

grps = ((0,1),)

integrator = respa.MTSIntegrator(0.002 * u.picoseconds, grps)

simulation = app.Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

%time simulation.step(2000)

In [12]: %time simulation.step(2000)
CPU times: user 3.02 s, sys: 1.17 s, total: 4.19 s
Wall time: 4.21 s

In [13]: %time simulation.step(2000)
CPU times: user 3.92 s, sys: 1.28 s, total: 5.20 s
Wall time: 11.93 s

More Sophisticated PME FFT Performance Tuning

So I was inspired by the large variation in FFT performance as a function of grid size:

[figure: cuFFT performance as a function of FFT grid size]

I wanted to know whether we could significantly improve performance by additional tuning of our FFT parameters. The answer is yes. Here are the results:

[figure: BPTI wall-clock time for 20,000 steps as a function of PME grid size]

I'm plotting the time required to simulate BPTI for 20,000 steps. The nonbondedCutoff is 0.9nm for all cases. The current grid size selected by OpenMM is 45, so I normalized all values relative to that one.

What you should notice is that grid sizes of 46 and 48 achieve significant (8% and 10%) performance improvements over the current value of 45. Because these are finer grids than 45, the speedup also comes with a gain in accuracy.

Here are two additional points: using a grid size of 32 leads to a 20% speedup, although this is at the cost of reduced accuracy. On the other hand, a grid size of 64 leads to only a 6% slowdown--but with correspondingly higher accuracy. If we were to optimize the performance as a function of both grid size and nonbondedCutoff, we should be able to achieve a performance improvement somewhere between 10% and 20%.

I propose that we add a PME tuning function to the python API. This would be similar to the gromacs tool g_tune_pme, but would operate in the python app.

I also think this type of empirical tuning is crucial given the huge heterogeneity of hardware that people run on--it's nice to be able to automatically optimize the code for your particular hardware setup.
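A sketch of the kind of empirical tuning loop I'm imagining. It assumes a NonbondedForce.setPMEParameters(alpha, nx, ny, nz) setter is available (passing alpha=0 lets the engine pick the Ewald parameter itself); if the installed version lacks it, the grid can only be influenced indirectly through ewaldErrorTolerance:

import time
import simtk.openmm as mm

def time_pme_grid(system, positions, grid, steps=2000):
    """Time a short run with an explicitly requested cubic PME grid."""
    for force in system.getForces():
        if isinstance(force, mm.NonbondedForce):
            force.setPMEParameters(0.0, grid, grid, grid)  # assumed setter
    integrator = mm.VerletIntegrator(0.002)
    context = mm.Context(system, integrator)
    context.setPositions(positions)
    context.getState(getEnergy=True)        # trigger kernel compilation first
    start = time.time()
    integrator.step(steps)
    context.getState(getEnergy=True)        # synchronize before stopping the clock
    elapsed = time.time() - start
    del context, integrator
    return elapsed

# timings = dict((g, time_pme_grid(system, pdb.positions, g)) for g in (32, 45, 46, 48, 64))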

OpenMM System lacks remove methods

Something up for discussion: currently the System class has methods addForce, addParticle, and addConstraint, but there are no corresponding methods to remove them. The only way to do it sensibly is to remove them from serialized system.xml files manually (which may not be practical for various reasons). Should we add support for removal as well?

OpenCL: Automatically detecting the fastest OpenCLPlatform

I just installed the Intel CPU OpenCL SDK (it's actually pretty easy, but there are some annoying things for Ubuntu since Intel only distributes .rpm files). I put up instructions in a gist on how to do it though.

Anyways, OpenMM doesn't appear to select the fastest device across OpenCLPlatforms. My "Intel CPU" is now the zero-th OpenCL platform, and it gets selected by default unless I supply properties={'OpenCLPlatformIndex': '1'} to the Context constructor.

It would be sweet if the logic that already picks the fastest OpenCL device also iterated across the available OpenCL platforms, if OpenCLPlatformIndex is not specified.

Access to OpenMM version number via Python API?

Is there a way to access the OpenMM version number via the Python API?

Setting, for example, simtk.openmm.version to something appropriate, or adding a simtk.openmm.getVersion() method would be helpful.
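For reference, two possible shapes for this, both assumed here rather than confirmed: a static accessor on Platform, or a module-level attribute like the one suggested above.

import simtk.openmm as mm

# Assumed static accessor on Platform.
print mm.Platform.getOpenMMVersion()

# Assumed module-level attribute.
# print mm.version.full_version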

Discussing the CUDA version requirement

Hi everyone,

In my latest issue with the forces being off, I looked at Justin's issue where one result of the discussion was that the next release should require CUDA 5.5 (i.e. no longer be backward compatible with old versions of CUDA). I'd like to bring this up for discussion again for one main reason: It can be difficult to get the NVidia drivers updated on clusters and supercomputers where OpenMM is being used.

In most environments where I am using OpenMM on a moderate to large scale, it is on a cluster or supercomputer where I don't have root permission. In those cases, I have to appeal to the sysadmin to reinstall the NVidia drivers, and this has been known to take months. Administrators can be reluctant to reboot compute nodes because it kills running jobs, and from my own experience reboots often raise unexpected issues for nodes that have been up for a long time. Thus, planned compute node shutdowns are usually announced in advance and do not happen often.

On the Certainty cluster, one year ago I requested that CUDA be updated to 4.2, and the corresponding driver update (which requires rebooting the nodes) took three months. The ICME cluster took six months to be updated and perhaps a dozen emails to the admin from multiple people, and it now supports CUDA 5.0.

If we require CUDA 5.5 for the next OpenMM version, the situation I am worried about is that none of the clusters will have the required driver version (319), and the GPU nodes on the clusters will need to be updated and rebooted before we can run the jobs again. As I mentioned above, on some clusters this could take months, and if OpenMM is released on an annual schedule and we always require the newest CUDA version, it could lead to a situation where there are 2 months out of the year where OpenMM is not usable on a significant fraction of cluster GPU resources.

Because of this, I'd like to bring up the possibility of preserving backward compatibility for CUDA versions, for perhaps one year's worth of releases, or just continuing to support version 5.0. This would give us a few months to persuade the sysadmins to update the NVidia drivers and install the newest CUDA, and we would still be able to run our jobs while waiting for them. Hopefully we could do this in a way that does not add undue stress to the development process.

Let me know what you think.

RNG Error when combining RESPA with thermostat

I think we've observed this problem before, but here it is. When we use RESPA with a thermostat, the RNG seeds get out of sync and the simulation fails to build:

system = prm.createSystem(constraints=app.HAngles, nonbondedCutoff=0.85 * u.nanometer, nonbondedMethod=app.PME)
In [24]: simulation = app.Simulation(prm.topology, system, integrator)
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-24-19e339e9d801> in <module>()
----> 1 simulation = app.Simulation(prm.topology, system, integrator)

~/opt/lib/python2.7/site-packages/simtk/openmm/app/simulation.pyc in __init__(self, topology, system, integrator, platform, platformProperties)
     73         if platform is None:
     74             ## The Context containing the current state of the simulation
---> 75             self.context = mm.Context(system, integrator)
     76         elif platformProperties is None:
     77             self.context = mm.Context(system, integrator, platform)

~/opt/lib/python2.7/site-packages/simtk/openmm/openmm.pyc in __init__(self, *args)
   9047 
   9048 
-> 9049         this = _openmm.new_Context(*args)
   9050         try: self.this.append(this)
   9051         except: self.this = this

Exception: CudaIntegrationUtilities::initRandomNumberGenerator(): Requested two different values for the random number seed

Provide functionality to query available devices

We launch multiple OpenMM processes using mpi4py and need to ensure that when multiple processes start on the same node, they each create contexts on a different GPU.

As far as I can tell, there currently isn't an API in OpenMM that can return a list of valid devices for the current platform. We currently examine CUDA_VISIBLE_DEVICES directly, but it would be nice if there was an OpenMM API that wrapped around the corresponding cuda or opencl APIs to query devices.
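For context, the kind of workaround we use today looks roughly like this (a sketch; the device count comes from CUDA_VISIBLE_DEVICES rather than an OpenMM call, which is exactly the gap being described):

import os
import simtk.openmm as mm
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

# Count the devices this process is allowed to see (set by the scheduler).
visible = os.environ.get('CUDA_VISIBLE_DEVICES', '0').split(',')
gpus_per_node = len(visible)

platform = mm.Platform.getPlatformByName('CUDA')
properties = {'CudaDeviceIndex': str(rank % gpus_per_node)}

# context = mm.Context(system, integrator, platform, properties)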

Reporting the pressure

Hi,

I am running simulations with the MC barostat. How can I log the pressure?

Best,

Benjamin

OS X compilation issues

Has anyone had trouble compiling the git head on OS X 10.8.4 using the system compiler (clang)?

Linking CXX shared library ../../../libOpenMMCUDA.dylib
ld: file not found: @rpath/CUDA.framework/Versions/A/CUDA for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libOpenMMCUDA.dylib] Error 1
make[1]: *** [platforms/cuda/sharedTarget/CMakeFiles/OpenMMCUDA.dir/all] Error 2
make: *** [all] Error 2
[LSKI1497:~/code/openmm-git/openmm] choderaj% clang --version
Apple LLVM version 4.2 (clang-425.0.28) (based on LLVM 3.2svn)
Target: x86_64-apple-darwin12.4.0
Thread model: posix

Example systems crash

I am unable to run any of the protein in water examples from the examples directory. Specifically:

simulateAmber.py
simulateGromacs.py
simulatePdb.py

They all fail with NaN after about 20 steps. There is nothing unusual about the trajectories; they just suddenly fail. These same systems run fine with 5.1 compiled from source. If I have time, I will try to git bisect to see when it starts to break.

From HEAD, I am able to run all of the "hello" examples and argon-chemical-potential.py. My own tests using implicit solvent are fine.

Platform: CUDA
Hardware: Retina MacBook Pro (GT 650M)

OpenMM cannot load a larger inpcrd/prmtop

Hi there,

When building examples for the OpenMM command line program, I found that it can't load one of the larger AMBER example files. First, it was unable to load the coordinates because the .inpcrd file had floats that were running together. I was able to work around this by replacing the last decimal of precision with a space.

Next, it wasn't able to load the .prmtop file with a more cryptic error message, which I wasn't able to solve.

The files are here:

https://dl.dropboxusercontent.com/u/5381783/dppc_128.inpcrd
https://dl.dropboxusercontent.com/u/5381783/dppc_128.prmtop

First error:

>>> inpcrd = app.AmberInpcrdFile('dppc_128.inpcrd')
Traceback (most recent call last):
  File "/home/leeping/local/bin/openmm", line 5, in <module>
    pkg_resources.run_script('openmm-cmd==0.1', 'openmm')
  File "/home/leeping/.local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 489, in run_script
  File "/home/leeping/.local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 1207, in run_script
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 1018, in <module>
    openmm.start()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 766, in start
    topology = self.general.get_topology()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 253, in get_topology
    self.load_coords()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 134, in load_coords
    self.inpcrd_file = app.AmberInpcrdFile(self.coords)
  File "/home/leeping/.local/lib/python2.7/site-packages/simtk/openmm/app/amberinpcrdfile.py", line 57, in __init__
    results = amber_file_parser.readAmberCoordinates(file, read_velocities=loadVelocities, read_box=loadBoxVectors)
  File "/home/leeping/.local/lib/python2.7/site-packages/simtk/openmm/app/internal/amber_file_parser.py", line 891, in readAmberCoordinates
    coordinates.append(mm.Vec3(float(elements.pop(0)), float(elements.pop(0)), float(elements.pop(0))))
ValueError: invalid literal for float(): -20.5102618-135.5159901

Second error:

>>> prmtop = app.AmberPrmtopFile('dppc_128.prmtop')
Traceback (most recent call last):
  File "/home/leeping/local/bin/openmm", line 5, in <module>
    pkg_resources.run_script('openmm-cmd==0.1', 'openmm')
  File "/home/leeping/.local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 489, in run_script
  File "/home/leeping/.local/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 1207, in run_script
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 1018, in <module>
    openmm.start()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 766, in start
    topology = self.general.get_topology()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 253, in get_topology
    self.load_coords()
  File "/home/leeping/local/lib/python2.7/site-packages/openmm_cmd-0.1-py2.7.egg/EGG-INFO/scripts/openmm", line 145, in load_coords
    self.prmtop_file = app.AmberPrmtopFile(self.prmtop)
  File "/home/leeping/.local/lib/python2.7/site-packages/simtk/openmm/app/amberprmtopfile.py", line 75, in __init__
    prmtop = amber_file_parser.PrmtopLoader(file)
  File "/home/leeping/.local/lib/python2.7/site-packages/simtk/openmm/app/internal/amber_file_parser.py", line 127, in __init__
    self._raw_format[self._flags[-1]] = (format, m.group(1), m.group(2), m.group(3), m.group(4))
AttributeError: 'NoneType' object has no attribute 'group'

Cutoffs with GB produces severe artifacts

The current implementation of cutoffs in the GB code produces severe artifacts that are not present in the equivalent Amber simulation.

  • Even with reasonable cutoffs (25 Angstrom), the protein tears itself apart (RMSD > 8 Angstrom) on the 10s of ps timescale. When the cutoff is very large and all atom pairs are within the cutoff distance, then there is no problem.
  • This is a problem with both the gbsObc and customGB-based forces.
  • This doesn't appear to be a problem with cutoffs per se, as vacuum simulations with cutoffs are actually far more stable than GB simulations with cutoffs.
  • This problem isn't specific to the CUDA platform. The reference platform produces the same results. I have not been able to try Open CL, but would be surprised if it doesn't have the same issue.
  • This problem is present in 5.0, 5.1, and the current HEAD.
  • If I run the equivalent calculation in pmemd---with an identical topology and mdin parameters to mimic openmm---the calculation is stable.

I will look into how Amber handles cutoffs in GB and see if I can track down the difference. (I can't assign myself to the issue.)

testInstallation.py fails when OpenCL and CUDA compete for same device in exclusive mode

On a box with 6 Tesla C2075s, I observe a failure for testInstallation.py.

rmcgibbo@node020 ~/local/openmm/examples
$ python baseTestInstallation.py 
There are 3 Platforms available:

1 Reference - Successfully computed forces
2 CUDA - Successfully computed forces
3 OpenCL - Error computing forces with OpenCL platform

Median difference in forces between platforms:

Reference vs. CUDA: 2.18562e-05

Printing the exception on line 34, I see Error initializing context: clCreateContext (-33), which is CL_INVALID_DEVICE.

If I edit line 25 to iterate over the platforms in reverse order, for i in range(numPlatforms)[::-1], I instead see an error on the CUDA platform, Error initializing Context: CUDA_ERROR_INVALID_DEVICE (101) at /home/rmcgibbo/local/openmm/platforms/cuda/src/CudaContext.cpp:136, and OpenCL works fine.

If I add platform properties asking OpenCL and CUDA to use different devices, properties = {'OpenCLDeviceIndex': '0', 'CudaDeviceIndex': '1'}, then it runs just fine.

The output from NVIDIA_CUDA-5.0_Samples/1_Utilities/deviceQuery/deviceQuery is:

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 6 CUDA Capable device(s)

Device 0: "Tesla C2075"
  CUDA Driver Version / Runtime Version          5.5 / 5.0
  CUDA Capability Major/Minor version number:    2.0
  Total amount of global memory:                 5375 MBytes (5636554752 bytes)
  (14) Multiprocessors x ( 32) CUDA Cores/MP:    448 CUDA Cores
  GPU Clock rate:                                1147 MHz (1.15 GHz)
  Memory Clock rate:                             1566 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 786432 bytes
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           18 / 0
  Compute Mode:
     < Exclusive (only one host thread in one process is able to use ::cudaSetDevice() with this device) >

[...five more devices that look just like this...]

The problem is obviously the compute mode being exclusive -- the first context "owns" the GPU and is not necessarily deleted / garbage collected before the second one is made.

Is it possible to query whether the device is in an available state? Or instead, how about if (a) clGetDeviceIDs / cudaGetDeviceCount return more than one device and (b) the user does not supply a CudaDeviceIndex / OpenCLDeviceIndex and (c) clCreateContext / cuCtxCreate fails on the first attempt, then we should iterate through and attempt to create the context on one of the remaining devices before erroring to the user?

issue with addSolvent()

Hi,

In trying to add solvent to my PDB file, I keep getting the following error:

     File "/home/jnapoli/INSTALL/epd-7.3-2-rh5-x86_64/lib/python2.7/site-packages/simtk/openmm/app/modeller.py",         line 400, in addSolvent
        atomPos += center-box/2
     TypeError: unsupported operand type(s) for /: 'tuple' and 'int'

I'm using version 5.1.

Thanks!
Joe

CPU-PME breaks build with older GCC versions

Hi there,

I got the following error when building the latest version of OpenMM on the "Certainty" cluster, in particular when compiling the CPU PME code:

unrecognized command line option "-msse4.1"

Certainty uses a pretty old version of gcc (4.1.2). Fortunately the compute nodes have version 4.4.6, though I was surprised that wasn't the default. Once I changed the version of gcc to 4.4.6, the build worked.

Since the user's manual states that gcc versions 4.0 through 4.5 have been tested, it sounds like our options are: (1) update the supported gcc versions, or (2) catch this specific case that is causing this issue.

Nosetests for python App

I wonder if people would find it useful to have integration tests comparing various calculations performed in the python app to the "gold standard" implementations.

Thus, we could compare energies and forces against AMBER. We might compare AMOEBA calculations against Tinker. I think this would be very powerful to convince people of the correctness of our results.

If possible, we would have this automated on Travis or something similar, at least for the OpenMM platforms that the CI hardware supports. The platform shouldn't really matter here, assuming that the OpenMM platform tests are already thorough.

Another advantage of this is that making these sorts of tests should be easy enough that the community could contribute them back via PR. Thus, if someone cares about the correctness of a DNA calculation, for example, they could create their system using both AMBER and OpenMM, compare the results, and contribute the comparison code back as a test.

This is pretty much required in some publications, so we might want to have a platform for facilitating this.
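To make the shape of such a test concrete, a sketch of an AMBER energy comparison; the file names, reference value, and tolerance are placeholders that would come from the corresponding sander/pmemd output:

import simtk.openmm as mm
import simtk.openmm.app as app
import simtk.unit as unit

def test_amber_energy():
    prmtop = app.AmberPrmtopFile('system.prmtop')
    inpcrd = app.AmberInpcrdFile('system.inpcrd')
    system = prmtop.createSystem(nonbondedMethod=app.NoCutoff, constraints=None)

    context = mm.Context(system, mm.VerletIntegrator(0.001))
    context.setPositions(inpcrd.getPositions())
    energy = context.getState(getEnergy=True).getPotentialEnergy()
    energy = energy.value_in_unit(unit.kilocalorie_per_mole)

    reference = -1234.5   # placeholder: total energy reported by sander for the same input
    assert abs(energy - reference) < 0.1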

GromacsTopFile.createSystem() fails for a rather large example

Hi there,

The GromacsTopFile.createSystem() method fails for a rather large topology that I was using for an OpenMM example. I verified that grompp does successfully build the .tpr file here.

https://dl.dropboxusercontent.com/u/5381783/gmxfail.tar.bz2

In [2]: top = GromacsTopFile('topol.top')

In [3]: top.createSystem()
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-d13415455b1a> in <module>()
----> 1 top.createSystem()

/home/leeping/local/lib/python2.7/site-packages/simtk/openmm/app/gromacstopfile.pyc in createSystem(self, nonbondedMethod, nonbondedCutoff, constraints, rigidWater, implicitSolvent, soluteDielectric, solventDielectric, ewaldErrorTolerance, removeCMMotion)
    501                         params = self._bondTypes[types[::-1]][3:5]
    502                     else:
--> 503                         raise ValueError('No parameters specified for bond: '+fields[0]+', '+fields[1])
    504                     # Decide whether to use a constraint or a bond.
    505                     useConstraint = False

ValueError: No parameters specified for bond: 1, 2

OpenCL issues

I'm trying to run on AMD CPUs with OpenCL and am getting the following for the DHFR test system

$ python test.py implicit true OpenCL
['test.py', 'implicit', 'true', 'OpenCL']
Setting of real/effective user Id to 0/0 failed
FATAL: Error inserting fglrx (/lib/modules/2.6.32-34-generic/updates/dkms/fglrx.ko): Operation not permitted
Error! Fail to load fglrx kernel module! Maybe you can switch to root user to load kernel module directly
No protocol specified
UNREACHABLE executed!
Aborted

I found the following in the AMD documentation that suggests the software is running on the cpu:

Executing samples on Linux using the CPU runtime reports the following 
message, but continues to execute as expected :
FATAL: Module fglrx not found.
Error! Fail to load fglrx kernel module! Maybe you can switch to root user to load 
kernel module directly

Any idea what's going wrong?

Some other details:

  • I installed the openMM binaries for linux on an ubuntu desktop
  • The Reference Platform works fine.
