patflick / mxx

C++11 Message Passing

Home Page: http://patflick.github.io/mxx

License: Apache License 2.0

CMake 0.40% C++ 99.00% Python 0.39% Shell 0.21%
mpi parallel reduction c-plus-plus sorting sort distributed cpp cpp11

mxx's Introduction

mxx


mxx is a C++/C++11 template library for MPI. The main goal of this library is to provide two things:

  1. Simplified, efficient, and type-safe C++11 bindings to common MPI operations.
  2. A collection of scalable, high-performance standard algorithms for parallel distributed memory architectures, such as sorting.

As such, mxx is targeting use in rapid C++ and MPI algorithm development, prototyping, and deployment.

Features

  • All functions are templated by type. The MPI_Datatype is deduced automatically from the C++ type passed to the function.
  • Custom reduction operations as lambdas, std::function objects, functors, or function pointers.
  • Send/receive and collective operations take size_t-sized inputs and automatically handle sizes larger than INT_MAX.
  • Plenty of convenience functions and overloads for common MPI operations with sane defaults (e.g., super easy collectives: std::vector<size_t> allsizes = mxx::allgather(local_size)).
  • Automatic type mapping of all built-in types (int, double, etc.) and other C++ types such as std::tuple, std::pair, and std::array.
  • Non-blocking operations return a mxx::future<T> object, similar to std::future.
  • Google Test-based MPI unit testing framework.
  • Parallel sorting with an API similar to std::sort (mxx::sort).

Planned / TODO

  • Parallel random number engines (for use with C++11 standard library distributions)
  • More parallel (standard) algorithms
  • Wrappers for non-blocking collectives
  • serialization/de-serialization of non-contiguous data types (maybe)
  • macros for easy datatype creation and handling for custom structs and classes
  • Implementing and tuning different sorting algorithms
  • Communicator classes for different topologies
  • mxx::env similar to boost::mpi::env for wrapping MPI_Init and MPI_Finalize
  • full API and introductory documentation
  • Increase test coverage: codecov.io

Status

mxx is currently a small personal project in its early stages, with many changes still in progress. However, feel free to contribute.

Examples

Collective Operations

This example shows the main features of mxx's wrappers for MPI collective operations:

  • MPI_Datatype deduction according to the template type
  • Handling of message sizes larger than INT_MAX (everything is size_t enabled)
  • Receive sizes do not have to be specified
  • Convenience functions for std::vector, both for sending and receiving

    // local numbers, can be different size on each process
    std::vector<size_t> local_numbers = ...;
    // allgather the local numbers, easy as pie:
    std::vector<size_t> all_numbers = mxx::allgatherv(local_numbers, MPI_COMM_WORLD);

Reductions

The following example showcases the C++11 interface to reductions:

    #include <mxx/reduction.hpp>

    // ...
    // let's take some pairs and find the one with the max second element
    std::pair<int, double> v = ...;
    std::pair<int, double> max_pair = mxx::allreduce(v,
                           [](const std::pair<int, double>& x,
                              const std::pair<int, double>& y){
                               return x.second > y.second ? x : y;
                           });

What happens here is that the C++ types are automatically mapped to the appropriate MPI_Datatype (a struct of MPI_INT and MPI_DOUBLE), a custom reduction operator (MPI_Op) is created from the given lambda, and finally MPI_Allreduce is called with the given parameters.
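
For comparison, the following is a rough sketch of the plain-MPI boilerplate that mxx generates behind the scenes for this call. This is an illustration only: the names pair_id and max_second are made up for this sketch, <mpi.h> and <cstddef> (for offsetof) are assumed to be included, and error handling is omitted.

    // plain-MPI equivalent of the mxx::allreduce call above (sketch)
    struct pair_id { int first; double second; };

    // user function for MPI_Op_create: keep the pair with the larger .second
    void max_second(void* in, void* inout, int* len, MPI_Datatype*) {
        pair_id* x = static_cast<pair_id*>(in);
        pair_id* y = static_cast<pair_id*>(inout);
        for (int i = 0; i < *len; ++i)
            if (x[i].second > y[i].second) y[i] = x[i];
    }

    // build and commit the struct datatype (MPI_INT, MPI_DOUBLE) by hand
    MPI_Datatype pair_type;
    int blocklens[2] = {1, 1};
    MPI_Aint displs[2] = {offsetof(pair_id, first), offsetof(pair_id, second)};
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Type_create_struct(2, blocklens, displs, types, &pair_type);
    MPI_Type_commit(&pair_type);

    // create the custom reduction operator and perform the allreduce
    MPI_Op op;
    MPI_Op_create(&max_second, 1 /* commutative */, &op);
    pair_id v = ...;   // the local value
    pair_id max_pair;
    MPI_Allreduce(&v, &max_pair, 1, pair_type, op, MPI_COMM_WORLD);
    MPI_Op_free(&op);
    MPI_Type_free(&pair_type);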

Sorting

Consider a simple example, where you might want to sort tuples (int key, double x, double y) by the key in parallel using MPI. Doing so in pure C/MPI requires quite a lot of coding (~100 lines), debugging, and frustration. Thanks to mxx and C++11, this becomes as easy as:

    typedef std::tuple<int, double, double> tuple_type;
    std::vector<tuple_type> data(local_size);
    // define a comparator for the tuple
    auto cmp = [](const tuple_type& x, const tuple_type& y) {
                   return std::get<0>(x) < std::get<0>(y); };

    // fill the vector ...

    // call mxx::sort to do all the heavy lifting:
    mxx::sort(data.begin(), data.end(), cmp, MPI_COMM_WORLD);

In the background, mxx performs many things, including (but not limited to):

  • mapping the std::tuple to an MPI type by creating the appropriate MPI datatype (via MPI_Type_create_struct)
  • distributing the data, if this has not been done already
  • calling std::sort as the local base case; if the communicator consists of a single process, mxx::sort falls back to std::sort
  • if the data size exceeds the infamous MPI size limit of INT_MAX, mxx will not fail but continues to work as expected
  • redistributing the data so that it has the same distribution as was given in the input to mxx::sort

Alternatives?

To our knowledge, there are two noteworthy, similar open-source libraries available.

  1. boost::mpi offers C++ bindings for a large number of MPI functions. As such, it corresponds to our main goal 1. A major drawback of using boost::mpi is the unnecessary overhead of boost::serialization (especially in terms of memory). boost::mpi also doesn't support large message sizes (> INT_MAX), and its custom reduction operator implementation is rather limited.
  2. mpp offers low-overhead C++ bindings for MPI point-to-point communication primitives. As such, this solution shows better performance than boost::mpi, but was never continued beyond point-to-point communication.

Authors

  • Patrick Flick

Installation

Since this is a header-only library, simply copy the mxx folder into your project and you are all set.
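
As a minimal sketch of getting started (the file name, the include layout, and the compile command below are assumptions): copy the headers so they are visible as <mxx/...> and compile with something like mpic++ -std=c++11 -I<path-to-mxx-include> example.cpp.

    // example.cpp -- minimal usage sketch; MPI_Init/MPI_Finalize are called
    // directly, since mxx::env is still on the TODO list above
    #include <mpi.h>
    #include <iostream>
    #include <mxx/reduction.hpp>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        // reduce each rank's number with a custom lambda (max), as in the reduction example
        int max_rank = mxx::allreduce(rank, [](int x, int y) { return x > y ? x : y; });
        if (rank == 0)
            std::cout << "largest rank: " << max_rank << std::endl;
        MPI_Finalize();
        return 0;
    }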

Dependencies

mxx requires a C++11 compatible compiler. mxx currently works with MPI-2 and MPI-3. However, some collective operations and sorting will work on data sizes >= 2 GB only with MPI-3.

Compiling

Not necessary. This is a header only library. There is nothing to compile.

Building tests

The tests can be compiled using cmake:

mkdir build && cd build
cmake ../ && make

Run the tests (with however many processes you want):

mpirun -np 13 ./bin/test-all

Licensing

Our code is licensed under the Apache License 2.0 (see LICENSE). The license does not apply to the ext folder, which contains external dependencies that are under their own licensing terms.

mxx's People

Contributors

asrivast28, cjain7, davidsblom, eipi10ydz, mquevill, pabloferz, patflick, tcpan, tkonolige

mxx's Issues

mxx::min_element and mxx::max_element

The vector version, even with small vectors of doubles, produces an invalid read error under valgrind.

Tested on compbio with nprocs = 4 and 128; the error can be reproduced. With nprocs = 2, the error was not observed.

[1,4]<stderr>:==54670== Invalid read of size 8
[1,4]<stderr>:==54670==    at 0x4BE9C7: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<doubl
e, int> const&, std::pair<double, int> const&)#1}::operator()(std::pair<double, int> const&, std::pair<double, int> const&) const (in /home/tpan7/build/kmerind-gcc-rel/bin/testFASTQ_load)
[1,4]<stderr>:==54670==    by 0x4E1393: void mxx::custom_op<std::pair<double, int>, true>::custom_function<std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::all
ocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<doubl
e, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, void*, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>
(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, int*) (reduction.hpp:269)
[1,4]<stderr>:==54670==    by 0x4E465F: _ZNSt5_BindIFPFvZN3mxx11max_elementIdEESt6vectorISt4pairIT_iESaIS5_EERKS2_IS4_SaIS4_EERKNS0_4commEEUlRKS3_IdiESH_E_PvSJ_PiESI_St12_PlaceholderILi1EESN_ILi2EESN_ILi3EEEE6__callIvJOSJ_SU_OSK_EJLm0ELm1
ELm2ELm3EEEES4_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE (functional:1074)
[1,4]<stderr>:==54670==    by 0x4E4090: _ZNSt5_BindIFPFvZN3mxx11max_elementIdEESt6vectorISt4pairIT_iESaIS5_EERKS2_IS4_SaIS4_EERKNS0_4commEEUlRKS3_IdiESH_E_PvSJ_PiESI_St12_PlaceholderILi1EESN_ILi2EESN_ILi3EEEEclIJSJ_SJ_SK_EvEET0_DpOT_ (fun
ctional:1133)
[1,4]<stderr>:==54670==    by 0x4E39EB: std::_Function_handler<void (void*, void*, int*), std::_Bind<void (*({lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, std::_Placeholder<1>, void (*)(std::vector<std::pair<do
uble, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, void*, void*,
 int*)<2>, void (*)(std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std:
:pair<double, int> const&)#1}, void*, void*, int*)<3>))(std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda
(std::pair<double, int> const&, std::pair<double, int> const&)#1}, void*, void*, int*)> >::_M_invoke(std::_Any_data const&, void*&&, std::_Any_data const&, int*&&) (functional:1871)
[1,4]<stderr>:==54670==    by 0x4C3DD5: std::function<void (void*, void*, int*)>::operator()(void*, void*, int*) const (functional:2271)
[1,4]<stderr>:==54670==    by 0x4E12FC: mxx::custom_op<std::pair<double, int>, true>::mpi_user_function(void*, void*, int*, ompi_datatype_t**) (reduction.hpp:278)
[1,4]<stderr>:==54670==    by 0xB15B1B5: ompi_coll_tuned_allreduce_intra_recursivedoubling (in /usr/local/modules/openmpi/1.10.2/lib/openmpi/mca_coll_tuned.so)
[1,4]<stderr>:==54670==    by 0x4E8840A: PMPI_Allreduce (in /usr/local/modules/openmpi/1.10.2/lib/libmpi.so.12.0.2)
[1,4]<stderr>:==54670==    by 0x4DB81D: void mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&
, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::pair<double, int> const*, unsigned long, std::pair<double, int>*, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int
> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, mxx::comm const&) (reduction.hpp:362)
[1,4]<stderr>:==54670==    by 0x4D440B: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx
::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::pair<double, int> const*, unsigned long, std::vector<std::pair<do
uble, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, mxx::comm con
st&) (reduction.hpp:373)
[1,4]<stderr>:==54670==    by 0x4C9A3D: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx
::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::vector<std::pair<double, int>, std::allocator<std::pair<double, i
nt> > > const&, std::vector<std::pair<double, int>[1,4]<stderr>:, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> co
nst&, std::pair<double, int> const&)#1}, mxx::comm const&) (reduction.hpp:384)
[1,4]<stderr>:==54670==  Address 0x935eea8 is 40 bytes inside a block of size 44 alloc'd
[1,4]<stderr>:==54670==    at 0x4A06A2E: malloc (vg_replace_malloc.c:270)
[1,4]<stderr>:==54670==    by 0xB15AD36: ompi_coll_tuned_allreduce_intra_recursivedoubling (in /usr/local/modules/openmpi/1.10.2/lib/openmpi/mca_coll_tuned.so)
[1,4]<stderr>:==54670==    by 0x4E8840A: PMPI_Allreduce (in /usr/local/modules/openmpi/1.10.2/lib/libmpi.so.12.0.2)
[1,4]<stderr>:==54670==    by 0x4DB81D: void mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&
, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::pair<double, int> const*, unsigned long, std::pair<double, int>*, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int
> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, mxx::comm const&) (reduction.hpp:362)
[1,4]<stderr>:==54670==    by 0x4D440B: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx
::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::pair<double, int> const*, unsigned long, std::vector<std::pair<do
uble, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}, mxx::comm con
st&) (reduction.hpp:373)
[1,4]<stderr>:==54670==    by 0x4C9A3D: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::allreduce<std::pair<double, int>, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx
::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pair<double, int> const&)#1}>(std::vector<std::pair<double, int>, std::allocator<std::pair<double, i
nt> > > const&, std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&)::{lambda(std::pair<double, int> const&, std::pai
r<double, int> const&)#1}, mxx::comm const&) (reduction.hpp:384)
[1,4]<stderr>:==54670==    by 0x4BEB75: std::vector<std::pair<double, int>, std::allocator<std::pair<double, int> > > mxx::max_element<double>(std::vector<double, std::allocator<double> > const&, mxx::comm const&) (reduction.hpp:561)
[1,4]<stderr>:==54670==    by 0x4AD145: unsigned long read_file<bliss::io::FASTQParser, false>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, mxx::comm const&) (BenchmarkFileLoader.cpp:146)
[1,4]<stderr>:==54670==    by 0x4A48C1: void testIndex<bliss::io::FASTQParser, false>(mxx::comm const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_tra
its<char>, std::allocator<char> >) (BenchmarkFileLoader.cpp:367)
[1,4]<stderr>:==54670==    by 0x49E08F: main (BenchmarkFileLoader.cpp:483)
[1,4]<stderr>:==54670== 

Creating a struct datatype for use in a collective?

Hello,

I'm wondering what's the correct way to define a custom struct datatype to be later used in a collective.

In my case I have structs defined as:

struct MyStruct {
    float a;
    float b;
    int c;
};

that I would like to collect to the root using gatherv.

I know how to do this using plain MPI, but I'm wondering what the process is for mxx, in order to avoid the error

static assertion failed: Type `T` is not a `trivial` type and is thus not supported for mxx send/recv operations. This type needs one of the following to be supported as trivial datatype: specialized build_datatype<T>, a member function `datatype`, or global function `make_datatype(Layout& l, T&)`

The error describes my options but I haven't been able to find examples of how to make mxx aware of custom datatypes.
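
For reference, the plain-MPI construction alluded to above looks roughly like this. This is a sketch of the raw MPI approach only, not of the mxx-specific mechanism the question asks about; <mpi.h> and <cstddef> (for offsetof) are assumed.

    // plain-MPI struct datatype for MyStruct (sketch)
    MPI_Datatype mystruct_type;
    int          blocklens[3] = {1, 1, 1};
    MPI_Aint     displs[3]    = {offsetof(MyStruct, a),
                                 offsetof(MyStruct, b),
                                 offsetof(MyStruct, c)};
    MPI_Datatype types[3]     = {MPI_FLOAT, MPI_FLOAT, MPI_INT};
    MPI_Type_create_struct(3, blocklens, displs, types, &mystruct_type);
    MPI_Type_commit(&mystruct_type);
    // ... use mystruct_type in MPI_Gatherv, then MPI_Type_free(&mystruct_type)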

mxx::bcast

Currently there's no corresponding function for MPI_Bcast in mxx.
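
Until such a wrapper exists, calling MPI_Bcast directly is the obvious workaround. A minimal sketch for a single trivially-copyable value (rank is assumed to hold the process rank):

    // broadcast a single int from rank 0 to all processes (plain MPI)
    int value = (rank == 0) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);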

mxx::comm.split with MPI_UNDEFINED

MPI_Comm_split with MPI_UNDEFINED as the color produces a subcommunicator equal to MPI_COMM_NULL.

Currently, mxx::comm.split(color), when supplied with MPI_UNDEFINED, does not appear to handle this case properly. Subsequent calls (even when the logic is branched with "if (color == x)") result in an MPI error/exception.
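
For comparison, a minimal sketch of the plain-MPI behavior referred to above (the color assignment is just an example; rank is assumed to hold the process rank):

    // ranks that pass MPI_UNDEFINED as the color receive MPI_COMM_NULL
    MPI_Comm subcomm;
    int color = (rank % 2 == 0) ? 0 : MPI_UNDEFINED;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);
    if (subcomm != MPI_COMM_NULL) {
        // only ranks with a defined color may use (and must later free) the subcommunicator
        MPI_Comm_free(&subcomm);
    }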

how to conduct send receive in mxx

Dear sir:
I cannot understand how to perform the send/receive operations in mxx, or which functions do this.
Does mxx support non-blocking send/receive?
Regards

MPI compliant custom reductions

See here: open-mpi/ompi#1462

Two options really:

  1. Use MPI_Pack/MPI_Unpack for MPI-compliant copying of the in/inout buffers inside the custom reduction operation (see the sketch after this list).
  2. Or use MPI_Type_get_envelope and manually copy over each member as it appears in the MPI datatype. I could also cache the envelope information myself inside the mxx::datatype and provide an MPI_Datatype-safe copy operation in that class (e.g., copy member by member).
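
A minimal sketch of option 1, an MPI-compliant element copy via MPI_Pack/MPI_Unpack (an illustration of the idea only, not mxx's implementation; datatype_copy is a made-up helper name, and <mpi.h> plus <vector> are assumed):

    // copy `count` elements of `dtype` from src to dst using only the MPI type
    // machinery, so that holes/padding in the datatype are never touched
    void datatype_copy(void* src, void* dst, int count, MPI_Datatype dtype) {
        int packed_size = 0;
        MPI_Pack_size(count, dtype, MPI_COMM_SELF, &packed_size);
        std::vector<char> buf(packed_size);
        int pos = 0;
        MPI_Pack(src, count, dtype, buf.data(), packed_size, &pos, MPI_COMM_SELF);
        pos = 0;
        MPI_Unpack(buf.data(), packed_size, &pos, dst, count, dtype, MPI_COMM_SELF);
    }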

Placeholders namespace is ignored by Boost

If you are trying to use mxx and Boost together, the unqualified placeholders in the snippet of code below conflict with Boost's own (global) placeholders.

            // create user function
            using namespace std::placeholders;
            m_user_func = std::bind(custom_op::custom_function<Func>,
                                  std::forward<Func>(func), _1, _2, _3, _4);

However, using the namespace explicitly resolves the conflict.

            // create user function
            m_user_func = std::bind(custom_op::custom_function<Func>,
                                    std::forward<Func>(func),
                                    std::placeholders::_1,
                                    std::placeholders::_2,
                                    std::placeholders::_3,
                                    std::placeholders::_4);

or, using an alias,

            // create user function
            namespace ph = std::placeholders;
            m_user_func = std::bind(custom_op::custom_function<Func>,
                                    std::forward<Func>(func),
                                    ph::_1, ph::_2, ph::_3, ph::_4);

Send/Recv functions

TODO:

  • Properly define interfaces to send/recv, isend/irecv, async_send, async_recv, etc.
  • Add send/recv functions to the test suite (see e.g. #14)

compilation error in file `datatypes.hpp`

I get the following error when compiling with gcc 5.4.0:

In file included from ../thirdParty/mxx/include/mxx/comm_fwd.hpp:33:0,
                 from ../thirdParty/mxx/include/mxx/sort.hpp:33,
                 from ElRBFMeshMotionSolver.C:7:
../thirdParty/mxx/include/mxx/datatypes.hpp: In function ‘std::ostream& mxx::operator<<(std::ostream&, const mxx::datatype_name&)’:
../thirdParty/mxx/include/mxx/datatypes.hpp:300:15: error: no match for ‘operator<<’ (operand types are ‘std::ostream {aka std::basic_ostream<char>}’ and ‘const char [2]’)
     return os << "(" << n.mpi_name << "," << n.c_name << "," << n.typeid_name << ")";
               ^
../thirdParty/mxx/include/mxx/datatypes.hpp:299:22: note: candidate: std::ostream& mxx::operator<<(std::ostream&, const mxx::datatype_name&)
 inline std::ostream& operator<<(std::ostream& os, const datatype_name& n) {
                      ^
../thirdParty/mxx/include/mxx/datatypes.hpp:299:22: note:   no known conversion for argument 2 from ‘const char [2]’ to ‘const mxx::datatype_name&’
In file included from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/memory:82:0,
                 from ../thirdParty/mxx/include/mxx/future.hpp:30,
                 from ../thirdParty/mxx/include/mxx/comm_fwd.hpp:32,
                 from ../thirdParty/mxx/include/mxx/sort.hpp:33,
                 from ElRBFMeshMotionSolver.C:7:
/lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/bits/shared_ptr.h:66:5: note: candidate: template<class _Ch, class _Tr, class _Tp, __gnu_cxx::_Lock_policy _Lp> std::basic_ostream<_CharT, _Traits>& std::operator<<(std::basic_ostream<_CharT, _Traits>&, const std::__shared_ptr<_Tp, _Lp>&)
     operator<<(std::basic_ostream<_Ch, _Tr>& __os,
     ^
/lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/bits/shared_ptr.h:66:5: note:   template argument deduction/substitution failed:
In file included from ../thirdParty/mxx/include/mxx/comm_fwd.hpp:33:0,
                 from ../thirdParty/mxx/include/mxx/sort.hpp:33,
                 from ElRBFMeshMotionSolver.C:7:
../thirdParty/mxx/include/mxx/datatypes.hpp:300:18: note:   mismatched types ‘const std::__shared_ptr<_Tp, _Lp>’ and ‘const char [2]’
     return os << "(" << n.mpi_name << "," << n.c_name << "," << n.typeid_name << ")";
                  ^
In file included from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/string:52:0,
                 from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/stdexcept:39,
                 from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/array:38,
                 from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/tuple:39,
                 from /lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/functional:55,
                 from ../thirdParty/mxx/include/mxx/comm_fwd.hpp:28,
                 from ../thirdParty/mxx/include/mxx/sort.hpp:33,
                 from ElRBFMeshMotionSolver.C:7:
/lrz/mnt/sys.x86_64/compilers/gcc/5.4.0/include/c++/5.4.0/bits/basic_string.h:5172:5: note: candidate: template<class _CharT, class _Traits, class _Alloc> std::basic_ostream<_CharT, _Traits>& std::operator<<(std::basic_ostream<_CharT, _Traits>&, const std::__cxx11::basic_string<_CharT, _Traits, _Alloc>&)
     operator<<(basic_ostream<_CharT, _Traits>& __os,

This can simply be fixed by including the following header at the top of the file datatypes.hpp:

#include <iostream>

compilation error on debian:testing with gcc 6.1.1

A compilation error occurs when compiling on debian:testing with gcc 6.1.1. The latest version of Ubuntu works just fine though.

Here is the log:

In file included from ../thirdParty/mxx/include/mxx/samplesort.hpp:31:0,
                 from ../thirdParty/mxx/include/mxx/sort.hpp:34,
                 from ElRBFMeshMotionSolver.C:7:
../thirdParty/mxx/include/mxx/bitonicsort.hpp: In function 'void mxx::impl::bitonic_merge(_Iterator, _Iterator, _Compare, const mxx::comm&, int, int, int)':
../thirdParty/mxx/include/mxx/bitonicsort.hpp:103:34: error: there are no arguments to 'log' that depend on a template parameter, so a declaration of 'log' must be available [-fpermissive]
     int p2 = pow(2, ceil(log(size)/log(2)));
                                  ^
../thirdParty/mxx/include/mxx/bitonicsort.hpp:103:34: note: (if you use '-fpermissive', G++ will accept your code, but allowing the use of an undeclared name is deprecated)
../thirdParty/mxx/include/mxx/bitonicsort.hpp:103:41: error: there are no arguments to 'log' that depend on a template parameter, so a declaration of 'log' must be available [-fpermissive]
     int p2 = pow(2, ceil(log(size)/log(2)));
                                         ^
../thirdParty/mxx/include/mxx/bitonicsort.hpp: In function 'void mxx::impl::bitonic_sort_rec(_Iterator, _Iterator, _Compare, const mxx::comm&, int, int, bool)':
../thirdParty/mxx/include/mxx/bitonicsort.hpp:127:34: error: there are no arguments to 'log' that depend on a template parameter, so a declaration of 'log' must be available [-fpermissive]
     int p2 = pow(2, ceil(log(size)/log(2)));
                                  ^
../thirdParty/mxx/include/mxx/bitonicsort.hpp:127:41: error: there are no arguments to 'log' that depend on a template parameter, so a declaration of 'log' must be available [-fpermissive]
     int p2 = pow(2, ceil(log(size)/log(2)));
                                         ^
../thirdParty/mxx/include/mxx/bitonicsort.hpp: In instantiation of 'void mxx::impl::bitonic_sort_rec(_Iterator, _Iterator, _Compare, const mxx::comm&, int, int, bool) [with _Iterator = __gnu_cxx::__normal_iterator<std::tuple<int, double, double, double>*, std::vector<std::tuple<int, double, double, double> > >; _Compare = Foam::ElRBFMeshMotionSolver::solve()::<lambda(const tuple_type&, const tuple_type&)>]':
../thirdParty/mxx/include/mxx/bitonicsort.hpp:166:27:   required from 'void mxx::bitonic_sort(_Iterator, _Iterator, _Compare, const mxx::comm&) [with _Iterator = __gnu_cxx::__normal_iterator<std::tuple<int, double, double, double>*, std::vector<std::tuple<int, double, double, double> > >; _Compare = Foam::ElRBFMeshMotionSolver::solve()::<lambda(const tuple_type&, const tuple_type&)>]'
../thirdParty/mxx/include/mxx/samplesort.hpp:175:17:   required from 'std::vector<typename std::iterator_traits<_Iter>::value_type> mxx::impl::sample_block_decomp(_Iterator, _Iterator, _Compare, size_t, const mxx::comm&) [with _Iterator = __gnu_cxx::__normal_iterator<std::tuple<int, double, double, double>*, std::vector<std::tuple<int, double, double, double> > >; _Compare = Foam::ElRBFMeshMotionSolver::solve()::<lambda(const tuple_type&, const tuple_type&)>; typename std::iterator_traits<_Iter>::value_type = std::tuple<int, double, double, double>; size_t = long unsigned int]'
../thirdParty/mxx/include/mxx/samplesort.hpp:343:46:   required from 'void mxx::impl::samplesort(_Iterator, _Iterator, _Compare, const mxx::comm&) [with _Iterator = __gnu_cxx::__normal_iterator<std::tuple<int, double, double, double>*, std::vector<std::tuple<int, double, double, double> > >; _Compare = Foam::ElRBFMeshMotionSolver::solve()::<lambda(const tuple_type&, const tuple_type&)>; bool _Stable = false]'
../thirdParty/mxx/include/mxx/sort.hpp:41:49:   required from 'void mxx::sort(_Iterator, _Iterator, _Compare, const mxx::comm&) [with _Iterator = __gnu_cxx::__normal_iterator<std::tuple<int, double, double, double>*, std::vector<std::tuple<int, double, double, double> > >; _Compare = Foam::ElRBFMeshMotionSolver::solve()::<lambda(const tuple_type&, const tuple_type&)>]'
ElRBFMeshMotionSolver.C:231:66:   required from here
../thirdParty/mxx/include/mxx/bitonicsort.hpp:127:29: error: 'log' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
     int p2 = pow(2, ceil(log(size)/log(2)));
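
A likely fix (an assumption, not stated in the report) is to include <cmath> in bitonicsort.hpp and fully qualify the math calls so they no longer rely on argument-dependent lookup:

    // sketch of the likely fix in bitonicsort.hpp
    #include <cmath>
    // ...
    int p2 = std::pow(2, std::ceil(std::log(size) / std::log(2)));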
