
cppimport's Introduction

cppimport - Import C++ directly from Python!

Contributing and architecture

See CONTRIBUTING.md for details on the internals of cppimport and how to get involved in development.

Installation

Install with pip install cppimport.

A quick example

Save the C++ code below as somecode.cpp.

// cppimport
#include <pybind11/pybind11.h>

namespace py = pybind11;

int square(int x) {
    return x * x;
}

PYBIND11_MODULE(somecode, m) {
    m.def("square", &square);
}
/*
<%
setup_pybind11(cfg)
%>
*/

Then open a Python interpreter and import the C++ extension:

>>> import cppimport.import_hook
>>> import somecode  # This will pause for a moment to compile the module
>>> somecode.square(9)
81

Hurray, you've called some C++ code from Python using a combination of cppimport and pybind11.

I'm a big fan of the workflow that this enables, where you can edit both C++ files and Python and recompilation happens transparently! It's also handy for quickly whipping together an optimized version of a slow Python function.

An explanation

Okay, now that I've hopefully convinced you on how exciting this is, let's get into the details of how to do this yourself. First, the comment at top is essential to opt in to cppimport. Don't forget this! (See below for an explanation of why this is necessary.)

// cppimport

The bulk of the file is a generic, simple pybind11 extension. We include the pybind11 headers, then define a simple function that squares x, then export that function as part of a Python extension called somecode.

Finally at the end of the file, there's a section I'll call the "configuration block":

<%
setup_pybind11(cfg)
%>

This region surrounded by <% and %> is a Mako code block. The region is evaluated as Python code during the build process and provides configuration info like compiler and linker flags to the cppimport build system.

Note that because of the Mako pre-processing, the comments around the configuration block may be omitted. Putting the configuration block at the end of the file, while optional, ensures that line numbers remain correct in compilation error messages.

Building for production

In production deployments you usually don't want to ship a C/C++ compiler and all the sources, or compile at runtime. cppimport therefore provides a simple CLI utility for pre-compiling all source files, which can be used, for example, in CI/CD pipelines.

Usage is as simple as

python -m cppimport build

This will build all *.c and *.cpp files in the current directory (and its subdirectories) if they are eligible to be imported (i.e. they contain the // cppimport comment on the first line).

Alternatively, you may specify one or more root directories or source files to be built:

python -m cppimport build ./my/directory/ ./my/single/file.cpp

Note: When specifying a path to a file, the header check (// cppimport) is skipped for that file.
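The first-line opt-in check described above can be sketched in Python. Note that `is_eligible` is a hypothetical helper for illustration, not cppimport's actual function:

```python
def is_eligible(path):
    """Hypothetical sketch of the opt-in check: a file is built only if
    its first line contains the "cppimport" marker comment."""
    with open(path, "r", errors="ignore") as f:
        first_line = f.readline()
    return "cppimport" in first_line
```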

Fine-tuning for production

To further improve startup performance for production builds, you can opt-in to skip the checksum and compiled binary existence checks during importing by either setting the environment variable CPPIMPORT_RELEASE_MODE to true or setting the configuration from within Python:

cppimport.settings['release_mode'] = True

Warning: Make sure to have all binaries pre-compiled when in release mode, as importing any missing ones will cause exceptions.

Frequently asked questions

What's actually going on?

Sometimes Python just isn't fast enough. Or you have existing code in a C or C++ library. So, you write a Python extension module, a library of compiled code. I recommend pybind11 for C++ to Python bindings or cffi for C to Python bindings. I've done this a lot over the years. But, I discovered that my productivity is slower when my development process goes from Edit -> Test in just Python to Edit -> Compile -> Test in Python plus C++. So, cppimport combines the process of compiling and importing an extension in Python so that you can just run import foobar and not have to worry about multiple steps. Internally, cppimport looks for a file foobar.cpp. Assuming one is found, it's run through the Mako templating system to gather compiler options, then it's compiled and loaded as an extension module.
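The "look for a file foobar.cpp" step can be sketched as follows. This is a rough illustration of the lookup described above, not cppimport's real internals; `find_matching_source` is a hypothetical helper:

```python
import os

def find_matching_source(module_name, search_dirs):
    """Illustrative sketch: for `import foobar`, look for a foobar.cpp
    (or foobar.c) in each candidate directory."""
    for d in search_dirs:
        for ext in (".cpp", ".c"):
            candidate = os.path.join(d, module_name + ext)
            if os.path.exists(candidate):
                return candidate
    return None
```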

Does cppimport recompile every time a module is imported?

No! Compilation should only happen the first time the module is imported. The C++ source is compared with a checksum on each import to determine if any relevant file has changed. Additional dependencies (e.g. header files!) can be tracked by adding to the Mako header:

cfg['dependencies'] = ['file1.h', 'file2.h']

The checksum is computed by concatenating the contents of the extension's C++ file with the contents of the files listed in cfg['sources'] and cfg['dependencies'].
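That scheme can be sketched with hashlib. `file_checksum` here is an illustrative stand-in, not cppimport's actual implementation:

```python
import hashlib

def file_checksum(paths):
    """Sketch of the checksum described above: hash the concatenated
    bytes of the extension source plus the files in cfg['sources'] and
    cfg['dependencies']. Any change to any file changes the digest,
    which triggers a rebuild."""
    h = hashlib.md5()
    for path in paths:
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()
```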

How can I set compiler or linker args?

Standard distutils configuration options are valid:

cfg['extra_link_args'] = ['...']
cfg['extra_compile_args'] = ['...']
cfg['libraries'] = ['...']
cfg['include_dirs'] = ['...']

For example, to use C++11, add:

cfg['extra_compile_args'] = ['-std=c++11']
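Putting the pieces together, a complete configuration block carrying this flag might look like the following (wrapped in a C++ comment, as in the quick example, so the file remains valid C++ on its own):

```
/*
<%
setup_pybind11(cfg)
cfg['extra_compile_args'] = ['-std=c++11']
%>
*/
```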

How can I split my extension across multiple source files?

In the configuration block:

cfg['sources'] = ['extra_source1.cpp', 'extra_source2.cpp']

cppimport isn't doing what I want, can I get more verbose output?

cppimport uses the standard Python logging tools. Please add logging handlers to either the root logger or the "cppimport" logger. For example, to output all debug level log messages:

import logging
import sys

root_logger = logging.getLogger()
root_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler.setFormatter(formatter)
root_logger.addHandler(handler)

How can I force a rebuild even when the checksum matches?

Set:

cppimport.settings['force_rebuild'] = True

If this is a common occurrence, I would love to hear your use case and why the combination of the checksum, cfg['dependencies'] and cfg['sources'] is insufficient!

Note that force_rebuild does not work when importing the module concurrently.

Can I import my module concurrently?

It's (mostly) safe to use cppimport to import a module concurrently using multiple threads, processes or even machines! There's an exception if your filesystem does not support file locking - see the next section.

Before building a module, cppimport obtains a lockfile that prevents other processes from building it at the same time, avoiding clashes that can lead to build failures. The other processes wait up to 10 minutes for the first process to finish building the module, then load it. If the module is not built within 10 minutes, the waiting processes time out. You can increase the timeout in the settings:

cppimport.settings['lock_timeout'] = 10*60 # 10 mins

You should not use force_rebuild when importing concurrently.
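The wait-for-the-lock behaviour described above can be sketched with POSIX advisory locks. This is an illustration only, not cppimport's actual code (cppimport's real implementation differs), and it assumes a filesystem with flock support:

```python
import fcntl
import time

def acquire_build_lock(lock_path, timeout=600):
    """Illustrative sketch: wait up to `timeout` seconds for an
    exclusive advisory lock on the lockfile. The caller builds the
    module while holding the lock, then closes the file to release it."""
    f = open(lock_path, "w")
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return f
        except OSError:
            time.sleep(0.1)  # lock held by another process; retry
    f.close()
    raise TimeoutError(f"could not lock {lock_path} within {timeout}s")
```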

Acquiring the lock hangs or times out unexpectedly - what's going on?

Certain platforms (e.g. those running a Data Virtualization Service, DVS) do not support file locking. If you're on Linux with access to flock, you can test whether locking is supported (credit to this page):

touch testfile
flock ./testfile true && echo ok || echo nok

If locking is not supported, you can disable the file lock in the cppimport global settings:

cppimport.settings['use_filelock'] = False 

This setting must be changed before you import any code. By setting use_filelock=False, you become responsible for ensuring that only a single process (re)builds the package at a time. For example: if you're using mpi4py to run independent, communicating processes, here's how to protect the build:

from mpi4py import MPI
import cppimport, cppimport.import_hook
cppimport.settings["use_filelock"] = False

pid = MPI.COMM_WORLD.Get_rank()

if pid == 0:
    import somecode      # Process 0 compiles extension if needed 
MPI.COMM_WORLD.Barrier() # Remaining processes wait 
import somecode          # All processes use compiled extension 

How can I get information about filepaths in the configuration block?

The module name is available as the fullname variable and the C++ module file is available as filepath. For example,

<%
module_dir = os.path.dirname(filepath)
%>

How can I make compilation faster?

For single-file extensions, this is a fundamental limitation of C++: heavily templated code is often quite slow to compile.

If your extension has multiple source files via the cfg['sources'] capability, you might be hoping for some kind of incremental compilation, i.e. recompiling only those source files that have changed. Unfortunately, this isn't possible because cppimport is built on top of setuptools and distutils, and these standard library components do not support incremental compilation.

I recommend following the suggestions on this SO answer. That is:

  1. Use ccache to reduce the cost of rebuilds
  2. Enable parallel compilation. This can be done with cfg['parallel'] = True in the C++ file's configuration header.
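For example, a multi-file extension's configuration block might enable parallel compilation like this (cfg['parallel'] and cfg['sources'] are described in the sections above):

```
<%
setup_pybind11(cfg)
cfg['sources'] = ['extra_source1.cpp', 'extra_source2.cpp']
cfg['parallel'] = True
%>
```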

As a further thought, if your extension has many source files and you're hoping to do incremental compiles, that probably indicates that you've outgrown cppimport and should consider using a more complete build system like CMake.

Why does the import hook need "cppimport" on the first line of the .cpp file?

Modifying the Python import system is a global modification and thus affects all imports from any other package. As a result, when I first implemented cppimport, other packages (e.g. scipy) suddenly started breaking because import statements internal to those packages were importing C or C++ files instead of the modules they were intended to import. To avoid this failure mode, the import hook uses an "opt in" system where C and C++ files can specify they are meant to be used with cppimport by having a comment on the first line that includes the text "cppimport".

As an alternative to the import hook, you can use imp or imp_from_filepath. cppimport.imp and cppimport.imp_from_filepath perform exactly the same operation as the import hook, but in a slightly more explicit way:

foobar = cppimport.imp("foobar")
foobar = cppimport.imp_from_filepath("src/foobar.cpp")

By default, these explicit functions do not require the "cppimport" keyword on the first line of the C++ source file.

Windows?

The CI system does not run on Windows. A PR would be welcome adding further Windows support. I've used cppimport with MinGW-w64 and Python 3.6 and had good success. I've also had reports that cppimport works on Windows with Python 3.6 and Visual C++ 2015 Build Tools. The main challenge is making sure that distutils is aware of your available compilers. Try out the suggestion here.

cppimport uses the MIT License

cppimport's People

Contributors

anntzer, bjornfor, evertheylen, fran6co, joshlk, linzuzeng, mityax, mivade, myd7349, nrhinehart, oysstu, poke1024, robertgolosynsky, stefanor, tbenthompson, thehamsta, vbharadwaj-bk


cppimport's Issues

Possibility of hot-reloading?

Is there some way to make it possible to hot-reload changes to a .cpp file in a running Python process (e.g. a Jupyter notebook)?

In particular, for development of high performance numerical computing code, it would be very useful to be able to have a jupyter notebook where data and other memory structures are loaded and set up, and then be able to use cppimport to import a .cpp file that is being developed for some heavy compute implementation. Right now, calling importlib.reload on a cppimport compiled/imported module does seem to trigger a re-compile (if the .cpp file has changed), but the new module is not actually loaded into the python process.

This is probably a python limitation, but there are (hacky) workarounds like this:
https://stackoverflow.com/questions/8295555/how-to-reload-a-python3-c-extension-module/8295590#8295590
I wonder if there would be a way to enable an optional "development" mode for cppimport where modules are loaded with these tricks that allow reloading of binary extensions?

It would even be great to get some pointers about how to go about implementing this so I can make a PR.

Implement relative imports

The code in find.py needs some changes to modify the search directories in the case of a relative import. Look at cppimport/import_hook.py and cppimport/find.py.

[Question] Can we suppress cppimport error messages/warnings when building the bindings?

As mentioned in the #60 (comment), I was able to get cppimport working for a sample project using the cfg['include_dirs'].append(path)

I tried to implement the same method in Bitcoin Core for building the python bindings. The changes can be found here. The bindings are built properly, and they are working.

The problem is that some error messages related to cppimport's inner workings are shown when these bindings are built. The error message can be found here.

A stackoverflow thread that is somewhat similar to this issue.

Race condition when multiple processes try to compile a module at once

Hi,

Great package by the way!

I've encountered an issue when multiple processes are spawned that all race to compile the same module. This can also occur when multiple processes are spawned on different hosts and share the same network filesystem. Such a situation is common when distributing work between multiple processes or hosts for AI or data analytics.

Here is a demonstration (in the shell):

echo '// cppimport
#include <pybind11/pybind11.h>

namespace py = pybind11;

int square(int x) {
    return x * x;
}

PYBIND11_MODULE(somecode, m) {
    m.def("square", &square);
}
/*
<%
setup_pybind11(cfg)
%>
*/' > somecode.cpp

echo 'import cppimport.import_hook
import somecode
somecode.square(9)' > test.py

rm somecode.cpython-*

for i in {1..100}; do python3 test.py & done

On my system, around 4 out of 100 processes exit with an error. The shell output includes:

error: could not delete '/localdata/joshl/sandbox/somecode.cpython-36m-x86_64-linux-gnu.so': No such file or directory
...
Exit 1                  python3 test.py
...
Bus error               (core dumped) python3 test.py

These errors don't appear when the binary already exists.


To mitigate this issue in our applications, we have used a file lock so that only one process attempts to compile the module at a time. A process first checks if the binary file exists; otherwise it attempts to obtain the file lock. If it can't obtain the lock, it waits until either the binary exists, it can obtain the file lock, or it times out. Here is an example of how it can be done (app code):

import logging
import os
from contextlib import suppress
from time import sleep, time

from filelock import FileLock, Timeout

from cppimport.checksum import is_checksum_valid

binary_path = module_data['ext_path']
lock_path = binary_path + '.lock'

t = time()

while not (os.path.exists(binary_path) and is_checksum_valid(module_data)) and time() - t < timeout:
    try:
        with FileLock(lock_path, timeout=1):
            if os.path.exists(binary_path) and is_checksum_valid(module_data):
                break
            # BUILD BINARY
            template_and_build(filepath, module_data)
    except Timeout:
        logging.debug(f'{os.getpid()}: Could not obtain lock')
        sleep(1)

if not (os.path.exists(binary_path) and is_checksum_valid(module_data)):
    raise Exception(
        f'Could not compile binary as lock already taken and timed out. Lock file will be deleted: {lock_path}')

if os.path.exists(lock_path):
    with suppress(OSError):
        os.remove(lock_path)

It would be great if we could upstream the above to cppimport to prevent the race condition errors. If you are happy with this solution I could contribute the above to the appropriate place in cppimport.

Support standard kwargs to Extension

Currently, cppimport seems to rename certain keywords in cfg before passing them to setuptools.Extension, e.g. "compiler_args" -> "extra_compile_args":

extra_compile_args = cfg.get('compiler_args', [])

(I tried using extra_compile_args directly, with no success.) It would be appreciated if the standard kwargs to setuptools.Extension were supported (instead?). While the standard kwargs may or may not be very well named, they are, well, standard, and I think it's clearer to keep using them rather than introducing your own set of synonyms.

Thanks for the greatly useful project!

IDEA: plugin system

I really like cppimport for quick tests and local work. However, there's one irritating part I always have to look up:

/*
<%
setup_pybind11(cfg)
%>
*/

This has two issues - one it's a special, parsed magic that requires changing the C++ code, and two, it's hard-coded into cppimport; if someone wanted to support something besides pybind11, it would need a new function like this. I'd like to propose a plugin system that allows pybind11 to declare what it needs for setup that could also be used elsewhere. I can help get the pybind11 part into the pybind11 package. :)

The idea I'm currently thinking of is the following. Users could use:

# Global
cppimport.plugin.pybind11.import_hook
import foobar

# Local
foobar = cppimport.plugin.pybind11.imp("foobar")
foobar = cppimport.plugin.pybind11.imp_from_filepath("src/foobar.cpp")

It would be also available in cpp mode to support "classic" usage:

/*
<%
cppimport.plugin.pybind11.setup(cfg)
%>
*/

A package (pybind11 in this case) would implement an entrypoint, cppimport.setup: pybind11 = .... When you access cppimport.plugin.<attr>, it looks for <attr> item in the cppimport.setup entry points. If found, it calls it with some to-be-determined API, possibly just "cfg" like the current setup_pybind11. No modification or special magic comments needed in the source code.

Thoughts? Also, a way to set config options would be important, which I haven't addressed above. I think import_hook could be replaced with something callable, like setup_import_hook, which would take cfg options. imp* could take configuration options too.

Windows support

Not a real issue, but I guess you'll be interested in the following:

cppimport seems to be working with Python 3.6 and Visual C++ 2015 Build Tools on Windows out of the box.

Indentation error in macosx

I keep getting the following indentation error in macosx 10.15. However, the same code runs fine in linux.

<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['/Users/amritpoudel/Desktop/HPC_LIB/HOMEBREW/opt/eigen/include/eigen3', '/Users/amritpoudel/Desktop/HPC_LIB/HOMEBREW/include/python3.7m',
'/Users/amritpoudel/Desktop/HPC_LIB/HOMEBREW/opt/python3/Frameworks/Python.framework/Versions/3.7/include/python3.7m']
cfg['libraries'] = ['/Users/amritpoudel/Desktop/HPC_LIB/HOMEBREW/opt/python3/Frameworks/Python.framework/Versions/3.7/lib']
setup_pybind11(cfg)
%>

#############################################

return compile(expr, filename, mode, PyCF_ONLY_AST)
File "", line 2
cfg['include_dirs'] = ['/Users/amritpoudel/Desktop/HPC_LIB/HOMEBREW/opt/eigen/include/eigen3',
^
IndentationError: unexpected indent

I don't quite understand why it complains in macosx. The same copy-pasted code runs fine in linux machine, so indentation shouldn't be an issue.

io.UnsupportedOperation: IOStream has no fileno.

I get the following error when I try to run the somecode example.
The import importcpp statement went fine.
I get the error on the import somecode line:

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 890, in _find_spec
AttributeError: 'CppFinder' object has no attribute 'find_spec'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/miniconda/lib/python3.5/site-packages/cppimport/import_hook.py", line 30, in stdchannel_redirected
    oldstdchannel = os.dup(stdchannel.fileno())
  File "/miniconda/lib/python3.5/site-packages/ipykernel/iostream.py", line 306, in fileno
    raise UnsupportedOperation("IOStream has no fileno.")
io.UnsupportedOperation: IOStream has no fileno.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/miniconda/lib/python3.5/site-packages/cppimport/import_hook.py", line 241, in find_module
    build_plugin(fullname, filepath)
  File "/miniconda/lib/python3.5/site-packages/cppimport/import_hook.py", line 187, in build_plugin
    with stdchannel_redirected(sys.stdout, os.devnull):
  File "/miniconda/lib/python3.5/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/miniconda/lib/python3.5/site-packages/cppimport/import_hook.py", line 36, in stdchannel_redirected
    if oldstdchannel is not None:
UnboundLocalError: local variable 'oldstdchannel' referenced before assignment

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-3-73fe0814eaed> in <module>()
----> 1 import somecode

ImportError: No module named 'somecode'

Submodule importing triggers recompile on each run

This occurs on python 2.7.12 and cppimport==17.7.24

I've created a little package to test submodule imports. The package __init__.py looks as follows:

import cppimport

mod = cppimport.imp("testcppimport.qux.foobar")
plort = mod.qux.foobar.plort

Now if I invoke plort in multiple interpreter runs, foobar.so is recompiled, even though it already exists.

python -c 'from testcppimport import plort; plort()'
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
Plort
python -c 'from testcppimport import plort; plort()'
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
Plort

Some debugging indicates that the call to find_module fails with ImportError No module named testcppimport.qux.foobar. So the find fails, foobar.so is recompiled, loaded and plort() is invoked.

The test package is here testcppimport.zip

specify compile options?

Is there a way to specify compile options?

I need to:

  • specify include path for pybind11
  • specify optimization options

Error SystemExit

Firstly, congratulations! This python module is very useful. My problem is:

After execute this code:
import cppimport
func = cppimport.imp("file_name")
func

This error appears:
SystemExit: error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\link.exe' failed with exit status 1120

I think the problem is that file_name.cpp creates a file.exe and then runs it. Can this be resolved? I need to run file.exe to update the data I need.

Thanks

cppimport for compiling in release mode

I have recently started using cppimport (a great library, very useful!) along with pybind11. However, I notice that cppimport seems to compile C++ code in debug mode using -g flag. Also, it does not seem to look for libraries provided in cfg['libraries']. Is there a way to compile in release mode?

<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['/hpc_lib/eigen', '/hpc_lib/pybind11/include',
'/hpc_lib/python-3.6.2/include', '/hpc_lib/openblas/include']
cfg['libraries'] = ['/hpc_lib/python-3.6.2/lib', '/hpc_lib/openblas/lib']
cfg['linker_args'] = ['-lopenblas', '-lgfortran', '-lpython3.6m']
setup_pybind11(cfg)
%>

I get the following warning and error during link time:

gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/hpc_lib/eigen -I/hpc_lib/pybind11/include -I/hpc_lib/python-3.6.2/include -I/hpc_lib/openblas/include -I/hpc_lib/python-3.6.2/include/python3.6m -c /src/.rendered.dwf_solver_cpp.cpp -o /tmp/tmpc8iscbum/src/.rendered.dwf_solver_cpp.o -std=c++11 -std=c++11
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
warning: no library file corresponding to '/hpc_lib/python-3.6.2/lib' found (skipping)

g++ -pthread -shared -rdynamic /tmp/tmpc8iscbum/src/.rendered.dwf_solver_cpp.o -lpython3.6m -o /tmp/tmpc8iscbum/dwf_solver_cpp.cpython-36m-x86_64-linux-gnu.so -lopenblas -lgfortran -lpython3.6m
warning: no library file corresponding to '/hpc_lib/openblas/lib' found (skipping)
/binutils/bin/ld: cannot find -lopenblas
collect2: error: ld returned 1 exit status

Why does the compiler not pick up the libraries? It seems to pick up the header files fine, though.

I have those libraries correctly set in LD_LIBRARY_PATH.

Not working on cpp file that uses #include with angle brackets

I have the following source cpp file. Creating a binding for this file with cppimport works perfectly.

#include "../build/example2.h"

int add(int, int);

int add(int i, int j) {
    int a = MAX_VAL;
    return i*j + a;
}

If I use #include <build/example2.h> instead of #include "../build/example2.h" in the above program, I get the following error.

fatal error: /build/example2.h: No such file or directory
    1 | #include </build/example2.h>
      |          ^~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-Linux-gnu-gcc' failed with exit status 1

NOTE: the binding.cpp for this file using cppimport is present in a different directory.

I think cppimport fails if the source file uses angle brackets (e.g. #include <build/example.h>) instead of double quotes. Is there a way to make cppimport work with angle brackets too?

Fail to run `python -m cppimport build`

Hi there~
When I try to run %sh python -m cppimport build in databricks, it returns:
No module named cppimport.__main__; 'cppimport' is a package and cannot be directly executed

What could the reason be? Thanks very much!

cppimport==21.3.7
Python==3.8.10

[Windows] Compilation warnings: unknown option '-std=c++11'

Hello, first of all, great work on this tool! it works like a charm!

I'm a windows user, and the default example raises 2 warnings because of those flags I believe:

cfg["compiler_args"] = ["-std=c++11", "-fvisibility=hidden"] + cfg["compiler_args"]

cl : Command line warning D9002 : ignoring unknown option '-std=c++11'
cl : Command line warning D9002 : ignoring unknown option '-fvisibility=hidden'

I think msvc has its own set of compiler flags (like /std:c++11) and using those on windows might fix those warnings.
Thanks again for the good work!

Error: No such file or directory: 'VERSION'

When I try to install cppimport via pip on multiple machine I receive the same error message.

The error message is: FileNotFoundError: [Errno 2] No such file or directory: 'VERSION'

The Python versions are 3.6.9 and 3.7.0

Thanks

Paul

setup_pybind11(cfg) in /* */ or not?

I'm having issues with

/*
<%
setup_pybind11(cfg)
%>
*/

It appears that when I put it in /* */ as described in the documentation, it has no effect when the cpp file is imported from Python. However, it all works fine from Python when I don't use the /* */, but then I can't run it anymore from Visual Studio. Any suggestions what I can do? Why do I have to remove the /* */ for it to work correctly from Python?

Import header-only implementations.

Hello,

thanks for creating this super helpful project. 👍. I was wondering whether it's possible to import header-only implementations.
I tried:

import cppimport
cppimport.config.file_exts.append(".h")

But I get

error: unknown file type '.h' (from '<path-on-my-system>/.rendered.ema.h')

Thanks for the great work 👍

How specify compiler?

How would I specify the path to compiler I would like to use?
Is the proper solution to set the CC env var prior to importing?
thanks!

README example fails with ImportError on OSX

Problem

When i try to run the example from the README i end up with the following trace

Python 2.7.10 (default, Jul 10 2015, 09:06:15)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cppimport
>>> import somecode
Traceback (most recent call last):
  File "/private/tmp/pycpp/env/lib/python2.7/site-packages/cppimport/import_hook.py", line 230, in find_module
    build_plugin(fullname, filepath)
  File "/private/tmp/pycpp/env/lib/python2.7/site-packages/cppimport/import_hook.py", line 149, in build_plugin
    pybind11.get_include(),
  File "/private/tmp/pycpp/env/lib/python2.7/site-packages/pybind11/__init__.py", line 9, in get_include
    locations.distutils_scheme('pybind11', *args, **kwargs)['headers'])
AttributeError: 'module' object has no attribute 'distutils_scheme'

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named somecode

Any idea? Seems like it could also a bug in pybind11 :/. The cpp code is available in the same directory.

Thanks in advance, it's a cool thing :).

Added Carriage Return Characters in Rendered Output on Windows

I was following through the example in the readme, using Python 3.9, Windows 10, and MSVC2019. FWIW, I had to add a cfg['compiler_args'] = [] to the mako section to keep the compiler from barking at me. Once everything was working, I noticed that the compiler was giving me a warning that the source file was MacOS/Linux formatted. I opened the rendered cpp file and sure enough, every CR/LF was replaced with a CR/CR/LF.

I was able to fix the issue on my machine by adding the newline='' argument to the open() call on line 62 of templating.py.

I figure that since the buf object already has newlines in it, additional formatting is probably not required. Cheers.

Make setup_pybind11 include -fvisibility=hidden in extra_compile_args

Without that flag, one gets warnings per https://pybind11.readthedocs.io/en/stable/faq.html#someclass-declared-with-greater-visibility-than-the-type-of-its-field-someclass-member-wattributes.

It's probably just a matter of adding it after -std=c++11 in

cfg['compiler_args'] = ['-std=c++11'] + cfg['compiler_args']

(but before the + cfg['compiler_args'], to allow a user to overwrite it if they really, really want to).

Visual Studio Support

I cannot seem to get cppimport to set the correct include paths for Visual Studio.
cfg['include_dirs'] = [pybind11.get_include(),'/c/Program\ Files/PCL\ 1.8.1/include/pcl-1.8', ...
I also tried the native path style string.
I'm getting:

fatal error C1083: Cannot open include file: 'pcl/point_types.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.15.26726\\bin\\HostX86\\x64\\cl.exe' failed with exit status 2

Any suggestions?

cpprun test

IIUC, the cpprun entry point has been removed; however, there's a test_cpprun() test that expects it to be there - and hence it always fails.

Should it be removed from the tests as well?

Configuration is awkward

This is a really great project and I am interested in using it during development of extension modules. I would however prefer to use the more traditional methods of building extensions for releases in a lot of cases. The examples of configuration shown in the README are awkward in the sense that they use raw mako templates in the .cpp files directly which would result in failed builds if compiled directly. Some possible solutions:

  • Put the mako configuration in comments (I tested this and it works without issue)
  • Explicitly note in the README that the above is acceptable
  • Optionally take a keyword argument in cppimport.imp for configuration and bypass mako entirely

cppimport should put output files in a hidden directory

Perhaps I missed a configuration option, but currently cppimport puts all of the generated files in the same place as the source.

I would much prefer if the files were stored in a .cppimport folder or something of the sort. Would it be possible to add a configuration setting to handle this?

compilation errors are reported at the wrong line

Right now, compilation errors are not reported at the "right" line of the source file, since they actually refer to the line numbering of the "rendered" source. However, if the Mako templating is indeed kept in comments and used only to set up the cfg dictionary, then you could choose not to copy the rendered source and instead compile the original one (running Mako for its side effects only). This would also allow getting rid of the .rendered. files, though that's only a minor benefit.

Would you consider making such a change?

Compiler logs have no correlation to actual lines at which errors occur.

When I compile and import a .cpp file into Python using cppimport, any compiler error reports line numbers that bear no relation to the actual lines in either the original file or the .rendered.NAME.cpp file. For example, if I get an error message saying there is an error at line 126, I can open both the original and .rendered files and find that line 126 is completely empty.

I am using the MSVC compiler with pybind11, and this happens with any error in any C++ file.

how to reduce .so dependencies?

For example, in the .cpp configuration block I add a lot of library dependencies:
cfg['libraries'] = 'opencv_core opencv_imgproc opencv_imgcodecs opencv_video opencv_videoio opencv_objdetect m z'.split()

but not all of them are actually used.
ldd cpy.so outputs:
libopencv_core.so.3.2 => /conda/lib/libopencv_core.so.3.2 (0x00007fc2879b1000)
libopencv_imgproc.so.3.2 => /conda/lib/libopencv_imgproc.so.3.2 (0x00007fc286208000)
libopencv_imgcodecs.so.3.2 => /conda/lib/libopencv_imgcodecs.so.3.2 (0x00007fc285dcb000)
libopencv_video.so.3.2 => /conda/lib/libopencv_video.so.3.2 (0x00007fc285a56000)
libopencv_videoio.so.3.2 => /conda/lib/libopencv_videoio.so.3.2 (0x00007fc285825000)
libopencv_objdetect.so.3.2 => /conda/lib/libopencv_objdetect.so.3.2 (0x00007fc2855c8000)

But if I compile with g++ via a Makefile, the unused library dependencies are removed automatically.
How can I enable this in the cppimport cfg?

I did try the -flto -Os flags from https://stackoverflow.com/questions/31354641/c-reduce-library-size-by-excluding-unnecessary-functions-programmatically, but they don't help.
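A possible cfg tweak (assuming cppimport forwards a 'linker_args' key to the linker, which is how its other cfg keys behave): GNU ld drops libraries that satisfy no symbol when --as-needed is in effect, which is typically why a plain g++ build ends up without the unused OpenCV dependencies.

```python
# Sketch only -- whether your g++ defaults already pass --as-needed
# depends on the distro. The flag must precede the libraries on the
# final link line to take effect.
cfg = {}
cfg["libraries"] = "opencv_core opencv_imgproc opencv_imgcodecs m z".split()
cfg["linker_args"] = ["-Wl,--as-needed"]  # ld omits unused DT_NEEDED entries

assert cfg["linker_args"] == ["-Wl,--as-needed"]
```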

How to import a cpp file from another dir?

The function somecode = cppimport.imp("somecode") can't find the .cpp file if it is not in the working directory.
The function cppimport.imp_from_filepath returns errors for every argument I tried (I could not find documentation).

What is the method for importing .cpp files from directories other than the working directory?
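Two approaches that should work (based on cppimport's import machinery, which searches sys.path; the directory below is hypothetical, and the cppimport calls are left commented out since they need the file to exist):

```python
import sys

other_dir = "/path/to/other/dir"  # hypothetical location of somecode.cpp
if other_dir not in sys.path:
    sys.path.append(other_dir)

# import cppimport
# somecode = cppimport.imp("somecode")  # now found via sys.path
# ...or point at the file directly:
# somecode = cppimport.imp_from_filepath(other_dir + "/somecode.cpp")

assert other_dir in sys.path
```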

Cannot execute test.py in the tests folder

Does anyone have the same issue when executing test.py in the tests folder?

The console gives me feedback like this:

Traceback (most recent call last):
File "/Users/jiaqimo/Downloads/cppimport-stable/cmake_example copy/tests/test.py", line 1, in <module>
import cmake_example as m
ModuleNotFoundError: No module named 'cmake_example'

Add Windows to continuous integration system.

Simple, see the title!

The issue is not quite as easy as appending windows-latest to the list of OS options in the CI workflow. For some reason, the tests fail silently on Windows. Perhaps there is some kind of segfault occurring? I think further testing will require replicating something very close to the Windows CI environment.

Using cppimport with Python 3.5

When I run the example provided in the README, I get the following error:

>>> import cppimport
>>> somecode = cppimport.imp('somecode')
Traceback (most recent call last):
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/lexer.py", line 209, in decode_raw_stream
    text = text.decode(parsed_encoding)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 67: ordinal not in range(128)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/cppimport/importer.py", line 57, in imp
    template_and_build(filepath, module_data)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/cppimport/importer.py", line 42, in template_and_build
    templating.run_templating(module_data)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/cppimport/templating.py", line 38, in run_templating
    tmpl = mako.template.Template(filename = filepath, lookup = lookup)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/template.py", line 338, in __init__
    module = self._compile_from_file(path, filename)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/template.py", line 416, in _compile_from_file
    filename)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/template.py", line 705, in _compile_text
    generate_magic_comment=template.disable_unicode)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/template.py", line 685, in _compile
    node = lexer.parse()
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/lexer.py", line 224, in parse
    self.filename)
  File "/Users/kirit/anaconda/envs/python-prosodizer/lib/python3.5/site-packages/mako/lexer.py", line 215, in decode_raw_stream
    0, 0, filename)
mako.exceptions.CompileException: Unicode decode operation of encoding 'ascii' failed in file '/Users/kirit/ZeroLabs/GRU/pybind_gru/somecode.cpp' at line: 0 char: 0

I believe this is because my system Python is 2.7 while my Anaconda Python is 3.5.
I'm not sure how to fix this issue.

Any help would be deeply appreciated!
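The 0xe2 byte at position 67 points to a non-ASCII character in somecode.cpp (0xe2 starts UTF-8 punctuation such as curly quotes and dashes, often pasted in from a web page). A scan like this finds the offending lines so they can be replaced with plain ASCII; the bytes below are a made-up stand-in for the real file:

```python
# Example input containing an en dash (\xe2\x80\x93) in a comment.
data = b"// compute x \xe2\x80\x93 y\nint sub(int x, int y) { return x - y; }\n"

bad_lines = []
for lineno, line in enumerate(data.splitlines(), 1):
    if any(byte > 127 for byte in line):  # any non-ASCII byte on this line
        bad_lines.append(lineno)

assert bad_lines == [1]  # only the comment line is non-ASCII
```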

Build failure on Mac OS

Hi.
Your cppimport is great!!

However, the sample code fails to build on Mac OS.

My environment is:
Mac OS: 10.11.5
clang: Apple LLVM version 7.3.0 (clang-703.0.31)

The error message is below:

In file included from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pybind11/pybind11.h:30:
In file included from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pybind11/attr.h:13:
In file included from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pybind11/cast.h:13:
In file included from /Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pybind11/pytypes.h:12:
/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pybind11/common.h:72:10: fatal error: 'unordered_set' file not found
#include <unordered_set>
         ^
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1

Do you know how to fix it?

Atsushi
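A commonly suggested fix for this vintage of Apple clang (flag names per the clang user docs; whether they resolve this exact setup is an assumption): C++11 headers such as <unordered_set> live in libc++, which older toolchains need requested explicitly at both compile and link time.

```python
# Sketch of the cfg for the Mako block; the version-min value is a
# typical choice for libc++ availability, not something verified here.
cfg = {}
cfg["compiler_args"] = ["-std=c++11", "-stdlib=libc++", "-mmacosx-version-min=10.7"]
cfg["linker_args"] = ["-stdlib=libc++"]

assert "-stdlib=libc++" in cfg["compiler_args"]
assert "-stdlib=libc++" in cfg["linker_args"]
```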
