floopcz / tensorflow_cc
Build and install TensorFlow C++ API library.
License: MIT License
I have frozen a TensorFlow model in order to use it in a C++ application. Everything works fine if I freeze the model on my laptop with a GeForce 740M (compute capability 3.5). But if I freeze the same model on a GeForce 1080 (compute capability 6.1), I receive this runtime error:
Invalid argument: Cannot assign a device for operation 'CTCGreedyDecoder': Could not satisfy explicit device specification '/gpu:0' because no supported kernel for GPU devices is available.
Have you ever experienced this kind of issue? @FloopCZ
Does it mean that if I want to use a frozen model in a C++ application, I have to use GPUs with the same compute capability for freezing and running the model?
Hi, we were planning to use this to configure TensorFlow in our PPA, and we would like to have some official release version for that. Is it okay to do a GitHub release or prerelease right now?
I am doing that in my forked branch temporarily, but if it's not too early, I would prefer a release from the official branch. Thanks!
I am very happy to have installed and run TF successfully with CMake, thank you so much. Now my team has an old project in Qt Creator, and I want to configure TF in Qt.
This is my simple Qt config; however, the compiler cannot find the *.pb.h files:
QT -= core
QT -= gui
CONFIG += c++11
TARGET = tf_test
CONFIG += console
SOURCES += main.cpp
INCLUDEPATH += /usr/local/include/tensorflow
LIBS += /usr/local/lib/tensorflow_cc/libprotobuf.a /usr/local/lib/tensorflow_cc/libtensorflow_cc.so
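The compiler failing to find *.pb.h may simply be a missing include path: the protoc-generated headers are installed under a bazel-genfiles subdirectory (see the /usr/local/include/tensorflow/bazel-genfiles/... paths quoted elsewhere in this document). A hedged sketch of the extra qmake line (an assumption, not a verified fix):

```
# Assumption: the generated protobuf headers live under bazel-genfiles,
# as in the install paths quoted in other threads here.
INCLUDEPATH += /usr/local/include/tensorflow/bazel-genfiles
```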
The file tensorflow_cc/Dockerfiles/install-ubuntu.sh can't be used to build the shared lib, for the following reason: the parameters are shifted, so $@ becomes empty in
Hi FloopCZ:
I successfully built TensorFlow with your CMake script, but got "unrecognized command line option '-std=c++1z'" when building example.cpp.
c++1z is "C++17", which is supported by gcc 5.x and later; it is an experimental standard right now.
My gcc version is 4.8.5, and for some reason I cannot upgrade it.
So why do we need C++17? And what can I do to build the example file? After all, official TensorFlow requires only gcc 4.8+.
Looking forward to your help, and thanks for your contribution.
Hi Filip
Cool project, thanks.
I am testing it on CentOS 7 and:
```
du -sh /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/*
3.0M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/absl
3.3M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/cub
17M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/eigen
612K /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/fft2d
5.2M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/gemmlowp
6.3M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/googletest
2.0M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/nsync
870M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/protobuf
1.8M /tmp/tfcc/include/tensorflow/tensorflow/contrib/makefile/downloads/re2
```
Is it really necessary to install all the downloads, especially when using the static lib?
Cheers
W.
Hi, I face a problem while compiling the newest version of the project:
cp: cannot copy a directory, '/home/chaowyc/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/protobuf', into itself, './tensorflow/contrib/makefile/gen/protobuf/protobuf'
The whole context is shown in the picture below:
I tried the second-to-last commit, but it failed too.
Hello,
First, thank you for creating the tensorflow_cc software module; so far, this appears to be the most succinct and functional solution for setting up the hard-to-find-decent-documentation-for software known as TensorFlow's C APIs.
Here is my problem:
I am attempting to use the shared tensorflow_cc library version within
a custom docker configuration in Ubuntu 16.04. I have used the
"ubuntu" dockerfile in the "Dockerfiles" folder of tensorflow_cc as a reference for setting
up the appropriate dependencies. And the example application builds
and runs just fine after I complete the process specified in your README.md.
However, the challenge comes when I attempt to make (after cmake'ing) my own project, which attempts to leverage the C API of TensorFlow, linking to your libraries.
Specifically, what happens is that "make" outputs many instances of the same error message:
#error This file was generated by a newer version of protoc which is
#error incompatible with your Protocol Buffer headers. Please update
#error your headers.
with regards to various protobuf compiler generated files, such as
/usr/local/include/tensorflow/bazel-genfiles/tensorflow/core/framework/tensor.pb.h.
And these files are associated with TensorFlow-related header files included in my
project, such as /usr/local/include/tensorflow/tensorflow/cc/ops/const_op.h.
I have verified that the version of Protobuf installed on my system, 3.5.0, matches the one employed by tensorflow_cc (via much searching and emitting of messages from the common.h files). (Is that version correct?)
Furthermore, I have installed and built the Google Protobuf library 3.5.0 from source to ensure the correct linkings. However, there seems to be another, hidden issue causing the protobuf mismatch error.
In my software, there unfortunately are a few other required libraries which ship their own versions of protobuf. I have worked on severing the ties to these wrong versions, but I am not 100% sure that I have done the right thing in my CMakeLists.txt file. Maybe you can help? Specifically, how do I link against the correct TensorFlow header files and/or the correct protobuf compiler-generated files in my project? Your example application shows how to set up the target_link_libraries() command, but not include_directories() or target_include_directories(), which I feel are still needed (maybe not?).
Here is a sample of the related components of my CMakeLists.txt:
find_package(TensorflowCC COMPONENTS Shared)
if(TensorflowCC_Shared_FOUND)
message(STATUS "FOUND TensorflowCC_Shared library")
else(TensorflowCC_Shared_FOUND)
message(STATUS "DID NOT FIND TensorflowCC_Shared library")
endif(TensorflowCC_Shared_FOUND)
find_package(Protobuf REQUIRED)
if(Protobuf_FOUND)
message(STATUS "PROTOBUF_INCLUDE_DIRS == ${PROTOBUF_INCLUDE_DIRS}") # this location looks fine
message(STATUS "PROTOBUF_LIBRARIES == ${PROTOBUF_LIBRARIES}") # this location looks fine
else(Protobuf_FOUND)
message(STATUS "Find_package(protobuf) failed")
endif(Protobuf_FOUND)
include_directories(${PROTOBUF_INCLUDE_DIRS})
include_directories(${CMAKE_CURRENT_BINARY_DIR})
add_executable( MY_TENSORFLOW_CPP_MODULE
my_tensorflow_module.cpp
${PROTO_SRCS}
${PROTO_HDRS} )
target_include_directories( MY_TENSORFLOW_CPP_MODULE PRIVATE
tensorflow_cc_shared
${PROTOBUF_INCLUDE_DIRS} )
target_link_libraries( MY_TENSORFLOW_CPP_MODULE PRIVATE
${PROTOBUF_LIBRARIES}
TensorflowCC::Shared )
install(TARGETS MY_TENSORFLOW_CPP_MODULE DESTINATION bin)
Any ideas on what I might be missing or configuring incorrectly here?
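For comparison, a minimal consumer CMakeLists sketch. This rests on an assumption on my part: that the TensorflowCC::Shared imported target propagates its include directories as usage requirements, so no separate include_directories() call should be needed.

```cmake
cmake_minimum_required(VERSION 3.9)
project(my_tf_consumer)

find_package(TensorflowCC REQUIRED COMPONENTS Shared)

# my_tensorflow_module.cpp is the source file named in the question above.
add_executable(MY_TENSORFLOW_CPP_MODULE my_tensorflow_module.cpp)

# The imported target should carry both the include paths and the link line,
# including the matching protobuf headers, so nothing protobuf-specific
# needs to be listed here.
target_link_libraries(MY_TENSORFLOW_CPP_MODULE TensorflowCC::Shared)
```

If this minimal form builds, the protobuf mismatch most likely comes from the extra include_directories(${PROTOBUF_INCLUDE_DIRS}) pulling in a different protobuf ahead of the bundled one.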
Many thanks for your help!
-Robert
Hi, thanks for your sharing.
Following your project, I built and ran the example successfully. But when I wanted to do object detection with the C++ API and the static tensorflow lib, this error occurs:
detection.cpp:(.text+0x882): undefined reference to `tensorflow::Scope::NewRootScope()'
detection.cpp:(.text+0xa49): undefined reference to `tensorflow::Scope::~Scope()'
detection.cpp:(.text+0xa98): undefined reference to `tensorflow::Scope::WithOpName(std::string const&) const'
detection.cpp:(.text+0xab2): undefined reference to `tensorflow::ops::Placeholder::Placeholder(tensorflow::Scope const&, tensorflow::DataType)'
and the main error comes from here:
Status ReadTensorFromImageFile(const string& file_name,
                               const int input_height,
                               const int input_width,
                               const float input_mean,
                               const float input_std,
                               std::vector<Tensor>* out_tensors) {
  auto root = tensorflow::Scope::NewRootScope();

  string input_name = "file_reader";
  string output_name = "normalized";

  // read file_name into a tensor named input
  Tensor input(tensorflow::DT_STRING, tensorflow::TensorShape());
  TF_RETURN_IF_ERROR(ReadEntireFile(tensorflow::Env::Default(), file_name, &input));

  // use a placeholder to read input data
  auto file_reader = tensorflow::ops::Placeholder(root.WithOpName("input"), tensorflow::DataType::DT_STRING);
  std::vector<std::pair<string, tensorflow::Tensor>> inputs = {{"input", input},};
...
It seems some functions are not included in the *.a; I copied the header files from another project. Any suggestions to solve this?
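Since the missing symbols are from the C++ ops layer, one hedged suggestion (a sketch only; the target name "detection" is taken from the error messages above, and it assumes tensorflow_cc was installed as described elsewhere in this document) is to let the exported CMake target drive the link instead of copying archives and headers by hand:

```cmake
find_package(TensorflowCC REQUIRED)
add_executable(detection detection.cpp)
# TensorflowCC::Static is expected to pull in the static core archive with
# whatever link-order/whole-archive flags the Scope and ops symbols require.
target_link_libraries(detection TensorflowCC::Static)
```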
Hi there, thanks very much for the repo. I am on Ubuntu 16.04 with CUDA 8.0 and CUDNN 6.0.
I've been trying to compile the shared library, but it seems that nvcc is trying to use gcc-6, which it is not compatible with. I've created the following symlinks, but this did not fix the issue:
/opt/cuda -> /usr/local/cuda
/usr/bin/gcc-5 -> /usr/local/cuda/bin/gcc
/usr/bin/gcc-5 -> /usr/local/cuda/bin/g++
The plain gcc binary is 5.4:
$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
After removing gcc-6 and g++-6 from my system, the bazel configure step prompts me:
Invalid gcc path. /usr/bin/gcc-6 cannot be found
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc-5]:
Can we force bazel to always ask which version of gcc to use for host compiler?
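Until such a prompt exists, one possible workaround (an assumption on my part: TensorFlow's configure script honors the GCC_HOST_COMPILER_PATH environment variable when choosing the nvcc host compiler) is to pin the compiler non-interactively before building:

```shell
# Assumption: configure reads GCC_HOST_COMPILER_PATH, so exporting it pins
# which gcc nvcc uses as the host compiler, without uninstalling gcc-6.
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc-5
```

The build would then be run from the same shell so the variable is inherited.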
How can I enable GPU support using CMake?
I know that if Bazel is used, there is a flag that can be passed: --config=cuda. But how do I do it with CMake?
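One hedged possibility, by analogy with the TF_NEED_* exports mentioned elsewhere in this document: TensorFlow's configure step reads TF_NEED_CUDA from the environment, so exporting it before running cmake/make may enable CUDA, assuming the cmake-driven build forwards the environment to configure:

```shell
# Assumption: the non-interactive configure invoked by the cmake build
# honors this variable, just like TF_NEED_S3/TF_NEED_GDR in other threads.
export TF_NEED_CUDA=1
```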
Hi, thanks for this very convenient way to compile the TensorFlow C++ library. I successfully built the static TensorFlow, which is the default option. But when I build the example, I get this error:
/usr/local/include/tensorflow/tensorflow/cc/ops/standard_ops.h:19:41: fatal error: tensorflow/cc/ops/array_ops.h: No such file or directory
compilation terminated.
The code is simply the example; I paste my CMakeLists.txt here for investigation:
find_package(TensorflowCC REQUIRED)
include_directories(TensorflowCC_INCLUDE)
if (TensorflowCC_FOUND)
message(STATUS "We just found TensorFlow CC!!!!!!!")
endif ()
target_link_libraries(main ${OpenCV_LIBS} TensorflowCC::Static)
Any suggestion would be much appreciated!
It would be great to add CPack functionality in order to be able to distribute this as an already built installer and a standard package for Debian, Ubuntu, etc. See tensorflow/tensorflow#2412 (comment)
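A minimal CPack sketch of what that could look like (assumptions: the generator and maintainer values are placeholders, and project() has already set a package name and version):

```cmake
# Produce a .deb alongside the usual install targets; "make package"
# would then build the installer.
set(CPACK_GENERATOR "DEB")
# CPack's DEB generator requires a maintainer field to be set.
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "maintainer@example.com")
include(CPack)
```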
This line assumes a CPU newer than the Haswell architecture. I didn't notice this when I built the library, and my program died with the message "Illegal instruction (core dumped)".
Dear FloopCZ, your work is amazing and really helpful, but I got some errors when running make && make install. Here is what I got in this step:
downloading http://mirror.bazel.build/github.com/google/nsync/archive/ad722c76c6e6653f66be2e1f69521b7f7517da55.tar.gz
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
CMakeFiles/tensorflow_base.dir/build.make:106: recipe for target 'tensorflow-stamp/tensorflow_base-configure' failed
make[2]: *** [tensorflow-stamp/tensorflow_base-configure] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/tensorflow_base.dir/all' failed
make[1]: *** [CMakeFiles/tensorflow_base.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
To be honest, I really don't know much about cmake, so I don't know what happened here.
Really need your help...
My build of the shared library for TF 1.4 was successful today.
cmake -DTENSORFLOW_STATIC=OFF -DTENSORFLOW_SHARED=ON ..
make
But bazel asked me for two yes/no inputs.
Please add the following lines to build_tensorflow.sh to solve it:
export TF_NEED_S3=0
export TF_NEED_GDR=0
CC_PREFIX=""
make[3]: warning: -jN forced in submake: disabling jobserver mode.
ar: creating /home/wxb/wxb/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a
./copy_links.sh: 4: ./copy_links.sh: realpath: not found
make[2]: *** [tensorflow-stamp/tensorflow_static-configure] Error 127
make[1]: *** [CMakeFiles/tensorflow_static.dir/all] Error 2
make: *** [all] Error 2
I was trying to link TensorFlow to my project using CMake (your method).
I did exactly what you explained in your tutorial, but I got this error:
////////////########///////////////
By not providing "FindTensorflowCC.cmake" in CMAKE_MODULE_PATH this project
has asked CMake to find a package configuration file provided by
"TensorflowCC", but CMake did not find one.
Could not find a package configuration file provided by "TensorflowCC" with
any of the following names:
TensorflowCCConfig.cmake
tensorflowcc-config.cmake
Add the installation prefix of "TensorflowCC" to CMAKE_PREFIX_PATH or set
"TensorflowCC_DIR" to a directory containing one of the above files. If
"TensorflowCC" provides a separate development package or SDK, be sure it has been installed.
////////////########///////////////
Could you please give me some hints on how to link it?
Thanks in advance.
For this tensorflow Bazel build issue: 15492
I have to modify build_tensorflow.sh line 52
bazel build -c opt $cuda_config_opts --copt=${CC_OPT_FLAGS} tensorflow:libtensorflow_cc.so
into
bazel build -c opt $cuda_config_opts --incompatible_load_argument_is_label=false --copt=${CC_OPT_FLAGS} tensorflow:libtensorflow_cc.so
to get past the errors.
I also added --config=mkl to support MKL here.
The header file cudnn.h from the newest cuDNN v7.04 is installed to /usr/include rather than /usr/local/cuda/include (Ubuntu 16.04). Somehow this causes errors complaining that cuDNN cannot be found.
Manually copying cudnn.h into /usr/local/cuda/include bypasses the issue.
Hi Filip
I followed the steps on the main page (static lib, CPU), but when running the example the execution failed:
2018-03-02 15:49:15.631604: E tensorflow/core/common_runtime/session.cc:69] Not found: No session factory registered for the given session options: {target: "" config: } Registered factories are {}.
Not found: No session factory registered for the given session options: {target: "" config: } Registered factories are {}.
I found that if I link the TF package, then OpenCV cannot read images. Is there any solution?
Just a reminder that cmake warns about a missing cmake_minimum_required:
CMake Warning (dev) in CMakeLists.txt:
No cmake_minimum_required command is present. A line of code such as
cmake_minimum_required(VERSION 3.9)
should be added at the top of the file. The version specified may be lower
if you wish to support older CMake versions for this project. For more
information run "cmake --help-policy CMP0000"
Could be worth it to add something like:
cmake_minimum_required(VERSION 3.9)
Cheers
It would be great to have a way to pass a target CPU architecture / additional C++ parameters to the TF build.
I've tried to add them after "-std=c++11" in CMakeLists.txt, but they seem to be overridden somewhere...
Thanks!
Dmitry
Hi,
I have built your project with GPU support and it works perfectly (thank you!), but I wish to test my code on CPU too. Is there a way to do this with the current installation? If yes, how can I do it, and what lines do I need to add to my code?
If not, how can I build tensorflow_cc in CPU-only mode? I wasn't able to find any such option in CMakeLists.txt. And is it possible to have two installations of tensorflow_cc on one system?
I really appreciate any help, as I'm new to TF and wasn't able to find anything useful by Googling (just a bunch of Python stuff!). Thank you!
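One commonly used way to test on CPU without a second installation, hedged: this relies on the CUDA runtime honoring the CUDA_VISIBLE_DEVICES variable, which hides all GPUs from the process so TensorFlow falls back to CPU. No code changes are needed, only the environment of the launched program:

```shell
# Hide every GPU from the CUDA runtime for processes started in this shell;
# a GPU-enabled TensorFlow build should then place ops on the CPU.
export CUDA_VISIBLE_DEVICES=""
```

Run your binary from the same shell afterwards; unsetting the variable restores GPU execution.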
Now this may be a silly question, but how can I change the version of protoc that is used to generate the .pb.h files like
/usr/local/include/tensorflow/bazel-genfiles/tensorflow/core/lib/core/error_codes.pb.h
and
/usr/local/include/tensorflow/bazel-genfiles/tensorflow/core/framework/graph.pb.h
and the likes?
Currently it is using the 3.3.0 version of protobuf, but I need the files to be compatible with 2.6 runtime.
399590304 Feb 7 18:02 libtensorflow_cc.so (stripped version).
As a reference, https://github.com/cjweeks/tensorflow-cmake generates a tensorflow_cc.so of about 180 MB.
@FloopCZ Thank you for your project!
I have followed your installation guide trying to build the shared library, but met the following errors:
compile_linux_protobuf.sh finished successfully!!!
CUDA support disabled
Warning: ignoring LD_PRELOAD in environment.
..........
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
Warning: ignoring LD_PRELOAD in environment.
WARNING: Sandboxed execution is not supported on your system and thus hermeticity of actions cannot be guaranteed. See http://bazel.build/docs/bazel-user-manual.html#sandboxing for more information. You can turn off this warning via --ignore_unsupported_sandboxing.
ERROR: /home/user/tensorflow_cc_dynamic/tensorflow_cc/build/tensorflow/third_party/py/python_configure.bzl:285:20: unexpected keyword 'environ' in call to repository_rule(implementation: function, *, attrs: dict or NoneType = None, local: bool = False).
ERROR: com.google.devtools.build.lib.packages.BuildFileContainsErrorsException: error loading package '': Extension file 'third_party/py/python_configure.bzl' has errors.
INFO: Elapsed time: 18.851s
make[2]: *** [tensorflow-stamp/tensorflow_shared-configure] Error 1
make[1]: *** [CMakeFiles/tensorflow_shared.dir/all] Error 2
make: *** [all] Error 2
My server's system is CentOS 7. One fact is that I have successfully built the static library following your guide and run your demo code successfully. But when using the static library to create a session from a .pb file (the line below), status.ok() is not true. So I guess the static library doesn't support this API, and I have to build the shared library, but I don't know how to fix the problems mentioned above.
Status status = session->Create(graph_def);
Hope for your reply~
Hi,
When trying to build the shared library I get the following error
You have bazel 0.7.0 installed.
Configuration finished
WARNING: The lower priority option '-c opt' does not override the previous value '-c opt'
ERROR: /home/adamheavens/.cache/bazel/_bazel_adamheavens/87be3c52a121629cbd86bb264477a61d/external/io_bazel_rules_closure/closure/private/defs.bzl:27:16: The set
constructor for depsets is deprecated and will be removed. Please use the depset
constructor instead. You can temporarily enable the deprecated set
constructor by passing the flag --incompatible_disallow_set_constructor=false
ERROR: error loading package '': Extension file 'closure/private/defs.bzl' has errors
ERROR: error loading package '': Extension file 'closure/private/defs.bzl' has errors
INFO: Elapsed time: 0.977s
FAILED: Build did NOT complete successfully (0 packages loaded)
CMakeFiles/tensorflow_shared.dir/build.make:106: recipe for target 'tensorflow-stamp/tensorflow_shared-configure' failed
make[2]: *** [tensorflow-stamp/tensorflow_shared-configure] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/tensorflow_shared.dir/all' failed
make[1]: *** [CMakeFiles/tensorflow_shared.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Any ideas?
Thanks
Adam
I'm trying to build the shared lib but I get this error:
cp: cannot copy a directory, '/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/protobuf', into itself, './tensorflow/contrib/makefile/gen/protobuf/protobuf
CMakeFiles/tensorflow_shared.dir/build.make:106: recipe for target 'tensorflow-stamp/tensorflow_shared-configure' failed
make[2]: *** [tensorflow-stamp/tensorflow_shared-configure] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/tensorflow_shared.dir/all' failed
make[1]: *** [CMakeFiles/tensorflow_shared.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Hi,
Thank you a lot for the awesome work! @FloopCZ
I have managed to compile my code in C++ using cmake.
However, I am getting this error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.3.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.3.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-ui6vjS/mir-0.26.3+16.04.20170605/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
I am not very familiar with protobuf.
Can you kindly advise how I should work around this?
And may I know the version of protobuf used in your tensorflow_cc?
Appreciated.
Hello, I'm using the static version of TensorflowCC. It works great when I build a binary; now I'm trying to build a shared library to dlopen() from another binary.
The problem: I cannot link it.
ADD_LIBRARY(iadlmodel SHARED
iadlmodel.cc)
target_link_libraries(iadlmodel TensorflowCC::Static)
ERROR:
/usr/bin/ld: CMakeFiles/iadlmodel.dir/iadlmodel.cc.o: relocation R_X86_64_32 against `_ZTVN4iadl12TrainedModel5ModelE' can not be used when making a shared object; recompile with -fPIC
CMakeFiles/iadlmodel.dir/iadlmodel.cc.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
CMakeFiles/iadlmodel.dir/build.make:95: recipe for target 'libiadlmodel.so' failed
Is this possible?
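One thing worth checking, sketched below. This is an assumption, not a confirmed fix: the linker error asks for -fPIC, and the property only affects the consumer's own objects; the static TensorFlow archives themselves must also have been compiled as position-independent code for the shared link to succeed.

```cmake
add_library(iadlmodel SHARED iadlmodel.cc)
# Ask CMake to compile iadlmodel's own objects with -fPIC.
set_target_properties(iadlmodel PROPERTIES POSITION_INDEPENDENT_CODE ON)
target_link_libraries(iadlmodel TensorflowCC::Static)
```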
Hello,
First of all, thank you for making this project possible, as I need to create a C++ project using TensorFlow for image recognition purposes.
I am trying to build this on my MacBook and everything was working great until I encountered this issue, and I have no idea how to proceed. I would be happy to get some help.
Below is the result from the terminal:
You have bazel 0.7.0-homebrew installed.
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: n
No Amazon S3 File System support will be enabled for TensorFlow.
Do you wish to build TensorFlow with GDR support? [y/N]: y
GDR support will be enabled for TensorFlow.
Add "--config=mkl" to your bazel command to build with MKL support.
Please note that MKL on MacOS or windows is still not supported.
If you would like to use a local MKL instead of downloading, please set the environment variable "TF_MKL_ROOT" every time before build.
Configuration finished
............................................................
WARNING: The lower priority option '-c opt' does not override the previous value '-c opt'.
WARNING: /Users/Indrajit/Desktop/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/core/BUILD:1781:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /Users/Indrajit/Desktop/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/tensorflow.bzl:1044:30.
INFO: Found 1 target...
INFO: From Compiling external/protobuf_archive/src/google/protobuf/compiler/js/embed.cc [for host]:
external/protobuf_archive/src/google/protobuf/compiler/js/embed.cc:37:12: warning: unused variable 'output_file' [-Wunused-const-variable]
const char output_file[] = "well_known_types_embed.cc";
^
1 warning generated.
INFO: From Executing genrule //tensorflow/core:version_info_gen [for host]:
error: could not expand include path '~/.gitcinclude'
fatal: bad config line 44 in file /usr/local/git/etc/gitconfig
ERROR: /private/var/tmp/_bazel_Indrajit/48a8070b1072bf03c6128508e891360c/external/nsync/BUILD:397:1: C++ compilation of rule '@nsync//:nsync_cpp' failed (Exit 1).
In file included from external/nsync/internal/note.c:16:
In file included from ./external/nsync//platform/c++11/platform.h:29:
In file included from /Library/Developer/CommandLineTools/usr/include/c++/v1/mutex:189:
In file included from /Library/Developer/CommandLineTools/usr/include/c++/v1/__mutex_base:17:
/Library/Developer/CommandLineTools/usr/include/c++/v1/__threading_support:156:1: error: unknown type name 'mach_port_t'
mach_port_t __libcpp_thread_get_port();
^
/Library/Developer/CommandLineTools/usr/include/c++/v1/__threading_support:300:1: error: unknown type name 'mach_port_t'
mach_port_t __libcpp_thread_get_port() {
^
/Library/Developer/CommandLineTools/usr/include/c++/v1/__threading_support:301:12: error: use of undeclared identifier 'pthread_mach_thread_np'
return pthread_mach_thread_np(pthread_self());
^
3 errors generated.
Target //tensorflow:libtensorflow_cc.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 107.849s, Critical Path: 8.81s
make[2]: *** [tensorflow-stamp/tensorflow_shared-configure] Error 1
make[1]: *** [CMakeFiles/tensorflow_shared.dir/all] Error 2
make: *** [all] Error 2
Am I missing something here?
Thanks again!
Hi, thanks for this nice work wrapping tensorflow_cc.
I am using the shared library and got an undefined reference to `tensorflow::internal::CheckOpMessageBuilder::NewString()'. The symbol seems to be located in `libtensorflow_framework.so`, which is not built by this wrapper. Is this a missed target that should be added?
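A hedged workaround sketch while the question is open (assumptions: the install prefix matches the /usr/local/lib/tensorflow_cc path quoted earlier in this document, a libtensorflow_framework.so was produced by the bazel build, and "my_app"/"main.cpp" are placeholder names): link the framework library explicitly alongside the main one.

```cmake
# Illustrative only; adjust paths to your actual install prefix.
add_executable(my_app main.cpp)
target_link_libraries(my_app
  /usr/local/lib/tensorflow_cc/libtensorflow_cc.so
  /usr/local/lib/tensorflow_cc/libtensorflow_framework.so)
```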
Hi:
Thank you for your work.
Does it support Windows?
I use bazel 0.7
Linking failed with this message:
ERROR: /home/nchunosov/tmp/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/cc/BUILD:422:1: Linking of rule '//tensorflow/cc:ops/random_ops_gen_cc' failed (Exit 1).
/usr/bin/ld: warning: libcublas.so.9.1, needed by bazel-out/host/bin/_solib_local/_U_S_Stensorflow_Scc_Cops_Srandom_Uops_Ugen_Ucc___Utensorflow/libtensorflow_framework.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libcudnn.so.7, needed by bazel-out/host/bin/_solib_local/_U_S_Stensorflow_Scc_Cops_Srandom_Uops_Ugen_Ucc___Utensorflow/libtensorflow_framework.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libcufft.so.9.1, needed by bazel-out/host/bin/_solib_local/_U_S_Stensorflow_Scc_Cops_Srandom_Uops_Ugen_Ucc___Utensorflow/libtensorflow_framework.so, not found (try using -rpath or -rpath-link)
/usr/bin/ld: warning: libcurand.so.9.1, needed by bazel-out/host/bin/_solib_local/_U_S_Stensorflow_Scc_Cops_Srandom_Uops_Ugen_Ucc___Utensorflow/libtensorflow_framework.so, not found (try using -rpath or -rpath-link)
Those libs exist in /usr/local/cuda/lib64 as symlinks.
I've tried to add --action_env=LD_LIBRARY_PATH=/usr/local/cuda/lib64 to the build script, but it does not work.
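Since the ld warnings themselves suggest -rpath or -rpath-link, one hedged sketch (an assumption, not a verified fix) is to pass that flag through bazel's --linkopt in the build_tensorflow.sh invocation quoted earlier in this document:

```
bazel build -c opt $cuda_config_opts \
  --linkopt=-Wl,-rpath-link,/usr/local/cuda/lib64 \
  --copt=${CC_OPT_FLAGS} tensorflow:libtensorflow_cc.so
```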
Hi @FloopCZ
I am now trying to use the bazel build, but it looks like jemalloc is not friendly with my environment, and anyway I am not a fan of turning it on.
I see you hard-code TF_NEED_JEMALLOC in build_tensorflow.sh. Would you mind adding a cmake option so we can disable jemalloc, i.e. get:
export TF_NEED_JEMALLOC=0
Kind regards,
ERROR: /.auto/home/wtambellini/.cache/bazel/_bazel_wtambellini/054a5fadb295714afbd5bb4af3628371/external/jemalloc/BUILD:17:1: C++ compilation of rule '@jemalloc//:jemalloc_impl' failed (Exit 1)
external/jemalloc/src/pages.c: In function 'je_pages_huge':
external/jemalloc/src/pages.c:203:30: error: 'MADV_HUGEPAGE' undeclared (first use in this function)
return (madvise(addr, size, MADV_HUGEPAGE) != 0);
^~~~~~~~~~~~~
external/jemalloc/src/pages.c:203:30: note: each undeclared identifier is reported only once for each function it appears in
external/jemalloc/src/pages.c: In function 'je_pages_nohuge':
external/jemalloc/src/pages.c:217:30: error: 'MADV_NOHUGEPAGE' undeclared (first use in this function)
return (madvise(addr, size, MADV_NOHUGEPAGE) != 0);
^~~~~~~~~~~~~~~
Hi. I also have protobuf-3.5 installed by brew. I don't know whether it has an influence on this error; however, I saw the tensorflow make process download its own protobuf, so the brew-installed protobuf should not influence the error. Right?
Undefined symbols for architecture x86_64:
"google::protobuf::TextFormat::PrintToString(google::protobuf::Message const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*)", referenced from:
tensorflow::WriteTextProto(tensorflow::Env*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::Message const&) in env.o
"google::protobuf::MessageLite::ParseFromString(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
tensorflow::ResourceHandle::ParseFromString(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in resource_handle.o
"google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char const*, void (*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&))", referenced from:
tensorflow::protobuf_tensorflow_2fcore_2futil_2ftest_5flog_2eproto::AddDescriptorsImpl() in test_log.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fsaved_5ftensor_5fslice_2eproto::AddDescriptorsImpl() in saved_tensor_slice.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fmemmapped_5ffile_5fsystem_2eproto::AddDescriptorsImpl() in memmapped_file_system.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fevent_2eproto::AddDescriptorsImpl() in event.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2ftensorflow_5fserver_2eproto::AddDescriptorsImpl() in tensorflow_server.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2fsaver_2eproto::AddDescriptorsImpl() in saver.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2fqueue_5frunner_2eproto::AddDescriptorsImpl() in queue_runner.pb.o
...
"google::protobuf::io::CodedOutputStream::WriteStringWithSizeToArray(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*)", referenced from:
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in test_log.pb.o
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in saved_tensor_slice.pb.o
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in memmapped_file_system.pb.o
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in event.pb.o
google::protobuf::internal::WireFormatLite::WriteBytesToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in event.pb.o
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in tensorflow_server.pb.o
google::protobuf::internal::WireFormatLite::WriteStringToArray(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned char*) in saver.pb.o
...
"google::protobuf::compiler::DiskSourceTree::MapPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
tensorflow::MainImpl(int, char**) in gen_proto_text_functions.o
"google::protobuf::compiler::Importer::Import(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)", referenced from:
tensorflow::MainImpl(int, char**) in gen_proto_text_functions.o
"google::protobuf::internal::WireFormatLite::WriteBytes(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*)", referenced from:
tensorflow::CollectionDef_BytesList::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in meta_graph.pb.o
tensorflow::TensorProto::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in tensor.pb.o
tensorflow::AttrValue_ListValue::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in attr_value.pb.o
tensorflow::BytesList::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in feature.pb.o
"google::protobuf::internal::WireFormatLite::WriteString(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*)", referenced from:
tensorflow::BuildConfiguration::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::RunConfiguration::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
google::protobuf::internal::MapTypeHandler<(google::protobuf::internal::WireFormatLite::FieldType)9, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::Write(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*) in test_log.pb.o
tensorflow::QueueRunnerDef::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in queue_runner.pb.o
tensorflow::MetaGraphDef_MetaInfoDef::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in meta_graph.pb.o
tensorflow::CollectionDef_NodeList::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in meta_graph.pb.o
google::protobuf::internal::MapTypeHandler<(google::protobuf::internal::WireFormatLite::FieldType)9, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >::Write(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*) in meta_graph.pb.o
...
"google::protobuf::internal::WireFormatLite::WriteBytesMaybeAliased(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*)", referenced from:
tensorflow::Event::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in event.pb.o
tensorflow::TaggedRunMetadata::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in event.pb.o
tensorflow::TensorProto::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in tensor.pb.o
tensorflow::Summary_Image::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in summary.pb.o
tensorflow::Summary_Audio::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in summary.pb.o
tensorflow::Summary_Value::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in summary.pb.o
tensorflow::RemoteFusedGraphExecuteInfo::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in remote_fused_graph_execute_info.pb.o
...
"google::protobuf::internal::WireFormatLite::WriteStringMaybeAliased(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::io::CodedOutputStream*)", referenced from:
tensorflow::EntryValue::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::BenchmarkEntry::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::BuildConfiguration::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::CommitId::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::CPUInfo::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::GPUInfo::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
tensorflow::PlatformInfo::SerializeWithCachedSizes(google::protobuf::io::CodedOutputStream*) const in test_log.pb.o
...
"google::protobuf::internal::WireFormatLite::ReadBytes(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*)", referenced from:
google::protobuf::internal::WireFormatLite::ReadString(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in test_log.pb.o
google::protobuf::internal::WireFormatLite::ReadString(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in saved_tensor_slice.pb.o
google::protobuf::internal::WireFormatLite::ReadString(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in memmapped_file_system.pb.o
tensorflow::Event::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*) in event.pb.o
google::protobuf::internal::WireFormatLite::ReadString(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in event.pb.o
tensorflow::TaggedRunMetadata::MergePartialFromCodedStream(google::protobuf::io::CodedInputStream*) in event.pb.o
google::protobuf::internal::WireFormatLite::ReadString(google::protobuf::io::CodedInputStream*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in tensorflow_server.pb.o
...
"google::protobuf::internal::AssignDescriptors(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::internal::MigrationSchema const*, google::protobuf::Message const* const*, unsigned int const*, google::protobuf::MessageFactory*, google::protobuf::Metadata*, google::protobuf::EnumDescriptor const**, google::protobuf::ServiceDescriptor const**)", referenced from:
tensorflow::protobuf_tensorflow_2fcore_2futil_2ftest_5flog_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in test_log.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fsaved_5ftensor_5fslice_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in saved_tensor_slice.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fmemmapped_5ffile_5fsystem_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in memmapped_file_system.pb.o
tensorflow::protobuf_tensorflow_2fcore_2futil_2fevent_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in event.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2ftensorflow_5fserver_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in tensorflow_server.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2fsaver_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in saver.pb.o
tensorflow::protobuf_tensorflow_2fcore_2fprotobuf_2fqueue_5frunner_2eproto::(anonymous namespace)::protobuf_AssignDescriptors() in queue_runner.pb.o
...
"google::protobuf::internal::fixed_address_empty_string", referenced from:
google::protobuf::internal::GetEmptyStringAlreadyInited() in resource_handle.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in test_log.pb.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in saved_tensor_slice.pb.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in memmapped_file_system.pb.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in event.pb.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in tensorflow_server.pb.o
google::protobuf::internal::GetEmptyStringAlreadyInited() in saver.pb.o
...
"google::protobuf::Descriptor::FindFieldByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const", referenced from:
tensorflow::(anonymous namespace)::Generator::AppendParseMessageFunction(google::protobuf::Descriptor const&) in gen_proto_text_functions_lib.o
tensorflow::(anonymous namespace)::Generator::AppendFieldAppend(google::protobuf::FieldDescriptor const&) in gen_proto_text_functions_lib.o
"google::protobuf::MessageLite::AppendToString(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) const", referenced from:
tensorflow::port::EncodeResourceHandleList(tensorflow::ResourceHandle const*, long long, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) in tensor_coding.o
tensorflow::WriteBinaryProto(tensorflow::Env*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, google::protobuf::MessageLite const&) in env.o
"google::protobuf::MessageLite::SerializeAsString() const", referenced from:
tensorflow::ResourceHandle::SerializeAsString() const in resource_handle.o
"google::protobuf::Message::GetTypeName() const", referenced from:
vtable for tensorflow::EntryValue in test_log.pb.o
vtable for tensorflow::BenchmarkEntry in test_log.pb.o
vtable for tensorflow::BenchmarkEntries in test_log.pb.o
vtable for tensorflow::BuildConfiguration in test_log.pb.o
vtable for tensorflow::CommitId in test_log.pb.o
vtable for tensorflow::CPUInfo in test_log.pb.o
vtable for tensorflow::MemoryInfo in test_log.pb.o
...
"google::protobuf::Message::InitializationErrorString() const", referenced from:
vtable for tensorflow::EntryValue in test_log.pb.o
vtable for tensorflow::BenchmarkEntry::BenchmarkEntry_ExtrasEntry in test_log.pb.o
vtable for tensorflow::BenchmarkEntry in test_log.pb.o
vtable for tensorflow::BenchmarkEntries in test_log.pb.o
vtable for tensorflow::BuildConfiguration in test_log.pb.o
vtable for tensorflow::CommitId in test_log.pb.o
vtable for tensorflow::CPUInfo::CPUInfo_CacheSizeEntry in test_log.pb.o
...
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [/Users/xxx/github/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/contrib/makefile/gen/host_bin/proto_text] Error 1
make[2]: *** [tensorflow-stamp/tensorflow_static-configure] Error 2
make[1]: *** [CMakeFiles/tensorflow_static.dir/all] Error 2
make: *** [all] Error 2
./copy_links.sh: 4: ./copy_links.sh: realpath: not found
make[2]: *** [tensorflow-stamp/tensorflow_static-configure] Error 127
make[1]: *** [CMakeFiles/tensorflow_static.dir/all] Error 2
make: *** [all] Error 2
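The `realpath: not found` failure above typically comes from minimal Debian/Ubuntu images where the standalone `realpath` binary is absent. A hedged sketch of a portable fallback (the function name `resolve_path` is mine, not from copy_links.sh), assuming GNU coreutils' `readlink` is available:

```shell
#!/bin/sh
# Fallback for systems without a standalone `realpath` binary:
# GNU `readlink -f` canonicalizes a path the same way, resolving
# symlinks and removing `.`/`..` components.
resolve_path() {
    readlink -f -- "$1"
}

resolve_path /usr/local/lib   # prints the canonical absolute path
```

The script itself could shim the missing command the same way (`command -v realpath >/dev/null || realpath() { readlink -f -- "$1"; }`).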
Could you please help build a version for macOS?
How about changing the `make` command to `make -j4` to use multiple cores and speed up the build?
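Rather than hard-coding `-j4`, the job count can be derived from the machine. A sketch, assuming `nproc` from GNU coreutils (this is not the project's actual Makefile invocation, just the general pattern):

```shell
# Derive a make job count from the CPU core count, falling back to 4
# if nproc is unavailable on this system.
JOBS="$(nproc 2>/dev/null || echo 4)"
echo "make -j${JOBS}"   # the command to run in the build directory
```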
Is it possible to build a .a lib with GPU support?
During the build procedure I receive the following output:
ERROR: /home/hellbago/tensorflow_cc/tensorflow_cc/build/tensorflow/tensorflow/core/kernels/BUILD:3247:1: error while parsing .d file: /home/hellbago/.cache/bazel/_bazel_hellbago/7804ee3d380a70b07846419715bd3fec/execroot/org_tensorflow/bazel-out/k8-py3-opt/bin/tensorflow/core/kernels/_objs/histogram_op_gpu/tensorflow/core/kernels/histogram_op_gpu.cu.pic.d (No such file or directory)
In file included from /usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/cuda_runtime.h:78:0,
from <command-line>:0:
/usr/local/cuda-8.0/bin/../targets/x86_64-linux/include/host_config.h:119:2: error: #error -- unsupported GNU version! gcc versions later than 5 are not supported!
#error -- unsupported GNU version! gcc versions later than 5 are not supported!
^~~~~
Target //tensorflow:libtensorflow_cc.so failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 690.546s, Critical Path: 37.60s
FAILED: Build did NOT complete successfully
The default gcc/g++ version on my system is 5.4.1.
Why do I receive this error message? Any ideas? @FloopCZ
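CUDA 8.0's host_config.h rejects host compilers with a major version above 5, so a true gcc 5.4.1 should pass the check; the `^~~~~` caret style in the diagnostic suggests nvcc may actually be invoking a newer gcc than the reported default. A hedged sketch of the version check (the function name `gcc_ok_for_cuda8` is mine), useful for verifying which compiler the build really sees:

```shell
#!/bin/sh
# Returns success iff the given gcc version string is acceptable to
# CUDA 8.0's host_config.h, which rejects __GNUC__ > 5.
gcc_ok_for_cuda8() {
    major="$(printf '%s\n' "$1" | cut -d. -f1)"
    [ "$major" -le 5 ]
}

# Check the compiler the build will pick up (may differ from g++ --version
# if alternatives or PATH entries point elsewhere).
ver="$(gcc -dumpversion 2>/dev/null || echo 0)"
if ! gcc_ok_for_cuda8 "$ver"; then
    echo "gcc $ver is too new for CUDA 8.0; point the build at gcc-5"
fi
```

If a newer gcc is indeed being picked up, pointing the build at gcc-5 (e.g. via `update-alternatives` or nvcc's `--ccbin`) is the usual remedy.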
Hi, I have a simple question: how do you build against the static library?
The standalone binary works great, but when I try to build against the static library I get:
### "fatal error: tensorflow/core/platform/env.h: No such file or directory"
How do you specify the relationship between the static library and the package?
If I manually add include_directories("/usr/local/include/tensorflow/") I get another error:
### In file included from /usr/local/include/tensorflow/tensorflow/core/lib/core/errors.h:19:0,
from /usr/local/include/tensorflow/tensorflow/core/platform/env.h:24,
from /home/payas/source/trunk/iadl_model/iadlmodel.cc:1:
/usr/local/include/tensorflow/tensorflow/core/lib/core/status.h:22:53: fatal error: tensorflow/core/lib/core/error_codes.pb.h: No such file or directory
compilation terminated.
Hi,
when installing the shared library, my bazel-genfiles are just symbolic links into ./cache. I was able to fix it by hand, but maybe it's a good idea to adjust the script.
Cheers.
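One way to materialize such symlinked generated files is to dereference the links while copying; GNU `cp -rL` follows every symlink and copies the target file instead of the link. A self-contained sketch of the behavior (the directory names are stand-ins, not paths from the install script):

```shell
#!/bin/sh
# Demonstrate dereferencing on copy: a symlink in the source directory
# becomes a regular file in the destination.
tmp="$(mktemp -d)"
echo "generated header" > "$tmp/real.h"
mkdir "$tmp/genfiles"
ln -s ../real.h "$tmp/genfiles/link.h"       # stands in for a bazel-genfiles link
cp -rL "$tmp/genfiles" "$tmp/genfiles-real"  # link.h is a regular file here
```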
Since this project already uses CMake, it would be great to add support for the Windows platform as well.
I'm experiencing the same issue as tensorflow/tensorflow#1924
When linking against both libraries, imread always returns an error with jpg files. I believe this is due to a version mismatch between the libjpeg included in tensorflow_cc compile and my system version of libjpeg.
I've tried using the workaround proposed in tensorflow/tensorflow#14267#issuecomment-351780041 by adding this line:
--config=monolithic
to the build script tensorflow_cc/cmake/build_tensorflow.sh, but this didn't seem to change anything.
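For the flag to take effect, `--config=monolithic` has to reach the actual `bazel build` invocation inside the script, not merely appear in the file. A hedged sketch of what the edited command might look like (the exact options and target list in build_tensorflow.sh may differ):

```shell
# Hypothetical: the flag belongs on the `bazel build` line itself.
bazel build --config=opt --config=monolithic //tensorflow:libtensorflow_cc.so
```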
Hello, every step in the README for building the static library works great, except the example. I got:
payas@ubuntu:~/tensorflow_cc/example/build$ uname -a
Linux ubuntu 4.10.0-28-generic #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
payas@ubuntu:~/tensorflow_cc/example/build$ cmake ..
-- Configuring done
CMake Error at CMakeLists.txt:2 (add_executable):
Target "example" links to target "TensorflowCC::Static" but the target was
not found. Perhaps a find_package() call is missing for an IMPORTED
target, or an ALIAS target is missing?
-- Generating done
-- Build files have been written to: /home/payas/tensorflow_cc/example/build
payas@ubuntu:~/tensorflow_cc/example/build$
What am I missing?
Following your instructions, I compiled the TensorFlow C++ library. It works for small examples like creating a session, but important math ops like tensorflow::ops::MatMul are missing. Can you check this? Thank you very much!
Hi,
In your example CMake file I just added this line
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DNDEBUG")
and it breaks the program. I am getting this error:
"No session factory registered for the given session options: {target: "" config: } Registered factories are {}."
Then I did "ldd example", and here is the result:
linux-vdso.so.1 => (0x00007ffe679b1000)
libtensorflow_framework.so => /usr/lib/tensorflow_cc/libtensorflow_framework.so (0x00007f3ccb5d8000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f3ccb256000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f3ccb040000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3ccac76000)
libcublas.so.9.1 => /usr/local/cuda-9.1/lib64/libcublas.so.9.1 (0x00007f3cc7555000)
libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f3cc69b5000)
libcudnn.so.7 => /usr/lib/x86_64-linux-gnu/libcudnn.so.7 (0x00007f3cb5a24000)
libcufft.so.9.1 => /usr/local/cuda-9.1/lib64/libcufft.so.9.1 (0x00007f3cae537000)
libcurand.so.9.1 => /usr/local/cuda-9.1/lib64/libcurand.so.9.1 (0x00007f3caa5b4000)
libcudart.so.9.1 => /usr/local/cuda-9.1/lib64/libcudart.so.9.1 (0x00007f3caa346000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3caa142000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f3ca9e39000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f3ca9c1c000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3ccc511000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f3ca9a14000)
libnvidia-fatbinaryloader.so.390.30 => /usr/lib/nvidia-390/libnvidia-fatbinaryloader.so.390.30 (0x00007f3ca97c8000)
libtensorflow_cc.so is missing, and if I remove the -DNDEBUG flag, it works as expected. It might be an issue with the TensorFlow compilation, but I am not entirely sure. Can you please help me understand why -DNDEBUG is causing this problem? Thanks!