This project is a fork of floopcz/tensorflow_cc.


Build and install TensorFlow C++ API library.

License: MIT License



tensorflow_cc


This repository makes it possible to use the TensorFlow C++ API from outside the TensorFlow source tree and without the Bazel build system.

This repository contains two CMake projects: the tensorflow_cc project downloads, builds, and installs the TensorFlow C++ API into the operating system, and the example project demonstrates its simple usage.

Docker

If you wish to start using this project right away, fetch a prebuilt image on Docker Hub!

Running the image on CPU:

docker run -it floopcz/tensorflow_cc:ubuntu-shared /bin/bash

If you also want to utilize your NVIDIA GPU, install NVIDIA Docker and run:

docker run --runtime=nvidia -it floopcz/tensorflow_cc:ubuntu-shared-cuda /bin/bash

The list of available images:

Image name Description
floopcz/tensorflow_cc:ubuntu-static Ubuntu + static build of tensorflow_cc
floopcz/tensorflow_cc:ubuntu-shared Ubuntu + shared build of tensorflow_cc
floopcz/tensorflow_cc:ubuntu-shared-cuda Ubuntu + shared build of tensorflow_cc + NVIDIA CUDA
floopcz/tensorflow_cc:archlinux-shared Arch Linux + shared build of tensorflow_cc

To build one of the images yourself, e.g. ubuntu-shared, run:

docker build -t floopcz/tensorflow_cc:ubuntu-shared -f Dockerfiles/ubuntu-shared .

Installation

1) Install requirements

Ubuntu 18.04:
sudo apt-get install build-essential curl git cmake unzip autoconf autogen automake libtool mlocate \
                     zlib1g-dev g++-7 python python3-numpy python3-dev python3-pip python3-wheel wget
sudo updatedb

If you require GPU support on Ubuntu, please also install Bazel, the NVIDIA CUDA Toolkit (>=9.2), NVIDIA drivers, cuDNN, and the cuda-command-line-tools package. The TensorFlow build script automatically detects CUDA if it is installed in the /opt/cuda or /usr/local/cuda directories.

Arch Linux:
sudo pacman -S base-devel cmake git unzip mlocate python python-numpy wget
sudo updatedb

For GPU support on Arch, also install the following:

sudo pacman -S gcc7 bazel cuda cudnn nvidia

Warning: Newer versions of TensorFlow sometimes fail to build with the latest version of Bazel. You may wish to install an older version of Bazel (e.g., 0.16.1).

2) Clone this repository

git clone https://github.com/FloopCZ/tensorflow_cc.git
cd tensorflow_cc

3) Build and install the library

There are two possible ways to build the TensorFlow C++ library:

  1. As a static library (default):
    • Faster to build.
    • Provides only basic functionality, just enough for inference with an existing network (see contrib/makefile).
    • No GPU support.
  2. As a shared library:
    • Requires Bazel.
    • Slower to build.
    • Provides the full TensorFlow C++ API.
    • GPU support.
cd tensorflow_cc
mkdir build && cd build
# for static library only:
cmake ..
# for shared library only (requires Bazel):
# cmake -DTENSORFLOW_STATIC=OFF -DTENSORFLOW_SHARED=ON ..
make && sudo make install

Warning: Optimizations for Intel CPU generations >= ivybridge are enabled by default. If you have a processor older than the ivybridge generation, you may wish to run export CC_OPT_FLAGS="-march=native" before the build. This provides the best possible optimizations for your current CPU generation, but the built library may be incompatible with older generations.

4) (Optional) Free disk space

# cleanup bazel build directory
rm -rf ~/.cache
# remove the build folder
cd .. && rm -rf build

Usage

1) Write your C++ code:

// example.cpp

#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <iostream>
using namespace std;
using namespace tensorflow;

int main()
{
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) {
        cout << status.ToString() << "\n";
        return 1;
    }
    cout << "Session successfully created.\n";
}
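
The example above only creates an empty session. A typical next step is to load a serialized graph into that session and run it. The following is a minimal, hedged sketch of that pattern: the file name graph.pb, the tensor names "input" and "output", and the 1x3 float shape are placeholders for your own model, not anything shipped with this repository.

// run_graph.cpp
// A minimal sketch: load a frozen GraphDef and run it.
// "graph.pb", "input", "output" and the 1x3 shape are hypothetical placeholders.

#include <tensorflow/core/framework/graph.pb.h>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/platform/env.h>
#include <tensorflow/core/public/session.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace tensorflow;

int main()
{
    // Create a session, as in the example above.
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }

    // Load the serialized graph from disk and register it with the session.
    GraphDef graph_def;
    status = ReadBinaryProto(Env::Default(), "graph.pb", &graph_def);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }
    status = session->Create(graph_def);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }

    // Feed a zero-filled input tensor and fetch the "output" tensor.
    Tensor input(DT_FLOAT, TensorShape({1, 3}));
    input.flat<float>().setZero();
    vector<Tensor> outputs;
    status = session->Run({{"input", input}}, {"output"}, {}, &outputs);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }

    cout << outputs[0].DebugString() << "\n";
    session->Close();
    delete session;
    return 0;
}

It compiles and links the same way as example.cpp in the CMake step below.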

2) Link TensorflowCC to your program using CMake

# CMakeLists.txt

find_package(TensorflowCC REQUIRED)
add_executable(example example.cpp)

# Link the static Tensorflow library.
target_link_libraries(example TensorflowCC::Static)

# Alternatively, link the shared Tensorflow library.
# target_link_libraries(example TensorflowCC::Shared)

# For the shared library, you may also link CUDA if it is available
# (a small GPU check sketch follows step 3 below).
# find_package(CUDA)
# if(CUDA_FOUND)
#   target_link_libraries(example ${CUDA_LIBRARIES})
# endif()

3) Build and run your program

mkdir build && cd build
cmake .. && make
./example 
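
If you built the shared library with CUDA and linked it as described in step 2, a quick way to confirm that TensorFlow actually sees your GPU is to list the session's devices. This is a minimal sketch using Session::ListDevices; the device names it prints depend entirely on your driver and CUDA installation.

// list_devices.cpp
// A minimal sketch: print every device the session can place operations on.
// With a CUDA-enabled shared build and working drivers, a GPU entry should appear.

#include <tensorflow/core/framework/device_attributes.pb.h>
#include <tensorflow/core/public/session.h>
#include <iostream>
#include <vector>
using namespace std;
using namespace tensorflow;

int main()
{
    Session* session;
    Status status = NewSession(SessionOptions(), &session);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }

    // Query the session for all devices it can use.
    vector<DeviceAttributes> devices;
    status = session->ListDevices(&devices);
    if (!status.ok()) { cout << status.ToString() << "\n"; return 1; }

    for (const auto& d : devices) {
        cout << d.name() << " (" << d.device_type() << ")\n";
    }

    session->Close();
    delete session;
    return 0;
}

Build it the same way as example.cpp; a CPU-only (static) build will simply not list any GPU devices.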

If you are still unsure, consult the Dockerfiles for Ubuntu and Arch Linux.

Compiling TensorFlow C++ with GPU support on the RCI cluster

Elapsed build time: 13096.862 s using 12 cores.

1) Request a machine with 2 CPUs, a GPU, 50G of memory, and 100G of disk space (these values can be reduced, but this configuration is verified to work)

srun -v -p gpu --gres=gpu:1 --mem 50G --tmp 100G --pty bash -i

2) Go into the local scratch storage (/data/temporary [1])

export SCRATCH_DIR=/data/temporary
mkdir $SCRATCH_DIR
cd $SCRATCH_DIR

3) Create a directory for Bazel's cache

mkdir bazel_cache

4) Clone my fork of FloopCZ/tensorflow_cc [2]

git clone https://github.com/jan-rudolf/tensorflow_cc

5) Make a symlink for Bazel's cache

ln -fs $SCRATCH_DIR/bazel_cache $HOME/.cache/bazel

6) Set Bazel's environment variable to the custom cache directory

export TEST_TMPDIR=$SCRATCH_DIR/bazel_cache

7) Load all required modules

ml cuDNN/7.5.0.56-fosscuda-2019a # this also includes GCC, CUDA, libtool etc. 
ml Autotools/20180311-GCCcore-8.2.0
ml wheel/0.31.1-fosscuda-2019a-Python-3.7.2
ml NCCL/2.4.2-fosscuda-2019a # possibly not needed
ml Bazel/0.20.0-GCCcore-8.2.0
ml CMake/3.13.3-GCCcore-8.2.0

8) Create a build directory

cd tensorflow_cc/tensorflow_cc/
mkdir build && cd build

9) Generate the Makefile for the shared library (CUDA support is enabled by default)

cmake -DTENSORFLOW_STATIC=OFF -DTENSORFLOW_SHARED=ON ..

10) Run the compilation (takes a few hours)

make

11) After compilation, install the header files and libraries where you need them

make install

References:
[1] https://login.rci.cvut.cz/wiki/storage
[2] https://github.com/FloopCZ/tensorflow_cc

