Comments (8)
@pango99
Hi. For choosing a specific GPU in TensorFlow there is a global solution: create the environment variable CUDA_VISIBLE_DEVICES and list the IDs of the GPUs you want TensorFlow to use. For example, if you have three GPUs but only want to use the first and the third, set CUDA_VISIBLE_DEVICES=0,2. In your case, with two GPUs, to use GPU1 you set CUDA_VISIBLE_DEVICES=1.
Unfortunately, I don't know whether this works with the TF C API; try it out for yourself.
As for the memory fraction, I'd already created a function for it, and Neargye has added it to his code as CreateSessionOptions().
from hello_tf_c_api.
@msnh2012 @Neargye To get the proto parameter for TF_SetConfig we can use the tensorflow::ConfigProto class from tensorflow/core/protobuf/config.pb.h. The implementation is included in libtensorflow.so.
Code example https://github.com/apivovarov/TF_C_API/blob/master/config.cc
build command:
g++ -std=c++11 -o config.o -c -Itensorflow/bazel-tensorflow/external/com_google_protobuf/src -Itensorflow/bazel-bin -Ilibtensorflow/include config.cc
g++ -std=c++11 -o config config.o -Llibtensorflow/lib -ltensorflow -ltensorflow_framework
LD_LIBRARY_PATH=libtensorflow/lib: ./config
0x10,0x2,0x28,0x3,0x32,0xb,0x9,0x9a,0x99,0x99,0x99,0x99,0x99,0xb9,0x3f,0x20,0x1
TF_SetConfig API:
https://github.com/tensorflow/tensorflow/blob/r1.15/tensorflow/c/c_api.h#L147
For TF 1.14.0, use TF_CreateConfig from c_api_experimental.h:
// Create a serialized tensorflow.ConfigProto proto, where:
//
// a) ConfigProto.optimizer_options.global_jit_level is set to ON_1 if
// `enable_xla_compilation` is non-zero, and OFF otherwise.
// b) ConfigProto.gpu_options.allow_growth is set to `gpu_memory_allow_growth`.
// c) ConfigProto.device_count is set to `num_cpu_devices`.
TF_CAPI_EXPORT extern TF_Buffer* TF_CreateConfig(
unsigned char enable_xla_compilation, unsigned char gpu_memory_allow_growth,
unsigned int num_cpu_devices);
and to set this config, use TF_SetConfig from c_api.h:
// Set the config in TF_SessionOptions.options.
// config should be a serialized tensorflow.ConfigProto proto.
// If config was not parsed successfully as a ConfigProto, record the
// error information in *status.
TF_CAPI_EXPORT extern void TF_SetConfig(TF_SessionOptions* options,
const void* proto, size_t proto_len,
TF_Status* status);
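Putting the two calls together, a hedged sketch (it assumes libtensorflow 1.14+ is linked, and that XLA can stay off for plain inference) of building session options this way:

```cpp
#include "tensorflow/c/c_api.h"
#include "tensorflow/c/c_api_experimental.h"

// Sketch: session options with gpu_options.allow_growth enabled and XLA off
// (enable_xla_compilation = 0 is fine for plain inference).
TF_SessionOptions* MakeOptions() {
    TF_SessionOptions* opts = TF_NewSessionOptions();
    TF_Buffer* config = TF_CreateConfig(/*enable_xla_compilation=*/0,
                                        /*gpu_memory_allow_growth=*/1,
                                        /*num_cpu_devices=*/1);
    TF_Status* status = TF_NewStatus();
    TF_SetConfig(opts, config->data, config->length, status);
    // Check TF_GetCode(status) == TF_OK here before using opts.
    TF_DeleteBuffer(config);
    TF_DeleteStatus(status);
    return opts;
}
```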
Thanks for your reply, but I still don't know how to do it. I use the TF C API only for inference. If the computer has two GPUs (GPU0/GPU1), how do I configure it to force my program to run on GPU1 only?
Another question: how do I set the "gpuMemFraction" parameter with TF_CreateConfig()? And what does the "enable_xla_compilation" parameter mean? Since I use the TF C API only for inference, can I set it to zero?
Maybe @Xonxt could help.
@Xonxt
OK, I'll try it in a few days, thank you!
How to set "sess = tf.Session(config=tf.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1))"?
Thx a lot!
@Neargye
@Xonxt
uint8_t intra_op_parallelism_threads = 1;
uint8_t inter_op_parallelism_threads = 1;
uint8_t buf[] = {0x10, intra_op_parallelism_threads, 0x28, inter_op_parallelism_threads};
TF_SetConfig(sess_opts, buf, sizeof(buf), status);
See tensorflow/tensorflow#13853