Comments (6)
Hi @Xonxt, thanks for your reply. Could you please fix the code in a comment?
Can I also add an example to the repository based on your code? To be honest, I don't work much with pictures, so it was not so easy for me to make my own example. I mainly work with time-series analysis.
Hi, @Neargye.
I mean... you can, but I've only tested that it runs and doesn't crash; I haven't tested what the result of the prediction looks like.
That is, I've seen that the prediction returns a tensor of the correct expected size (in my case, a stack of 19 heatmaps of size 46x46 for an input image of size 368x368), but I haven't yet verified that they contain what I expect.
EDIT:
@Neargye I checked it today; yes, it works as intended. The code above is enough if we expect some kind of probability or a class number.
Here's a snippet, if we expect the output of the model to also be an image:
const TF_Code code = tf_utils::RunSession( session, input_ops, input_tensors, out_ops, output_tensors );
// expected output dimensions:
const std::vector<std::int64_t> output_dims = { 1, 46, 46, 19 };
const std::vector<std::vector<float>> data = tf_utils::GetTensorsData<float>( output_tensors );
// convert the Tensor data into a cv::Mat
cv::Mat heatmaps( (int) output_dims[1], (int) output_dims[2], CV_32FC( (int) output_dims[3] ), (void*) data.at( 0 ).data() );
// 'heatmaps' is now a 46x46x19 Mat of floats, do what you want with it.
from hello_tf_c_api.
You need to load the image as an array of floats, load this array into the input tensor, and run the session.
Later I will add an example of image prediction.
Here's a working example of image prediction:
cv::Mat image = cv::imread( "d:\\image.jpg" );
// convert image to float32
cv::Mat image32f;
image.convertTo( image32f, CV_32F );
// copy to vector:
std::vector<float> input_data;
input_data.assign( (float*) image32f.data, (float*) image32f.data + image32f.total() * image32f.channels() );
// dimensions
const std::vector<std::int64_t> input_dims = { 1, image.rows, image.cols, image.channels() };
// Tensors:
const std::vector<TF_Output> input_ops = { {TF_GraphOperationByName( graph, "input" ), 0} };
const std::vector<TF_Tensor*> input_tensors = { tf_utils::CreateTensor( TF_FLOAT, input_dims, input_data ) };
const std::vector<TF_Output> out_ops = { {TF_GraphOperationByName( graph, "output" ), 0} };
std::vector<TF_Tensor*> output_tensors = { nullptr };
// create TF session:
auto session = tf_utils::CreateSession( graph );
// run Session
const TF_Code code = tf_utils::RunSession( session, input_ops, input_tensors, out_ops, output_tensors );
// get the data:
const std::vector<std::vector<float>> data = tf_utils::GetTensorsData<float>( output_tensors );
@Neargye Did you add the image prediction example? I would like to do object detection in C, but there are no docs regarding the C API. Thanks. Or maybe a guide on how to convert https://github.com/lysukhin/tensorflow-object-detection-cpp to C.
Do you have an example of image prediction?