Comments (7)
OK, thanks for the update!
from hello_tf_c_api.
For example:
input_dimension -> {?, 42}
output_dimension -> {?, 1}
For a batch you can try
input_dimension -> {100, 42}
output_dimension -> {100, 1}
and do TF_SessionRun.
Example batch inference: https://github.com/Neargye/hello_tf_c_api/blob/master/src/batch_interface.cpp
@BRTNDR Please look at the examples; I hope they will help you.
It turned out I was casting a pointer to the wrong type. Small code, enormous problems.
Thanks!
Sorry for reopening this again.
You might remember, I figured out the image inference some time ago with this code, and now I also needed to do batch inference. It isn't working properly, and your example doesn't help.
Here's my code (shortened and simplified version):
std::vector<std::vector<float>> run( const std::vector<cv::Mat>& images )
{
    const std::int64_t batchSize = static_cast<std::int64_t>( images.size() );
    const std::int64_t outputSize = 10; // the model should output a vector of 10 elements
    std::vector<TF_Tensor*> input_tensors;
    std::vector<TF_Tensor*> output_tensors;
    // set the input dims (NHWC)
    const std::vector<std::int64_t> input_dims = { batchSize, images.front().rows, images.front().cols, images.front().channels() };
    // flatten all images into one contiguous input buffer:
    std::vector<float> input_data;
    for ( const auto& image : images )
    {
        // convert to float32
        cv::Mat image32f;
        image.convertTo( image32f, CV_32F );
        // append this image's pixels at the end of the input data vector:
        const float* begin = reinterpret_cast<const float*>( image32f.data );
        input_data.insert( input_data.end(), begin, begin + image32f.total() * image32f.channels() );
    }
    // set the input tensor
    input_tensors.push_back( TF::CreateTensor( TF_FLOAT, input_dims, input_data ) );
    // set the output tensor
    const std::vector<std::int64_t> output_dims = { batchSize, outputSize };
    output_tensors.push_back( TF::CreateEmptyTensor( TF_FLOAT, output_dims ) );
    // run session:
    const TF_Code code = TF::RunSession( m_pSession, input_ops, input_tensors, output_ops, output_tensors );
    if ( code == TF_OK )
    {
        auto output_data = TF::GetTensorsData<float>( output_tensors );
        TF::DeleteTensors( output_tensors );
        TF::DeleteTensors( input_tensors );
        return output_data;
    }
    // free the tensors on failure too, so they don't leak
    TF::DeleteTensors( output_tensors );
    TF::DeleteTensors( input_tensors );
    return std::vector<std::vector<float>>();
}
and then run it and show it in the console:
cv::Mat image = cv::imread( "image.jpg", cv::IMREAD_UNCHANGED );
// copy the image 4 times to emulate a batch
std::vector<cv::Mat> input_vector;
input_vector.push_back( image );
input_vector.push_back( image );
input_vector.push_back( image );
input_vector.push_back( image );
// process
std::vector<std::vector<float>> output_data = run( input_vector );
// and show:
for ( std::size_t i = 0; i < output_data.size(); i++ ) {
    for ( std::size_t j = 0; j < output_data[i].size(); j++ ) {
        std::cout << output_data[i][j] << " ";
    }
    std::cout << std::endl;
}
But it only prints out 10 numbers, as if there were only one input image instead of a batch.
I even tried pre-allocating the output_tensors not as empty, but at the desired size and filled with zeros. If I then print out TF_TensorByteSize() before and after, I see that RunSession just overwrites it with the 1-batch size.
Do you perhaps notice any errors in my code that I don't see?
UPDATE:
Yeah, OK, sorry: it turns out I was using the wrong output layer name. I needed the last layer, but I was passing the second-to-last instead.
It works fine now. You can close the issue again.
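For anyone who hits the same wrong-layer-name problem: the TensorFlow C API can enumerate every operation in a loaded graph, which makes it easy to verify the actual name of the last layer before wiring up the output op. A minimal sketch using the real C API calls TF_GraphNextOperation / TF_OperationName (the `graph` pointer is assumed to be already loaded, e.g. via the hello_tf_c_api helpers; this requires linking against libtensorflow):

```cpp
#include <cstddef>
#include <cstdio>
#include <tensorflow/c/c_api.h>

// Print the name and op type of every operation in the graph, so the
// correct output layer name can be checked before calling RunSession.
void print_operations( TF_Graph* graph )
{
    std::size_t pos = 0;
    TF_Operation* op = nullptr;
    while ( ( op = TF_GraphNextOperation( graph, &pos ) ) != nullptr )
    {
        std::printf( "%s (%s)\n", TF_OperationName( op ), TF_OperationOpType( op ) );
    }
}
```

The operations come out in graph order, so the model's real output layer is typically near the end of the listing.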