Comments (5)
@Xonxt need help...
from hello_tf_c_api.
@pauperonway could you please give a bit more detail on how you modified loadModel(), and what exactly does your model's output look like?
Now the result is OK. I modified a few lines like this (FYI):
bool tf_image::TF_Model::loadModel( const std::string& path, const double gpu_memory_fraction, bool inferIO )
{
    // create an empty status;
    ...
    // get final_binary_output
    std::vector<std::string> outputNames;
    outputNames.clear();
    std::string binary_output_name = "net/final_binary_output";
    name = binary_output_name.c_str();
    //inputNames.push_back(binary_output_name);
    outputNames.push_back( name );
    // walk the whole graph; after the loop, `name` holds the last operation's name
    while ( (op = TF_GraphNextOperation( m_pGraph, &pos )) != nullptr ) {
        name = TF_OperationName( op );
    }
    outputNames.push_back( name );
    setOutputs( { outputNames } );
    return true;
}
std::vector<cv::Mat> tf_image::TF_Model::predict_image2image( const std::vector<cv::Mat>& input )
{
    ...
    // if everything is ok, parse the output:
    if ( !output_tensors.empty() ) {
        // construct the method output:
        for ( size_t i = 0; i < output_tensors.size(); i++ )
        {
            auto shape = GetTensorShape( output_tensors.at( i ) );
            // guard against an unknown or implausible channel dimension
            if ( shape[2] <= 0 || shape[2] > 3 ) shape[2] = 1;
            auto data = tf_utils::GetTensorsData( { output_tensors.at( i ) } );
            // go through all outputs
            output.push_back( cv::Mat( static_cast<int>( shape[0] ), static_cast<int>( shape[1] ),
                                       CV_32FC( static_cast<int>( shape[2] ) ),
                                       (void*) data[0].data() ) );
        }
    }
    return output;
}
@Xonxt the key modification is changing data[i] to data[0].
@Neargye I hope this branch can be merged into master.
Related Issues (20)
- Memory leak during inference with frozen graph HOT 9
- session_run hangs on GPU (libtensorflow-gpu) HOT 4
- question about this library HOT 3
- how to turn off verbose and idle threads?
- GPU dll HOT 5
- cuda_driver.cc:175] Check failed HOT 1
- How to create Tensor of TF_BOOL? HOT 2
- TF_INVALID_ARGUMENT
- Inference is running very slow on CPU HOT 1
- Multiple models inference HOT 4
- 3D input to model returns different output than python HOT 1
- What is this actually doing? HOT 2
- TF_INVALID_ARGUMENT HOT 1
- Multiple GPU Inferencing HOT 1
- cmake -G "Unix Makefiles" .. stop HOT 1
- Confine TensorFlow C API not to generate more than one threads
- Import LSTM-Layer: Expected input[1] to be control input
- when i load graph the TF_Code is ‘TF_UNKNOWN’ , why?
- when i load graph the TF_Code is ‘TF_INVALID_ARGUMENT ’ , why?