paddlepaddle / paddle-inference-demo

License: Apache License 2.0

CMake 7.14% C++ 46.79% Shell 9.80% Python 29.77% Cuda 3.28% Go 0.86% Makefile 0.12% Batchfile 0.16% HTML 2.08%

paddle-inference-demo's Introduction

Paddle Inference Demos

Paddle Inference is the inference engine of the PaddlePaddle core framework. It is feature-rich and delivers excellent performance, with deep adaptation and optimization for server-side scenarios to achieve high throughput and low latency, so PaddlePaddle models can be deployed on servers immediately after training.

To help users deploy applications with Paddle Inference quickly, this repo provides C++ and Python usage samples.

In this repo we assume that you already have some familiarity with Paddle Inference.

If you have only recently started with Paddle Inference, we recommend visiting here first to get an initial overview of it.

Test Samples

1) The python directory lists a series of test samples driven by real inputs, including Python samples for image classification, segmentation, and detection as well as NLP models such as Ernie/Bert, along with Paddle-TRT and multi-threading usage samples.

2) The c++ directory presents a series of test samples in unit-test form, including C++ samples for image classification, segmentation, and detection as well as NLP models such as Ernie/Bert, along with Paddle-TRT and multi-threading usage samples.

Note: if you are using a Paddle release earlier than 2.0, please refer to the release/1.8 branch.
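
For orientation, the samples generally follow the same basic workflow. Below is a minimal sketch of that flow with the Python API (the model paths are placeholders):

import numpy as np
from paddle.inference import Config, create_predictor

# Placeholder paths; point these at an exported inference model.
config = Config("resnet50/inference.pdmodel", "resnet50/inference.pdiparams")
config.enable_memory_optim()
predictor = create_predictor(config)

# Feed a dummy input, run, and fetch the first output.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
fake_input = np.random.rand(1, 3, 224, 224).astype("float32")
input_handle.reshape([1, 3, 224, 224])
input_handle.copy_from_cpu(fake_input)

predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
result = output_handle.copy_to_cpu()  # numpy.ndarray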

paddle-inference-demo's People

Contributors

2446071400, b3602sss, baoachun, cryoco, cyj1986, dannyisfunny, gglin001, heliqi, jiangjiajun, jiweibo, juncaipeng, jzz-note, lidanqing-intel, lizexu123, ming1753, nhzlx, qili93, shangzhizhou, shentanyue, shixiaowei02, wangye707, wangzheee, winter-wang, wjj19950828, wozna, xiaoxiaohehe001, yixinkristy, yuanlehome, zhangjun, zhoutianzi666


paddle-inference-demo's Issues

Python: prediction results are incorrect when run inside a for loop

Running prediction inside a for loop produces incorrect results

Details

predictor.run() works correctly on the first loop iteration: the console prints I0624 15:48:29.592471 560 device_context.cc:598] oneDNN v2.2.1 and probs is correct. On the second iteration, no line like that oneDNN message is printed after predictor.run(), and probs is incorrect. What causes this, and how can it be fixed?

Code excerpts

1. Main prediction loop

result_all_text is a list of strings

    for index in range(0, len(result_all_text)):
        data = result_all_text[index]
        result = _predict_text([data], predict)
        print("Input text: {}\nPrediction: {}\n".format(data, result))

2. _predict_text()

ModelPredict is a simple wrapper class I wrote for convenient calling

def _predict_text(text_list: list, predict: ModelPredict):
    predict.set_input(text_list)
    result = predict.predict_and_get_output()
    return result

3. Code inside the wrapper class

The following three methods are all defined on the ModelPredict class

3.1 set_input

Converts the list of strings into a format the model accepts

    def set_input(self, text_list):
        input_segment_tuple_list = []
        for text in text_list:
            input_ids, segment_ids = self.convert_example(text)
            input_segment_tuple_list.append((input_ids, segment_ids))
        batchify_fn = lambda samples, fn=Tuple(Pad(axis=0, pad_val=self.tokenizer.pad_token_id),
                                               Pad(axis=0, pad_val=self.tokenizer.pad_token_id)): fn(samples)
        input_ids, segment_ids = batchify_fn(input_segment_tuple_list)


        # Get the input names
        input_names = self.predictor.get_input_names()
        input_ids_tensor = self.predictor.get_input_handle(input_names[0])
        segment_ids_tensor = self.predictor.get_input_handle(input_names[1])

        # Set the inputs

        input_ids_tensor.reshape(input_ids.shape)
        segment_ids_tensor.reshape(segment_ids.shape)
        input_ids_tensor.copy_from_cpu(np.array(input_ids))
        segment_ids_tensor.copy_from_cpu(np.array(segment_ids))

3.2 convert_example

Uses the paddlenlp tokenizer to convert the string into token ids

    def convert_example(self, text):
        encoded_inputs = self.tokenizer.encode(text=text, max_seq_len=256)
        input_ids = encoded_inputs["input_ids"]
        segment_ids = encoded_inputs["token_type_ids"]
        return input_ids, segment_ids

3.3 predict_and_get_output

Runs prediction and returns the result

    def predict_and_get_output(self) -> List:
        self.predictor.run()
        # Get the output
        output_names = self.predictor.get_output_names()
        output_handle = self.predictor.get_output_handle(output_names[0])
        output_data = output_handle.copy_to_cpu()  # numpy.ndarray
        probs = functional.softmax(paddle.to_tensor(output_data)).numpy().tolist()
        return probs

Versions used

In [1]: import paddle

In [2]: import paddlenlp

In [3]: print(paddle.__version__)
2.1.0

In [4]: print(paddlenlp.__version__)
2.0.3

Inference is too slow on a Jetson NX with a model trained on my own dataset using the BiSeNetV2 algorithm under the PaddleSeg framework; the prediction program is the C++ deployment example from PaddleSeg

[screenshot]
This is the example I referred to.
This is my inference time:
[screenshot]
The full code is below:

#include <iostream>
#include <memory>
#include <numeric>
#include <string>
#include <vector>
#include <time.h>
#include <gflags/gflags.h>
#include <glog/logging.h>
using namespace std;

#include "paddle/include/paddle_inference_api.h"
#include "yaml-cpp/yaml.h"
#include "opencv2/core.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"

DEFINE_string(model_dir, "", "Directory of the inference model. "
              "It contains deploy.yaml and infer models");
DEFINE_string(img_path, "", "Path of the test image.");
DEFINE_bool(use_cpu, false, "Whether to use CPU. Default: use GPU.");
DEFINE_bool(use_trt, false, "Whether to enable TensorRT when using GPU. Default: false.");
DEFINE_bool(use_mkldnn, false, "Whether to enable MKLDNN when using CPU. Default: false.");
DEFINE_string(save_dir, "", "Directory of the output image.");

typedef struct YamlConfig {
  std::string model_file;
  std::string params_file;
  bool is_normalize;
} YamlConfig;

YamlConfig load_yaml(const std::string& yaml_path) {
  YAML::Node node = YAML::LoadFile(yaml_path);
  std::string model_file = node["Deploy"]["model"].as<std::string>();
  std::string params_file = node["Deploy"]["params"].as<std::string>();
  bool is_normalize = false;
  if (node["Deploy"]["transforms"] &&
      node["Deploy"]["transforms"][0]["type"].as<std::string>() == "Normalize") {
    is_normalize = true;
  }

  YamlConfig yaml_config = {model_file, params_file, is_normalize};
  return yaml_config;
}

std::shared_ptr<paddle_infer::Predictor> create_predictor(const YamlConfig& yaml_config) {
  std::string& model_dir = FLAGS_model_dir;

  paddle_infer::Config infer_config;
  infer_config.SetModel(model_dir + "/" + yaml_config.model_file,
                        model_dir + "/" + yaml_config.params_file);
  infer_config.EnableMemoryOptim();

  if (FLAGS_use_cpu) {
    LOG(INFO) << "Use CPU";
    if (FLAGS_use_mkldnn) {
      // TODO(jc): fix the bug
      // infer_config.EnableMKLDNN();
      infer_config.SetCpuMathLibraryNumThreads(5);
    }
  } else {
    LOG(INFO) << "Use GPU";
    infer_config.EnableUseGpu(500, 0);
    if (FLAGS_use_trt) {
      infer_config.EnableTensorRtEngine(1 << 30, 1, 1,
          paddle_infer::PrecisionType::kFloat32, false, false);
    }
  }

  auto predictor = paddle_infer::CreatePredictor(infer_config);
  return predictor;
}

void hwc_img_2_chw_data(const cv::Mat& hwc_img, float* data) {
  int rows = hwc_img.rows;
  int cols = hwc_img.cols;
  int chs = hwc_img.channels();
  for (int i = 0; i < chs; ++i) {
    cv::extractChannel(hwc_img, cv::Mat(rows, cols, CV_32FC1, data + i * rows * cols), i);
  }
}

cv::Mat read_process_image(bool is_normalize) {
  cv::Mat img = cv::imread(FLAGS_img_path, cv::IMREAD_COLOR);
  cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
  if (is_normalize) {
    img.convertTo(img, CV_32F, 1.0 / 255, 0);
    img = (img - 0.5) / 0.5;
  }
  return img;
}

int main(int argc, char *argv[]) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  if (FLAGS_model_dir == "") {
    LOG(FATAL) << "The model_dir should not be empty.";
  }

  // Load yaml
  std::string yaml_path = FLAGS_model_dir + "/deploy.yaml";
  YamlConfig yaml_config = load_yaml(yaml_path);

  // Prepare data
  cv::Mat img = read_process_image(yaml_config.is_normalize);
  int rows = img.rows;
  int cols = img.cols;
  int chs = img.channels();
  std::vector<float> input_data(1 * 3 * 1080 * 1920, 0.0f);
  hwc_img_2_chw_data(img, input_data.data());

  // Create predictor
  auto predictor = create_predictor(yaml_config);

  // Set input
  auto input_names = predictor->GetInputNames();
  auto input_t = predictor->GetInputHandle(input_names[0]);
  std::vector<int> input_shape = {1, 3, 1080, 1920};
  input_t->Reshape(input_shape);
  input_t->CopyFromCpu(input_data.data());

  // Run
  clock_t start, end;
  start = clock();
  predictor->Run();
  end = clock();
  cout << "The run time is:" << (double)(end - start) / CLOCKS_PER_SEC << "s" << endl;

  // Get output
  auto output_names = predictor->GetOutputNames();
  auto output_t = predictor->GetOutputHandle(output_names[0]);
  std::vector<int> output_shape = output_t->shape();  // n * h * w
  int out_num = std::accumulate(output_shape.begin(), output_shape.end(), 1,
                                std::multiplies<int>());
  std::vector<int64_t> out_data(out_num);
  output_t->CopyToCpu(out_data.data());

  // Get pseudo image
  std::vector<uint8_t> out_data_u8(out_num);
  for (int i = 0; i < out_num; i++) {
    out_data_u8[i] = static_cast<uint8_t>(out_data[i]);
  }
  cv::Mat out_gray_img(output_shape[1], output_shape[2], CV_8UC1, out_data_u8.data());
  cv::Mat out_eq_img;
  cv::equalizeHist(out_gray_img, out_eq_img);
  cv::imwrite("out_img.jpg", out_eq_img);

  LOG(INFO) << "Finish";
}
Paddle version 2.1.2:
[screenshot]
NX environment:
[screenshot]
The input image size is 1080*1920.
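
One caveat when reading a single timed Run() like the one above: the first run after predictor creation can include one-off warm-up cost (the logs even note that preparing the TRT engine "may cost a lot of time"), so benchmarks usually discard a few warm-up iterations and average the rest. A minimal sketch of that pattern in Python, assuming a predictor whose inputs are already set:

import time

def benchmark(predictor, warmup=10, repeats=50):
    # Discard warm-up runs, which may include one-off initialization cost.
    for _ in range(warmup):
        predictor.run()
    start = time.time()
    for _ in range(repeats):
        predictor.run()
    return (time.time() - start) / repeats  # average seconds per run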

Documentation for old versions

Where can I view the 1.8 documentation? After switching to release/1.8, the docs that open are still the 2.0 ones.

ernie-varlen fails at runtime with Segmentation fault (core dumped)

Running the ernie-varlen demo
https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/c%2B%2B/ernie-varlen

Linux environment:
Ubuntu18.04
cuda10.2
cudnn8.1

Model:
the ernie_model_4.tar.gz provided on the demo page
TensorRT:
TensorRT-7.2.3.4.Ubuntu-18.04.x86_64-gnu.cuda-10.2.cudnn8.1
Compilation succeeds; running the model produces the following error:

./build/ernie_varlen_test --model_dir=./ernie_model_4

Error:
You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0528 13:57:40.816888 19597 analysis_predictor.cc:139] Profiler is deactivated, and no profiling report will be generated.
I0528 13:57:40.850455 19597 analysis_predictor.cc:474] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [adaptive_pool2d_convert_global_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [shuffle_channel_detect_pass]
--- Running IR pass [quant_conv2d_dequant_fuse_pass]
--- Running IR pass [delete_quant_dequant_op_pass]
--- Running IR pass [delete_quant_dequant_filter_op_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
I0528 13:57:41.143846 19597 graph_pattern_detector.cc:101] --- detected 1 subgraphs
I0528 13:57:41.145031 19597 graph_pattern_detector.cc:101] --- detected 1 subgraphs
I0528 13:57:41.150530 19597 graph_pattern_detector.cc:101] --- detected 25 subgraphs
--- Running IR pass [multihead_matmul_fuse_pass_v2]
I0528 13:57:41.282735 19597 graph_pattern_detector.cc:101] --- detected 12 subgraphs
--- Running IR pass [skip_layernorm_fuse_pass]
I0528 13:57:41.389559 19597 graph_pattern_detector.cc:101] --- detected 24 subgraphs
--- Running IR pass [unsqueeze2_eltwise_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I0528 13:57:41.401916 19597 graph_pattern_detector.cc:101] --- detected 12 subgraphs
I0528 13:57:41.406260 19597 graph_pattern_detector.cc:101] --- detected 26 subgraphs
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [tensorrt_subgraph_pass]
I0528 13:57:41.419596 19597 tensorrt_subgraph_pass.cc:126] --- detect a sub-graph with 82 nodes
W0528 13:57:41.422353 19597 tensorrt_subgraph_pass.cc:304] The Paddle lib links the 7134 version TensorRT, make sure the runtime TensorRT you are using is no less than this version, otherwise, there might be Segfault!
I0528 13:57:41.439488 19597 tensorrt_subgraph_pass.cc:345] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
Segmentation fault (core dumped)


C++ Traceback Segmentation fault

numpy 1.21.1
opencv-python 4.5.3.56
packaging 21.0
paddlepaddle-gpu 2.1.1
paddleslim 2.1.0
paddlex 2.0.0rc4

python3.8 linux cuda11.2

python infer_resnet.py --model_file=./resnet50/inference.pdmodel --params_file=./resnet50/inference.pdiparams --use_gpu=1
infer_resnet.py:12: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if args.model_dir is not "":
W0807 09:36:34.106942 12056 analysis_predictor.cc:715] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0807 09:36:34.281066 12056 graph_pattern_detector.cc:91] --- detected 53 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
I0807 09:36:34.303220 12056 graph_pattern_detector.cc:91] --- detected 1 subgraphs
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I0807 09:36:34.304824 12056 graph_pattern_detector.cc:91] --- detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
I0807 09:36:34.314018 12056 graph_pattern_detector.cc:91] --- detected 33 subgraphs
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
I0807 09:36:34.322469 12056 graph_pattern_detector.cc:91] --- detected 16 subgraphs
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0807 09:36:34.325784 12056 graph_pattern_detector.cc:91] --- detected 4 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0807 09:36:34.328943 12056 ir_params_sync_among_devices_pass.cc:45] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0807 09:36:34.364941 12056 memory_optimize_pass.cc:199] Cluster name : inputs size: 602112
I0807 09:36:34.364956 12056 memory_optimize_pass.cc:199] Cluster name : relu_1.tmp_0 size: 3211264
I0807 09:36:34.364959 12056 memory_optimize_pass.cc:199] Cluster name : batch_norm_12.tmp_3 size: 401408
I0807 09:36:34.364962 12056 memory_optimize_pass.cc:199] Cluster name : batch_norm_4.tmp_2 size: 3211264
I0807 09:36:34.364964 12056 memory_optimize_pass.cc:199] Cluster name : relu_0.tmp_0 size: 3211264
--- Running analysis [ir_graph_to_program_pass]
I0807 09:36:34.386811 12056 analysis_predictor.cc:636] ======= optimize end =======
I0807 09:36:34.386862 12056 naive_executor.cc:98] --- skip [feed], feed -> inputs
I0807 09:36:34.389506 12056 naive_executor.cc:98] --- skip [batch_norm_4.tmp_2], fetch -> fetch
W0807 09:36:34.405136 12056 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.2, Runtime API Version: 10.2
W0807 09:36:34.412685 12056 device_context.cc:422] device: 0, cuDNN Version: 8.1.


C++ Traceback (most recent call last):

0 paddle::framework::SignalHandle(char const*, int)
1 paddle::platform::GetCurrentTraceBackString[abi:cxx11]()


Error Message Summary:

FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1628300196 (unix time) try "date -d @1628300196" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x0) received by PID 12056 (TID 0x7f23e6a2c100) from PID 0 ***]

Segmentation fault

ernie-varlen fails at runtime with: TensorRT's tensor input requires at least 2 dimensions, but input read_file_0.tmp_0 has 1 dims.

Running the ernie-varlen demo
https://github.com/PaddlePaddle/Paddle-Inference-Demo/tree/master/c%2B%2B/ernie-varlen

Linux environment:
CentOS Linux release 7.4.1708 (Core)
Linux version 3.10.0-693.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:09:27 UTC 2017

Inference library:
https://paddle-inference-lib.bj.bcebos.com/2.0.0-rc0-gpu-cuda10-cudnn7-avx-mkl/paddle_inference.tgz
Model:
the ernie_model_4.tar.gz provided on the demo page
TensorRT:
TensorRT-6.0.1.5.CentOS-7.6.x86_64-gnu.cuda-10.0.cudnn7.6.tar.gz
Compilation succeeds; running the model produces the following error:

./build/ernie_varlen_test --model_dir=./ernie_model_4

Error:

I0316 20:46:29.055225 163963 tensorrt_subgraph_pass.cc:118] --- detect a sub-graph with 81 nodes

W0316 20:46:29.058228 163963 tensorrt_subgraph_pass.cc:293] The Paddle lib links the 6015 version TensorRT, make sure the runtime TensorRT you are using is no less than this version, otherwise, there might be Segfault!
I0316 20:46:29.058288 163963 tensorrt_subgraph_pass.cc:329] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
I0316 20:46:29.536973 163963 op_converter.h:187] trt input [matmul_0.tmp_0] dynamic shape info not set, please check and retry.
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
what():


C++ Traceback (most recent call last):

0 paddle_infer::CreatePredictor(paddle::AnalysisConfig const&)
1 paddle_infer::Predictor::Predictor(paddle::AnalysisConfig const&)
2 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
3 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
4 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
5 paddle::AnalysisPredictor::OptimizeInferenceProgram()
6 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
8 paddle::inference::analysis::IRPassManager::Apply(std::unique_ptr<paddle::framework::ir::Graph, std::default_delete<paddle::framework::ir::Graph> >)
9 paddle::framework::ir::Pass::Apply(paddle::framework::ir::Graph*) const
10 paddle::inference::analysis::TensorRtSubgraphPass::ApplyImpl(paddle::framework::ir::Graph*) const
11 paddle::inference::analysis::TensorRtSubgraphPass::CreateTensorRTOp(paddle::framework::ir::Node*, paddle::framework::ir::Graph*, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> >) const
12 paddle::inference::tensorrt::OpConverter::ConvertBlockToTRTEngine(paddle::framework::BlockDesc*, paddle::framework::Scope const&, std::vector<std::string, std::allocator<std::string> > const&, std::unordered_set<std::string, std::hash<std::string>, std::equal_to<std::string>, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, paddle::inference::tensorrt::TensorRTEngine*)
13 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
14 paddle::platform::GetCurrentTraceBackString()


Error Message Summary:

InvalidArgumentError: TensorRT's tensor input requires at least 2 dimensions, but input read_file_0.tmp_0 has 1 dims.
[Hint: Expected shape.size() > 1UL, but received shape.size():1 <= 1UL:1.] (at /paddle/paddle/fluid/inference/tensorrt/engine.h:78)

Loading model.pdmodel and model.pdparams reports: (InvalidArgument) Deserialize to tensor failed, maybe the loaded file is not a paddle model (expected file format: 0, but 2828338304 found).

The paddle inference library was compiled from source on the 2.0-rc1 branch; the build process is recorded in this blog post.
The PaddlePaddle/models/paddleCV branch is 2.0-beta.

After loading the weights of the human_pose_estimation project from PaddlePaddle/models/paddleCV into the model, I exported a static-graph model with the following two lines of code:

# Insert the following code after line 87 of the project's test.py;
# the pretrained parameters are already loaded before line 87.
# Line 87:
test_exe = fluid.ParallelExecutor(
        use_cuda=True if args.use_gpu else False,
        main_program=fluid.default_main_program().clone(for_test=True),
        loss_name=None)
# Added code:
paddle.save(fluid.default_main_program(), "temp/model.pdmodel")
paddle.save(fluid.default_main_program().state_dict(), "temp/model.pdparams")
exit()

This produces the model.pdmodel and model.pdparams files.

The model is then loaded with the following C++ code:

#include "paddle/include/paddle_inference_api.h"

#include <chrono>
#include <iostream>
#include <memory>
#include <numeric>

#include <gflags/gflags.h>
#include <glog/logging.h>

using paddle_infer::Config;
using paddle_infer::Predictor;
using paddle_infer::CreatePredictor;
using paddle_infer::PrecisionType;

DEFINE_string(model_file, "", "Directory of the inference model.");
DEFINE_string(params_file, "", "Directory of the inference model.");
DEFINE_string(model_dir, "", "Directory of the inference model.");
DEFINE_int32(batch_size, 1, "Directory of the inference model.");

int main(int argc, char *argv[]) {
	google::ParseCommandLineFlags(&argc, &argv, true);
	paddle_infer::Config config;
	if (FLAGS_model_dir == "") {
		std::cout << "SetModel" << std::endl;
		config.SetModel(FLAGS_model_file, FLAGS_params_file);
	}
	else {
		config.SetModel(FLAGS_model_dir); // Load no-combined model
	}
	config.EnableUseGpu(500, 0);
	config.SwitchIrOptim(true);
	config.EnableMemoryOptim();
	std::shared_ptr<paddle_infer::Predictor> predictor = paddle_infer::CreatePredictor(config);
	auto input_names = predictor->GetInputNames();
	auto input_t = predictor->GetInputHandle(input_names[0]);
        // The model input is 3x348x348
	std::vector<int> input_shape = { 1, 3, 348, 348 };
	std::vector<float> input_data(1 * 3 * 348 * 348, 1);
	input_t->Reshape(input_shape);
	input_t->CopyFromCpu(input_data.data());

	return 0;
}

The following message is shown:
[screenshot]
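
For context, paddle_infer::Config::SetModel expects a model exported in the inference format, not a training Program dumped with paddle.save. A hedged sketch of what such an export typically looks like with the 2.0-rc static-graph API (the tiny network here only keeps the sketch self-contained; in the real project the feed names and target_vars come from the actual network):

import paddle
import paddle.fluid as fluid

paddle.enable_static()  # on 2.0-rc, dynamic graph is the default

# Minimal stand-in network; replace with the real feeds/fetches.
image = fluid.data(name="image", shape=[-1, 3, 348, 348], dtype="float32")
out = fluid.layers.fc(input=image, size=10)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Export in the format the inference API can deserialize.
fluid.io.save_inference_model(
    dirname="temp",
    feeded_var_names=["image"],
    target_vars=[out],
    executor=exe,
    main_program=fluid.default_main_program().clone(for_test=True),
    model_filename="model.pdmodel",
    params_filename="model.pdparams")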

Acceleration for multi-channel video stream deployment

We would like to run two video streams on one NVIDIA AGX. Are there any documents or examples on multi-stream deployment and optimization that we could refer to? Many thanks.

Prediction with paddle_trt fails

Hello, I hit the following error when running inference with paddle_trt. How should I resolve it?
paddlepaddle-gpu 1.8.2.post107

EnforceNotMet:


C++ Call Stacks (More useful to developers):

0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::framework::ir::PassRegistry::Get(std::string const&) const
3 paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::string, std::allocator<std::string> > const&)
4 paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5 paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7 paddle::AnalysisPredictor::OptimizeInferenceProgram()
8 paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9 paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11 std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)


Error Message Summary:

Error: Pass tensorrt_subgraph_pass has not been registered at (/paddle/paddle/fluid/framework/ir/pass.h:201)

Error when using Paddle-inference


C++ Call Stacks (More useful to developers):

0 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&>(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, int)
2 paddle::operators::ConvOp::GetExpectedKernelType(paddle::framework::ExecutionContext const&) const
3 paddle::framework::OperatorWithKernel::ChooseKernel(paddle::framework::RuntimeContext const&, paddle::framework::Scope const&, paddle::platform::Place const&) const
4 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
6 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
7 paddle::framework::NaiveExecutor::Run()
8 paddle::AnalysisPredictor::ZeroCopyRun()


Python Call Stacks (More useful to users):

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 2938, in conv2d
"data_format": data_format,
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/mobilenet_v3.py", line 201, in _conv_bn_layer
bias_attr=False)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/mobilenet_v3.py", line 368, in call
name='conv1')
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/detection/yolo_v3.py", line 502, in build_net
feats = self.backbone(image)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/ppyolo.py", line 171, in build_net
model_out = model.build_net(inputs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/load_model.py", line 66, in load_model
mode='test')
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/command.py", line 156, in main
model = pdx.load_model(args.model_dir, fixed_input_shape)
File "/opt/conda/envs/python35-paddle120-env/bin/paddlex", line 10, in
sys.exit(main())


Error Message Summary:

Error: input and filter data type should be consistent
[Hint: Expected input_data_type == filter_data_type, but received input_data_type:2 != filter_data_type:5.] at (/home/paddle/github/paddle/paddle/fluid/operators/conv_op.cc:172)
[operator < conv2d_fusion > error]

"CHECK": identifier not found

Hello:

In the inference C++ code, "CHECK" is not recognized and cannot be found, regardless of whether I link the prebuilt library for Windows or the one for Ubuntu.

How should I deal with this problem?

Unable to use the GPU

Environment: Jetson Nano with JetPack 4.3, Python 3.6.9, CUDA 10, cuDNN 7.6, and a 0.0.0 build of the Paddle inference library. Running my own model on the GPU fails as follows:

W1115 15:58:03.358932 9830 device_context.cc:265] Please NOTE: device: 0, CUDA Capability: 53, Driver API Version: 10.0, Runtime API Version: 10.0
W1115 15:58:03.604619 9830 device_context.cc:273] device: 0, cuDNN Version: 7.6.
W1115 15:58:46.768879 9830 operator.cc:187] elementwise_mul raises an exception thrust::system::system_error, parallel_for failed: too many resources requested for launch
Traceback (most recent call last):
File "infer_yolov3.py", line 85, in
result = run(pred, [data, im_shape])
File "infer_yolov3.py", line 39, in run
predictor.zero_copy_run()
RuntimeError: parallel_for failed: too many resources requested for launch

There is no error when the GPU is not used.

Running the C++ ResNet50 image classification sample with the PaddlePaddle inference library, TensorRT-related error: Load symbol getPluginRegistry failed!

On Windows 10, using the cuda10.0_cudnn7_avx_mkl_trt6 build of the PaddlePaddle inference library, with CUDA version 10.0 and CUDNN version 7.4.

With USE_TENSORRT=OFF set in compile.sh, compilation succeeds and an executable is produced, but running it reports the following error:

You are using Paddle compiled with TensorRT, but TensorRT dynamic library is not found. Ignore this if TensorRT is not needed.
WARNING: Logging before InitGoogleLogging() is written to STDERR
E0402 09:31:04.486455 19108 port.h:50] Load symbol getPluginRegistry failed.

With USE_TENSORRT=ON set in compile.sh, compilation succeeds, an executable is produced, and it runs successfully.

When using the cuda10.0_cudnn7_avx_mkl_trt6 build of the inference library, must USE_TENSORRT=ON be set at compile time? Isn't the TENSORRT option supposed to be optional?

The official infer_yolov3 demo cannot use GPU acceleration on the Nano

With use_gpu=1 the model loads normally, but at the end it fails as follows:

W1112 08:38:37.097999 16867 device_context.cc:237] Please NOTE: device: 0, CUDA capability: 53, Driver API Version: 10.2, Runtime API Version: 10.0
W1112 08:38:37.226699 16867 device_context.cc:245] device: 0, cuDNN Version: 8.0.
W1112 08:38:43.420408 16867 init.cc:209] Warning: PaddlePaddle catches a failure signal, it may not work properly
W1112 08:38:43.420493 16867 init.cc:211] You could check whether you killed PaddlePaddle thread/process accidentally or report the case to PaddlePaddle
W1112 08:38:43.420511 16867 init.cc:214] The detail failure signal is:

W1112 08:38:43.420527 16867 init.cc:217] *** Aborted at 1605188323 (unix time) try "date -d @1605188323" if you are using GNU date ***
W1112 08:38:43.426568 16867 init.cc:217] PC: @ 0x0 (unknown)
W1112 08:38:43.448750 16867 init.cc:217] *** SIGSEGV (@0x0) received by PID 16867 (TID 0x7fa0dfd010) from PID 0; stack trace: ***
W1112 08:38:43.455078 16867 init.cc:217] @ 0x7fa0e046c0 ([vdso]+0x6bf)
Segmentation fault (core dumped)

libpaddle_inference.lib not found when building the win10 source

Following the source build tutorial for the win10 platform, the documentation says a successful build has the following layout:

build/paddle_inference_install_dir
├── CMakeCache.txt
├── paddle
│   ├── include
│   │   ├── paddle_anakin_config.h
│   │   ├── paddle_analysis_config.h
│   │   ├── paddle_api.h
│   │   ├── paddle_inference_api.h
│   │   ├── paddle_mkldnn_quantizer_config.h
│   │   └── paddle_pass_builder.h
│   └── lib
│       ├── libpaddle_inference.a (Linux)
│       ├── libpaddle_inference.so (Linux)
│       └── libpaddle_inference.lib (Windows)
├── third_party
│   ├── boost
│   │   └── boost
│   ├── eigen3
│   │   ├── Eigen
│   │   └── unsupported
│   └── install
│       ├── gflags
│       ├── glog
│       ├── mkldnn
│       ├── mklml
│       ├── protobuf
│       ├── xxhash
│       └── zlib
└── version.txt

Question 1: However, the paddle/lib folder produced by my build does not contain libpaddle_inference.lib (Windows); instead there are files named paddle_fluid.dll and .lib. Is paddle_fluid the same as libpaddle_inference?

The build command was:

cmake .. -G "Visual Studio 14 2015" -A x64 -T host=x64 -DWITH_GPU=ON -DWITH_TESTING=OFF -DON_INFER=ON -DCMAKE_BUILD_TYPE=Release -DPY_VERSION=3  -DWITH_TENSORRT=ON -DTENSORRT_ROOT="E:\\TensorRT-7.0.0.11" -DWITH_NCCL=OFF -DCUDA_ARCH_NAME=Turing

Question 2: Also, building the paddle_infer_demo project of Paddle-Inference-Demo against my own inference library gives the prompt below. The file libmklml_intel is not present in the inference library. Is the built file actually named mklml?
[screenshot]

Problems running the demo with paddle 1.6.3 on a Jetson Nano

Problem encountered when running the yolov3 Python-API demo.
Since my Paddle version is lower than required and some APIs are unsupported, I modified part of infer_yolov3.py myself; everything is inside main().
The modified code is as follows:

if __name__ == '__main__':
    args = parse_args()
    predictor = create_predictor(args)
    img_name = 'kite.jpg'
    save_img_name = 'res.jpg'
    im_size = 608

    img = cv2.imread(img_name)
    data = preprocess(img, im_size)
    im_shape = np.array([im_size, im_size]).reshape((1, 2)).astype(np.int32)

    img_data = PaddleTensor(data)  # 1.6.3 supported
    img_shape = PaddleTensor(im_shape)
    result = predictor.run([img_data, img_shape])  # 1.6.3 supported
    result = result[0]
    result_data = result.as_ndarray()  # returns numpy.ndarray
    print(result_data.shape)
    result_data = result_data.tolist()
    print(result_data)
    img = Image.open(img_name).convert('RGB').resize((im_size, im_size))
    draw_bbox(img, result=result_data, save_name=save_img_name)

Results on the Jetson Nano (paddlepaddle-gpu==1.6.3):
Please NOTE: device: 0, CUDA Capability: 53, Driver API Version: 10.0, Runtime API Version: 10.0
I0707 19:41:27.813036 15092 op_compatible_info.cc:201] The default operator required version is missing. Please update the model version.
I0707 19:41:27.813113 15092 analysis_predictor.cc:841] MODEL VERSION: 0.0.0
I0707 19:41:27.813138 15092 analysis_predictor.cc:843] PREDICTOR VERSION: 0.0.0
W0707 19:41:27.813304 15092 analysis_predictor.cc:855] - Version incompatible (1) batch_norm
W0707 19:41:27.813334 15092 analysis_predictor.cc:855] - Version incompatible (1) concat
W0707 19:41:27.813356 15092 analysis_predictor.cc:855] - Version incompatible (1) conv2d
W0707 19:41:27.813376 15092 analysis_predictor.cc:855] - Version incompatible (1) elementwise_add
W0707 19:41:27.813397 15092 analysis_predictor.cc:855] - Version incompatible (1) feed
W0707 19:41:27.813417 15092 analysis_predictor.cc:855] - Version incompatible (1) fetch
W0707 19:41:27.813437 15092 analysis_predictor.cc:855] - Version incompatible (1) leaky_relu
W0707 19:41:27.813457 15092 analysis_predictor.cc:855] - Version incompatible (1) multiclass_nms
W0707 19:41:27.813477 15092 analysis_predictor.cc:855] - Version incompatible (1) nearest_interp
W0707 19:41:27.813495 15092 analysis_predictor.cc:855] - Version incompatible (1) scale
W0707 19:41:27.813514 15092 analysis_predictor.cc:855] - Version incompatible (1) transpose2
W0707 19:41:27.813534 15092 analysis_predictor.cc:855] - Version incompatible (1) yolo_box
W0707 19:41:27.813552 15092 analysis_predictor.cc:144] WARNING: Results may be incorrect! Using same versions between model and lib.

Running the same (modified) code in a VM with paddlepaddle 1.8.0, the image prediction results are also wrong, but without the model/lib version mismatch warnings. The final predictions are still incorrect; what could be the reason?
My personal guess is unsupported operators. Is there a good way to solve this on the Nano?

Thanks!

examples/text_classification/pretrained_models/deploy/python/predict.py has an undefined variable here, which raises an error

def convert_example(example,
                    tokenizer,
                    label_list,
                    max_seq_length=512,
                    is_test=False):
    text = example
    encoded_inputs = tokenizer(text=text, max_seq_len=max_seq_length)
    input_ids = encoded_inputs["input_ids"]
    segment_ids = encoded_inputs["token_type_ids"]

    if not is_test:
        # create label maps
        label_map = {}
        for (i, l) in enumerate(label_list):
            label_map[l] = i

        label = label_map[label]        # !!! `label` is not defined here !!!
        label = np.array([label], dtype="int64")
        return input_ids, segment_ids, label
    else:
        return input_ids, segment_ids

Error running the golang sample

_/root/go_projects/PaddleInference/paddle

../paddle/common.go:20:11: fatal error: paddle_c_api.h: No such file or directory
// #include <paddle_c_api.h>
^~~~~~~~~~~~~~~~
compilation terminated.

Error when compiling the C++ library from Paddle source

Error output:
/home/huyutao/paddle/paddle/fluid/framework/unused_var_check.cc: In function ‘void paddle::framework::CheckUnusedVar(const paddle::framework::OperatorBase&, const paddle::framework::Scope&)’:
/home/huyutao/paddle/paddle/fluid/framework/unused_var_check.cc:82:57: error: converting to ‘std::unordered_set<std::basic_string >’ from initializer list would use explicit constructor ‘std::unordered_set<_Value, _Hash, _Pred, _Alloc>::unordered_set(std::unordered_set<_Value, _Hash, _Pred, _Alloc>::size_type, const hasher&, const key_equal&, const allocator_type&) [with _Value = std::basic_string; _Hash = std::hash<std::basic_string >; _Pred = std::equal_to<std::basic_string >; _Alloc = std::allocator<std::basic_string >; std::unordered_set<_Value, _Hash, _Pred, _Alloc>::size_type = long unsigned int; std::unordered_set<_Value, _Hash, _Pred, _Alloc>::hasher = std::hash<std::basic_string >; std::unordered_set<_Value, _Hash, _Pred, _Alloc>::key_equal = std::equal_to<std::basic_string >; std::unordered_set<_Value, _Hash, _Pred, _Alloc>::allocator_type = std::allocator<std::basic_string >]’
std::unordered_set<std::string> no_need_buffer_ins = {};
^
make[2]: *** [paddle/fluid/framework/CMakeFiles/unused_var_check.dir/unused_var_check.cc.o] Error 1
make[1]: *** [paddle/fluid/framework/CMakeFiles/unused_var_check.dir/all] Error 2
make: *** [all] Error 2

Build options: cmake -DFLUID_INFERENCE_INSTALL_DIR=/home/huyutao/paddle/libs -DCMAKE_BUILD_TYPE=Release -DWITH_PYTHON=OFF -DON_INFER=ON -DWITH_GPU=ONWITH_MKL=OFF -DWITH_MKLDNN=OFF -DWITH_XBYAK=ON -DWITH_NV_JETSON=OFF .. && make && make inference_lib_dist

Environment:
GIT COMMIT ID: 1e01335e195d993f3c5c97bed3a15a6f9170acea
WITH_MKL: ON
WITH_MKLDNN: OFF
WITH_GPU: ONWITH_MKL=OFF
CUDA version: 9.2
CUDNN version: v7.6
CXX compiler version: 4.9.2

How to generate a TensorRT calibration table from Python

  • PaddlePaddle 1.8.5
  • windows 10

I saw the introduction to converting a Float32 model into an Int8 model in the document below. How can the calibration table be generated from Python?
https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/performance_improving/inference_improving/paddle_tensorrt_infer.html#a-name-paddle-trt-int8-paddle-trt-int8-a

config.enable_tensorrt_engine(workspace_size=1 << 30,
                              max_batch_size=1,
                              min_subgraph_size=5,
                              precision_mode=PrecisionType.Float32,
                              use_static=False,
                              use_calib_mode=False)
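
For reference, the Paddle-TRT INT8 flow described in that document is to request Int8 precision with calibration mode enabled, then run inference over representative data, after which Paddle emits the calibration table (typically under the model directory's optimization cache). A hedged sketch against the 2.x paddle.inference API; on 1.8 the same switches live on AnalysisConfig (e.g. AnalysisConfig.Precision.Int8):

from paddle.inference import Config, PrecisionType, create_predictor

config = Config("model/inference.pdmodel", "model/inference.pdiparams")  # placeholder paths
config.enable_use_gpu(1000, 0)
config.enable_tensorrt_engine(workspace_size=1 << 30,
                              max_batch_size=1,
                              min_subgraph_size=5,
                              precision_mode=PrecisionType.Int8,
                              use_static=False,
                              use_calib_mode=True)  # collect calibration statistics

predictor = create_predictor(config)
# Run the predictor over a batch of representative inputs here;
# the calibration table is written after enough runs have been observed.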

TensorRT error on win10

[screenshot]
Environment: paddlepaddle-gpu-1.8.1.post107
CUDA: 9.0
OS: win10
Python: 3.7
The error occurs when executing this call:
config.enable_tensorrt_engine(workspace_size=1 << 30,
                              max_batch_size=1, min_subgraph_size=5,
                              precision_mode=AnalysisConfig.Precision.Float32,
                              use_static=False, use_calib_mode=False)

Error message:
Error: Pass tensorrt_subgraph_pass has not been registered at (D:\1.8.1\paddle\paddle/fluid/framework/ir/pass.h:201)
There is no error when TensorRT is turned off.

InvalidArgumentError: The input's dimension of Operator(Conv2DFusion) is expected to be 4. But received: input's dimension = 2, shape = [1, 2].

When I run this:

python infer_yolov3.py --model_file=./yolov3_infer/model --params_file=./yolov3_infer/params --use_gpu=1

the cmd show:

W0413 15:44:13.015229 17512 analysis_predictor.cc:677] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
I0413 15:44:13.015229 17512 analysis_predictor.cc:155] Profiler is deactivated, and no profiling report will be generated.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0413 15:44:13.586211 17512 graph_pattern_detector.cc:101] --- detected 72 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0413 15:44:13.997112 17512 graph_pattern_detector.cc:101] --- detected 75 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0413 15:44:14.019053 17512 ir_params_sync_among_devices_pass.cc:45] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0413 15:44:14.172643 17512 memory_optimize_pass.cc:200] Cluster name : elementwise_add_20.tmp_0 size: 4096
I0413 15:44:14.172643 17512 memory_optimize_pass.cc:200] Cluster name : elementwise_add_14.tmp_0 size: 2048
I0413 15:44:14.172643 17512 memory_optimize_pass.cc:200] Cluster name : elementwise_add_21.tmp_0 size: 4096
I0413 15:44:14.172643 17512 memory_optimize_pass.cc:200] Cluster name : leaky_relu_51.tmp_0 size: 4096
I0413 15:44:14.173640 17512 memory_optimize_pass.cc:200] Cluster name : leaky_relu_45.tmp_0 size: 4096
I0413 15:44:14.173640 17512 memory_optimize_pass.cc:200] Cluster name : image size: 12
I0413 15:44:14.173640 17512 memory_optimize_pass.cc:200] Cluster name : elementwise_add_10.tmp_0 size: 1024
I0413 15:44:14.173640 17512 memory_optimize_pass.cc:200] Cluster name : transpose_0.tmp_0 size: 12
I0413 15:44:14.174638 17512 memory_optimize_pass.cc:200] Cluster name : transpose_1.tmp_0 size: 12
I0413 15:44:14.174638 17512 memory_optimize_pass.cc:200] Cluster name : im_size size: 8
--- Running analysis [ir_graph_to_program_pass]
I0413 15:44:14.209545 17512 analysis_predictor.cc:598] ======= optimize end =======
I0413 15:44:14.209545 17512 naive_executor.cc:107] --- skip [feed], feed -> im_size
I0413 15:44:14.210542 17512 naive_executor.cc:107] --- skip [feed], feed -> image
I0413 15:44:14.212536 17512 naive_executor.cc:107] --- skip [elementwise_add_20.tmp_0], fetch -> fetch
W0413 15:44:14.265396 17512 device_context.cc:362] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 10.2, Runtime API Version: 10.2
W0413 15:44:14.279357 17512 device_context.cc:372] device: 0, cuDNN Version: 7.6.
Traceback (most recent call last):
File "infer_yolov3.py", line 91, in
result = run(pred, [im_shape, data, scale_factor])
File "infer_yolov3.py", line 40, in run
predictor.run()
ValueError: In user code:

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 2938, in conv2d
"data_format": data_format,

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/darknet.py", line 68, in _conv_norm
bias_attr=False)

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/darknet.py", line 154, in __call__
name=self.prefix_name + "yolo_input")

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/nets/detection/yolo_v3.py", line 507, in build_net
feats = self.backbone(image)

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/ppyolo.py", line 175, in build_net
model_out = model.build_net(inputs)

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/cv/models/load_model.py", line 82, in load_model
mode='test')

File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlex/command.py", line 158, in main
model = pdx.load_model(args.model_dir, fixed_input_shape)

File "/opt/conda/envs/python35-paddle120-env/bin/paddlex", line 10, in <module>
sys.exit(main())


InvalidArgumentError: The input's dimension of Operator(Conv2DFusion) is expected to be 4. But received: input's dimension = 2, shape = [1, 2].
  [Hint: Expected in_dims.size() == 4U, but received in_dims.size():2 != 4U:4.] (at D:\v2.0.1\paddle\paddle\fluid\operators\fused\conv_fusion_op.cc:77)
  [operator < conv2d_fusion > error]

I don't know why this happens; how can I solve this problem?

paddle_trt: the model does not run under 2.0 but runs fine with the 1.8 code

As the title says. The model used is:
wget https://paddle-inference-dist.bj.bcebos.com/inference_demo/Ernie_inference_model.gz
Error:
Traceback (most recent call last):
File "infer_trt_ernie.py", line 107, in
pred = init_predictor(args)
File "infer_trt_ernie.py", line 50, in init_predictor
predictor = create_predictor(config)
ValueError: (InvalidArgument) some trt inputs dynamic shape info not set, check the INFO log above for more details.
[Hint: Expected all_dynamic_shape_set == true, but received all_dynamic_shape_set:0 != true:1.] (at /paddle/paddle/fluid/inference/tensorrt/convert/op_converter.h:221)
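
The hint says TensorRT dynamic-shape info was not set for some inputs. A hedged sketch of how that is configured with the 2.x Python API; the tensor name and shape ranges below are hypothetical, and the INFO log above names the actual tensors that need entries:

from paddle.inference import Config, PrecisionType

config = Config("./ernie_model_4")  # placeholder model directory
config.enable_use_gpu(1000, 0)
config.enable_tensorrt_engine(workspace_size=1 << 30,
                              max_batch_size=1,
                              min_subgraph_size=5,
                              precision_mode=PrecisionType.Float32,
                              use_static=False,
                              use_calib_mode=False)

# Hypothetical min/max/optimal shapes for the tensors named in the log.
min_shape = {"read_file_0.tmp_0": [1, 1]}
max_shape = {"read_file_0.tmp_0": [10, 128]}
opt_shape = {"read_file_0.tmp_0": [1, 64]}
config.set_trt_dynamic_shape_info(min_shape, max_shape, opt_shape)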

Inconsistent inference results when deployed with C#

The same inference model produces different results when deployed in a C# environment than when run in the paddle environment. What could cause this?

Low GPU utilization

[screenshot]
1. GPU memory usage is 5085 MB for both large and small images.
2. How can GPU utilization be raised to speed up inference?
Thanks~

Bug in the inference demo code

Paddle-Inference-Demo/python/resnet50/img_preprocess.py
Line 24 contains a typo: img = img[int(h_start):int(h_end), int(w_start):int(w_end), :]w_start:w_end, :]
[screenshot]
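
For clarity, the corrected line would presumably be just the crop without the trailing fragment:

# Line 24 of img_preprocess.py with the stray "w_start:w_end, :]" removed.
img = img[int(h_start):int(h_end), int(w_start):int(w_end), :]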

TensorRT inference error on Jetson Nano

Using JetPack 4.4
The model is PPYOLO
The predictor setup code is below:

def create_predictor(model, params):
    config = AnalysisConfig(model, params)

    config.enable_use_gpu(100, 0)
    # config.switch_ir_optim(True)
    config.enable_memory_optim()
    config.switch_use_feed_fetch_ops(False)

    # Enable TensorRT prediction with fp32 precision
    config.enable_tensorrt_engine(
        workspace_size=1 << 30,
        max_batch_size=1,
        min_subgraph_size=5,
        precision_mode=AnalysisConfig.Precision.Float32,
        use_static=False,
        use_calib_mode=False)

    # config.switch_specify_input_names(True)
    predictor = create_paddle_predictor(config)

    return predictor

Error output:

     W1106 13:08:45.822986 14852 analysis_predictor.cc:578] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.

I1106 13:08:45.823392 14852 analysis_predictor.cc:139] Profiler is deactivated, and no profiling report will be generated.
I1106 13:08:45.967123 14852 analysis_predictor.cc:952] MODEL VERSION: 1.8.5
I1106 13:08:45.967211 14852 analysis_predictor.cc:954] PREDICTOR VERSION: 2.0.0
I1106 13:08:45.968782 14852 analysis_predictor.cc:449] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [shuffle_channel_detect_pass]
--- Running IR pass [quant_conv2d_dequant_fuse_pass]
--- Running IR pass [delete_quant_dequant_op_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [skip_layernorm_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I1106 13:08:49.006705 14852 graph_pattern_detector.cc:100] --- detected 73 subgraphs
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [tensorrt_subgraph_pass]
I1106 13:08:49.315248 14852 tensorrt_subgraph_pass.cc:115] --- detect a sub-graph with 28 nodes
I1106 13:08:49.320538 14852 tensorrt_subgraph_pass.cc:321] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
I1106 13:09:46.427413 14852 tensorrt_subgraph_pass.cc:115] --- detect a sub-graph with 25 nodes
I1106 13:09:46.443456 14852 tensorrt_subgraph_pass.cc:321] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
I1106 13:09:57.191118 14852 tensorrt_subgraph_pass.cc:115] --- detect a sub-graph with 13 nodes
I1106 13:09:57.195003 14852 tensorrt_subgraph_pass.cc:321] Prepare TRT engine (Optimize model structure, Select OP kernel etc). This process may cost a lot of time.
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
File "inference_paddle/inference.py", line 176, in
'/home/zdhsyb/inference_paddle/inference_model/best_model/params'
File "inference_paddle/inference.py", line 95, in create_predictor
predictor = create_paddle_predictor(config)
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device

Error in the Yolo v3 sample

--- Running analysis [ir_graph_build_pass]
Traceback (most recent call last):
File "infer_yolov3.py", line 64, in
pred = create_predictor(args)
File "infer_yolov3.py", line 27, in create_predictor
predictor = create_paddle_predictor(config)
paddle.fluid.core_avx.EnforceNotMet:


C++ Call Stacks (More useful to developers):

0 std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::exception_ptr, char const*, int)
2 paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
3 std::__1::__function::__func<paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, long long> >::operator()(char const*, char const*, int) const::'lambda'(paddle::framework::ExecutionContext const&), std::__1::allocator<paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, long long> >::operator()(char const*, char const*, int) const::'lambda'(paddle::framework::ExecutionContext const&)>, void (paddle::framework::ExecutionContext const&)>::operator()(paddle::framework::ExecutionContext const&)
4 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
6 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
7 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
8 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, bool, bool)
9 paddle::inference::LoadPersistables(paddle::framework::Executor*, paddle::framework::Scope*, paddle::framework::ProgramDesc const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool)
10 paddle::inference::Load(paddle::framework::Executor*, paddle::framework::Scope*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)
11 paddle::inference::analysis::IrGraphBuildPass::RunImpl(paddle::inference::analysis::Argument*)
12 paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
13 paddle::AnalysisPredictor::OptimizeInferenceProgram()
14 paddle::AnalysisPredictor::PrepareProgram(std::__1::shared_ptr<paddle::framework::ProgramDesc> const&)
15 paddle::AnalysisPredictor::Init(std::__1::shared_ptr<paddle::framework::Scope> const&, std::__1::shared_ptr<paddle::framework::ProgramDesc> const&)
16 std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
17 std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)
18 void pybind11::cpp_function::initialize<std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> > (&)(paddle::AnalysisConfig const&), std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> >, paddle::AnalysisConfig const&, pybind11::name, pybind11::scope, pybind11::sibling>(std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> > (&)(paddle::AnalysisConfig const&), std::__1::unique_ptr<paddle::PaddlePredictor, std::__1::default_delete<paddle::PaddlePredictor> > (*)(paddle::AnalysisConfig const&), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::operator()(pybind11::detail::function_call&) const
19 pybind11::cpp_function::dispatcher(_object*, _object*, _object*)


Error Message Summary:

Error: OP(LoadCombine) fail to open file ./yolove_infer/params, please check whether the model file is complete or damaged. at (/home/teamcity/work/ef54dc8a5b211854/paddle/fluid/operators/load_combine_op.h:46)

Parameters fail to load at prediction time

Hello, when running the yolov3 sample prediction code on a Jetson Nano, the parameters fail to load. The process stays stuck at Sync params from CPU to GPU, yet checking memory usage shows only 2.6 GB of the 4.1 GB in use, and the value never changes. What could be the reason?

Paddle Inference 2.1.1 C++ prediction questions

The official prediction examples all feed input via a vector, as shown below. I have a few questions:
[screenshot]

1. How can data in OpenCV's Mat type be used with the inference library?
2. Is convertTo(im, CV_32FC3, 1 / 255.0) needed before use?
3. In the resnet50 demo, is resize(im, im, Size(224, 224)) needed?
Below is a screenshot of my code. I don't know whether resize or convertTo is required, but no matter how I adjust and use them I cannot get accurate results: for example, a picture that clearly shows a dog is not predicted as a dog, and other images give similarly scrambled results.
[screenshot]
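
For reference on questions 2 and 3, a hedged Python sketch of the preprocessing the ResNet50 demos in this repo typically perform (224x224 resize, scaling to [0, 1], normalization with the usual ImageNet mean/std, and HWC-to-CHW transposition); the exact values should be checked against the demo actually used:

import cv2
import numpy as np

def preprocess(path):
    # Read, convert BGR->RGB, resize to the model's input resolution.
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (224, 224)).astype("float32") / 255.0
    # Normalize with standard ImageNet statistics.
    mean = np.array([0.485, 0.456, 0.406], dtype="float32")
    std = np.array([0.229, 0.224, 0.225], dtype="float32")
    img = (img - mean) / std
    # HWC -> CHW, then add the batch dimension: [1, 3, 224, 224].
    return img.transpose((2, 0, 1))[np.newaxis, :]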
