
paddlepaddle / paddlers

341 stars · 12 watching · 87 forks · 13.88 MB

Awesome Remote Sensing Toolkit based on PaddlePaddle.

License: Apache License 2.0

Python 98.78% Shell 0.31% C++ 0.27% Cuda 0.62% Dockerfile 0.02%
change-detection image-segmentation object-detection remote-sensing deep-learning classification geospatial gis image-restoration scene-classification

paddlers's Introduction


PaddleRS: a high-performance, multi-task remote sensing image interpretation development kit based on PaddlePaddle, covering the full remote sensing application workflow from data to deployment, end to end.


News

  • [2022-11-09] 🔥 PaddleRS 1.0 is officially released. See the Release Note for details.
  • [2022-05-19] 🔥 PaddleRS 1.0-beta is released, with full support for deep learning tasks in remote sensing. See the Release Note for details.

Introduction

PaddleRS is a remote sensing image interpretation development kit based on PaddlePaddle, jointly developed by Baidu PaddlePaddle, remote sensing research institutes, and partner universities. It supports common remote sensing tasks such as image segmentation, object detection, scene classification, change detection, and image restoration. PaddleRS aims to help researchers in the remote sensing field quickly develop, validate, and tune algorithms, and to help developers working on industrial applications conveniently build full-workflow remote sensing deep learning applications, from data preprocessing to model deployment.

Features

PaddleRS offers five key features:

  • Rich vision and remote sensing model zoo: integrates the mature model zoos of PaddlePaddle's four major vision toolkits, and additionally supports many remote sensing deep learning models such as FarSeg, BIT, and ChangeStar, covering image segmentation, object detection, scene classification, change detection, and image restoration.

  • Support for remote-sensing-specific tasks: supports tasks characteristic of the remote sensing field, including change detection, with complete training and deployment tutorials and rich practical examples.

  • Optimizations for very large images: supports sliding-window inference on large images, using lazy in-memory loading to improve performance, and supports reading and writing the georeferencing information of large images.

  • Data preprocessing informed by remote sensing characteristics and geoscience knowledge: provides preprocessing for imagery with an arbitrary number of bands and for multi-temporal data; supports remote sensing preprocessing methods such as image registration, radiometric correction, and band selection; and supports extraction of over 50 remote sensing indices for incorporating domain knowledge (a small composition sketch follows this list).

  • Industrial-grade training and deployment performance: supports acceleration strategies such as multi-process asynchronous I/O and multi-GPU parallel training; combined with the GPU memory optimizations of the PaddlePaddle core framework, this substantially reduces training cost and helps developers complete remote sensing development and training at lower cost and higher efficiency.
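
A minimal sketch of what a transform pipeline built from these operators might look like; the SelectBand argument below is an illustrative assumption, not verified against the released API:

import paddlers.transforms as T

# Compose remote-sensing-aware preprocessing with ordinary augmentation
# (operator arguments here are illustrative).
train_transforms = T.Compose([
    T.SelectBand([1, 2, 3, 4]),  # keep four bands of a multispectral image
    T.RandomHorizontalFlip(),
    T.Normalize(mean=[0.5] * 4, std=[0.5] * 4),
])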

Community

  • If you find any problem with PaddleRS or have suggestions, please file a GitHub Issue.
  • You are welcome to join the PaddleRS WeChat group.

Product Matrix

Model zoo · Data transform operators · Remote sensing tools · Practical cases

Model zoo:
  • Change detection
  • Scene classification
  • Image restoration
  • Object detection
  • Image segmentation

Data preprocessing operators:
  • CenterCrop
  • Dehaze (image dehazing)
  • MatchRadiance (radiometric correction)
  • Normalize
  • Pad
  • ReduceDim (hyperspectral dimensionality reduction)
  • Resize
  • ResizeByLong
  • ResizeByShort
  • SelectBand (band selection)
  • ...
Data augmentation operators:
  • AppendIndex (remote sensing index computation)
  • MixupImage
  • RandomBlur
  • RandomCrop
  • RandomDistort
  • RandomExpand
  • RandomHorizontalFlip
  • RandomResize
  • RandomResizeByShort
  • RandomScaleAspect
  • RandomSwap (random temporal swap)
  • RandomVerticalFlip
  • ...
Remote sensing indices:
  • ARI
  • ARI2
  • ARVI
  • AWEInsh
  • AWEIsh
  • BAI
  • BI
  • BLFEI
  • BNDVI
  • BWDRVI
  • BaI
  • CIG
  • CSI
  • CSIT
  • DBI
  • DBSI
  • DVI
  • EBBI
  • EVI
  • EVI2
  • FCVI
  • GARI
  • GBNDVI
  • GLI
  • GRVI
  • IPVI
  • LSWI
  • MBI
  • MGRVI
  • MNDVI
  • MNDWI
  • MSI
  • NBLI
  • NDVI
  • NDWI
  • NDYI
  • NIRv
  • PSRI
  • RI
  • SAVI
  • SWI
  • TDVI
  • UI
  • VIG
  • WI1
  • WI2
  • WRI
  • ...
Remote sensing tools:
  • Data format conversion
  • Dataset creation
  • Data postprocessing
  • Data visualization
  • Open-source dataset preprocessing

Practical cases:
  • Official cases
  • Community cases
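
As a concrete example of the indices listed above, here is a minimal NumPy sketch of the standard NDVI formula, NDVI = (NIR - Red) / (NIR + Red); it illustrates the computation, not PaddleRS's internal implementation:

import numpy as np

def ndvi(nir, red, eps=1e-8):
    # Per-pixel NDVI; eps guards against division by zero.
    nir = nir.astype("float32")
    red = red.astype("float32")
    return (nir - red) / (nir + red + eps)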

Tutorials and Documentation

Practical Cases

For more cases, please refer to the PaddleRS practical case library.

License

This project is released under the Apache 2.0 license.

Contributions

  • Many thanks to the National Earth Observation Science Data Center, the Aerospace Information Research Institute of the Chinese Academy of Sciences, Beihang University, Wuhan University, China University of Petroleum (East China), China University of Geosciences, China Siwei, PIESAT, GEOVIS, SuperMap, and other organizations for their contributions to PaddleRS. (Listed in no particular order.)
  • Many thanks to geoyee (Chen Yizhou), kongdebug (Kong Yuanhang), huilin16 (Zhao Huilin), and other developers for their contributions to PaddleRS.

Citation

If this project helps you in your academic work, please consider citing:

@misc{paddlers2022,
    title={PaddleRS, Awesome Remote Sensing Toolkit based on PaddlePaddle},
    author={PaddlePaddle Authors},
    howpublished = {\url{https://github.com/PaddlePaddle/PaddleRS}},
    year={2022}
}

paddlers's People

Contributors

asthestarsfalll, bobholamovic, dependabot[bot], geoyee, ggggkkkknnnn, harryoung, huilin16, juncaipeng, kongdebug, lhe-it, liuxtakeoff, lizechng, lutaochu, michaelowenliu, onecatcn, sherwincn, sun222, trellixvulnteam, ucsk, yzl19940819, zbt78, zhwesky2010


paddlers's Issues

Several questions about PaddleRS change detection tasks

PaddleRS release/1.0
1. How can the weights of a change detection model be initialized, e.g. with random normal or Kaiming initialization? The provided LEVIR-CD pretrained weights don't seem to work well for me.
2. How can the training optimizer and learning rate schedule be changed, e.g. to SGD with cosine annealing?
3. For change detection with very few positive samples, is there a good approach? I currently use PaddleRS's combined losses such as cross entropy + Dice, but the improvement is not obvious. With few positive samples, how can model accuracy be improved, and what should be watched during training?

Thanks!
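
A minimal sketch of one way to approach questions 1 and 2, assuming the release/1.0 trainer API in which train() accepts an optimizer built over model.net.parameters() (consistent with the run_task.py snippet quoted later on this page); this is an illustration, not official guidance:

import paddle
import paddlers as pdrs

model = pdrs.tasks.cd.BIT()

# Re-initialize conv weights, e.g. with Kaiming normal (illustrative choice).
kaiming = paddle.nn.initializer.KaimingNormal()
for layer in model.net.sublayers():
    if isinstance(layer, paddle.nn.Conv2D):
        kaiming(layer.weight)

# SGD with a cosine annealing learning rate schedule.
lr = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.01, T_max=100)
optimizer = paddle.optimizer.SGD(learning_rate=lr,
                                 parameters=model.net.parameters())

# model.train(num_epochs=100, train_dataset=train_dataset,
#             eval_dataset=eval_dataset, optimizer=optimizer, save_dir="output")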

[paddlepaddle-gpu==2.4.2 gives incorrect inference results on Windows]

Thanks for your issue. To help us better solve the issue, please provide the following information:

  1. PaddleRS version: (PaddleRS develop)
  2. PaddlePaddle version: (paddlepaddle==2.4.2)
  3. Operation system: (Windows 10)
  4. Python version: (Python3.7)
  5. CUDA/cuDNN version: (e.g. CUDA11.7/cuDNN v8.4.1)
  6. Additional context: (add any other context about the problem)

Problem:
The workflow: sliding-window inference produces a GeoTIFF, then mask2shape generates a shapefile.
Current problem: the CPU run on the two-epoch imagery produces a correct result and a correct shapefile, while the GPU run produces results different from what is expected; that is, paddlers on GPU gives wrong output.
Code:
predictor = Predictor("./inference_model", use_gpu=True, gpu_id=args.gpu_id)
res = predictor.slider_predict(('ref.tif', 'comp.tiff'), args.save_path, block_size=2048, batch_size=args.batch_number, overlap=args.overlap, merge_strategy='accum')
ms.mask2shape(args.image0_path, mask_path, output_shp)

Results:

CPU result: (screenshot omitted)

CPU-generated shapefile: (screenshot omitted)

CPU run log:
Warning: Unable to use OC-SORT, please install filterpy, for example: pip install filterpy, see https://github.com/rlabbe/filterpy
2023-04-24 17:17:08,635-WARNING: post-quant-hpo is not support in system other than linux
Image1 size is 5912 x 3858 pixels.
Image2 size is 5912 x 3858 pixels.
2023-04-24 17:17:08 [WARNING] Cannot find raw_params. Default arguments will be used to construct the model.
2023-04-24 17:17:08 [INFO] Model[BIT] loaded.
cpu

0%| | 0/6 [00:00<?, ?it/s]
2 out of 6 blocks processed.: 33%|███▎ | 2/6 [00:12<00:24, 6.10s/it]
6 out of 6 blocks processed.: 100%|██████████| 6/6 [00:35<00:00, 5.70s/it]2023-04-24 17:17:45 [INFO] GeoTiff file saved in output\ref.tif.
6 out of 6 blocks processed.: 100%|██████████| 6/6 [00:35<00:00, 5.85s/it]
Total time consumed: 1.2861473560333252.

GPU-generated shapefile: (screenshot omitted)

GPU run log:
Warning: Unable to use OC-SORT, please install filterpy, for example: pip install filterpy, see https://github.com/rlabbe/filterpy
2023-04-24 17:12:26,120-WARNING: post-quant-hpo is not support in system other than linux
Image1 size is 5912 x 3858 pixels.
Image2 size is 5912 x 3858 pixels.
2023-04-24 17:12:26 [WARNING] Cannot find raw_params. Default arguments will be used to construct the model.
2023-04-24 17:12:26 [INFO] Model[BIT] loaded.
gpu

6 out of 6 blocks processed.: 100%|██████████| 6/6 [00:09<00:00, 1.52s/it]
2023-04-24 17:12:49 [INFO] GeoTiff file saved in output\ref.tif.
Total time consumed: 81.71551060676575.

Solution:
With paddlepaddle==2.3.2, paddlepaddle-gpu==2.3.2, CUDA 11.2, and cuDNN 8.2.1, paddlers installs successfully and the GPU run produces correct results.

How can multi-machine training be done?

Does this codebase support multi-machine (multi-node) training? If so, how is it launched? If not, is support planned for the future?

[General Issue] Can PaddleRS sliding-window inference run models from PaddleDetection? If so, how?


[Bug] 'dict' object has no attribute 'net'

When I run the provided change detection example locally and launch training with run_task.py, I get the following error:
...
use_vdl: True
2023-04-17 20:20:17 [INFO] 1024 samples in file ./data/levircd/val.txt
2023-04-17 20:20:20 [INFO] 7120 samples in file ./data/levircd/train.txt
Traceback (most recent call last):
File "run_task.py", line 108, in
cfg['optimizer'].args['parameters'] = model.net.parameters()
AttributeError: 'dict' object has no attribute 'net'
What is going on here? Is model a dict?

[Bug]module 'paddlers' has no attribute 'transforms'

(paddle_env) E:\PaddleRS>python setup.py install
Traceback (most recent call last):
File "setup.py", line 16, in <module>
import paddlers
File "E:\PaddleRS\paddlers\__init__.py", line 17, in <module>
from paddlers.utils.env import get_environ_info, init_parallel_env
File "E:\PaddleRS\paddlers\utils\__init__.py", line 28, in <module>
from .visualize import map_display
File "E:\PaddleRS\paddlers\utils\visualize.py", line 26, in <module>
from paddlers.transforms.functions import to_uint8
File "E:\PaddleRS\paddlers\transforms\__init__.py", line 18, in <module>
from .operators import *
File "E:\PaddleRS\paddlers\transforms\operators.py", line 30, in <module>
import paddlers.transforms.functions as F
AttributeError: module 'paddlers' has no attribute 'transforms'

[General Issue]ValueError: (InvalidArgument) argsort(): argument (position 2) must be int, but got str (at ..\paddle\fluid\pybind\op_function_common.cc:147)


[Bug]AttributeError: module 'paddlers.tasks' has no attribute 'changedetector'

Hi there! Thanks for your tremendous efforts in this repo, and I hope you guys are doing well. Anyway, I've encountered some problems with the change detection. When I tried to export a static model or try slide_detector of change detection, they both had the AttributeError: module 'paddlers.tasks' has no attribute 'changedetector'. So that's the reason why I am here, and wondering if you could offer me some help with this issue.

Here are some environment details listed below in case you might need them.

All tasks were running on Ubuntu 18.04 LTS.
Traceback (most recent call last):
File "deploy/export/export_model.py", line 73, in
model = load_model(args.model_dir)
File "/home/supermicro3090/anaconda3/envs/rs/lib/python3.7/site-packages/paddlers-develop-py3.7.egg/paddlers/tasks/load_model.py", line 77, in load_model
mod = getattr(paddlers.tasks, model_type)
AttributeError: module 'paddlers.tasks' has no attribute 'changedetector'

conda list | grep paddle
paddle-bfloat 0.1.7 pypi_0 pypi
paddlepaddle-gpu 2.3.2.post112 pypi_0 pypi
paddlers develop pypi_0 pypi
paddleslim 2.3.4 pypi_0 pypi

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:08:53_PST_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0

Please let me know if you need anything else.

A question about the metrics code

https://github.com/PaddlePaddle/PaddleRS/blob/a8b17d07c800bb54648954269abdaf5ddfb56065/paddlers/tasks/change_detector.py#LL149C77-L149C77

Hi, I have the following question about the ignore_index parameter of calculate_area(); please advise:

In change detection training, labels are binarized to 0/1 by default. But when evaluate() calls calculate_area() to compute predicted areas, the labels passed in are also in the 0/1 range.

Here, mask = label != ignore_index. Doesn't that mean mask is always True? Since label is 0 or 1 while ignore_index is 255, they are never equal, so mask would be all True, i.e. no value is ever ignored.
If that is indeed by design, what is the purpose of the default value 255 here? In what scenario is it needed?
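
A minimal NumPy sketch of the ignore_index semantics being asked about: with binary 0/1 labels the mask is indeed all True, and 255 only takes effect for datasets that mark invalid or unlabeled pixels with that value.

import numpy as np

label = np.array([[0, 1, 255],
                  [1, 0, 255]])  # 255 marks unlabeled pixels
ignore_index = 255

mask = label != ignore_index
print(mask.sum())  # 4 valid pixels; the two 255-valued pixels are excluded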

[Feature Request] Add postprocs to the pipeline

Welcome to request a new feature! To help us better understand your request, please provide the following information:

  1. A clear and concise description of the requested feature.

Similar to how eval and predict take a Compose of transforms, allow postprocessing to be plugged into the pipeline in the same way.

  2. Why the feature is necessary.

During eval, one could directly see metrics computed with postprocessing applied. And when a developer wants to try a new postprocessing method, it could be implemented against some convention (or by inheriting a postprocessing base class), just like transforms, and then easily integrated into the whole workflow.

  3. If possible, show code illustrating the intended effect.
# Something like this
from paddlers.datasets import SegDataset
from paddlers.tasks import load_model
import paddlers.transforms as T
import paddlers.postprocs as P

transforms = T.Compose([
    T.Normalize(mean=[0.5] * 3, std=[0.5] * 3)
])

eval_dataset = SegDataset(
    data_dir="dataset",
    file_list="dataset/val.txt",
    transforms=transforms
)

postprocs = P.Compose([
    P.Open(),  # morphological opening
    P.Regularization()  # regularization
])

model = load_model("output/best_model")
model.eval(eval_dataset, postprocs)  # evaluate
pred = model.predict(img_path, transforms, postprocs)["label_map"]  # predict

[General Issue] How do the image segmentation models in paddlers differ from the remote sensing image segmentation sample project in paddlex?


[Bug] Windows: ValueError: Cannot read the image file .\data\sarship\images\SARShip-1.0-5.tiff!

(pytorch37pp) PS D:\CL\PaddleRS> & D:/softs/Anaconda3/envs/pytorch37pp/python.exe d:/CL/PaddleRS/tutorials/train/object_detection/faster_rcnn.py
Can not import map_display. This is probably because GDAL is not properly installed.
2022-11-25 17:16:15,568-WARNING: post-quant-hpo is not support in system other than linux
2022-11-25 17:16:15 [INFO] Decompressing ./data/sarship.zip...
2022-11-25 17:16:18 [DEBUG] ./data/sarship.zip decompressed.
2022-11-25 17:16:18 [INFO] Starting to read file list from dataset...
2022-11-25 17:16:19 [INFO] 175 samples in file ./data/sarship/train.txt, including 175 positive samples
and 0 negative samples.
creating index...
index created!
2022-11-25 17:16:19 [INFO] Starting to read file list from dataset...
2022-11-25 17:16:19 [INFO] 5 samples in file ./data/sarship/eval.txt, including 5 positive samples and 0 negative samples.
creating index...
index created!
W1125 17:16:19.075634 1516 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver
API Version: 11.6, Runtime API Version: 11.6
W1125 17:16:19.079631 1516 gpu_resources.cc:91] device: 0, cuDNN Version: 8.4.
2022-11-25 17:16:19 [INFO] Loading pretrained model from ./output/faster_rcnn/pretrain\faster_rcnn_r50_fpn_2x_coco.pdparams
2022-11-25 17:16:20 [WARNING] [SKIP] Shape of parameters bbox_head.bbox_score.weight do not match. (pretrained: [1024, 81] vs actual: [1024, 2])
2022-11-25 17:16:20 [WARNING] [SKIP] Shape of parameters bbox_head.bbox_score.bias do not match. (pretrained: [81] vs actual: [2])
2022-11-25 17:16:20 [WARNING] [SKIP] Shape of parameters bbox_head.bbox_delta.weight do not match. (pretrained: [1024, 320] vs actual: [1024, 4])
2022-11-25 17:16:20 [WARNING] [SKIP] Shape of parameters bbox_head.bbox_delta.bias do not match. (pretrained: [320] vs actual: [4])
2022-11-25 17:16:20 [INFO] There are 291/295 variables loaded into FasterRCNN.
Exception in thread Thread-4:
Traceback (most recent call last):
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 231, in read_img
import gdal
ModuleNotFoundError: No module named 'gdal'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\osgeo_init_.py", line 29, in swig_import_helper
return importlib.import_module(mname)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\importlib_init_.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1006, in _gcd_import
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 670, in _load_unlocked
File "", line 583, in module_from_spec
File "", line 1043, in create_module
File "", line 219, in _call_with_frames_removed
ImportError: DLL load failed: 找不到指定的模块。

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 234, in read_img
from osgeo import gdal
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\osgeo_init_.py", line 45, in
gdal = swig_import_helper()
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\osgeo_init
.py", line 42, in swig_import_helper
return importlib.import_module('gdal')
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\importlib_init
.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named '_gdal'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 275, in apply_im
data = self.read_img(im_path)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 237, in read_img
"Failed to import gdal! Please install GDAL library according to the document."
ImportError: Failed to import gdal! Please install GDAL library according to the document.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\threading.py", line 926, in _bootstrap_inner
self.run()
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddle\fluid\dataloader\dataloader_iter.py", line 218, in _thread_loop
self._thread_done_event)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddle\fluid\dataloader\fetcher.py", line 121,
in fetch
data.append(self.dataset[idx])
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\datasets\voc.py", line 327, in getitem
sample = self.transforms(sample)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 111, in call
sample = self.apply_transforms(sample)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 122, in apply_transforms
sample = op(sample)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 184, in call
sample = self.apply(sample)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 321, in apply
sample['image'] = self.apply_im(sample['image'])
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\site-packages\paddlers-develop-py3.7.egg\paddlers\transforms\operators.py", line 278, in apply_im
im_path))
ValueError: Cannot read the image file .\data\sarship\images\SARShip-1.0-5.tiff!

[General Issue] Crash when predicting with a PPYOLOE_R model

I initialized a model with PPYOLOE_R:

model = pdrs.tasks.det.PPYOLOE_R(
    backbone="CSPResNet_m",
    num_classes=133,
    nms_score_threshold=0.1,
    nms_topk=2000,
    nms_keep_topk=-1,
    nms_normalized=False,
    nms_iou_threshold=0.1)

Predicting with this model causes a crash:
result = model.predict(test_imgs, transforms=T.Compose(test_transforms))

The crash happens in the object_detector.postprocess method:
dt = bboxes[k]
k = k + 1
num_id, score, xmin, ymin, xmax, ymax = dt.tolist()
dt is a 10-tuple, but only 6 variables are used to receive it here.

PPYOLOE_R's rotated boxes need four corner points (8 values) to describe, whereas an ordinary detection box needs only 4 values.

I then checked the documentation: (screenshot omitted)
The documented detection result describes a box with only a 4-tuple bbox, which cannot represent PPYOLOE_R's rotated boxes.

My question: has PPYOLOE_R's output simply not been adapted here, or should predict be called in some other way?
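
For reference, a hedged sketch of how the 10 values might be unpacked if they follow the common rotated-box layout of class id, score, and four corner points; the layout is an assumption, not confirmed by the PaddleRS documentation:

# Hypothetical unpacking, assuming dt = [class_id, score, x1, y1, ..., x4, y4].
num_id, score, x1, y1, x2, y2, x3, y3, x4, y4 = dt.tolist()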

Problems installing paddlers on Windows 10

Environment:

  1. PaddleRS version: PaddleRS develop
  2. PaddlePaddle version: PaddlePaddle 2.3.2 (GPU)
  3. OS: Windows
  4. Python version: Python 3.8
  5. CUDA/cuDNN version: CUDA 11.1 / cuDNN 8.0.4

python tutorials/train/change_detection/bit.py

File "C:\Users\Administrator\anaconda3\envs\PaddleRs\lib\site-packages\paddlers-0.0.0.dev0-py3.8.egg\paddlers\models\ppcls\arch_init_.py", line 30, in
from .distill.afd_attention import LinearTransformStudent, LinearTransformTeacher
ModuleNotFoundError: No module named 'paddlers.models.ppcls.arch.distill'

Installation error: Building wheel for pyrfr (setup.py) ... error

WIN11
anaconda python 3.7

git clone https://github.com/PaddleCV-SIG/PaddleRS
cd PaddleRS
git checkout develop
pip install -r requirements.txt

python setup.py install

Building wheel for pyrfr (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      [<setuptools.extension.Extension('pyrfr._regression') at 0x18d195b3488>, <setuptools.extension.Extension('pyrfr._util') at 0x18d1be1a288>]
      running bdist_wheel
      running build
      running build_ext
      building 'pyrfr._regression' extension
      swigging pyrfr/regression.i to pyrfr/regression_wrap.cpp
      swig.exe -python -c++ -modern -py3 -features nondynamic -I./include -o pyrfr/regression_wrap.cpp pyrfr/regression.i
      error: command 'swig.exe' failed: None
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for pyrfr
  Running setup.py clean for pyrfr
Successfully built lap smac pynisher
Failed to build pyrfr
Installing collected packages: sortedcontainers, pytz, msgpack, lap, heapdict, geojson, easydict, zipp, zict, xmltodict, Werkzeug, typing-extensions, tornado, toolz, tifffile, threadpoolctl, tblib, shapely, scipy, regex, pyzmq, pyyaml, PyWavelets, python-dateutil, pyrfr, pyparsing, pycryptodome, psutil, opencv-contrib-python, networkx, natsort, munch, MarkupSafe, locket, llvmlite, joblib, itsdangerous, imageio, future, fsspec, fonttools, et-xmlfile, emcee, cython, cycler, colorama, cloudpickle, chardet, Babel, tqdm, scikit-learn, pynisher, partd, pandas, packaging, openpyxl, numba, kiwisolver, Jinja2, importlib-metadata, ConfigSpace, bce-python-sdk, scikit-image, motmetrics, matplotlib, dask, click, pycocotools, flask, distributed, smac, Flask-Babel, visualdl, paddleslim
  Running setup.py install for pyrfr ... error
  error: subprocess-exited-with-error

  × Running setup.py install for pyrfr did not run successfully.
  │ exit code: 1
  ╰─> [9 lines of output]
      [<setuptools.extension.Extension('pyrfr._regression') at 0x1e614c9d508>, <setuptools.extension.Extension('pyrfr._util') at 0x1e6173c3dc8>]
      running install
      D:\anaconda3\envs\PaddleRS\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        setuptools.SetuptoolsDeprecationWarning,
      running build_ext
      building 'pyrfr._regression' extension
      swigging pyrfr/regression.i to pyrfr/regression_wrap.cpp
      swig.exe -python -c++ -modern -py3 -features nondynamic -I./include -o pyrfr/regression_wrap.cpp pyrfr/regression.i
      error: command 'swig.exe' failed: None
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> pyrfr

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.

An error occurred while running; how can it be fixed?

Exception in thread Thread-2:
Traceback (most recent call last):
File "/Users/apple/miniconda3/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/Users/apple/miniconda3/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/Users/apple/miniconda3/lib/python3.9/site-packages/paddle/fluid/dataloader/dataloader_iter.py", line 217, in _thread_loop
batch = self._dataset_fetcher.fetch(indices,
File "/Users/apple/miniconda3/lib/python3.9/site-packages/paddle/fluid/dataloader/fetcher.py", line 121, in fetch
data.append(self.dataset[idx])
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/datasets/cd_dataset.py", line 134, in getitem
sample = self.transforms.apply_transforms(sample)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 122, in apply_transforms
sample = op(sample)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 184, in call
sample = self.apply(sample)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 1237, in apply
sample = Resize(self.crop_size)(sample)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 184, in call
sample = self.apply(sample)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 479, in apply
sample['mask'] = self.apply_mask(sample['mask'], target_size)
File "/Users/apple/zx-workspace/GeoView/PaddleRS/paddlers/transforms/operators.py", line 427, in apply_mask
mask = cv2.resize(mask, target_size, interpolation=cv2.INTER_NEAREST)
TypeError: Expected Ptr<cv::UMat> for argument 'src'

[Bug] paddle::pybind::ThrowExceptionToPython(std::__exception_ptr::exception_ptr)

Welcome to report a PaddleRS issue. Please provide the following information so we can locate and resolve the problem quickly:

  1. PaddleRS version: develop branch
  2. PaddlePaddle version: paddle 2.4.1.post17
  3. OS: Ubuntu 20.04
  4. Python version: Python 3.9
  5. CUDA/cuDNN version: CUDA 11.7, cuDNN 8.4.0, NCCL 2.14.3
  6. Full code (if the original code was modified, provide a before/after comparison):
     In change detection I can run bit, cdnet, and fc_siam_diff, but when I run snunet, stanet, or changeformer the terminal reports an error. The command is:
     python run_task.py train cd --config "test_tipc/configs/cd/stanet/stanet_levircd.yaml"
  7. Detailed error messages and logs (with multiple GPUs, logs are saved to log/worklog.0 by default):

[2022-12-16 20:50:31,484] [INFO] download.py:119 - unique_endpoints {''}
[2022-12-16 20:50:31,485] [INFO] download.py:273 - File /home/zy/.cache/paddle/hapi/weights/resnet18.pdparams md5 checking...
[2022-12-16 20:50:31,674] [INFO] download.py:156 - Found /home/zy/.cache/paddle/hapi/weights/resnet18.pdparams

C++ Traceback (most recent call last):
paddle::pybind::ThrowExceptionToPython(std::__exception_ptr::exception_ptr)

Error Message Summary:
FatalError: 'Process abort signal' is detected by the operating system.
[TimeInfo: *** Aborted at 1671195045 (unix time) try "date -d @1671195045" if you are using GNU date ***]
[SignalInfo: *** SIGABRT (@0x3e8000a67d7) received by PD 681943 (TID 0x7f60273e3180) from PID 681943 ***]

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

  8. Steps to reproduce:
     In the same environment, running change detection models such as snunet, stanet, or changeformer reproduces the error.

  9. Additional context: (any other context about the problem)

Are there pretrained models for change detection?

Pretrained models for semantic segmentation, object detection, and other tasks can be found in PaddleSeg and PaddleDetection, but I could not find pretrained models for change detection. Will they be provided in the future?

[Bug] Dependency version feedback

Environment: AI Studio, 32 GB V100
paddlers version: develop

  1. The requirement protobuf >= 3.1.0, <= 4.22.1 causes an error; protobuf >= 3.1.0, <= 3.20.3 should be used instead. The error is:
Traceback (most recent call last):
  File "PaddleRS/tools/geojson2mask.py", line 19, in <module>
    import paddlers
  File "/home/aistudio/PaddleRS/paddlers/__init__.py", line 17, in <module>
    from paddlers.utils.env import get_environ_info, init_parallel_env
  File "/home/aistudio/PaddleRS/paddlers/utils/__init__.py", line 15, in <module>
    from . import logging
  File "/home/aistudio/PaddleRS/paddlers/utils/logging.py", line 21, in <module>
    import paddle
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/__init__.py", line 25, in <module>
    from .fluid import monkey_patch_variable
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/__init__.py", line 36, in <module>
    from . import framework
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/framework.py", line 35, in <module>
    from .proto import framework_pb2
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/proto/framework_pb2.py", line 36, in <module>
    type=None),
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 796, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
  2. With the requirement opencv-contrib-python >= 4.3.0, installing the latest version fails as below; switching to 4.3.0.28 works:
AttributeError: module 'cv2' has no attribute '_registerMatType'

[General Issue] The introduction mentions an image registration feature; is there documentation for it?


[Bug] setup.py version issue

The current setup.py reads the version number from paddlers' __init__.py, but __init__.py starts by importing a number of other packages. So when some required packages are missing from the environment, running python setup.py install directly fails, for example on AI Studio:

! git clone https://github.com/PaddlePaddle/PaddleRS.git
%cd PaddleRS
! git checkout release/1.0
! pip install --upgrade pip
! python setup.py install
%cd ..

The first failure is:

Traceback (most recent call last):
  File "setup.py", line 16, in <module>
    import paddlers
  File "/home/aistudio/PaddleRS/paddlers/__init__.py", line 17, in <module>
    from paddlers.utils.env import get_environ_info, init_parallel_env
  File "/home/aistudio/PaddleRS/paddlers/utils/__init__.py", line 17, in <module>
    from . import postprocs
  File "/home/aistudio/PaddleRS/paddlers/utils/postprocs/__init__.py", line 16, in <module>
    from .connection import cut_road_connection
  File "/home/aistudio/PaddleRS/paddlers/utils/postprocs/connection.py", line 20, in <module>
    from skimage import morphology
ModuleNotFoundError: No module named 'skimage'

and later the same happens with paddleslim and others. setup.py imports paddlers only to get the version number. Could the version be moved into a separate text file inside paddlers, read with plain file I/O in setup.py, and likewise in __init__.py? Something like:

version = open("paddlers/.version", "r").read().strip()
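
A minimal sketch of the suggested approach (the file name and layout are illustrative):

# paddlers/.version contains only the version string, e.g. "1.0.0".

# In setup.py -- no `import paddlers` required:
with open("paddlers/.version", "r") as f:
    version = f.read().strip()

# In paddlers/__init__.py, read the same file relative to the package:
import os.path as osp
with open(osp.join(osp.dirname(__file__), ".version")) as f:
    __version__ = f.read().strip()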

[Bug] geojson2mask is unusable

(pytorch37pp) PS D:\CL\PaddleRS> python D:/CL/PaddleRS/tools/geojson2mask.py --srcimg_path D:\CL\data\building/p1/test_data/train.tif --geojson_path D:\CL/data/building/p1/test_data/train.geojson --save_path D:\CL/data/building/p1/test_data
Warning: Unable to use JDE/FairMOT/ByteTrack, please install lap, for example: pip install lap, see https://github.com/gatagat/lap
2023-02-20 20:36:02,076-WARNING: post-quant-hpo is not support in system other than linux
ERROR 1: PROJ: createGeodeticReferenceFrame: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db
ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db
ERROR 1: Failed to process SRS definition:
Traceback (most recent call last):
File "D:/CL/PaddleRS/tools/geojson2mask.py", line 73, in
convert_data(args.srcimg_path, args.geojson_path, args.save_path)
File "D:\CL\PaddleRS\tools\utils\timer.py", line 23, in wrapper
result = func(*args, **kwargs)
File "D:/CL/PaddleRS/tools/geojson2mask.py", line 43, in convert_data
File "C:\Users\Administrator\AppData\Roaming\Python\Python37\site-packages\geojson\codec.py", line 54, in loads
**kwargs)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\json_init_.py", line 361, in loads
return cls(**kw).decode(s)
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "D:\softs\Anaconda3\envs\pytorch37pp\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

A question about CDDataset parameters

Hello everyone,
I'd like to ask whether multi-label settings also exist for change detection tasks. The label_list parameter in the code seems intended for multi-label tasks, right? What does such a dataset look like, and how is it used for training?

[General Issue] Installation fails when following the quick-start tutorial

Installing PaddleRS on AI Studio:

!git clone https://github.com/PaddlePaddle/PaddleRS /home/aistudio/work/PaddleRS
%cd /home/aistudio/work/PaddleRS
!git checkout develop
!pip install --upgrade pip
!pip install -r requirements.txt
!python setup.py install

The final step, python setup.py install, fails:

Traceback (most recent call last):
File "setup.py", line 16, in <module>
import paddlers
File "/home/aistudio/work/PaddleRS/paddlers/__init__.py", line 17, in <module>
from paddlers.utils.env import get_environ_info, init_parallel_env
File "/home/aistudio/work/PaddleRS/paddlers/utils/__init__.py", line 17, in <module>
from . import postprocs
File "/home/aistudio/work/PaddleRS/paddlers/utils/postprocs/__init__.py", line 15, in <module>
from .regularization import building_regularization
File "/home/aistudio/work/PaddleRS/paddlers/utils/postprocs/regularization.py", line 17, in <module>
import cv2
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/cv2/__init__.py", line 181, in <module>
bootstrap()
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/cv2/__init__.py", line 175, in bootstrap
if __load_extra_py_code_for_module("cv2", submodule, DEBUG):
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/cv2/__init__.py", line 28, in __load_extra_py_code_for_module
py_module = importlib.import_module(module_name)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/cv2/gapi/__init__.py", line 290, in <module>
cv.gapi.wip.GStreamerPipeline = cv.gapi_wip_gst_GStreamerPipeline
AttributeError: module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline'

Searching suggests opencv-python and opencv-contrib-python cannot coexist, so I added a step to uninstall opencv-python first:

!git clone https://github.com/PaddlePaddle/PaddleRS /home/aistudio/work/PaddleRS
%cd /home/aistudio/work/PaddleRS
!git checkout develop
!pip install --upgrade pip
!pip uninstall opencv-python -y
!pip install -r requirements.txt
!python setup.py install

It fails again, with a different error:

Traceback (most recent call last):
File "setup.py", line 16, in <module>
import paddlers
File "/home/aistudio/work/PaddleRS/paddlers/__init__.py", line 17, in <module>
from paddlers.utils.env import get_environ_info, init_parallel_env
File "/home/aistudio/work/PaddleRS/paddlers/utils/__init__.py", line 28, in <module>
from .visualize import map_display
File "/home/aistudio/work/PaddleRS/paddlers/utils/visualize.py", line 26, in <module>
from paddlers.transforms.functions import to_uint8
File "/home/aistudio/work/PaddleRS/paddlers/transforms/__init__.py", line 18, in <module>
from .operators import *
File "/home/aistudio/work/PaddleRS/paddlers/transforms/operators.py", line 69, in <module>
'NEAREST': cv2.INTER_NEAREST,
AttributeError: module 'cv2' has no attribute 'INTER_NEAREST'

How can this be resolved?

Environment:

  1. PaddleRS version: PaddleRS release/1.0
  2. PaddlePaddle version: paddlepaddle-gpu 2.4.0rc0.post112
  3. OS: Linux
  4. Python version: 3.7.4
  5. CUDA/cuDNN version: CUDA 11.2 / cuDNN 8.2

A question about paddlers' improvements over a PyTorch-based DeepLabV3+

Has paddlers made any optimizations for road extraction with DeepLabV3+ semantic segmentation? Under identical conditions, why does the paddlers DeepLabV3+ achieve a much higher road-extraction mIoU on remote sensing imagery than a PyTorch-based DeepLabV3+ (above 0.78 with paddle vs. only 0.69 with PyTorch)? The network structures are the same and neither uses postprocessing.

[Question] PyTorch Version of FactSeg


Hello, is there a reproduction of the FactSeg training process in PyTorch? I really need a torch version. Thank you!

PaddlePaddle Hackathon season 4, task 152

Regarding task 152, which asks to "write documentation in Chinese and English describing in detail the constructor parameters of every PaddleRS model (referring to the docstrings in paddlers/rs_models)" and to "write documentation in Chinese and English describing the constructor parameters of every data preprocessing/transform operator": I don't quite understand whether "writing documentation" means creating two new documents (one in Chinese, one in English) describing each model's constructor parameters. If so, in which directory should these files be created?

mask2geojson.py is unusable

parser = argparse.ArgumentParser()
parser.add_argument("--mask_path", type=str, required=True,
                    help="Path of mask data.")
parser.add_argument("--save_path", type=str, required=True,
                    help="Path to store the GeoJSON file (the coordinate system is WGS84).")
args = parser.parse_args()
convert_data(args.raster_path, args.geojson_path)

The parser defines mask_path and save_path, yet convert_data(args.raster_path, args.geojson_path) uses completely different attribute names. Is this script usable at all? Has it been tested? Thanks.
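
The apparent one-line fix (untested) is to pass the attributes that the parser actually defines:

convert_data(args.mask_path, args.save_path)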

ValueError: (InvalidArgument) yolo_box(): argument 'X' (position 0) must be Tensor, but got Tensor (at ..\paddle\fluid\pybind\op_function_common.cc:818)

Environment:

  1. PaddleRS version: PaddleRS 1.0b0
  2. PaddlePaddle version: PaddlePaddle 2.4.2
  3. OS: Windows
  4. Python version: Python 3.7
  5. CUDA/cuDNN version: none (integrated graphics)
  6. The code:
from paddlers.tasks.object_detector import PPYOLOv2

num_classes = len(train_dataset.labels)  # number of classes
model = PPYOLOv2(num_classes=num_classes)
model.train(
    num_epochs=30,
    train_dataset=train_dataset,
    train_batch_size=10,
    eval_dataset=eval_dataset,
    pretrain_weights="COCO",
    learning_rate=3e-5,
    warmup_steps=10,
    warmup_start_lr=0.0,
    save_interval_epochs=5,
    lr_decay_epochs=[10, 20],
    save_dir="output",
    use_vdl=True)
  7. The error:
    2023-03-08 08:56:07 [INFO] Start to evaluate(total_samples=945, total_steps=945)...

ValueError Traceback (most recent call last)
in ()
12 lr_decay_epochs=[10, 20],
13 save_dir="output",
---> 14 use_vdl=True)

D:\anaconda3-5.3.1\lib\site-packages\paddlers\tasks\object_detector.py in train(self, num_epochs, train_dataset, train_batch_size, eval_dataset, optimizer, save_interval_epochs, log_interval_steps, save_dir, pretrain_weights, learning_rate, warmup_steps, warmup_start_lr, lr_decay_epochs, lr_decay_gamma, metric, use_ema, early_stop, early_stop_patience, use_vdl, resume_checkpoint)
332 early_stop=early_stop,
333 early_stop_patience=early_stop_patience,
--> 334 use_vdl=use_vdl)
335
336 def quant_aware_train(self,

D:\anaconda3-5.3.1\lib\site-packages\paddlers\tasks\base.py in train_loop(self, num_epochs, train_dataset, train_batch_size, eval_dataset, save_interval_epochs, log_interval_steps, save_dir, ema, early_stop, early_stop_patience, use_vdl)
428 eval_dataset,
429 batch_size=eval_batch_size,
--> 430 return_details=True)
431 # 保存最优模型
432 if local_rank == 0:

D:\anaconda3-5.3.1\lib\site-packages\paddlers\tasks\object_detector.py in evaluate(self, eval_dataset, batch_size, metric, return_details)
497 with paddle.no_grad():
498 for step, data in enumerate(self.eval_data_loader):
--> 499 outputs = self.run(self.net, data, 'eval')
500 eval_metric.update(data, outputs)
501 eval_metric.accumulate()

D:\anaconda3-5.3.1\lib\site-packages\paddlers\tasks\object_detector.py in run(self, net, inputs, mode)
103
104 def run(self, net, inputs, mode):
--> 105 net_out = net(inputs)
106 if mode in ['train', 'eval']:
107 outputs = net_out

D:\anaconda3-5.3.1\lib\site-packages\paddle\fluid\dygraph\layers.py in __call__(self, *inputs, **kwargs)
1010 ):
1011 self._build_once(*inputs, **kwargs)
-> 1012 return self.forward(*inputs, **kwargs)
1013 else:
1014 return self._dygraph_call_func(*inputs, **kwargs)

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\architectures\meta_arch.py in forward(self, inputs)
69 for inp in inputs_list:
70 self.inputs = inp
---> 71 outs.append(self.get_pred())
72
73 # multi-scale test

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\architectures\yolo.py in get_pred(self)
122
123 def get_pred(self):
--> 124 return self._forward()

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\architectures\yolo.py in _forward(self)
113 bbox, bbox_num = self.post_process(
114 yolo_head_outs, self.yolo_head.mask_anchors,
--> 115 self.inputs['im_shape'], self.inputs['scale_factor'])
116 output = {'bbox': bbox, 'bbox_num': bbox_num}
117

D:\anaconda3-5.3.1\lib\site-packages\paddle\fluid\dygraph\layers.py in __call__(self, *inputs, **kwargs)
1010 ):
1011 self._build_once(*inputs, **kwargs)
-> 1012 return self.forward(*inputs, **kwargs)
1013 else:
1014 return self._dygraph_call_func(*inputs, **kwargs)

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\post_process.py in forward(self, head_out, rois, im_shape, scale_factor)
61 """
62 if self.nms is not None:
---> 63 bboxes, score = self.decode(head_out, rois, im_shape, scale_factor)
64 bbox_pred, bbox_num, _ = self.nms(bboxes, score, self.num_classes)
65 else:

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\layers.py in __call__(self, yolo_head_out, anchors, im_shape, scale_factor, var_weight)
543 self.num_classes, self.conf_thresh,
544 self.downsample_ratio // 2**i,
--> 545 self.clip_bbox, self.scale_x_y)
546 boxes_list.append(boxes)
547 scores_list.append(paddle.transpose(scores, perm=[0, 2, 1]))

D:\anaconda3-5.3.1\lib\site-packages\paddlers\models\ppdet\modeling\ops.py in yolo_box(x, origin_shape, anchors, class_num, conf_thresh, downsample_ratio, clip_bbox, scale_x_y, name)
698 conf_thresh, 'downsample_ratio', downsample_ratio,
699 'clip_bbox', clip_bbox, 'scale_x_y', scale_x_y)
--> 700 boxes, scores = core.ops.yolo_box(x, origin_shape, *attrs)
701 return boxes, scores
702 else:

ValueError: (InvalidArgument) yolo_box(): argument 'X' (position 0) must be Tensor, but got Tensor (at ..\paddle\fluid\pybind\op_function_common.cc:818)
How can this be resolved?

pip subprocess to install build dependencies did not run successfully.

Version information

  1. PaddleRS version: installed via pip (Tsinghua mirror)
  2. PaddlePaddle version: paddlepaddle-gpu 2.4.1
  3. OS: Ubuntu 22.04.1 LTS
  4. Python version: 3.10.8
  5. CUDA/cuDNN version: CUDA 11.7 / cuDNN 8.7

Steps to reproduce

pip install paddlers -i https://pypi.tuna.tsinghua.edu.cn/simple

Error message

...
                  |                               ^~~~~~~~
                  |                               |
                  |                               double
             3320 |             (((PyC@name@ScalarObject *)obj)->obval).real);
                  |             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:38: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘double’
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                                      ^~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src:3319:16: error: too few arguments to function ‘_Py_HashDouble’
             3319 |     hashreal = _Py_HashDouble((double)
                  |                ^~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:23: note: declared here
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                       ^~~~~~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src:3325:31: error: incompatible type for argument 1 of ‘_Py_HashDouble’
             3325 |     hashimag = _Py_HashDouble((double)
                  |                               ^~~~~~~~
                  |                               |
                  |                               double
             3326 |             (((PyC@name@ScalarObject *)obj)->obval).imag);
                  |             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:38: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘double’
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                                      ^~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src:3325:16: error: too few arguments to function ‘_Py_HashDouble’
             3325 |     hashimag = _Py_HashDouble((double)
                  |                ^~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:23: note: declared here
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                       ^~~~~~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src: In function ‘half_arrtype_hash’:
            numpy/core/src/multiarray/scalartypes.c.src:3341:27: error: incompatible type for argument 1 of ‘_Py_HashDouble’
             3341 |     return _Py_HashDouble(npy_half_to_double(((PyHalfScalarObject *)obj)->obval));
                  |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                  |                           |
                  |                           double
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:38: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘double’
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                                      ^~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src:3341:12: error: too few arguments to function ‘_Py_HashDouble’
             3341 |     return _Py_HashDouble(npy_half_to_double(((PyHalfScalarObject *)obj)->obval));
                  |            ^~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:23: note: declared here
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                       ^~~~~~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src: In function ‘longdouble_arrtype_hash’:
            numpy/core/src/multiarray/scalartypes.c.src:3312:1: warning: control reaches end of non-void function [-Wreturn-type]
             3312 | }
                  | ^
            numpy/core/src/multiarray/scalartypes.c.src: In function ‘float_arrtype_hash’:
            numpy/core/src/multiarray/scalartypes.c.src:3312:1: warning: control reaches end of non-void function [-Wreturn-type]
             3312 | }
                  | ^
            numpy/core/src/multiarray/scalartypes.c.src: In function ‘half_arrtype_hash’:
            numpy/core/src/multiarray/scalartypes.c.src:3342:1: warning: control reaches end of non-void function [-Wreturn-type]
             3342 | }
                  | ^
            gcc: numpy/core/src/multiarray/vdot.c
            gcc: numpy/core/src/multiarray/usertypes.c
            gcc: numpy/core/src/umath/umathmodule.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/umath/loops.c
            gcc: numpy/core/src/umath/ufunc_object.c
            gcc: numpy/core/src/umath/reduction.c
            gcc: numpy/core/src/multiarray/number.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/multiarray/nditer_templ.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/umath/scalarmath.c
            numpy/core/src/umath/loops.c.src: In function ‘PyUFunc_On_Om’:
            numpy/core/src/umath/loops.c.src:506:9: warning: ‘PyEval_CallObjectWithKeywords’ is deprecated [-Wdeprecated-declarations]
              506 |         result = PyEval_CallObject(tocall, arglist);
                  |         ^~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:130,
                             from numpy/core/src/umath/loops.c.src:7:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/ceval.h:17:43: note: declared here
               17 | Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
                  |                                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            gcc: numpy/core/src/multiarray/flagsobject.c
            gcc: numpy/core/src/npymath/npy_math.c
            gcc: numpy/core/src/multiarray/getset.c
            gcc: numpy/core/src/multiarray/nditer_api.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/npymath/ieee754.c
            gcc: numpy/core/src/npymath/halffloat.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/npymath/npy_math_complex.c
            gcc: numpy/core/src/common/array_assign.c
            gcc: numpy/core/src/common/npy_longdouble.c
            gcc: numpy/core/src/common/mem_overlap.c
            gcc: numpy/core/src/umath/extobj.c
            gcc: numpy/core/src/common/ucsnarrow.c
            numpy/core/src/common/ucsnarrow.c: In function ‘PyUnicode_FromUCS4’:
            numpy/core/src/common/ucsnarrow.c:139:9: warning: ‘PyUnicode_FromUnicode’ is deprecated [-Wdeprecated-declarations]
              139 |         ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,
                  |         ^~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/unicodeobject.h:1046,
                             from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:83,
                             from numpy/core/src/common/ucsnarrow.c:4:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/cpython/unicodeobject.h:551:42: note: declared here
              551 | Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
                  |                                          ^~~~~~~~~~~~~~~~~~~~~
            gcc: numpy/core/src/common/ufunc_override.c
            gcc: numpy/core/src/umath/cpuid.c
            gcc: numpy/core/src/common/numpyos.c
            gcc: numpy/core/src/umath/ufunc_type_resolution.c
            gcc: numpy/core/src/umath/override.c
            gcc: numpy/core/src/multiarray/mapping.c
            gcc: numpy/core/src/multiarray/methods.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/umath/matmul.c
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/umath/clip.c
            error: Command "gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/umath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Inumpy/core/include -Ibuild/src.linux-x86_64-3.1/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -c build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.c -o build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o -MMD -MF build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o.d" failed with exit status 1
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
        ERROR: Failed building wheel for numpy
        Running setup.py clean for numpy
        error: subprocess-exited-with-error
      
        × python setup.py clean did not run successfully.
        │ exit code: 1
        ╰─> [10 lines of output]
            Running from numpy source directory.
      
            `setup.py clean` is not supported, use one of the following instead:
      
              - `git clean -xdf` (cleans all files)
              - `git clean -Xdf` (cleans all versioned files, doesn't touch
                                  files that aren't checked into the git repo)
      
            Add `--force` to your command to use it anyway if you must (unsupported).
      
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
        ERROR: Failed cleaning build dir for numpy
      Failed to build numpy
      Installing collected packages: wheel, setuptools, numpy, Cython, scipy
        Running setup.py install for numpy: started
        Running setup.py install for numpy: finished with status 'error'
        error: subprocess-exited-with-error
      
        × Running setup.py install for numpy did not run successfully.
        │ exit code: 1
        ╰─> [664 lines of output]
            Running from numpy source directory.
      
            Note: if you need reliable uninstall behavior, then install
            with pip instead of using `setup.py install`:
      
              - `pip install .`       (from a git repo or downloaded source
                                       release)
              - `pip install numpy`   (last NumPy release on PyPi)
      
      
            blas_opt_info:
            blas_mkl_info:
            customize UnixCCompiler
              libraries mkl_rt not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            blis_info:
            customize UnixCCompiler
              libraries blis not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            openblas_info:
            customize UnixCCompiler
            customize UnixCCompiler
              libraries openblas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            atlas_3_10_blas_threads_info:
            Setting PTATLAS=ATLAS
            customize UnixCCompiler
              libraries tatlas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            atlas_3_10_blas_info:
            customize UnixCCompiler
              libraries satlas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            atlas_blas_threads_info:
            Setting PTATLAS=ATLAS
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            atlas_blas_info:
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            accelerate_info:
              NOT AVAILABLE
      
            /tmp/pip-install-evxpallc/numpy_48025521c4d7462ba49749eed0f7f407/numpy/distutils/system_info.py:690: UserWarning:
                Optimized (vendor) Blas libraries are not found.
                Falls back to netlib Blas library which has worse performance.
                A better performance should be easily gained by switching
                Blas library.
              self.calc_info()
            blas_info:
            customize UnixCCompiler
              libraries blas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            /tmp/pip-install-evxpallc/numpy_48025521c4d7462ba49749eed0f7f407/numpy/distutils/system_info.py:690: UserWarning:
                Blas (http://www.netlib.org/blas/) libraries not found.
                Directories to search for the libraries can be specified in the
                numpy/distutils/site.cfg file (section [blas]) or by setting
                the BLAS environment variable.
              self.calc_info()
            blas_src_info:
              NOT AVAILABLE
      
            /tmp/pip-install-evxpallc/numpy_48025521c4d7462ba49749eed0f7f407/numpy/distutils/system_info.py:690: UserWarning:
                Blas (http://www.netlib.org/blas/) sources not found.
                Directories to search for the sources can be specified in the
                numpy/distutils/site.cfg file (section [blas_src]) or by setting
                the BLAS_SRC environment variable.
              self.calc_info()
              NOT AVAILABLE
      
            /bin/sh: 1: svnversion: not found
            non-existing path in 'numpy/distutils': 'site.cfg'
            lapack_opt_info:
            lapack_mkl_info:
            customize UnixCCompiler
              libraries mkl_rt not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            openblas_lapack_info:
            customize UnixCCompiler
            customize UnixCCompiler
              libraries openblas not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            openblas_clapack_info:
            customize UnixCCompiler
            customize UnixCCompiler
              libraries openblas,lapack not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            flame_info:
            customize UnixCCompiler
              libraries flame not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            atlas_3_10_threads_info:
            Setting PTATLAS=ATLAS
            customize UnixCCompiler
              libraries lapack_atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries tatlas,tatlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries tatlas,tatlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries tatlas,tatlas not found in /usr/lib64
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib
            customize UnixCCompiler
              libraries tatlas,tatlas not found in /usr/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
            customize UnixCCompiler
              libraries tatlas,tatlas not found in /usr/lib/x86_64-linux-gnu
            <class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
              NOT AVAILABLE
      
            atlas_3_10_info:
            customize UnixCCompiler
              libraries lapack_atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries satlas,satlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries satlas,satlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries satlas,satlas not found in /usr/lib64
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib
            customize UnixCCompiler
              libraries satlas,satlas not found in /usr/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
            customize UnixCCompiler
              libraries satlas,satlas not found in /usr/lib/x86_64-linux-gnu
            <class 'numpy.distutils.system_info.atlas_3_10_info'>
              NOT AVAILABLE
      
            atlas_threads_info:
            Setting PTATLAS=ATLAS
            customize UnixCCompiler
              libraries lapack_atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in /usr/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
            customize UnixCCompiler
              libraries ptf77blas,ptcblas,atlas not found in /usr/lib/x86_64-linux-gnu
            <class 'numpy.distutils.system_info.atlas_threads_info'>
              NOT AVAILABLE
      
            atlas_info:
            customize UnixCCompiler
              libraries lapack_atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in /home/zhangzrjerry/anaconda3/envs/globalenv/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in /usr/local/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in /usr/lib64
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in /usr/lib
            customize UnixCCompiler
              libraries lapack_atlas not found in /usr/lib/x86_64-linux-gnu
            customize UnixCCompiler
              libraries f77blas,cblas,atlas not found in /usr/lib/x86_64-linux-gnu
            <class 'numpy.distutils.system_info.atlas_info'>
              NOT AVAILABLE
      
            lapack_info:
            customize UnixCCompiler
              libraries lapack not found in ['/home/zhangzrjerry/anaconda3/envs/globalenv/lib', '/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/x86_64-linux-gnu']
              NOT AVAILABLE
      
            /tmp/pip-install-evxpallc/numpy_48025521c4d7462ba49749eed0f7f407/numpy/distutils/system_info.py:1712: UserWarning:
                Lapack (http://www.netlib.org/lapack/) libraries not found.
                Directories to search for the libraries can be specified in the
                numpy/distutils/site.cfg file (section [lapack]) or by setting
                the LAPACK environment variable.
              if getattr(self, '_calc_info_{}'.format(lapack))():
            lapack_src_info:
              NOT AVAILABLE
      
            /tmp/pip-install-evxpallc/numpy_48025521c4d7462ba49749eed0f7f407/numpy/distutils/system_info.py:1712: UserWarning:
                Lapack (http://www.netlib.org/lapack/) sources not found.
                Directories to search for the sources can be specified in the
                numpy/distutils/site.cfg file (section [lapack_src]) or by setting
                the LAPACK_SRC environment variable.
              if getattr(self, '_calc_info_{}'.format(lapack))():
              NOT AVAILABLE
      
            /home/zhangzrjerry/anaconda3/envs/globalenv/lib/python3.10/site-packages/setuptools/_distutils/dist.py:262: UserWarning: Unknown distribution option: 'define_macros'
              warnings.warn(msg)
            running install
            /home/zhangzrjerry/anaconda3/envs/globalenv/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
              warnings.warn(
            running build
            running config_cc
            unifing config_cc, config, build_clib, build_ext, build commands --compiler options
            running config_fc
            unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
            running build_src
            build_src
            building py_modules sources
            building library "npymath" sources
            get_default_fcompiler: matching types: '['gnu95', 'intel', 'lahey', 'pg', 'absoft', 'nag', 'vast', 'compaq', 'intele', 'intelem', 'gnu', 'g95', 'pathf95', 'nagfor']'
            customize Gnu95FCompiler
            Could not locate executable gfortran
            Could not locate executable f95
            customize IntelFCompiler
            Could not locate executable ifort
            Could not locate executable ifc
            customize LaheyFCompiler
            Could not locate executable lf95
            customize PGroupFCompiler
            Could not locate executable pgfortran
            customize AbsoftFCompiler
            Could not locate executable f90
            Could not locate executable f77
            customize NAGFCompiler
            customize VastFCompiler
            customize CompaqFCompiler
            Could not locate executable fort
            customize IntelItaniumFCompiler
            Could not locate executable efort
            Could not locate executable efc
            customize IntelEM64TFCompiler
            customize GnuFCompiler
            Could not locate executable g77
            customize G95FCompiler
            Could not locate executable g95
            customize PathScaleFCompiler
            Could not locate executable pathf95
            customize NAGFORCompiler
            Could not locate executable nagfor
            don't know how to compile Fortran code on platform 'posix'
            C compiler: gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC
      
            compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -c'
            gcc: _configtest.c
            gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat _configtest.o -o _configtest
            success!
            removing: _configtest.c _configtest.o _configtest.o.d _configtest
            C compiler: gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC
      
            compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -c'
            gcc: _configtest.c
            _configtest.c:1:5: warning: conflicting types for built-in function ‘exp’; expected ‘double(double)’ [-Wbuiltin-declaration-mismatch]
                1 | int exp (void);
                  |     ^~~
            _configtest.c:1:1: note: ‘exp’ is declared in header ‘<math.h>’
              +++ |+#include <math.h>
                1 | int exp (void);
            gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat _configtest.o -o _configtest
            /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat/ld: _configtest.o: in function `main':
            _configtest.c:(.text.startup+0x9): undefined reference to `exp'
            collect2: error: ld returned 1 exit status
            failure.
            removing: _configtest.c _configtest.o _configtest.o.d
            C compiler: gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC
      
            compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -c'
            gcc: _configtest.c
            _configtest.c:1:5: warning: conflicting types for built-in function ‘exp’; expected ‘double(double)’ [-Wbuiltin-declaration-mismatch]
                1 | int exp (void);
                  |     ^~~
            _configtest.c:1:1: note: ‘exp’ is declared in header ‘<math.h>’
              +++ |+#include <math.h>
                1 | int exp (void);
            gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat _configtest.o -lm -o _configtest
            success!
            removing: _configtest.c _configtest.o _configtest.o.d _configtest
              adding 'build/src.linux-x86_64-3.1/numpy/core/src/npymath' to include_dirs.
            None - nothing done with h_files = ['build/src.linux-x86_64-3.1/numpy/core/src/npymath/npy_math_internal.h']
            building library "npysort" sources
              adding 'build/src.linux-x86_64-3.1/numpy/core/src/common' to include_dirs.
            None - nothing done with h_files = ['build/src.linux-x86_64-3.1/numpy/core/src/common/npy_sort.h', 'build/src.linux-x86_64-3.1/numpy/core/src/common/npy_partition.h', 'build/src.linux-x86_64-3.1/numpy/core/src/common/npy_binsearch.h']
            building extension "numpy.core._dummy" sources
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/config.h' to sources.
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/_numpyconfig.h' to sources.
            executing numpy/core/code_generators/generate_numpy_api.py
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__multiarray_api.h' to sources.
            numpy.core - nothing done with h_files = ['build/src.linux-x86_64-3.1/numpy/core/include/numpy/config.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/_numpyconfig.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__multiarray_api.h']
            building extension "numpy.core._multiarray_tests" sources
            building extension "numpy.core._multiarray_umath" sources
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/config.h' to sources.
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/_numpyconfig.h' to sources.
            executing numpy/core/code_generators/generate_numpy_api.py
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__multiarray_api.h' to sources.
            executing numpy/core/code_generators/generate_ufunc_api.py
              adding 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__ufunc_api.h' to sources.
              adding 'build/src.linux-x86_64-3.1/numpy/core/src/umath' to include_dirs.
              adding 'build/src.linux-x86_64-3.1/numpy/core/src/npymath' to include_dirs.
              adding 'build/src.linux-x86_64-3.1/numpy/core/src/common' to include_dirs.
            numpy.core - nothing done with h_files = ['build/src.linux-x86_64-3.1/numpy/core/src/umath/funcs.inc', 'build/src.linux-x86_64-3.1/numpy/core/src/umath/simd.inc', 'build/src.linux-x86_64-3.1/numpy/core/src/umath/loops.h', 'build/src.linux-x86_64-3.1/numpy/core/src/umath/matmul.h', 'build/src.linux-x86_64-3.1/numpy/core/src/umath/clip.h', 'build/src.linux-x86_64-3.1/numpy/core/src/npymath/npy_math_internal.h', 'build/src.linux-x86_64-3.1/numpy/core/src/common/templ_common.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/config.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/_numpyconfig.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__multiarray_api.h', 'build/src.linux-x86_64-3.1/numpy/core/include/numpy/__ufunc_api.h']
            building extension "numpy.core._umath_tests" sources
            building extension "numpy.core._rational_tests" sources
            building extension "numpy.core._struct_ufunc_tests" sources
            building extension "numpy.core._operand_flag_tests" sources
            building extension "numpy.fft._pocketfft_internal" sources
            building extension "numpy.linalg.lapack_lite" sources
            ### Warning:  Using unoptimized lapack ###
              adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_z_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_c_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_d_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_s_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_blas.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_config.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c.c' to sources.
            building extension "numpy.linalg._umath_linalg" sources
            ### Warning:  Using unoptimized lapack ###
              adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_z_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_c_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_d_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_s_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_lapack.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_blas.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c_config.c' to sources.
              adding 'numpy/linalg/lapack_lite/f2c.c' to sources.
            building extension "numpy.random.mt19937" sources
            building extension "numpy.random.philox" sources
            building extension "numpy.random.pcg64" sources
            building extension "numpy.random.sfc64" sources
            building extension "numpy.random.common" sources
            building extension "numpy.random.bit_generator" sources
            building extension "numpy.random.generator" sources
            building extension "numpy.random.bounded_integers" sources
            building extension "numpy.random.mtrand" sources
            building data_files sources
            build_src: building npy-pkg config files
            running build_py
            copying numpy/version.py -> build/lib.linux-x86_64-cpython-310/numpy
            copying build/src.linux-x86_64-3.1/numpy/__config__.py -> build/lib.linux-x86_64-cpython-310/numpy
            copying build/src.linux-x86_64-3.1/numpy/distutils/__config__.py -> build/lib.linux-x86_64-cpython-310/numpy/distutils
            running build_clib
            customize UnixCCompiler
            customize UnixCCompiler using build_clib
            running build_ext
            customize UnixCCompiler
            customize UnixCCompiler using build_ext
            building 'numpy.core._multiarray_umath' extension
            compiling C sources
            C compiler: gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC
      
            compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/umath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Inumpy/core/include -Ibuild/src.linux-x86_64-3.1/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -c'
            gcc: build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.c
            [elided: repeated -Wdeprecated-declarations warnings. While compiling scalartypes.c, gcc warns at scalartypes.c.src lines 475, 476, 481, 1849, and 1850 (in unicodetype_repr, unicodetype_str, and gentype_reduce) that PyUnicode_AsUnicode, _PyUnicode_get_wstr_length, and PyUnicode_FromUnicode are deprecated since Python 3.3.]
            numpy/core/src/multiarray/scalartypes.c.src: In function ‘float_arrtype_hash’:
            numpy/core/src/multiarray/scalartypes.c.src:3311:27: error: incompatible type for argument 1 of ‘_Py_HashDouble’
             3311 |     return _Py_HashDouble((double) ((Py@name@ScalarObject *)obj)->obval);
                  |                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                  |                           |
                  |                           double
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:38: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘double’
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                                      ^~~~~~~~~~
            numpy/core/src/multiarray/scalartypes.c.src:3311:12: error: too few arguments to function ‘_Py_HashDouble’
             3311 |     return _Py_HashDouble((double) ((Py@name@ScalarObject *)obj)->obval);
                  |            ^~~~~~~~~~~~~~
            In file included from /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/Python.h:77,
                             from numpy/core/src/multiarray/scalartypes.c.src:3:
            /home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10/pyhash.h:10:23: note: declared here
               10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
                  |                       ^~~~~~~~~~~~~~
            [elided: the same pair of errors, "incompatible type for argument 1 of '_Py_HashDouble'" (expected 'PyObject *' but argument is of type 'double') and "too few arguments to function '_Py_HashDouble'", repeats at scalartypes.c.src lines 3319, 3325, and 3341 for cfloat_arrtype_hash, longdouble_arrtype_hash, clongdouble_arrtype_hash, and half_arrtype_hash, followed by -Wreturn-type warnings for the affected hash functions.]
            error: Command "gcc -pthread -B /home/zhangzrjerry/anaconda3/envs/globalenv/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -O2 -isystem /home/zhangzrjerry/anaconda3/envs/globalenv/include -fPIC -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/umath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Inumpy/core/include -Ibuild/src.linux-x86_64-3.1/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/home/zhangzrjerry/anaconda3/envs/globalenv/include/python3.10 -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -Ibuild/src.linux-x86_64-3.1/numpy/core/src/common -Ibuild/src.linux-x86_64-3.1/numpy/core/src/npymath -c build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.c -o build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o -MMD -MF build/temp.linux-x86_64-cpython-310/build/src.linux-x86_64-3.1/numpy/core/src/multiarray/scalartypes.o.d" failed with exit status 1
            [end of output]
      
        note: This error originates from a subprocess, and is likely not a problem with pip.
      error: legacy-install-failure
      
      × Encountered error while trying to install package.
      ╰─> numpy
      
      note: This is an issue with the package mentioned above, not pip.
      hint: See above for output from the failure.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
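
Editor's note: the decisive failure in this log is the _Py_HashDouble error while compiling scalartypes.c. Python 3.10 changed _Py_HashDouble to take a PyObject * as its first argument (visible in the pyhash.h declaration quoted above), so a numpy source release that predates Python 3.10 cannot compile against a 3.10 interpreter. Instead of letting pip build this old numpy from source, installing a release that ships Python 3.10 wheels (numpy 1.21.3 or later, to the best of our knowledge) avoids the compile step entirely; a minimal sketch, assuming PyPI access:

pip install --upgrade pip
pip install "numpy>=1.21.3"

If numpy is instead being pulled in as a pinned build dependency of another package (the outer "install build dependencies" error suggests so), upgrading that package so its pins admit a Python 3.10-compatible numpy is the more direct fix.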

[onnx 1.13.1 requires protobuf<4,>=3.20.2, which conflicts with paddlepaddle 2.4.2 requiring protobuf<=3.20.0,>=3.1.0]

  1. PaddleRS version: PaddleRS release/1.0
  2. PaddlePaddle version: PaddlePaddle 2.4.2
  3. OS: Windows
  4. Python version: Python 3.7
  5. CUDA/cuDNN version: CUDA 10.7 / cuDNN 8.4.1

Installation progress:
1. PaddlePaddle 2.4.2 (CPU and GPU builds) and cuDNN installed successfully; verified with:

  1. python
  2. import paddle
  3. paddle.utils.run_check()

2. Where the error occurs: installing PaddleRS-release-1.0.
In the paddlers directory, run:
pip install -r requirements.txt
Error message:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
paddlepaddle 2.4.2 requires protobuf<=3.20.0,>=3.1.0, but you have protobuf 3.20.3 which is incompatible.
paddlepaddle-gpu 2.4.2.post117 requires protobuf<=3.20.0,>=3.1.0, but you have protobuf 3.20.3 which is incompatible.

Successfully installed Babel-2.12.1 Flask-Babel-3.0.1 MarkupSafe-2.1.2 PyWavelets-1.3.0 Werkzeug-2.2.3 aiofiles-23.1.0 aiohttp-3.8.4 aiosignal-1.3.1 altair-4.2.2 anyio-3.6.2 async-timeout-4.0.2 asynctest-0.13.0 attrdict-2.0.1 attrs-22.2.0 bce-python-sdk-0.8.83 branca-0.6.0 brotli-1.0.9 cffi-1.15.1 chardet-5.1.0 click-8.1.3 colorama-0.4.6 entrypoints-0.4 et-xmlfile-1.1.0 fastapi-0.95.0 ffmpy-0.3.0 filelock-3.10.7 flask-2.2.3 foliume-0.0.1 frozenlist-1.3.3 fsspec-2023.1.0 future-0.18.3 geojson-3.0.1 gevent-22.10.2 geventhttpclient-2.0.2 gradio-3.24.1 gradio-client-0.0.7 greenlet-2.0.2 grpcio-1.53.0 h11-0.14.0 httpcore-0.16.3 httpx-0.23.3 huggingface-hub-0.13.3 imageio-2.27.0 importlib-metadata-6.1.0 importlib-resources-5.12.0 itsdangerous-2.1.2 jinja2-3.1.2 joblib-1.2.0 jsonschema-4.17.3 lap-0.4.0 linkify-it-py-2.0.0 llvmlite-0.39.1 markdown-it-py-2.2.0 mdit-py-plugins-0.3.3 mdurl-0.1.2 motmetrics-1.4.0 mpmath-1.3.0 multidict-6.0.4 munch-2.5.0 natsort-8.3.1 networkx-2.6.3 numba-0.56.4 onnx-1.13.1 opencv-contrib-python-4.7.0.72 openpyxl-3.1.2 orjson-3.8.9 paddleslim-2.3.4 pandas-1.3.5 pkgutil-resolve-name-1.3.10 protobuf-3.20.3 psutil-5.9.4 pycocotools-2.0.6 pycparser-2.21 pycryptodome-3.17 pydantic-1.10.7 pydub-0.25.1 pyrsistent-0.19.3 python-multipart-0.0.6 python-rapidjson-1.10 pytz-2022.7.1 pyyaml-6.0 pyzmq-25.0.2 rarfile-4.0 rfc3986-1.5.0 scikit-image-0.19.3 scikit-learn-0.23.2 scipy-1.7.3 semantic-version-2.10.0 shapely-2.0.1 sniffio-1.3.0 starlette-0.26.1 sympy-1.10.1 threadpoolctl-3.1.0 tifffile-2021.11.2 toolz-0.12.0 tqdm-4.65.0 tritonclient-2.32.0 uc-micro-py-1.0.1 uvicorn-0.21.1 visualdl-2.5.1 websockets-11.0 x2paddle-1.4.1 xmltodict-0.13.0 yarl-1.8.2 zipp-3.15.0 zope.event-4.6 zope.interface-6.0

3. Uninstall and downgrade protobuf:
pip uninstall protobuf
pip install protobuf==3.20.0
Error message:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
onnx 1.13.1 requires protobuf<4,>=3.20.2, but you have protobuf 3.20.0 which is incompatible.
Successfully installed protobuf-3.20.0
4. Running pip install -r requirements.txt again reproduces the problem from step 2.
The root cause: onnx 1.13.1 requires protobuf<4,>=3.20.2 while paddlepaddle 2.4.2 requires protobuf<=3.20.0,>=3.1.0, and the two constraints conflict. How can this be resolved?
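
One workaround (an editorial suggestion, not from the thread): keep protobuf at 3.20.0 for paddlepaddle and install an onnx release whose protobuf constraint still admits it. onnx 1.12.0, for instance, is believed to accept protobuf<=3.20.1, so the pair below should satisfy both resolvers (the exact pins are assumptions; verify them against the packages' metadata):

pip install onnx==1.12.0 protobuf==3.20.0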

[Bug]AttributeError: module 'paddlers.transforms' has no attribute 'DecodeImg'

(pytorch37pp) cl@cl:~/pp/PaddleRS/tutorials/train/semantic_segmentation$ python deeplabv3p.py
/home/cl/anaconda3/envs/pytorch37pp/lib/python3.7/site-packages/paddlers/models/ppcls/data/preprocess/ops/timm_autoaugment.py:38: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
/home/cl/anaconda3/envs/pytorch37pp/lib/python3.7/site-packages/paddlers/models/ppcls/data/preprocess/ops/timm_autoaugment.py:38: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
_RANDOM_INTERPOLATION = (Image.BILINEAR, Image.BICUBIC)
2022-11-23 17:59:51 [INFO] Decompressing ./data/rsseg.zip...
2022-11-23 18:00:00 [DEBUG] ./data/rsseg.zip decompressed.
Traceback (most recent call last):
File "deeplabv3p.py", line 32, in
T.DecodeImg(),
AttributeError: module 'paddlers.transforms' has no attribute 'DecodeImg'
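
A likely cause (editorial note, not confirmed in the issue): T.DecodeImg belongs to the newer 1.0-series transforms API, while the traceback shows paddlers being imported from site-packages, so the tutorial script is probably running against an older installed package. A quick check, using only names visible in the traceback:

import paddlers
import paddlers.transforms as T
print(paddlers.__version__)                   # version of the package actually imported
print([n for n in dir(T) if "Decode" in n])   # an empty list means this install predates DecodeImg

If the list is empty, reinstalling paddlers from the same source tree as the tutorial (pip install . in the repository root) should bring the script and the package back in sync.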

Evaluation error

2023-03-09 15:21:30 [INFO] Start to evaluate(total_samples=2, total_steps=2)...
Traceback (most recent call last):
File "D:/CVcode/Side/PaddleRS-develop/tutorials/train/change_detection/cdnet.py", line 81, in
model.train(
File "D:\CVcode\Side\PaddleRS-develop\paddlers\tasks\change_detector.py", line 325, in train
self.train_loop(
File "D:\CVcode\Side\PaddleRS-develop\paddlers\tasks\base.py", line 429, in train_loop
eval_result = self.evaluate(
File "D:\CVcode\Side\PaddleRS-develop\paddlers\tasks\change_detector.py", line 467, in evaluate
outputs = self.run(self.net, data, 'eval')
File "D:\CVcode\Side\PaddleRS-develop\paddlers\tasks\change_detector.py", line 151, in run
pred = self.postprocess(pred, batch_restore_list)[0] # NCHW
IndexError: list index out of range
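
The IndexError means postprocess() returned an empty list for this batch. A hypothetical probe placed just above the failing line can narrow down whether the restore metadata ever reached evaluation (names are taken from the traceback; the behavior of postprocess() is assumed):

# in paddlers/tasks/change_detector.py, run(), just before the failing line
outputs = self.postprocess(pred, batch_restore_list)
print(len(outputs), len(batch_restore_list))  # 0 on the left: nothing was restored for this batch
pred = outputs[0]  # NCHW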

[General Issue] Why does the model perform worse after loading pretrained weights?

Thanks for your question. Please provide the following information to help us locate and resolve the problem quickly:

  1. PaddleRS version: (version and branch, e.g. PaddleRS release/1.0)
    develop branch
  2. PaddlePaddle version: (e.g. PaddlePaddle 2.3.0)
    paddle2.4.1.post17
  3. OS: (e.g. Linux/Windows/MacOS)
    ubuntu20.04
  4. Python version: (e.g. Python3.7/8)
    python3.9
  5. CUDA/cuDNN version: (e.g. CUDA10.2/cuDNN 7.6.5)
    cuda11.7, cudnn8.4.0, nccl2.14.3
  6. Other: (anything else relevant to the problem)

I tested BIT and CDNet on the LEVIR-CD dataset, training each for 100 epochs. Without pretrain_weights:
BIT test results: 'iou': 0.75169, 'f1': 0.85246, 'oacc': 0.98627, 'kappa': 0.85105;
CDNet test results: 'iou': 0.66071, 'f1': 0.79570, 'oacc': 0.97966, 'kappa': 0.78500.
After adding pretrain_weights='BIT_LEVIRCD' or pretrain_weights='cdnet_LEVIRCD' to model.train() in run_task.py, the 100-epoch results dropped to:
BIT: 'iou': 0.68747, 'f1': 0.81480, 'oacc': 0.98233, 'kappa': 0.80557;
CDNet: 'iou': 0.50427, 'f1': 0.67045, 'oacc': 0.97045, 'kappa': 0.65529.
I cannot understand why accuracy drops so much after loading pretrained weights.
The only difference between the two runs is the pretrain_weights argument added to model.train(), as shown below:
[screenshot attachment: the model.train() call with pretrain_weights]

The training log from the run with pretrained weights is attached:
trainlog.log
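
A plausible explanation (editorial note; the thread does not confirm it): 'BIT_LEVIRCD' and 'cdnet_LEVIRCD' are weights already converged on LEVIR-CD, so restarting 100 epochs with the tutorial's default learning rate first pushes the model away from the converged solution and then has to relearn it. When fine-tuning from converged weights, a much smaller learning rate is the usual remedy; a sketch assuming the tutorial's train() keywords:

model.train(
    num_epochs=100,
    train_dataset=train_dataset,   # dataset/batch settings mirror the original run
    train_batch_size=8,
    eval_dataset=eval_dataset,
    save_dir='output/bit_finetune',
    pretrain_weights='BIT_LEVIRCD',
    learning_rate=0.0002,          # assumption: roughly a tenth of the from-scratch rate
)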

[Hackathon Round 4]

PaddleRS has published four tasks in Hackathon Round 4, listed below. Questions about the tasks can be discussed in this issue.

No.150: Integrate PaddleDetection's rotated box detection into PaddleRS

  • Tags: Python, deep learning
  • Difficulty: advanced ⭐️⭐️
  • Description:
  • Deliverables:
    • Code and documentation, merged into the PaddleRS toolkit
    • PaddleRS contribution guide (link)
  • Requirements:
    • Proficiency in Python, PaddlePaddle, PaddleRS, and object detection

No.151: Package a PaddleRS runtime environment and create an end-to-end remote sensing building extraction tutorial

  • Tags: Python, Docker, documentation
  • Difficulty: basic ⭐️
  • Description:
    • Build a Docker image bundling the latest released versions of PaddleRS, EISeg, and GeoView
    • Using the officially provided sample remote sensing imagery and this image, run the full building extraction workflow end to end, from data annotation (EISeg) through model training (PaddleRS) to deployment (GeoView), and write it up as a tutorial.
    • Imagery (Baidu Netdisk): https://pan.baidu.com/s/1BXoQX6Z4KONbkm1gw6BXNw?pwd=ye92 extraction code: ye92
  • Deliverables:
    • Docker image and tutorial
  • Requirements:
    • Proficiency in Python, Docker, and PaddleRS; familiarity with remote sensing scenarios

No.152: Improve the PaddleRS API documentation

  • Tech tags: documentation
  • Difficulty: Basic ⭐️
  • Description:
    • Write Chinese and English documentation detailing the constructor parameters of every PaddleRS model (see the docstrings in paddlers/rs_models)
    • Write Chinese and English documentation detailing the constructor parameters of every PaddleRS data preprocessing / data transform operator
    • Extend the PaddleRS quick start documentation (Chinese and English) by adding "model evaluation" and "model deployment" sections on top of the existing "model training" content.
  • Deliverables:
    • Tutorial documents
  • Requirements:
    • Familiarity with PaddleRS

No.153: PaddleRS English documentation

  • Tech tags: documentation
  • Difficulty: Basic ⭐️
  • Description:
    • Translate the documents under the PaddleRS docs directory into English
    • Add English comments to the example scripts under the PaddleRS tutorials directory
  • Deliverables:
    • Tutorial documents
  • Requirements:
    • Familiarity with PaddleRS and good command of English

[General Issue]4

Thanks for your question. To help us locate and resolve the problem quickly, please provide the following information:

  1. PaddleRS version: (please provide the version number and branch, e.g. PaddleRS release/0.0.0)
  2. PaddlePaddle version: (e.g. PaddlePaddle 2.4.2)
  3. OS: (e.g. Windows 10)
  4. Python version: (e.g. Python3.7)
    Other: (any other information relevant to the problem)
    Problem 1: git cannot clone the PaddleRS repository properly on Windows.
    Problem 2: after installing paddlers on Windows, running import paddlers in Python prints:
    Can not use conditional_random_field. Please install pydensecrf first.
    Warning: Unable to use OC-SORT, please install filterpy, for example: pip install filterpy, see https://github.com/rlabbe/filterpy
    Warning: import ppdet from source directory without installing, run 'python setup.py install' to install ppdet firstly
    2023-04-02 15:59:58,190-WARNING: post-quant-hpo is not support in system other than linux
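Note: the four messages above are warnings about optional dependencies and platform-specific features, not fatal errors; installing the named packages (e.g. pip install filterpy, as the warning itself suggests; pydensecrf may need build tools on Windows) silences the first two. A bare import still succeeds: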

paddlers.__version__
'0.0.0.dev0'

In Python, running:

from paddlers.deploy import Predictor
raises:
Can not use conditional_random_field. Please install pydensecrf first.
Warning: Unable to use OC-SORT, please install filterpy, for example: pip install filterpy, see https://github.com/rlabbe/filterpy
Warning: import ppdet from source directory without installing, run 'python setup.py install' to install ppdet firstly
2023-04-02 16:01:36,700-WARNING: post-quant-hpo is not support in system other than linux
Traceback (most recent call last):
File "", line 1, in
File "D:\others\Anaconda3\envs\paddleRS\lib\site-packages\paddlers_init_.py", line 23, in
with open(os.path.join(os.path.dirname(file), ".version"), 'r') as fv:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\others\Anaconda3\envs\paddleRS\lib\site-packages\paddlers\.version'
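The .version file is installed together with the paddlers package, so its absence points to an incomplete or mixed installation (for example, a pip-installed package partially overwritten by a source copy). This is an assumption rather than a confirmed diagnosis, but a reasonable first step is to uninstall paddlers (pip uninstall paddlers) and reinstall it from the PaddleRS repository root with pip install ., so that package data files are copied along with the code.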

Error while importing paddlers

Running import paddlers raises the following error:
ImportError: cannot import name '_legacy_C_ops' from 'paddle' (/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/__init__.py)
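This usually indicates a PaddlePaddle installation that predates the _legacy_C_ops module (to our knowledge it appeared around PaddlePaddle 2.3). A quick check that mirrors the failing import:

import paddle
print(paddle.__version__)
try:
    from paddle import _legacy_C_ops  # present in newer PaddlePaddle releases
    print("ok")
except ImportError:
    print("PaddlePaddle too old for this PaddleRS version")

If the import fails, upgrading PaddlePaddle to a release that meets the PaddleRS requirements (for example pip install -U paddlepaddle-gpu, matched to your CUDA version) is a reasonable fix.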
