zuoqing1988 / zqcnn

An inference framework with many useful demos. If you find it useful, please star it.

License: MIT License

C 71.62% C++ 27.33% MATLAB 0.31% Python 0.64% Batchfile 0.01% CMake 0.09% Shell 0.01%

zqcnn's Introduction

Introduction

ZQCNN is an inference framework that runs on Windows, Linux, and arm-linux, and ships with several face detection and recognition demos.

Main development environment: VS2015 with Update 3

MKL download: available here

Core modules support Linux:

If the build following build-with-cmake.md does not fully complete, you can build only ZQ_GEMM, ZQCNN, and whichever samples you want to test.

Core modules support arm-linux:

If the build following build-with-cmake.md does not fully complete, you can build only ZQ_GEMM, ZQCNN, and whichever samples you want to test.

BUG: cmake .. -DSIMD_ARCH_TYPE=arm64 -DBLAS_TYPE=openblas_zq_gemm

Ideally, whichever of OpenBLAS and ZQ_GEMM computes a given convolution faster would be used (I chose the branch by timing on a Cortex-A72). However, this option does not yet achieve the intended effect; you need to manually define the following in ZQ_CNN_CompileConfig.h:

#define ZQ_CNN_USE_ZQ_GEMM 1
#define ZQ_CNN_USE_BLAS_GEMM 1

and you can comment out:

line 67: #if defined(ZQ_CNN_USE_BOTH_BLAS_ZQ_GEMM)
line 70: #endif

Training

Train SSD with PyTorch: https://github.com/zuoqing1988/pytorch-ssd-for-ZQCNN

Train gender/age: https://github.com/zuoqing1988/train-GenderAge

Train MTCNN: https://github.com/zuoqing1988/train-mtcnn

Train SSD: https://github.com/zuoqing1988/train-ssd

Train MTCNN for head detection: https://github.com/zuoqing1988/train-mtcnn-head

Changelog

Update 2022-08-18: optimized the video-mode 106-point face pipeline and added a head-pose/gaze model.

The demo program is SampleVideoFaceDetection_Interface.cpp

The original 106-point .pb model and the head-pose/gaze .pb model are in TensorFlow_to_ZQCNN

Update 2022-04-20: support for SSD models trained with pytorch-ssd-for-ZQCNN

Update 2020-05-08: added an OCR sample, SampleOCR

There is no text detection capability yet; the input image must already be cropped.

Model download

Link: https://pan.baidu.com/s/1O75LRBjXWwPXqAshLMJV3w extraction code: f2q8

Update 2020-03-22: a set of MTCNN models that can detect masked faces

model\det1-dw20-plus.zqparams
model\det1-dw20-plus.nchwbin

model\det2-dw24-p0.zqparams	
model\det2-dw24-p0.nchwbin

model\det3-dw48-p0.zqparams
model\det3-dw48-p0.nchwbin

Update 2019-07-08: code for converting ZQCNN models to MNN models

Read it here

Update 2019-05-28: open-sourced a near-commercial-grade 106-point model

ZQCNN format: det5-dw112 in the model folder

mxnet format: link: https://pan.baidu.com/s/19DTG3rmkct8AiEu0l3DYjw extraction code: qjzk

Update 2019-03-16: reached 800 stars; released a more accurate 106-point landmark model

ZQCNN format: det5-dw96-v2s.zqparams and det5-dw96-v2s.nchwbin in the model folder

mxnet format: Lnet106_96_v2s, extraction code: r5h2

Update 2019-02-14: reached 700 stars; released a curated set of face-detection models

ZQCNN format: six Pnet variants, two Rnet, two Onet, two Lnet

| Pnet (six variants) | Input size | FLOPs (excl. bbox) | Notes |
|---|---|---|---|
| Pnet20_v00 | 320x240 | 8.5M | comparable to libfacedetection |
| Pnet20_v0 | 320x240 | 11.6M | comparable to libfacedetection |
| Pnet20_v1 | 320x240 | 14.6M | |
| Pnet20_v2 | 320x240 | 18.4M | comparable to the original Pnet |
| Pnet16_v0 | 256x192 | 7.5M | stride=4 |
| Pnet16_v1 | 256x192 | 9.8M | stride=4 |

| Rnet (two variants) | Input size | FLOPs | Notes |
|---|---|---|---|
| Rnet_v1 | 24x24 | 0.5M | comparable to the original Rnet |
| Rnet_v2 | 24x24 | 1.4M | |

| Onet (two variants) | Input size | FLOPs | Notes |
|---|---|---|---|
| Onet_v1 | 48x48 | 2.0M | without landmark |
| Onet_v2 | 48x48 | 3.2M | without landmark |

| Lnet (two variants) | Input size | FLOPs | Notes |
|---|---|---|---|
| Lnet_v2 | 48x48 | 3.5M | lnet_basenum=16 |
| Lnet_v2 | 48x48 | 10.8M | lnet_basenum=32 |

Update 2019-01-31: reached 600 stars; released an MTCNN head-detection model

Trained on the HollywoodHeads data; the results are mediocre, but usable.

Head detection: mtcnn-head, mxnet-v0 & zqcnn-v0

Update 2019-01-24: core modules support Linux

If the build following build-with-cmake.md does not fully complete, you can build only ZQ_GEMM, ZQCNN, and whichever samples you want to test.

Update 2019-01-17

Changed ZQ_CNN_MTCNN.h

(1) Setting thread_num below 1 in init forces the Pnet stage to run multi-threaded, i.e. tiled, which prevents memory blow-up when searching for small faces in large images.

(2) The rnet/onet/lnet input sizes no longer have to be 24/48/48, but only square (width equal to height) inputs are supported.

(3) rnet/onet/lnet now process in batches, reducing memory usage when there are very many faces.

Update 2019-01-15: celebrating 500 stars; released a 106-point landmark model

mxnet format & zqcnn format

Update 2019-01-04: celebrating 400 stars; released a fast face model

mxnet format

zqcnn format

The v3 version is still not good enough; a v4 version will follow, roughly as the diagram below shows.

MTCNN-v4 diagram

Update 2018-12-25: a 106-point landmark model that is not open-sourced

Money is a bit tight, so I am earning some on the side.

landmark106-normal-1000.jpg shows the landmarks produced by model\det5-dw48-1000.nchwbin

landmark106-normal.jpg and landmark106-big.jpg come from two models I trained but did not open-source.

The normal model is 2.1 MB with 11.4M FLOPs and takes 0.6-0.7 ms single-threaded on PC; the big model is 7.56 MB with 36.4M FLOPs and takes 1.5-1.6 ms single-threaded on PC.

Update 2018-12-20: added an MTCNN 106-point landmark model

Try it in SampleMTCNN (the released model is a weaker one; the better one is reserved for sale).

SampleLnet106 includes timing: about 0.6-0.7 ms single-threaded (E5-1650V4, 3.6 GHz).

Update 2018-12-03: compiling models into code

In ZQCNN.sln, model2code compiles a model into source code:

model2code.exe param_file model_file code_file prefix

Then add this to your project:

#include"code_file"

and load the model with:

LoadFromBuffer(prefix_param, prefix_param_len, prefix_model, prefix_model_len)
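The general idea behind a tool like model2code can be sketched as follows: turn a binary model blob into a C array so the model can be linked straight into the executable. This is an illustration of the technique, not the tool's actual output format; the prefix naming merely follows the README's convention.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Turn raw model bytes into C source declaring an array and its length,
// so the result can be #included and passed to a LoadFromBuffer-style API.
std::string bytes_to_c_array(const std::string& prefix,
                             const std::vector<unsigned char>& data) {
    std::string out = "const unsigned char " + prefix + "_model[] = {";
    char buf[8];
    for (size_t i = 0; i < data.size(); ++i) {
        std::snprintf(buf, sizeof(buf), "0x%02x,", data[i]);
        out += buf;
    }
    out += "};\nconst int " + prefix + "_model_len = " +
           std::to_string(data.size()) + ";\n";
    return out;
}
```

Writing the returned string to a header file gives you a buffer and length pair to feed to the load-from-memory entry point.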

Update 2018-11-21

Supports models trained with mxnet-ssd; mean_val must be set to 127.5 for SampleSSD to run correctly.

However, models trained with ReLU seem to give wrong results. I trained one with PReLU from scratch; it only reaches mAP=0.48, just passable. Download it here.

After changing the network you must first train a classification model on ImageNet and then train SSD, otherwise mAP will not come up.

Update 2018-11-14

(1) Optimized ZQ_GEMM. On a 3.6 GHz machine MKL peaks at about 46 GFLOPS and ZQ_GEMM at about 32 GFLOPS; overall face-model time with ZQ_GEMM is about 1.5x that with MKL.

Note: ZQ_GEMM built with VS2017 is faster than with VS2015, but SampleMTCNN then runs incorrectly with multiple threads (perhaps due to different OpenMP support rules?).

(2) Very small weights can now be dropped when loading a model. When a model is much slower than expected, it is usually because the weight values are too small.

Update 2018-11-06

(1) Removed all OpenMP multi-threading code from the layers; the workloads are too small and it was slower than single-threaded.

(2) cblas_gemm can use MKL, but the MKL bundled in 3rdparty is very slow on my machine and the DLLs are large, so I did not put them in 3rdparty\bin; please download from here.

Second update 2018-10-30: when using MTCNN to find small faces in large images, apply a Gaussian blur first.

Update 2018-10-30: the BatchNorm eps issue

(1) The default eps of BatchNorm and BatchNormScale is 0.

(2) If the model was converted from mxnet with mxnet2zqcnn, eps is added to var during conversion and stored as the new var.

(3) If the model was converted from another platform, either add eps to var manually, or append eps=? to the BatchNorm/BatchNormScale line (where ? is that platform's eps for the layer).

Note: to avoid division by zero, the divisor is computed as sqrt(__max(var+eps, 1e-32)); if var+eps is below 1e-32 the result will differ slightly from the theoretical value. After today's change, however, the LFW accuracy of the face models below matches minicaffe's results exactly.
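The clamped divisor can be illustrated with a minimal sketch. This mirrors the formula above; it is not the library's actual layer code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// BatchNorm normalization with the variance floored at 1e-32 after adding
// eps, so the division can never hit zero: (x - mean) / sqrt(max(var+eps, 1e-32)).
float batchnorm_scalar(float x, float mean, float var, float eps) {
    float denom = std::sqrt(std::max(var + eps, 1e-32f));
    return (x - mean) / denom;
}

std::vector<float> batchnorm(const std::vector<float>& xs,
                             float mean, float var, float eps) {
    std::vector<float> out;
    out.reserve(xs.size());
    for (float x : xs) out.push_back(batchnorm_scalar(x, mean, var, eps));
    return out;
}
```

With var = 0 and eps = 0 the clamp keeps the result finite instead of dividing by zero, at the cost of a tiny deviation from the theoretical value, exactly as the note describes.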

Update 2018-10-26

MTCNN supports multi-threading. For large images with many small faces, 8 threads can be more than 4x faster than a single thread; test with data\test2.jpg.

Update 2018-10-15

Improved MTCNN's NMS strategy: 1. at each scale, a local maximum in Pnet's NMS must cover a minimum number of non-maxima, set via a parameter; 2. when Pnet's resolution is too large, NMS is performed blockwise.
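A minimal greedy NMS sketch makes rule 1 concrete. The box layout and IoU threshold here are illustrative assumptions, and the "must cover some number of non-maxima" rule is modeled by a min_covered parameter; this is not the library's actual implementation.

```cpp
#include <algorithm>
#include <vector>

struct Box { float x1, y1, x2, y2, score; };

// Intersection-over-union of two axis-aligned boxes.
static float iou(const Box& a, const Box& b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float iw = std::max(0.0f, ix2 - ix1), ih = std::max(0.0f, iy2 - iy1);
    float inter = iw * ih;
    float areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
    float areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
    return inter / (areaA + areaB - inter);
}

// Greedy NMS that keeps a local maximum only if it suppresses at least
// min_covered other boxes, mirroring rule 1 above.
std::vector<Box> nms(std::vector<Box> boxes, float thresh, int min_covered) {
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.score > b.score; });
    std::vector<bool> removed(boxes.size(), false);
    std::vector<Box> kept;
    for (size_t i = 0; i < boxes.size(); ++i) {
        if (removed[i]) continue;
        int covered = 0;
        for (size_t j = i + 1; j < boxes.size(); ++j) {
            if (!removed[j] && iou(boxes[i], boxes[j]) > thresh) {
                removed[j] = true;
                ++covered;
            }
        }
        if (covered >= min_covered) kept.push_back(boxes[i]);
    }
    return kept;
}
```

With min_covered = 0 this degenerates to ordinary greedy NMS; raising it discards isolated detections that no other candidate supports, which is useful for pruning Pnet's false positives.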

Update 2018-09-25

Supports insightface's GNAP; convert models automatically with mxnet2zqcnn (see mxnet2zqcnn). You can try MobileFaceNet-GNAP.

Update 2018-09-20

(1) Updated the TAR-FAR evaluation method for face recognition models; follow How-to-evaluate-TAR-FAR-on-your-dataset to build your own test set and evaluate model accuracy.

(2) Following (1), I cleaned CASIA-WebFace to build two test sets, webface1000X50 and webface5000X20, and evaluated the main face recognition models I have open-sourced.

Update 2018-09-13

(1) Models can be loaded from memory.

(2) Added the compile configuration ZQ_CNN_CompileConfig.h, which lets you choose whether to use _mm_fmadd_ps and _mm256_fmadd_ps (benchmark to see whether it is faster or slower for you).

Update 2018-09-12: steps for training at 112*96 (i.e. sphereface's input size) with insightface: InsightFace: how to train 112*96

Update 2018-08-15

(1) Added natural-scene text detection, with a model converted from TextBoxes. Personally I find it too slow and not very accurate.

Note that the PriorBoxLayer used in that project differs from the one in SSD; to export ZQCNN-format weights I modified deploy.prototxt and saved it as deploy_tmp.prototxt. Download the model here.

(2) Added NSFW image classification, with a model converted from open_nsfw; I have not measured its accuracy.

Download the model here.

Update 2018-08-10

Successfully converted mxnet's GenderAge-r50 and Arcface-LResNet100E-IR models; the steps are the same as for MobileFaceNet. See mxnet2zqcnn.

The Model Zoo below has models I converted by hand, which should be slightly faster than the automatically converted ones.

Open ZQCNN.sln and run SampleGenderAge to see the results. On my E5-1650V4 CPU the single-threaded time fluctuates a lot, averaging about 1900-2000 ms; four threads take 400-odd ms.

Update 2018-08-09

Added mxnet2zqcnn and successfully converted mxnet's MobileFaceNet to ZQCNN format (no guarantee that other models convert; ZQCNN does not yet support many layers). See mxnet2zqcnn.

Update 2018-08-07

BUG fix: Convolution, DepthwiseConvolution, InnerProduct, and BatchNormScale/Scale previously defaulted to with_bias=true; they now default to with_bias=false. In other words, the earlier code could not load these layers when they had no bias.

For example, a layer like the following was previously assumed to have a bias_term; now it is assumed to have none:

Convolution name=conv1 bottom=data top=conv1 num_output=10 kernel_size=3 stride=1
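How such a "key=value" layer line might be parsed, with with_bias defaulting to false as the fix describes, can be sketched as follows. This mirrors the described behavior; it is not the library's actual parser.

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse a layer definition line of the form
// "Type key1=val1 key2=val2 ..." into a key/value map.
std::map<std::string, std::string> parse_layer_line(const std::string& line) {
    std::istringstream iss(line);
    std::map<std::string, std::string> kv;
    std::string tok;
    iss >> kv["type"];          // first token is the layer type
    while (iss >> tok) {
        size_t eq = tok.find('=');
        if (eq != std::string::npos)
            kv[tok.substr(0, eq)] = tok.substr(eq + 1);
    }
    return kv;
}

// After the fix: a layer has a bias only if with_bias=true is present.
bool layer_has_bias(const std::map<std::string, std::string>& kv) {
    auto it = kv.find("with_bias");
    return it != kv.end() && it->second == "true";  // default: no bias
}
```

For the example line above, layer_has_bias now returns false unless with_bias=true is written explicitly, which is exactly the behavior change the fix introduced.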

Update 2018-08-06

Added face-recognition accuracy testing on the LFW database. Open ZQlibFaceID.sln to see the related projects.

Because the C++ code's numerical precision differs slightly from MATLAB's, the measured accuracy also differs slightly, but by less than 0.1%.

Update 2018-08-03

Supports multi-threading (accelerated via OpenMP). Note that multi-threading is currently slower than single-threaded.

Update 2018-07-26

Supports MobileNet-SSD. To convert a caffemodel to my format, see export_mobilenet_SSD_caffemodel_to_nchw_binary.m; you need matcaffe built. You can try this build: caffe-ZQ.

Update 2018-06-05

Keeping up with the times: released the source code. I forgot to mention it depends on OpenBLAS; I use the build bundled with mini-caffe, since the one I compiled myself is very slow.

Model Zoo

Face detection

MTCNN-author-version: the original MTCNN models converted to ZQCNN format

MTCNN-ZQ-version

Face recognition (unless noted otherwise, models are trained on ms1m-refine-v2)

| Model | LFW accuracy (ZQCNN) | LFW accuracy (OpenCV 3.4.2) | LFW accuracy (minicaffe) | Time (ZQCNN) | Notes |
|---|---|---|---|---|---|
| MobileFaceNet-res2-6-10-2-dim128 | 99.67%-99.55% (matlab crop), 99.72-99.60% (C++ crop) | 99.63%-99.65% (matlab crop), 99.68-99.70% (C++ crop) | 99.62%-99.65% (matlab crop), 99.68-99.60% (C++ crop) | similar to dim256 | same structure as dim256, only the output dimension differs |
| MobileFaceNet-res2-6-10-2-dim256 | 99.60%-99.60% (matlab crop), 99.62-99.62% (C++ crop) | 99.73%-99.68% (matlab crop), 99.78-99.68% (C++ crop) | 99.55%-99.63% (matlab crop), 99.60-99.62% (C++ crop) | about 21-22 ms single-threaded, 11-12 ms with 4 threads, 3.6 GHz | structure in the download link; trained on faces_emore |
| MobileFaceNet-res2-6-10-2-dim512 | 99.52%-99.60% (matlab crop), 99.63-99.72% (C++ crop) | 99.70%-99.67% (matlab crop), 99.77-99.77% (C++ crop) | 99.55%-99.62% (matlab crop), 99.62-99.68% (C++ crop) | similar to dim256 | same structure as dim256, only the output dimension differs; thanks to moli for training this model |
| MobileFaceNet-res4-8-16-4-dim128 | 99.72%-99.72% (matlab crop), 99.72-99.68% (C++ crop) | 99.82%-99.83% (matlab crop), 99.80-99.78% (C++ crop) | 99.72%-99.72% (matlab crop), 99.72-99.68% (C++ crop) | similar to dim256 | same structure as dim256, only the output dimension differs |
| MobileFaceNet-res4-8-16-4-dim256 | 99.78%-99.78% (matlab crop), 99.75-99.75% (C++ crop) | 99.82%-99.82% (matlab crop), 99.80-99.82% (C++ crop) | 99.78%-99.78% (matlab crop), 99.73-99.73% (C++ crop) | about 32-33 ms single-threaded, 16-19 ms with 4 threads, 3.6 GHz | structure in the download link; trained on faces_emore |
| MobileFaceNet-res4-8-16-4-dim512 | 99.80%-99.73% (matlab crop), 99.85-99.83% (C++ crop) | 99.83%-99.82% (matlab crop), 99.87-99.83% (C++ crop) | 99.80%-99.73% (matlab crop), 99.85-99.82% (C++ crop) | similar to dim256 | same structure as dim256, only the output dimension differs; thanks to moli for training this model |

| Model \ test set webface1000X50 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 | thresh @FAR=1e-5 | TAR @FAR=1e-5 |
|---|---|---|---|---|---|---|
| MobileFaceNet-res2-6-10-2-dim128 | 0.78785 | 9.274% | 0.66616 | 40.459% | 0.45855 | 92.716% |
| MobileFaceNet-res2-6-10-2-dim256 | 0.77708 | 7.839% | 0.63872 | 40.934% | 0.43182 | 92.605% |
| MobileFaceNet-res2-6-10-2-dim512 | 0.76699 | 8.197% | 0.63452 | 38.774% | 0.41572 | 93.000% |
| MobileFaceNet-res4-8-16-4-dim128 | 0.79268 | 9.626% | 0.65770 | 48.252% | 0.45431 | 95.576% |
| MobileFaceNet-res4-8-16-4-dim256 | 0.76858 | 9.220% | 0.62852 | 46.195% | 0.40010 | 96.929% |
| MobileFaceNet-res4-8-16-4-dim512 | 0.76287 | 9.296% | 0.62555 | 44.775% | 0.39047 | 97.347% |

| Model \ test set webface5000X20 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 | thresh @FAR=1e-5 | TAR @FAR=1e-5 |
|---|---|---|---|---|---|---|
| MobileFaceNet-res2-6-10-2-dim128 | 0.70933 | 29.558% | 0.51732 | 85.160% | 0.45108 | 94.313% |
| MobileFaceNet-res2-6-10-2-dim256 | 0.68897 | 28.376% | 0.48820 | 85.278% | 0.42386 | 94.244% |
| MobileFaceNet-res2-6-10-2-dim512 | 0.68126 | 27.708% | 0.47260 | 85.840% | 0.40727 | 94.632% |
| MobileFaceNet-res4-8-16-4-dim128 | 0.71238 | 32.153% | 0.51391 | 89.525% | 0.44667 | 96.583% |
| MobileFaceNet-res4-8-16-4-dim256 | 0.68490 | 30.639% | 0.46092 | 91.900% | 0.39198 | 97.696% |
| MobileFaceNet-res4-8-16-4-dim512 | 0.67303 | 32.404% | 0.45216 | 92.453% | 0.38344 | 98.003% |

| Model \ test set TAO (ids: 6606, ims: 87210) | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 | thresh @FAR=1e-5 | TAR @FAR=1e-5 |
|---|---|---|---|---|---|---|
| MobileFaceNet-res2-6-10-2-dim128 | 0.92204 | 01.282% | 0.88107 | 06.837% | 0.78302 | 41.740% |
| MobileFaceNet-res2-6-10-2-dim256 | 0.91361 | 01.275% | 0.86750 | 07.081% | 0.76099 | 42.188% |
| MobileFaceNet-res2-6-10-2-dim512 | 0.90657 | 01.448% | 0.86061 | 07.299% | 0.75488 | 41.956% |
| MobileFaceNet-res4-8-16-4-dim128 | 0.92098 | 01.347% | 0.88233 | 06.795% | 0.78711 | 41.856% |
| MobileFaceNet-res4-8-16-4-dim256 | 0.90862 | 01.376% | 0.86397 | 07.083% | 0.75975 | 42.430% |
| MobileFaceNet-res4-8-16-4-dim512 | 0.90710 | 01.353% | 0.86190 | 06.948% | 0.75518 | 42.241% |

| Model \ test set ZQCNN-Face_5000_X_20 | thresh @FAR=1e-8 | TAR @FAR=1e-8 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 |
|---|---|---|---|---|---|---|
| MobileFaceNet-GNAP | 0.73537 | 11.722% | 0.69903 | 20.110% | 0.65734 | 33.189% |
| MobileFaceNet-res2-6-10-2-dim128 | 0.64772 | 40.527% | 0.60485 | 55.345% | 0.55571 | 70.986% |
| MobileFaceNet-res2-6-10-2-dim256 | 0.61647 | 42.046% | 0.57561 | 55.801% | 0.52852 | 70.622% |
| MobileFaceNet-res2-6-10-2-dim512 | 0.59725 | 44.651% | 0.55690 | 58.220% | 0.51134 | 72.294% |
| MobileFaceNet-res4-8-16-4-dim128 | 0.64519 | 47.735% | 0.60247 | 62.882% | 0.55342 | 77.777% |
| MobileFaceNet-res4-8-16-4-dim256 | 0.58229 | 56.977% | 0.54582 | 69.118% | 0.49763 | 82.161% |
| MobileFaceNet-res4-8-16-4-dim512 | 0.58296 | 54.731% | 0.54219 | 68.613% | 0.49174 | 82.812% |
| MobileFaceNet-res8-16-32-8-dim512 | 0.58058 | 61.826% | 0.53841 | 75.281% | 0.49098 | 86.554% |

| Model \ test set ZQCNN-Face_5000_X_20 | thresh @FAR=1e-8 | TAR @FAR=1e-8 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 |
|---|---|---|---|---|---|---|
| ArcFace-r34-v2 (not trained by me) | 0.61953 | 47.103% | 0.57375 | 62.207% | 0.52226 | 76.758% |
| ArcFace-r50 (ms1m-refine-v1, not trained by me) | 0.61299 | 50.594% | 0.56658 | 65.757% | 0.51637 | 79.207% |
| ArcFace-r100 (not trained by me) | 0.57350 | 67.434% | 0.53136 | 79.944% | 0.48164 | 90.147% |

| Model \ test set ZQCNN-Face_12000_X_10-40 | thresh @FAR=1e-8 | TAR @FAR=1e-8 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 |
|---|---|---|---|---|---|---|
| MobileFaceNet-res2-6-10-2-dim128 | 0.64507 | 39.100% | 0.60347 | 53.638% | 0.55492 | 69.516% |
| MobileFaceNet-res2-6-10-2-dim256 | 0.61589 | 39.864% | 0.57402 | 54.179% | 0.52596 | 69.658% |
| MobileFaceNet-res2-6-10-2-dim512 | 0.60030 | 41.309% | 0.55806 | 55.676% | 0.50984 | 70.979% |
| MobileFaceNet-res4-8-16-4-dim128 | 0.64443 | 45.764% | 0.60060 | 61.564% | 0.55168 | 76.776% |
| MobileFaceNet-res4-8-16-4-dim256 | 0.58879 | 52.542% | 0.54497 | 67.597% | 0.49547 | 81.495% |
| MobileFaceNet-res4-8-16-4-dim512 | 0.58492 | 51.752% | 0.54085 | 67.104% | 0.49010 | 81.836% |
| MobileFaceNet-res8-16-32-8-dim512 | 0.58119 | 61.412% | 0.53700 | 75.520% | 0.48997 | 86.647% |

| Model \ test set ZQCNN-Face_12000_X_10-40 | thresh @FAR=1e-8 | TAR @FAR=1e-8 | thresh @FAR=1e-7 | TAR @FAR=1e-7 | thresh @FAR=1e-6 | TAR @FAR=1e-6 |
|---|---|---|---|---|---|---|
| ArcFace-r34-v2 (not trained by me) | 0.61904 | 45.072% | 0.57173 | 60.964% | 0.52062 | 75.789% |
| ArcFace-r50 (ms1m-refine-v1, not trained by me) | 0.61412 | 48.155% | 0.56749 | 63.676% | 0.51537 | 78.138% |
| ArcFace-r100 (not trained by me) | 0.57891 | 63.854% | 0.53337 | 78.129% | 0.48079 | 89.579% |
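At verification time, a threshold from these tables is applied by comparing the similarity of two embeddings against it. A minimal sketch, assuming cosine similarity between float embeddings (the tables do not state the metric explicitly; this is an illustration, not the library's code):

```cpp
#include <cmath>
#include <vector>

// Cosine similarity of two embeddings of the same length.
float cosine_similarity(const std::vector<float>& a,
                        const std::vector<float>& b) {
    float dot = 0, na = 0, nb = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
    }
    return dot / (std::sqrt(na) * std::sqrt(nb));
}

// Accept the pair as the same identity when the similarity reaches the
// threshold, e.g. a thresh@FAR value from the tables above.
bool same_identity(const std::vector<float>& a,
                   const std::vector<float>& b, float thresh) {
    return cosine_similarity(a, b) >= thresh;
}
```

Choosing a larger threshold trades a lower false accept rate (FAR) for a lower true accept rate (TAR), which is exactly the trade-off each table row quantifies.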

For more face models, see Model-Zoo-for-Face-Recognition

Expression recognition

FacialEmotion: a seven-class expression model trained on Fer2013

Gender and age recognition

GenderAge-ZQ: a model trained with train-GenderAge

Object detection

MobileNetSSD: MobileNet-SSD converted to ZQCNN format

MobileNetSSD-Mouth: for use with SampleDetectMouth

Text detection

TextBoxes: TextBoxes converted to ZQCNN format

NSFW classification

NSFW: open_nsfw converted to ZQCNN format

Related articles

(1) How much accuracy is lost when face feature vectors are stored as integers?

(2) Speeding up similarity computation over tens of millions of face feature vectors

(3) Building a forward library faster than mini-caffe

(4) Precision issues in vector dot products

(5) ZQCNN supports depthwise convolution; SphereFaceNet-10 reworked with mobilenet blocks

(6) Keeping up with the times: releasing some source code

(7) ZQCNN supports SSD, about 30% faster than mini-caffe

(8) ZQCNN's SSD supports arbitrary input resolutions with the same model

(9) A 99.78%-accuracy face recognition model in ZQCNN format

(10) ZQCNN adds LFW test code for face recognition

(11) Hugging mxnet tightly: starting work on mxnet2zqcnn

(12) A large-scale face test set, and how to build your own

(13) A matrix formulation of ordinary convolution, mobilenet convolution, and global average pooling

(14) ZQ_FastFaceDetector: a faster and more accurate face detection library

Android build notes

  1. Edit the NDK path and the OpenCV Android SDK path in build.sh.
  2. In CMakeLists.txt, change #add_definitions(-march=native) add_definitions(-mfpu=neon) add_definitions(-mfloat-abi=hard) to #add_definitions(-march=native) add_definitions(-mfpu=neon) add_definitions(-mfloat-abi=softfp).
  3. This should build the two libraries ZQ_GEMM and ZQCNN. To build SampleMTCNN, fix the parts that fail to compile by following the error messages, mainly OpenMP and the timing functions.

zqcnn's People

Contributors

fawdlstty, garricklin, xiaodaxia, zuoqing1988


zqcnn's Issues

Question about SphereFace06bn and Mobile-SphereFace10bn in the model zoo

About these two models: SphereFace06bn is more accurate than Mobile-SphereFace10bn and faster too. Is it marked "not recommended" just because the model is larger?
SphereFace06bn | 98.7%-98.8% | - | not recommended
Mobile-SphereFace10bn | 98.6%-98.7% | good cost/performance ratio

SSE/AVX build problem

I changed your ZQ_CNN_USE_SSETYPE to ZQ_CNN_SSETYPE_AVX, and compiling zq_cnn_relu_32f_align_c.c then fails, while building with ZQ_CNN_SSETYPE_AVX2 does not. Could a #undef somewhere in your code be causing the error? VS2015 build output below:
zq_cnn_prelu_32f_align_c.c
1> zq_cnn_relu_32f_align_c.c
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): warning C4013: 'zq_mm_fmadd_ps' undefined; assuming extern returning int
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): error C2440: 'function': cannot convert from 'int' to '__m256'
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(213): warning C4024: '_mm256_store_ps': different types for formal and actual parameter 2
1>e:\publicsvn\zqcnn-v0.0\trunk\zqcnn\layers_c\zq_cnn_relu_32f_align_c_raw.h(216): error C2440: 'function': cannot convert from 'int' to '__m256'

Model loading problem for head detection in ZQCNN

The head-detection ZQCNN models you released contain six model files. Loading them in SampleMTCNN in the order below fails with:
failed to open file model/headdet1-dw24-fast.zqparams
The loading code is:
mtcnn.Init("model/headdet1-dw20-fast.zqparams", "model/headdet1-dw20-fast-16.nchwbin",
"model/headdet1-dw24-fast.zqparams", "model/headdet1-dw24-fast-16.nchwbin",
"model/headdet1-dw48-fast.zqparams", "model/headdet1-dw48-fast-16.nchwbin",
thread_num, false);
What is going wrong?

max face

Where in ZQCNN is the best place to add a maximum face-size limit, and would adding such an upper bound make detection faster?

Runtime error

I updated your ZQCNN library and debugged your SampleMTCNN code with VS2015. Executing the function zq_cnn_conv_no_padding_32f_kernel3x3_C3_omp raises Illegal Instruction. Does my machine not support the AVX vector extensions, or do I need to install something else?

Train Mobilefacenet-res Model

Hi, I've tried out some mobilefacenet-res models and I'm impressed by the performance.

How do you come up with those net structures, is there any paper about those specific mobilefacenet-res models or you just try different network configurations and find out which one is better?

And is the training code for mobilefacenet-res available to finetune or maybe train the net from scratch? If not, will it be hard to modify the training code in the insightface repository to train mobilefacenet-res models?

Many thanks.

Speed question about MTCNN

Thanks for your work!
I tested SamplePet on Company's i7 server PC. It costs 300ms/frame for 'det1.zqparams' and 100ms for 'det1_dw20_fast.zqparams'. While on the same server PC, the original MTCNN runs 20ms/frame. Did I make some mistakes? HELP!

please use cmake.txt

hi 👍
This is a good project. Could you provide a CMake profile and Linux setup? Please give me a doc.

Parameter settings for head detection

Hello. For head detection, how should the parameters in SampleMTCNN be set to reproduce the result shown in head5.jpg of the train-mtcnn-head project? I set them as below and the results are very poor.
if (!mtcnn.Init("model/headdet1-dw20-fast.zqparams", "model/headdet1-dw20-fast-16.nchwbin",
"model/headdet2-dw24-fast.zqparams", "model/headdet2-dw24-fast-16.nchwbin",
"model/headdet3-dw48-fast.zqparams", "model/headdet3-dw48-fast-16.nchwbin", thread_num,false))
{
cout << "failed to init!\n";
return EXIT_FAILURE;
}

mtcnn.SetPara(image0.cols, image0.rows, 20, 0.4, 0.5, 0.7, 0.4, 0.5, 0.5, 0.709, 3, 20, 2, false);

Any advice is appreciated, thanks.

mxnet model converted to caffe gives inconsistent outputs

Hello,
I used your mxnet2caffe code to convert the mobilefacenet model. Running check_results shows that the outputs differ; could you advise where the problem is?

[
[[ 0.017276   -0.4222807  -0.07165871  0.7135863  -0.12447087  0.06586622
  -0.16812389 -0.13903007  0.04360439 -0.22122525  0.0435597   0.11936011
  -0.23474081 -0.3577653  -0.1679546  -0.13885504  0.16633391  0.25873622
  -0.33021694 -0.11786669 -0.02872146 -0.477086   -0.11106284  0.11695821
  -0.12934922  0.255884   -0.1942169   0.12630616 -0.23865224  0.18448791
   0.10559033 -0.088712   -0.04691517  0.26358733  0.18018812  0.14805603
  -0.2553117   0.5503354   0.5195001  -0.34429112 -0.50690997  0.46173176
   0.3151336   0.02643948 -0.2646044  -0.13466054 -0.02762383  0.37433448
  -0.44442838 -0.14106162 -0.03115291 -0.7350194  -0.55731094  0.02851256
  -0.01867473 -0.22451565  0.0659817   0.03267149 -0.07442598 -0.00755793
  -0.05205738  0.04383722  0.42879567 -0.25555742 -0.23541915  0.41604832
  -0.06175407 -0.2166583  -0.14290327 -0.01595623  0.64858043 -0.13120429
  -0.19330469 -0.2647119   0.02685797  0.38061866  0.1258779  -0.1579753
   0.01175252 -0.3725114  -0.4544143   0.2203731   0.07688574 -0.19915707
  -0.05907691 -0.26913345  0.12543568  0.20025031 -0.2774456   0.12992482
   0.0010437   0.06121505  0.1540927   0.10459166  0.08840342  0.43114156
   0.17922197 -0.493483   -0.7070367  -0.5172252  -0.46376437 -0.29713553
  -0.03089709  0.47260872  0.22349297  0.19986378  0.2616748   0.1465985
   0.1783604  -0.23194014  0.10261983  0.21745919  0.4326741  -0.04612612
   0.6061288   0.34292504  0.51188225 -0.23691243  0.13759308 -0.22194609
  -0.5215853  -0.13520943 -0.18358245  0.13728186 -0.27748924  0.07769431
  -0.33542892  0.42282426]]
<NDArray 1x128 @cpu(0)>]
[[ 6.2732756e-02 -5.9952173e-02  2.2413557e-02 -3.1469498e-02
   6.5951027e-02 -2.8367542e-02  3.2204002e-02  7.7424929e-02
  -9.7055621e-02 -6.9253430e-02 -4.5930523e-02 -7.6132409e-02
  -5.0844617e-02 -6.4346619e-02 -1.5729671e-02  5.3271189e-02
   5.3257262e-04 -5.2365875e-03  1.4655352e-02 -4.3126322e-02
  -1.0661008e-02 -8.7465690e-03 -2.8252389e-02 -7.0996016e-02
  -1.9464381e-03 -2.8986113e-02 -3.6444955e-02  3.6922205e-02
  -3.8776599e-02 -3.2794930e-02 -9.8875435e-03  6.2595280e-03
  -5.0185490e-03 -3.4949131e-02  4.1359983e-02  3.1499282e-02
  -2.8646499e-02 -5.9114099e-02  1.7148184e-02 -4.8444647e-02
  -9.7029276e-02  4.4342317e-04 -2.7198687e-02 -2.8752765e-02
   4.1433349e-02  1.2933503e-02 -4.9672965e-02 -3.5463576e-03
   4.7990292e-02  2.5810581e-03  3.1260908e-02  1.5094576e-02
  -1.3907658e-02 -2.7079349e-03  5.8484511e-03 -1.4168183e-02
   1.7932409e-02 -3.7768781e-02 -1.8586438e-02 -1.0990779e-02
  -4.7573805e-02 -5.8607054e-03 -4.9528219e-02 -2.2688817e-02
  -6.9002919e-02 -2.1926099e-03 -3.6008663e-02 -2.7441682e-02
  -2.9889676e-03 -3.6228251e-02 -8.7704889e-02 -3.7730277e-02
   4.5867212e-02 -1.6259417e-02  6.2029455e-03 -1.7036878e-02
  -3.9387021e-02  1.5584700e-02  3.4619402e-03 -4.7294483e-02
  -4.2392161e-02 -6.7054451e-02 -3.1520490e-02 -1.1477423e-01
   7.1737356e-03  5.5631641e-02  4.3232810e-02 -2.7725386e-02
   3.7312735e-02  1.1762280e-02 -1.0883956e-01  4.5307346e-02
  -3.2287091e-02 -1.7094158e-02 -1.6419319e-02 -3.3274885e-02
  -3.8385842e-02  4.6371106e-02 -4.4565663e-02  1.4135682e-02
  -2.6000340e-02 -4.3430896e-03 -5.8525845e-02  2.6870819e-02
  -1.5334387e-02  2.2968277e-05  6.3471328e-03 -4.4829208e-02
  -1.8722905e-02 -1.7468868e-02 -5.2707534e-02  3.7894405e-02
  -1.9262727e-02  7.9293303e-02 -2.5021762e-02  2.7414378e-02
   2.9555865e-02 -2.5431961e-03 -7.2565585e-02  3.5143670e-02
   4.9341552e-02 -8.4194131e-02  4.3513231e-02 -2.1488478e-03
  -2.2843815e-03  4.7319688e-02  6.3874900e-02  1.4988746e-02]]

The first is the mxnet output; the second is the caffe model output.

I first used json2prototxt.py to convert the json to a prototxt, then in the generated prototxt changed bottom: "_mulscalar0" to the previous layer's "data".

Then I used mxnet2caffe.py to convert the mxnet model to a caffe model,

and finally check_results.py to produce the outputs above; the generated 128-dimensional embeddings differ.

Thanks for any advice.

Question about the ROI function around line 93 of ZQ_CNN_Tensor4D.h

A small issue I found while debugging your code: in memset(dst_slice_ptr - dstPixelStep*dst_borderW + dstWidthStep*dst_borderH, 0, sizeof(float)*dstWidthStep*dst_borderH); isn't the pad offset in the height direction pointing to the wrong place? Shouldn't it be the lower edge of the ROI region?

a small bug

In Pnet, with the computation before the fix, the mapH and mapW sizes differ from the scoreH = score->GetH(); scoreW = score->GetW(); dimensions computed by pnet[0].Forward(pnet_images[i]). After the fix the computation is correct.
void _compute_Pnet_single_thread(std::vector<std::vector<float> >& maps,
std::vector<int>& mapH, std::vector<int>& mapW)
{
int scale_num = 0;
for (int i = 0; i < scales.size(); i++)
{
int changedH = (int)ceil(height*scales[i]);
int changedW = (int)ceil(width*scales[i]);
if (changedH < pnet_size || changedW < pnet_size)
continue;
scale_num++;
// before the fix:
/*mapH.push_back((changedH - pnet_size) / pnet_stride + 1);
mapW.push_back((changedW - pnet_size) / pnet_stride + 1);*/
// after the fix:
mapH.push_back((int)ceil((changedH - pnet_size)*1.0 / pnet_stride) + 1);
mapW.push_back((int)ceil((changedW - pnet_size)*1.0 / pnet_stride) + 1);
}
}
Test image:
data\keliamoniz1.jpg 640x480
Result before the fix: (screenshot 1)
Result after the fix: (screenshot 2)
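The difference between the two formulas in the issue above can be checked with a self-contained sketch: the original version uses flooring integer division, the fix rounds up, and the two disagree whenever (changed - pnet_size) is not a multiple of the stride.

```cpp
#include <cmath>

// Map size as computed before the fix: integer division floors the result.
int map_size_floor(int changed, int pnet_size, int pnet_stride) {
    return (changed - pnet_size) / pnet_stride + 1;
}

// Map size as computed after the fix: round up before adding 1.
int map_size_ceil(int changed, int pnet_size, int pnet_stride) {
    return (int)std::ceil((changed - pnet_size) * 1.0 / pnet_stride) + 1;
}
```

With pnet_size=20 and stride=4, a 480-pixel side gives the same answer either way, but a 479-pixel side differs by one, which is exactly the mismatch against score->GetH()/GetW() the issue reports.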

mtcnn error

Hello, thank you very much for sharing. I am now training mtcnn and hit the following error:

F:\train-mtcnn-head>set MXNET_CUDNN_AUTOTUNE_DEFAULT=0

F:\train-mtcnn-head>python example\train_P_net20.py --gpus 0 --lr 0.001 --image_set train_20_1 --prefix model/pnet20 --end_epoch 16 --lr_epoch 8,14 --frequent 10 --batch_size 1000 --thread_num 24
D:\ProgramData\Anaconda2\lib\site-packages\urllib3\contrib\pyopenssl.py:46: DeprecationWarning: OpenSSL.rand is deprecated - you should use os.urandom instead
  import OpenSSL.SSL
Called with argument:
Namespace(batch_size=1000, begin_epoch=0, dataset_path='data/mtcnn', end_epoch=16, epoch=0, frequent=10, gpu_ids='0', image_set='train_20_1', lr=0.001, lr_epoch='8,14', prefix='model/pnet20', pretrained='model/pnet20', resume=False, root_path='data', thread_num=24)
init weights and bias:
hello3
F:\train-mtcnn-head\example\train.py:38: DeprecationWarning: Calling initializer with init(str, NDArray) has been deprecated. Please use init(mx.init.InitDesc(...), NDArray) instead.
  init(k, args[k])
F:\train-mtcnn-head\example\train.py:55: DeprecationWarning: Calling initializer with init(str, NDArray) has been deprecated. Please use init(mx.init.InitDesc(...), NDArray) instead.
  init(k, auxs[k])
lr 0.001 lr_epoch [8, 14] lr_epoch_diff [8, 14]
Traceback (most recent call last):
  File "example\train_P_net20.py", line 62, in <module>
    args.begin_epoch, args.end_epoch, args.batch_size, args.thread_num, args.frequent, args.lr, lr_epoch, args.resume)
  File "example\train_P_net20.py", line 17, in train_P_net20
    20, True, True, frequent, not resume, lr, lr_epoch)
  File "F:\train-mtcnn-head\example\train.py", line 90, in train_net
    arg_params=args, aux_params=auxs, begin_epoch=begin_epoch, num_epoch=end_epoch)
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\base_module.py", line 460, in fit
    for_training=True, force_rebind=force_rebind)
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\module.py", line 429, in bind
    state_names=self._state_names)
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 265, in __init__
    self.bind_exec(data_shapes, label_shapes, shared_group)
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 361, in bind_exec
    shared_group))
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\module\executor_group.py", line 639, in _bind_ith_exec
    shared_buffer=shared_data_arrays, **input_shapes)
  File "D:\ProgramData\Anaconda2\lib\site-packages\mxnet\symbol\symbol.py", line 1518, in simple_bind
    raise RuntimeError(error_msg)
RuntimeError: simple_bind error. Arguments:
data: (1000, 3L, 20L, 20L)
bbox_target: (1000, 4L)
label: (1000,)
[14:26:44] C:\projects\mxnet-distro-win\mxnet-build\src\storage\storage.cc:125: Compile with USE_CUDA=1 to enable GPU usage

Have you seen this error before? Is my mxnet version wrong?

Face alignment method

Hi all, I'm going to test mobilefacenet-res4-8-16-4-dim512's performance.
Before testing, can anyone tell me how the face alignment is done here? Is it done using cv2 warpAffine as in the original mxnet insightface implementation, or something else?

I've seen a cpp file that may contain the alignment procedure but I am not familiar with cpp, so any help or explanation is appreciated.

Help with MTCNN speed

Thanks for your work. I ran into problems while testing and would like some help:
1. On Windows with an i5-8400 (single thread), MTCNN on 640*480 (4.jpg) takes 32 ms (after changing ZQ_CNN_SSETYPE_AVX2 to ZQ_CNN_SSETYPE_AVX). Can this be sped up further? How?
2. I tried replacing the first two models with the fast versions you uploaded, and model initialization fails. How can I fix this?

Thanks.

Questions about inference speed of arcface-r50

Hi there. Thank you for sharing this work. I have some questions about the inference speed. The inference time table says arcface-r50 takes about 700ms on a 3.6GHz CPU. I am wondering how this result was measured. Specifically, did you use only ZQCNN for acceleration, or did you also use other libraries, like OpenMP or MKL-DNN, etc.?

FacialEmotion caffe model

Hi all,

I'm playing with the FacialEmotion caffe model. I can see the input to the network is of dim (10, 1, 42, 42). But the fer2013 dataset seems to have 48x48 images.

Can anyone elaborate on what the input to the model is?

Many thanks.

Training at 112*96 fails; how should I modify it?

My data is all 112*96, and I use the MobileFaceNet-res2-6-10-2-dim256 training script. I made the changes described in "InsightFace: how to train 112*96", but training raises the error below. How should I modify it?
mxnet.base.MXNetError: [12:15:09] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\tensor./matrix_op-inl.h:659: Check failed: e <= len (104 vs. 96) slicing with end[1]=104 exceeds limit of 96

Not thread-safe

When one process uses this library for face detection (MTCNN) and face recognition (MobilefacenetRes) at the same time, face detection misbehaves; presumably the underlying classes are not thread-safe.

Image Save

I want to save your rnet input images to inspect them, but what I write out is not right. My code is below:
ZQ_CNN_Tensor4D_NHW_C_Align128bit& net_Img = task_rnet_images[0];
int nWidth = net_Img.GetW();
int nHeight = net_Img.GetH();
auto data = net_Img.GetFirstPixelPtr();
cv::Mat mat_(nHeight, nWidth, CV_32FC4, data);
cv::imwrite("e:\temp.jpg", mat_);

Memory leak in the face recognition class

Test code below:

for (int i = 0; i < 1000; i++)
{
	std::string prototxt_file =   "./model/model-r50-am.zqparams";
	std::string caffemodel_file = "./model/model-r50-am.nchwbin";
	std::string out_blob_name = "fc5";
	ZQ_FaceRecognizerArcFaceZQCNN* pFaceZQCNN = new ZQ_FaceRecognizerArcFaceZQCNN();
	if (!pFaceZQCNN->Init("", prototxt_file, caffemodel_file, out_blob_name))
	{
		cout << "failed to init arcface\n";
		return 0;
	}
	delete pFaceZQCNN;
}

Memory rises on every loop iteration; delete pFaceZQCNN does not fully release the memory.

Problem when running MTCNN and face recognition together

The code below reproduces the problem. MTCNN performs face detection, then face features are extracted; after a certain number of iterations (a few hundred to a thousand), face detection starts failing (mtcnn.Find returns false).

#include "ZQ_CNN_MTCNN.h"
#include "ZQ_FaceRecognizerArcFaceZQCNN.h"

#include <iostream>
#include "opencv2\opencv.hpp"

#include "ZQ_CNN_CompileConfig.h"
#if ZQ_CNN_USE_BLAS_GEMM
#include <cblas.h>
#pragma comment(lib,"libopenblas.lib")
#endif

using namespace std;
using namespace cv;
using namespace ZQ;

int main()
{
#if ZQ_CNN_USE_BLAS_GEMM
openblas_set_num_threads(4);
#endif

for (int i = 0; i<10000;i++)
{
	printf("i = %d\n", i);
	Mat image0 = cv::imread("data\\AE_.jpg", 1);
	if (image0.empty())
	{
		cout << "empty image\n";
		return EXIT_FAILURE;
	}

	std::string prototxt_file =   "model\\mobilefacenet-v0.zqparams";
	std::string caffemodel_file = "model\\mobilefacenet-v0.nchwbin";
	std::string out_blob_name = "fc5";
	ZQ_FaceRecognizerArcFaceZQCNN* pFaceZQCNN = new ZQ_FaceRecognizerArcFaceZQCNN();
	if (!pFaceZQCNN->Init("", prototxt_file, caffemodel_file, out_blob_name))
	{
		cout << "failed to init arcface\n";
		return 0;
	}

	int FeatureLength = pFaceZQCNN->GetFeatDim();
	float * outFeature = new float[FeatureLength];

	std::vector<ZQ_CNN_BBox> thirdBbox;
	ZQ_CNN_MTCNN mtcnn;

	if (!mtcnn.Init("model\\det1.zqparams", "model\\det1.nchwbin", "model\\det2.zqparams",
		"model\\det2.nchwbin", "model\\det3.zqparams", "model\\det3.nchwbin"))
	{
		cout << "failed to init!\n";
		return EXIT_FAILURE;
	}

	mtcnn.SetPara(image0.cols, image0.rows, 60, 0.6, 0.7, 0.7, 0.5, 0.5, 0.5);
	if (!mtcnn.Find(image0.data, image0.cols, image0.rows, image0.step[0], thirdBbox))
	{
		cout << "failed to find face!\n";
		return EXIT_FAILURE;
	}

	float face5point_x[5] = { 0 };
	float face5point_y[5] = { 0 };

	for (int num = 0; num < 5; num++)
	{
		face5point_x[num] = *(thirdBbox[0].ppoint + num);
		face5point_y[num] = *(thirdBbox[0].ppoint + num + 5);
	}

	Mat NormFace = Mat(pFaceZQCNN->GetCropHeight(), pFaceZQCNN->GetCropWidth(), CV_8UC3);

	pFaceZQCNN->CropImage(image0.data, image0.cols, image0.rows, image0.step[0], ZQ_PIXEL_FMT_BGR, face5point_x, face5point_y,
		(unsigned char*)(NormFace.data), NormFace.step[0]);

	if (!pFaceZQCNN->ExtractFeature((unsigned char*)(NormFace.data), NormFace.step[0],
		ZQ_PIXEL_FMT_BGR, outFeature, true))
	{
		cout << "failed to ExtractFeature!\n";
		return EXIT_FAILURE;
	}

	delete pFaceZQCNN;
	delete[]outFeature;
}

return EXIT_SUCCESS;

}

prob.txt

Hello, what does prob.txt in the data represent, and how is it generated?

Build error: function definitions not found

Building the latest ZQCNN, many functions referenced in ZQ_CNN_Forward_SSEUtils.cpp have no definition found, e.g. zq_cnn_avgpooling_nopadding_suredivided_32f_align256bit_kernel2x2_omp

ZQCNN ported to Linux crashes at runtime

After porting to Linux, running SampleSphereFaceNet crashes in ZQ::ZQ_CNN_Tensor4D::ConvertFromBGR.

If
ZQ_CNN_Tensor4D_NHW_C_Align128bit input0, input1;
is changed to ZQ_CNN_Tensor4D_NHW_C_Align0 input0, input1;
there is no problem. Why is that? Will ZQCNN get an official Linux version?

landmark106 convert to ncnn Error

Hello. Converting the 106-point landmark model to ncnn produces undefined layers; inspecting the json, the problem seems to be at this spot:
(image)
Could you advise whether I need to add the two layers you implemented to mxnet?

libopenblas question

Do you have 32-bit lib and dll builds of this convolution library? I built one myself with cmake, but it does not seem as fast as yours.

pnet_stage returns false

Hi, I rebuilt the project under Linux and tested with SampleMTCNN as a demo. Model loading succeeds, but mtcnn.Find106 returns false at the pnet_stage step. I am puzzled; the code was copied without any modification.

Some questions about model GenderAge-r50

I converted the mxnet model of GenderAge-r50 according to the wiki and compiled the GenderAge sample successfully. But I found that every face image (3*112*112) gets the result of 28/29 years and Male.

Can not download model files from Baidu

It appears to be impossible for me to download files from the Baidu drive. It forces me to download an EXE file, which I cannot run because I am on Linux (even if Baidu programs weren't well-known spyware).

Is there another way to download these models?
