dlunion / DBFace
DBFace is a real-time, single-stage detector for face detection, with faster speed and higher accuracy.
While reading a folder of face pictures for testing, the process was forced to stop with no error message. I suspect the model ran out of memory; how can I solve this?
The assignment to reg_tlrb only happens in the branch without rotation; why is part of the data from the rotation augmentation ignored here?
Also, during augmentation, if some landmark points or the bbox fall outside the image, how are such targets handled? Are they still computed with the out-of-range coordinates?
How do I train the model?
When will the training part be released?
I trained your model, but the performance is not as good. On the WIDER FACE Hard set it reached 0.792 (yours is 0.847).
Could you please share some training tricks, or upload your train.py?
Running on CPU, an error occurs:
TypeError: div(): argument 'other' (position 1) must be Tensor, not torch.Size
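A common trigger for this error is dividing a tensor by `tensor.shape`, which is a `torch.Size` object rather than a `Tensor`; some PyTorch builds reject it on the CPU path. A minimal sketch of the pattern and two workarounds (this reproduces the error class in general, not necessarily the exact line inside DBFace):

```python
import torch

x = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# On some PyTorch versions this raises:
#   TypeError: div(): argument 'other' (position 1) must be Tensor, not torch.Size
# y = x / x.shape

# Pass a plain Python number or a real Tensor instead of the torch.Size object:
y = x / float(x.shape[1])               # divide by the column count
z = x / torch.full_like(x, x.shape[1])  # equivalent, with a tensor divisor
```

Either form runs identically on CPU and GPU.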
Your network performs very well on the Hard subset. Did you use any data augmentation specifically targeting the Hard subset?
As in the title, I cannot reproduce the results and am not sure what the problem is. How did everyone else do it?
The error message is as follows:
torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'cv::dnn::dnn4_v20200310::TorchImporter::readObject'
Why is it a Lua type? I looked at your DBFace.py but could not figure out which definition makes the model unreadable by cv::dnn.
@dlunion
Dear Author:
Thank you very much.
The dummy is set to dummy = torch.zeros((1, 3, 32, 32)).cuda():
Line 34 in 1b408db
However, the model's input size seems to be 800x800:
I am not sure if you mean that we should adjust the model input size (for example, to 512x512), re-train the model, and then use that very size (1, 3, 512, 512) to generate the final ONNX. Thank you very much.
Hi, thank you for your useful work.
I ran your eval.py and simply modified the data path, but I got the following error:
Traceback (most recent call last):
File "eval.py", line 43, in <module>
files, anns = zip(*common.load_webface("/home/v-chenqy/data/widerface/val/wider_val.txt", "/home/v-chenqy/data/widerface/val/images"))
File "/home/v-chenqy/PyTorchFace/DBFace/train/small/common.py", line 470, in load_webface
    facials.append([float(item) for item in line.split(" ")])
File "/home/v-chenqy/PyTorchFace/DBFace/train/small/common.py", line 470, in <listcomp>
    facials.append([float(item) for item in line.split(" ")])
ValueError: could not convert string to float: '/24--Soldier_Firing/24_Soldier_Firing_Soldier_Firing_24_329.jpg'
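The crash suggests a format mismatch: wider_val.txt lists only image paths, while load_webface appears to expect the RetinaFace-style label format (a path line followed by numeric annotation rows), so `float()` chokes on a path. A hypothetical tolerant parser (names are illustrative, not DBFace's actual code):

```python
# Sketch of a parser that treats path-like lines as new records instead of
# crashing when float() meets a filename. Assumes each image path line is
# either prefixed with "#" or ends in ".jpg", followed by numeric rows.
def parse_labels(lines):
    files, anns, current = [], [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("#") or line.endswith(".jpg"):
            current = []
            files.append(line.lstrip("# "))
            anns.append(current)
        else:
            current.append([float(v) for v in line.split()])
    return files, anns
```

With a pure path list like wider_val.txt this yields empty annotation lists rather than a ValueError.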
Hello. The confidence scores of faces detected by DBFace seem generally low, mostly between 0.4 and 0.7, with very few above 0.88. What usually causes this, and how can it be improved?
Hi,
I created the training dataset by downloading the WIDER FACE images and labels and putting them under the 'webface' directory. Then I started training as follows:
$ python train/small/train-small-H-keep12-ignoresmall.py
Downloading: "http://zifuture.com:1000/fs/public_models/mbv3small-09ace125.pth" to /home/tfs/.cache/torch/hub/checkpoints/mbv3small-09ace125.pth
Traceback (most recent call last):
File "train/small/train-small-H-keep12-ignoresmall.py", line 276, in <module>
    app = App("webface/train/label.txt", "webface/WIDER_train/images")
File "train/small/train-small-H-keep12-ignoresmall.py", line 166, in __init__
self.model.init_weights()
File "/nas4/tfs/COVID-19/DBFace/train/small/dbface.py", line 242, in init_weights
self.bb.load_pretrain()
File "/nas4/tfs/COVID-19/DBFace/train/small/dbface.py", line 89, in load_pretrain
checkpoint = model_zoo.load_url(f"{_MODEL_URL_DOMAIN}/{_MODEL_URL_SMALL}")
File "/home/tfs/venv_dbface_nni/lib64/python3.6/site-packages/torch/hub.py", line 481, in load_state_dict_from_url
download_url_to_file(url, cached_file, hash_prefix, progress=progress)
File "/home/tfs/venv_dbface_nni/lib64/python3.6/site-packages/torch/hub.py", line 379, in download_url_to_file
u = urlopen(req)
File "/usr/lib64/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python3.6/urllib/request.py", line 532, in open
response = meth(req, response)
File "/usr/lib64/python3.6/urllib/request.py", line 642, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python3.6/urllib/request.py", line 570, in error
return self._call_chain(*args)
File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/usr/lib64/python3.6/urllib/request.py", line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 504: Gateway Timeout
Could you please make the model mbv3small-09ace125.pth available at an accessible location, so that we can download it and use it for training?
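Until the URL is reachable, a workaround is to fetch the checkpoint manually (e.g. from a mirror) and load it locally instead of going through model_zoo.load_url. A sketch, assuming the checkpoint is a plain state_dict; the toy `nn.Linear` stands in for Mbv3SmallFast:

```python
import torch
import torch.nn as nn

def load_pretrain_local(model, path):
    """Load a locally downloaded checkpoint instead of model_zoo.load_url."""
    checkpoint = torch.load(path, map_location="cpu")
    model.load_state_dict(checkpoint, strict=False)
    return model

# Demonstration with a toy module standing in for the real backbone:
m = nn.Linear(4, 2)
torch.save(m.state_dict(), "mbv3small-local.pth")  # simulate the manual download
m2 = load_pretrain_local(nn.Linear(4, 2), "mbv3small-local.pth")
```

In DBFace one would point load_pretrain at the local file path rather than the URL.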
Could you please share the pretrained model and the code to run this in real time?
hi,
I ran main.py on a Tesla M40, and the speed was very slow: a 740×1024 image took 73 ms, and a 1150×2048 image took 1907 ms. Is that speed correct?
Hi, have you tried converting the model to Caffe?
May I ask what range the confidence scores of the detected face boxes fall in? Are any above 0.5?
I can't understand how the landmarks are calculated. What do x, y, r, and b mean?
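For reference, in DBFace's BBox class x, y, r, b appear to denote the box's left, top, right, and bottom edge coordinates (landmarks are separate per-face points). A hypothetical decoding sketch under that assumption:

```python
# Assumption: x, y, r, b = left, top, right, bottom edges of the face box,
# as in DBFace's BBox class; the +1 follows the inclusive-pixel convention.
def box_from_xyrb(x, y, r, b):
    width = r - x + 1
    height = b - y + 1
    center = ((x + r) / 2, (y + b) / 2)
    return width, height, center
```

Width, height, and center fall out directly from the two corners.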
@dlunion
Hello, I have watched your Bilibili videos; thank you very much for the selfless contribution.
I have a simple object-detection need (for human bodies, not faces). In terms of design, how do faces differ from ordinary human bodies, and would removing the landmark branch from DBFace still give good results (backbone + SSH + CenterNet)?
I'm impressed by your great work on DBFace, and I want to use this repository.
Could you tell me the license of this repository?
Thank you so much in advance.
Hello, is the annotation format provided by the original WIDER FACE dataset inconsistent with the format used by the DBFace model?
Hi,
I'm testing DBFace on my Raspberry Pi 4B. Whether I use main.py or main_small.py, the speed is far too slow.
Has anybody tried this before? Any suggestions would be appreciated.
I want to run inference on 2 images in a batch. How should I modify main.py and the related code? Thank you.
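In main.py the detect path preprocesses one image into a (1, 3, H, W) tensor; to run two images per forward pass, one approach is to pad them to a common size and concatenate along the batch dimension. A sketch with random tensors standing in for the preprocessed images (pad_to is a hypothetical helper, not part of DBFace):

```python
import torch

# Two preprocessed images of different sizes (C, H, W):
img_a = torch.rand(3, 480, 640)
img_b = torch.rand(3, 360, 640)

h = max(img_a.shape[1], img_b.shape[1])
w = max(img_a.shape[2], img_b.shape[2])

def pad_to(img, h, w):
    """Zero-pad an image tensor to (3, h, w), keeping it top-left aligned."""
    out = torch.zeros(3, h, w)
    out[:, :img.shape[1], :img.shape[2]] = img
    return out

# Batch of 2 for a single forward pass: model(batch)
batch = torch.stack([pad_to(img_a, h, w), pad_to(img_b, h, w)])
```

The postprocessing (heatmap decoding, NMS) would then need to loop over the batch dimension, and boxes from the padded image must be clipped back to its original extent.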
The annotations I found online do not seem to include landmarks.
Hello, my embedded device does not support the SE structure. Do you have any good suggestions for modifying the backbone?
To improve speed, I crop the target region before running detection, but the region seems to be too small and this error appears. Is there a fix? If I run detection on the full image, there is no problem.
Hello, when I run the code in prj_ncnn, the number of detected faces is 0. What could be the reason? I hope to get your reply, thanks.
Can someone help me, please?
I tried to use train/small/test_onnx.py to test the model file "/train/small/jobs/small-H-dense-wide64-UCBA-keep12-ignoresmall/model.onnx", but got: onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Error in Node: : No Op registered for Plugin with domain_version of 11. What is the cause of this?
Could you provide speed comparisons for DBFace, RetinaFace, CenterFace, and LFFD?
Dear author:
Thanks for the efficient face detection repo. I tested it and found it works pretty well, especially on tiny faces. Great!
May I ask for the training data and training source code? It would be useful for the community to adjust the network architecture and re-train it for individual solutions. I would be grateful if you could kindly share the data and code.
Thank you again.
This looks amazing. Can you please add a license file so others can use it?
Beginner question 1: after the model is saved, the weight dimensions are fixed. Why can class DBFace(nn.Module) accept images of different sizes, and how do the weight dimensions match inputs of different sizes?
Beginner question 2: since the input images are not resized to a fixed size, what is the largest face the model can detect? In my test, a high-resolution face in an image of size (3, 6528, 3040) was not detected.
Beginner question 3: after downscaling the image and mapping the results back, the detections are slightly off.
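On question 1: a convolution's weights depend only on the channel counts and kernel size, never on the input's height and width, so a fully convolutional network applies the same trained parameters to any resolution; only the spatial size of the output maps changes. A minimal illustration:

```python
import torch
import torch.nn as nn

# One conv layer; its weight tensor has shape (8, 3, 3, 3) regardless of
# the input resolution, so the same parameters serve any image size.
conv = nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)

small = conv(torch.rand(1, 3, 64, 64))     # output: (1, 8, 32, 32)
large = conv(torch.rand(1, 3, 256, 320))   # output: (1, 8, 128, 160)
```

Only the output feature-map size scales with the input; the parameter count is unchanged.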
Dear Author:
I modified eval.py as follows and ran it with python eval.py:
files, anns = zip(*common.load_webface("/xxx/webface/val/label.txt", "/xxx/webface/WIDER_val/images"))
prefix = "/xxx/webface/WIDER_val/images/"
========================
For the Hard part, the mAP of the "has_ext" version should be 0.728.
But what I get is:
This looks more like the accuracy of the "no_ext" version. Have I done something incorrectly? Thank you.
Can this model be used for common object detection?
Is there any document, tutorial, or blog about how DBFace works?
thanks a lot
As in the title: where can I find the code for converting to ncnn?
May I ask: for training the large model, can I directly substitute the DBFace network structure used for testing for the small model in the training code?
When I try training with the dbface-large model, I get:
2020-05-20 11:48:09,718 - INFO - train-large.py[:233] - iter: 871, lr: 0.000125, epoch: 6.44, loss: 5.04, hm_loss: 0.36, box_loss: 4.11, lmdk_loss: 0.56668
avg is zero
avg is zero
2020-05-20 11:49:07,651 - INFO - train-large.py[:233] - iter: 881, lr: 0.000125, epoch: 6.52, loss: 4.25, hm_loss: 0.04, box_loss: 4.21, lmdk_loss: 0.00000
avg is zero
2020-05-20 11:50:07,313 - INFO - train-large.py[:233] - iter: 891, lr: 0.000125, epoch: 6.59, loss: 2.70, hm_loss: 0.57, box_loss: 1.89, lmdk_loss: 0.24474
2020-05-20 11:51:05,086 - INFO - train-large.py[:233] - iter: 901, lr: 0.000125, epoch: 6.67, loss: 2.58, hm_loss: 0.05, box_loss: 2.51, lmdk_loss: 0.02192
2020-05-20 11:52:11,552 - INFO - train-large.py[:233] - iter: 911, lr: 0.000125, epoch: 6.74, loss: 1.40, hm_loss: 0.03, box_loss: 1.28, lmdk_loss: 0.09160
The landmark loss drops to 0, and I don't know what "avg is zero" means. Is it normal for the loss to oscillate this much?
Has a paper been published for this work?
Hello, I have just started learning face detection, and there are a few modules whose design I don't quite understand.
1) First, regarding CBNModule: why replace the activation function with HSwish? Likewise, what benefits does HSigmoid bring?
2) In ContextModule, the features are split into two parts, convolved separately, and then merged. Why is it done this way?
Alternatively, could you point me to the papers behind these modules?
Thanks.
It seems that this model could be reasonably ported to Javascript.
Tensorflow.js would be a way to go.
A quick search for the conversion steps turned up this:
https://drsleep.github.io/tutorial/Tutorial-PyTorch-to-TensorFlow-JS/
Hello. I changed the small model's Mbv3SmallFast to MobileNetV2 and replaced the upsampling layers with deconvolutions. After 150 epochs, testing with the provided tool gives only about 80% on Easy and Medium and 70% on Hard. Is this modification reasonable, or are there training tricks to improve the accuracy?
How do I deal with this? When the image resolution is too high, faces seem to go undetected.
train.md seems to have some issues; could you correct it?