srivignessh / deep-learning-faces
Automatically exported from code.google.com/p/deep-learning-faces
License: GNU General Public License v3.0
Invalid MEX-file 'D:\DEEPLearning\2013\SVM_SOFTMAX\matlab\mexcuConvNNoo.mexw64': error parameters.
Error in myclassify_conv_nn_softmax (line 174)
[~, model.theta] = mexcuConvNNoo( single(ww), params, model.callback_name);
In the code it is written like this:
#define NARGIN 3
#define IN_W pRhs[0]
#define IN_params pRhs[1]
#define IN_callback pRhs[2]
#define NARGOUT 1
#define OUT_f pLhs[0]
#define OUT_W pLhs[1]
#define OUT_dW pLhs[1]
which is different from the signature given in the opening explanation: function [f, (W or dW)] =
mexcuConvNNoo( W, params, Dim, Callback).
Original issue reported on code.google.com by [email protected]
on 12 Aug 2013 at 8:25
The dataset has been removed from the Kaggle website.
Original issue reported on code.google.com by [email protected]
on 7 Oct 2014 at 5:46
I have successfully compiled it on Ubuntu 14.04 (64-bit) with CUDA 6.5.
When I run "script_face_exp.m", I get the following errors:
--------------------------------------------------------------------
~~FERMI~~
{Input} (-1)-->layer{0} Convdata: nFilters:1 nIJ_grid:48 48, dropout:0.000
{Hidn} (0)-->layer{1} ImageMirror: nVisChannels:1 nVisIJ:[48 48], dropout:
Error using mexcuConvNNoo
Assertion Failed: trns_high not always >= trns_low
Error in myclassify_conv_nn_softmax (line 174)
[~, model.theta] = mexcuConvNNoo( single(ww), params, model.callback_name);
Error in fe_cv_48 (line 143)
[model] = myclassify_conv_nn_softmax(model);
Error in script_face_exp (line 72)
[cv_average,cv_models]=fe_cv_48(hp.nSPLIT,hp.randseeds,hp.normalseeds,hp);
----------------------------------------------------------------
My GPU info is as below (./deviceQuery ):
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 645"
CUDA Driver Version / Runtime Version 6.5 / 6.5
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 1024 MBytes (1073414144 bytes)
( 3) Multiprocessors, (192) CUDA Cores/MP: 576 CUDA Cores
GPU Clock rate: 824 MHz (0.82 GHz)
Memory Clock rate: 2000 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GTX 645
Result = PASS
----------------------------------------------------------------
Thanks!
best,
-Lei
Original issue reported on code.google.com by [email protected]
on 15 Dec 2014 at 1:08
I encountered an error at the compile step.
I'm using CUDA 5.5 on Ubuntu 12.04 (64-bit).
I followed the steps written in the README.
When I type "make mexf="./deep_nn/mexcuConvNNoo.mex ./deep_nn/mexcuConvNNooFF.mex", it returns the error:
make: /pkgs/matlab-80/bin/mexext: Command not found
make: *** No rule to make target 'deep_nn/mexcuConvNNoo.', needed by 'all'.
Stop.
I've tried to figure it out myself, but it seems difficult for me.
Sorry for my lack of knowledge on this; I'll do my best.
Sincerely
Tae-Ho Kim
Original issue reported on code.google.com by [email protected]
on 23 Nov 2013 at 11:59
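The "/pkgs/matlab-80/bin/mexext: Command not found" line means the Makefile hard-codes a MATLAB root (/pkgs/matlab-80) that only exists on the author's machine; with mexext failing, the MEX suffix is empty, which is why make then looks for the target 'deep_nn/mexcuConvNNoo.' with nothing after the dot. A minimal shell sketch for locating your own MATLAB root, assuming matlab is (or should be) on your PATH; the fallback path below is only an example:

```shell
# Resolve the MATLAB install root from the launcher on PATH.
MATLAB_BIN=$(command -v matlab || true)
if [ -n "$MATLAB_BIN" ]; then
    # Follow symlinks: .../MATLAB/R20xx/bin/matlab -> .../MATLAB/R20xx
    MATLAB_ROOT=$(dirname "$(dirname "$(readlink -f "$MATLAB_BIN")")")
else
    # matlab not on PATH: example fallback, adjust to your install.
    MATLAB_ROOT=/usr/local/MATLAB/R2013a
fi
echo "MATLAB root: $MATLAB_ROOT"

# mexext prints the platform MEX suffix the Makefile needs to name its
# targets (mexa64 on 64-bit Linux); an invalid root makes it print nothing.
"$MATLAB_ROOT/bin/mexext" 2>/dev/null || echo "mexext not found under $MATLAB_ROOT"
```

After pointing the Makefile's MATLAB prefix at this root, re-run the same make command with the mexf targets from the README.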