vlfeat / matconvnet

MatConvNet: CNNs for MATLAB

License: Other

Makefile 1.01% MATLAB 29.84% C++ 12.58% Cuda 46.19% C 1.60% Shell 2.51% Python 6.24% M 0.03%

matconvnet's Introduction

VLFeat -- Vision Lab Features Library

Version 0.9.21

The VLFeat open source library implements popular computer vision algorithms, specialising in image understanding and local feature extraction and matching. Algorithms include Fisher Vector, VLAD, SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, SLIC superpixels, quick shift superpixels, large scale SVM training, and many others. It is written in C for efficiency and compatibility, with interfaces in MATLAB for ease of use, and detailed documentation throughout. It supports Windows, Mac OS X, and Linux.

VLFeat is distributed under the BSD license (see the COPYING file).

The documentation is available online and is shipped with the library as doc/index.html.

Quick start with MATLAB

To start using VLFeat as a MATLAB toolbox, download the latest VLFeat binary package. Note that the pre-compiled binaries require MATLAB 2009b or later. Unpack it, for example by using WinZIP (Windows), by double clicking on the archive (Mac), or by using the command line (Linux and Mac):

> tar xzf vlfeat-X.Y.Z-bin.tar.gz

Here X.Y.Z denotes the latest version. Start MATLAB and run the VLFeat setup command:

> run <VLFEATROOT>/toolbox/vl_setup

Here <VLFEATROOT> should be replaced with the path to the VLFeat directory created by unpacking the archive. All VLFeat demos can now be run in a row by the command:

> vl_demo

Check out the individual demos by editing this file: edit vl_demo.
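To verify that the toolbox is installed correctly, print the library version (vl_version ships with the toolbox):

> vl_version verbose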

Octave support

The toolbox should be largely compatible with GNU Octave, an open source MATLAB equivalent. However, the binary distribution does not ship with pre-built GNU Octave MEX files. To compile them use

> cd <vlfeat directory>
> make MKOCTFILE=<path to the mkoctfile program>

Changes

  • 0.9.21 Maintenance release. Bugfixes.
  • 0.9.20 Maintenance release. Bugfixes.
  • 0.9.19 Maintenance release. Minor bugfixes; fixes compilation with MATLAB 2014a.
  • 0.9.18 Several bugfixes. Improved documentation, particularly of the covariant detectors. Minor enhancements of the Fisher vectors.
  • 0.9.17 Rewritten SVM implementation, adding support for SGD and SDCA optimisers and various loss functions (hinge, squared hinge, logistic, etc.) and improving the interface. Added infrastructure to support multi-core computations using OpenMP (MATLAB 2009B or later required). Added OpenMP support to KD-trees and KMeans. Added new Gaussian Mixture Models, VLAD encoding, and Fisher Vector encodings (also with OpenMP support). Added LIOP feature descriptors. Added new object category recognition example code, supporting several standard benchmarks off-the-shelf.
  • 0.9.16 Added VL_COVDET. This function implements the following detectors: DoG, Hessian, Harris Laplace, Hessian Laplace, Multiscale Hessian, Multiscale Harris. It also implements affine adaptation, estimation of feature orientation, computation of descriptors on the affine patches (including raw patches), and sourcing of custom feature frames.
  • 0.9.15 Added VL_HOG (HOG features). Added VL_SVMPEGASOS and a vastly improved SVM implementation. Added VL_IHASHSUM (hashed counting). Improved INTHIST (integral histogram). Added VL_CUMMAX. Improved the implementation of VL_ROC and VL_PR(). Added VL_DET() (Detection Error Trade-off (DET) curves). Improved the verbosity control to AIB. Added support for Xcode 4.3, improved support for past and future Xcode versions. Completed the migration of the old test code in toolbox/test, moving the functionality to the new unit tests toolbox/xtest.
  • 0.9.14 Added SLIC superpixels. Added VL_ALPHANUM(). Improved Windows binary package and added support for Visual Studio 2010. Improved the documentation layout and added a proper bibliography. Bugfixes and other minor improvements. Moved from the GPL to the less restrictive BSD license.
  • 0.9.13 Fixed Windows binary package.
  • 0.9.12 Fixes vl_compile and the architecture string on Linux 32 bit.
  • 0.9.11 Fixes a compatibility problem on older Mac OS X versions. A few bugfixes are included too.
  • 0.9.10 Improves the homogeneous kernel map. Plenty of small tweaks and improvements. Makes maci64 the default architecture on the Mac.
  • 0.9.9 Added: sift matching example. Extended Caltech-101 classification example to use kd-trees.
  • 0.9.8 Added: image distance transform, PEGASOS, floating point K-means, homogeneous kernel maps, a Caltech-101 classification example. Improved documentation.
  • 0.9.7 Changed the Mac OS X binary distribution to require a less recent version of Mac OS X (10.5).
  • 0.9.6 Changed the GNU/Linux binary distribution to require a less recent version of the C library.
  • 0.9.5 Added kd-tree and new SSE-accelerated vector/histogram comparison code. Improved dense SIFT (dsift) implementation. Added Snow Leopard and MATLAB R2009b support.
  • 0.9.4 Added quick shift. Renamed dhog to dsift and improved implementation and documentation. Improved tutorials. Added 64 bit Windows binaries. Many other small changes.
  • 0.9.3 Namespace change (everything begins with a vl_ prefix now). Many other changes to provide compilation support on Windows with MATLAB 7.
  • beta-3 Completes the ikmeans code.
  • beta-2 Many additions.
  • beta-1 Initial public release.

matconvnet's People

Contributors

albanie, ankush-me, aravindhm, ashafaei, clamesc, hbilen, hyenal, jaderberg, lanpa, lenck, mfigurnov, omkarparkhi, oya163, shaoqingren, suhangpro, taehyunoh, thelinuxmaniac, tomazas, vedaldi, zoharby


matconvnet's Issues

launch failure after back propagation of my self-defined convolution layer

Does anybody have an experience of error when executing the:

wait(gpuDevice)

While waiting after finishing the back-propagation of my self-defined convolution layer, it gives the error:

The CUDA Error: unspecified launch failure

Error in parallel.gpu.CUDADevice/wait

Error in back_prop: 
    wait(gpuDevice);

I feel this should not be a memory problem. Besides, when I define only one filter (HxWxDx1) in the self-defined convolution layer, it's fine. But when I define two, which makes the filters HxWxDx2, the error appears.

Does anyone have a clue about this?

Thanks!

Local Response Normalization backpropagation

Hi!

I'm checking the implementation in C++ of the local response normalization here:

https://github.com/vlfeat/matconvnet/blob/master/matlab/src/bits/normalize.cpp

Based on Hinton's paper http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf we have that the response-normalized activity is given by the formula (notation adapted for readability):

b_i = a_i / (k + alpha * sum_{j in N(i)} (a_j)^2)^beta

which totally fits with the implementation mentioned above. To get the back-propagation formulas we have that if

[formula image: definition of the intermediate term c]

then

[formula image: the resulting gradient]

If I'm not wrong, this maps to the C++ implementation as

[formula image: mapping of the paper's symbols to the C++ variables]

which should lead to the formula

[formula image: the expected back-propagation formula]

however, the implementation is (lines 276-280)

[formula image: the formula actually implemented]

Note the change zat(q) -> zat(t). Is there anything wrong there that I didn't notice?

Thank you!

Urko
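A quick way to check which formula the code implements is a finite-difference test. A minimal sketch (mine, not from the original report), assuming the standard vl_nnnormalize interface with param = [N kappa alpha beta]:

param = [5 1 0.0001 0.75] ;                      % [N kappa alpha beta]
x = randn(4, 4, 8, 2, 'single') ;
dzdy = randn(4, 4, 8, 2, 'single') ;
dzdx = vl_nnnormalize(x, param, dzdy) ;          % analytic derivative
delta = 1e-2 ;
e = zeros(size(x), 'single') ; e(2, 2, 3, 1) = delta ;
% central finite difference along the perturbed coordinate
approx = sum(sum(sum(sum((vl_nnnormalize(x + e, param) ...
                        - vl_nnnormalize(x - e, param)) .* dzdy)))) / (2 * delta) ;
% approx should be close to dzdx(2,2,3,1) if the backward pass is correct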

assertion failed vl_simplenn.m (see also #34)

(This issue is similar to #34.)
I think I am missing something in the response to #34. Could you please elaborate on it?
Specifically, when I change the last layer from softmaxloss to softmax in e.g. the cnn_cifar example, I get an
"Error using .* Array dimensions must match for binary array op."
in
"Error in vl_nnsoftmax (line 30)
Y = Y .* bsxfun(@minus, dzdY, sum(dzdY .* Y, 3)) ;"

Optimization for vl_nnloss.m

Hi,

Does anyone have an idea of how to optimize the vl_nnloss layer? I've built a network for semantic segmentation (output of size 128x128 with a batch size of 100). But the current version can only process 25 images per second (on a K40c), which is very slow. In fact, vl_nnloss is the most time-consuming part (90+%), since one has to compare the predicted per-pixel map with the ground truth. I am trying to optimize the current version.

Missing .cpp files for vl_nnconv, vl_nnpool, vl_nnnormalize?

I'm having trouble building from the command line with the Makefile on OS X 10.9, using Xcode 6.1 and Matlab R2014b (see below). So I decided to try to follow the Makefile and manually build the non-GPU version using calls to "mex" from within Matlab. In doing so, I realized (unless I'm misunderstanding) that the Makefile seems to reference vl_nnconv.cpp, vl_nnpool.cpp, and vl_nnnormalize.cpp. But I don't see any of those files in the repo, in matlab/src. I only see their .cu versions, but again, I'm hoping to build the non-CUDA version initially. Is that an error?

For the record, this is the error I get using the Makefile at the command line.

> make ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014b.app
/Applications/MATLAB_R2014b.app/bin/mex -c -largeArrayDims -lmwblas "matlab/src/bits/im2col.cpp"
Building with 'Xcode Clang++'.
xcodebuild: error: SDK "macosx10.9" cannot be located.
xcrun: error: unable to find utility "clang++", not a developer tool or in PATH

make: *** [matlab/src/bits/im2col.o] Error 255

Update: I was able to build using the Xcode project file with the no-GPU option.

Why does the binary classification give a large training error number?

For a binary classification task, I changed opts.errorType from "multiclass" to "binary" in cnn_train.m. Accordingly, when the training process updates the error via the function:

info = updateError(opts, info, net, res, speed)

The code enters the section:

case 'binary'
error = bsxfun(@times, predictions, labels) < 0 ;
info.error(end) = info.error(end) + sum(error(:))/n ;

Rather than evaluating the error as in the multiclass section, the binary error computation counts the number of predictions that disagree in sign with the ground-truth labels. I then get a large error number (around 10,000) in each epoch (even though it decreases a little). I am not sure whether this is a bug or whether these are the expected error values for binary classification.
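One hedged reading of those numbers: the 'binary' branch accumulates a raw count of misclassified outputs over the whole batch, dividing only by the batch size n, so with many outputs per image the count stays large; dividing by the total number of predictions instead would give a rate in [0, 1]:

errorRate = sum(error(:)) / numel(error) ;   % fraction misclassified, not a raw count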

vl_nnnoffset tester fail

Hi,

I just downloaded the package and compiled it successfully (w/o GPU).
However, when I run "vl_test_nnlayers" I get the error below (in test case 8).
If I delete case 8 and rerun vl_test_nnlayers, it works without error.
The problem in case 8 is that some differences are not below the tolerance tau in
assert(all(abs(a(:)-b(:)) < tau)) ;

How can I solve this problem? Or can I go without testing case 8?

Thank you

Error using vl_testsim (line 27)
Assertion failed.

Error in vl_testder (line 23)
vl_testsim(dzdx, dzdx_, tau);

Error in vl_test_nnlayers (line 381)
vl_testder(@(x) vl_nnnoffset(x,param), x, dzdy, dzdx, 1e-3*range) ;

switching between CPU and GPU when resuming

When I interrupt the example code, say cnn_mnist with opts.train.useGpu = false, and change it to GPU mode by setting opts.train.useGpu = true, it gives errors like

resuming by loading epoch 59
training: epoch 60: processing batch   1 of 600 ...Error using vl_nnconv
DATA and FILTERS are not both CPU or GPU arrays.

Also, if I complete a run of cnn_mnist in GPU mode, it will skip training when run again in CPU mode.
Is there any way I can switch between CPU and GPU freely and start from the beginning as I wish?

Thanks in advance.
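For what it's worth, a hedged workaround (a sketch, not an official fix): after resuming from a checkpoint saved on the CPU, push the parameters to the GPU explicitly before training continues (modelPath here is hypothetical):

load(modelPath, 'net') ;                       % checkpoint saved on the CPU
for i = 1:numel(net.layers)
  if isfield(net.layers{i}, 'filters')
    net.layers{i}.filters = gpuArray(net.layers{i}.filters) ;
    net.layers{i}.biases  = gpuArray(net.layers{i}.biases) ;
  end
end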

Multi Task Learning

Hi,
How would it be possible to implement a multi task neural network using MatConvNet?
Basically, how can I split the output of the final shared layer and send it to multiple loss layers, and combine the errors for backpropagation accordingly, using MatConvNet?
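One hedged sketch of how this could look with the simplenn building blocks (the trunk network and the head parameters w1, b1, w2, b2 are hypothetical): run the shared layers forward once, run each loss head forward and backward, and sum the gradients flowing back into the shared feature map:

res = vl_simplenn(trunk, im) ;                 % forward through the shared layers
feat = res(end).x ;
x1 = vl_nnconv(feat, w1, b1) ;                 % task-1 head
x2 = vl_nnconv(feat, w2, b2) ;                 % task-2 head
dx1 = vl_nnsoftmaxloss(x1, labels1, 1) ;       % backward through loss 1
dx2 = vl_nnsoftmaxloss(x2, labels2, 1) ;       % backward through loss 2
[dfeat1, dw1, db1] = vl_nnconv(feat, w1, b1, dx1) ;
[dfeat2, dw2, db2] = vl_nnconv(feat, w2, b2, dx2) ;
dfeat = dfeat1 + dfeat2 ;                      % combined gradient for the trunk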

What are the use of validation set and testing set

I understand the validation set is used to tune some of the parameters in training, but which ones? How can I make use of a validation set in MatConvNet? To me the validation set seems to be the same as the testing set.

Thanks!

How does the script calculate the train error

When I run the mnist example, I understand how the validation error is calculated (it goes through all the testing examples). What about the training error; how do we obtain that? Is it computed on the training data during training?

[screenshot of the training/validation error plot; generated from a different dataset, not MNIST]

Segmentation faults while running cnn_imagenet_minimal.m

Hi,

I followed the installation and compilation instructions on matconvnet to the best of my knowledge. I was then able to run vl_setupnn and vl_test_nnlayers successfully.

However, upon running cnn_imagenet_minimal.m, I get the following error:

Segmentation violation detected at Wed Nov 19 16:28:45 2014

Abnormal termination:
Segmentation violation

Stack Trace (from fault):
[ 0] 0x00007fea84d068ca /export/rel60_shadow/glue.umd.edu/software/matlab/2014a
/Linux/bin/glnxa64/../../sys/os/glnxa64/libstdc++.so.6+00489674
[ 1] 0x00007fe9a810bc3e /usr/local/cuda/lib64/libcudart.so.5.0+0010
1438
[ 2] 0x00007fe9a80f9c0f /usr/local/cuda/lib64/libcudart.so.5.0+0002
7663 __cudaRegisterFatBinary+00000031
[ 3] 0x00007fe93ce5cef1 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a
/Linux/bin/glnxa64/libmwmagma.so+00204529
[ 4] 0x00007fe93cf927d6 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a
/Linux/bin/glnxa64/libmwmagma.so+01472470
[ 5] 0x00007fe93ce50ff3 /export/rel60_shadow/glue.umd.edu/software/matlab/2014a
/Linux/bin/glnxa64/libmwmagma.so+00155635

This error was detected while a MEX-file was running. If the MEX-file
is not an official MathWorks function, please examine its source code
for errors. Please consult the External Interfaces Guide for information
on debugging MEX-files.

Any suggestions or help?

Compile gpu version error

Hi guys,
I am compiling the gpu version and got following error:

/home/gg/matlab2013/bin/mex: 2: matlab/src/config/mex_CUDA_maci64.xml: Syntax error: newline unexpected
make: *** [matlab/mex/vl_nnconv.mexa64] Error 2

Anyone can help?

error message with CUDA 6.5 and ubuntu 14.04

Hi,
I have tried to compile MatConvNet with the latest CUDA 6.5, but it shows the error message:
g++: error: /usr/local/MATLAB/R2014b/bin/glnxa64/libcudart.so.6.5: No such file or directory

I found the place where this path is set, in these config files:
/matconvnet/matlab/src/config/mex_CUDA_glnxa64.sh
/matconvnet/matlab/src/config/mex_CUDA_glnxa64.xml

So I assume we need to update these files first.

Then I need to ln -s the corresponding library files into that MATLAB path. But I don't think this is the best solution; is there a better way to solve this?

vl_nnconv vs. Matlab's built-in convn

Hi,

I am trying to use matconvnet's vl_nnconv to speed up convolutions; however, as I was testing, I noticed differences between the output of MATLAB's built-in convn function and vl_nnconv.

Please consider the following simple code:


A = single(magic(10));
B = single(magic(5));

c1 = convn(A, B, 'valid');
c2 = vl_nnconv(A, B, []);

assert(isequal(c1, c2))

Here are the results of this code on my local machine:

c1

c1 =

   12000       13875       13000       14600       17250       17550
   16600       15500       12300       14675       15850       18075
   17625       14225       14250       14800       17025       18250
   16050       13975       15100       18600       18275       16825
   16300       15750       18950       19950       16975       15050
   18225       19000       21075       19475       15625       12675

c2

c2 =

   15950       12775       10400       13350       17200       20150
   13300       14400       15650       15225       17950       17675
   14225       18925       18250       17050       16125       15550
   17750       18525       18050       15200       14225       15025
   19450       20000       18750       15800       14875       14850
   19475       20000       21175       18225       15575       15275

I am not sure if I am using vl_nnconv correctly, so please let me know if I am not. If I am using the function correctly, do you know why there is a difference between the output of these two functions?

Thank you for your time and help!
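For what it's worth, vl_nnconv computes correlation rather than convolution (the filters are not flipped), which would explain the mismatch. A hedged check: flipping B in both spatial dimensions should reproduce convn's 'valid' output:

c3 = vl_nnconv(A, rot90(B, 2), []) ;   % rot90(B,2) flips both spatial dimensions
assert(max(abs(c1(:) - c3(:))) < 1e-2) % should now match convn(A, B, 'valid')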

vl_test_nnlayers cannot work

Hi,

I have compiled MatConvNet successfully on Linux using the command 'make ENABLE_GPU=y ENABLE_IMREADJPEG=y ARCH=glnxa64 MATLABROOT=/eecs/local/pkg/matlab-r2014a CUDAROOT=/eecs/local/pkg/cuda-6.5.14'. However, when I installed MatConvNet in MATLAB and tried to run vl_test_nnlayers, an error happened; the outputs are shown below:

test number 1
testing vl_nnsoftamxloss multiple images convolutional
test number 2
testing vl_nnloss multiple images convolutional
testing vl_nnloss multiple images
testing vl_nnloss
test number 3
testing vl_nnsoftmax
test number 4
testing vl_nnconv with fully connected identity filters
Attempt to execute SCRIPT vl_nnconv as a function:
/eecs/research/asr/hengyue/matconvnet-1.0-beta7/matconvnet-1.0-beta7/matlab/vl_nnconv.m

Error in vl_test_nnlayers (line 110)
y = vl_nnconv(x,[],b,'verbose') ;

It seems like there is some problem with the script vl_nnconv.m. Does anyone have a hint for dealing with this problem? Thank you :-)
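A hedged diagnostic: vl_nnconv.m is only a documentation stub, so this message suggests the compiled MEX file is missing for this architecture or matlab/mex is not on the path. Checking what shadows what:

which -all vl_nnconv
% if only the .m stub is listed, rebuild for glnxa64 or re-run vl_setupnn
% so that matlab/mex precedes matlab on the path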

convnet wrapped with function handle fails

Run the script "bugscript.m" once, and everything is fine. Run it the 2nd time, and the whole MATLAB crashes.

---------------- bugscript.m -------------------------------------------------------------
clear all; close all; clc; % if you comment this line out, it will not crash
% setup
run('../3rd party/matconvnet/matlab/vl_setupnn');
% cnn parameters
h = 2; w = 2;
net = load('../data/cnn/pretrained/imagenet-vgg-f.mat');
featureExtractor = @(im) cnnfeat_wrap( im, net );
% load image
im = repmat( imread('cameraman.tif'), [1 1 3] );
% extract features
feat = featureExtractor(im);
---------------- bugscript.m -------------------------------------------------------------

where the wrapper function is

---------------- cnnfeat_wrap.m -------------------------------------------------------------
function [ feat ] = cnnfeat_wrap( im, net )
lidx = 16;
res = vl_simplenn(net, single(im)-120, [], [], 'disableDropout', true);
feat = res(lidx).x;
end
---------------- cnnfeat_wrap.m -------------------------------------------------------------

Other details:
Macbook Pro 15 retina, late 2013
Mac OSX 10.9.5 (13F34)
XCode Version 5.1.1 (5B1008)
Matlab 2014a (8.3.0.532) maci64
CUDA 5.5
matconvnet, compiled with GPU enabled

Error message of vl_nnconv() when using GPU mode with OS X 10.9

I get an error message from the function vl_nnconv() when I run vl_test_nnlayers.m or cnn_mnist (or other examples) in GPU mode on my iMac (OS X 10.9 + CUDA 6.5 + GeForce GTX 775M + MATLAB 2014a). The error message is:

Error using vl_nnconv An input is not a numeric array (or GPU support not compiled).

The same code is running correctly under the CPU mode. In addition, I also installed the Matconvnet toolbox on another machine with Ubuntu 14.04 + CUDA 6.5 + GeForce GTX 770 + Matlab 2014a, where the same code runs well under either GPU or CPU mode.

In detail, the problem happens at line 153 of function vl_simplenn(), which calls:

res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride', l.stride) ;

I checked all the input data of this function, where res(i).x, l.filters and l.biases are all GpuArray type.

Then, I tracked the code of Matconvnet. The problem happens in the function void packed_data_init_with_array(PackedData * map, mxArray const* array) of the file nnhelper.h, which is called by vl_nnconv.cu. This function regards all the input data (res(i).x, l.filters and l.biases) as non-gpuArray type, which is determined by line 120 of file nnhelper.h:

if (!mxIsNumeric(array)) {
mexErrMsgTxt("An input is not a numeric array (or GPU support not compiled).") ;
}

The question here is: why does mxIsNumeric() not regard the input array as a gpuArray?

I tried to search for a solution to this problem online, but found nothing. Could anyone help me with this? Is it caused by compatibility between OS X and CUDA? Thanks!

Weights for examples

For extremely imbalanced datasets (i.e. a needle-in-a-haystack problem), how can I incorporate weights for examples? E.g., in the objective, I want to weigh positive examples, which are very rare, higher than the negative examples. What's the best way of doing this?
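One hedged sketch (this is not a built-in option; the names x, labels and the weight vector w are hypothetical): compute the standard loss gradient, then scale each example's slice by its weight before backpropagating:

dzdx = vl_nnsoftmaxloss(x, labels, 1) ;                % standard loss gradient
dzdx = bsxfun(@times, dzdx, reshape(w, 1, 1, 1, [])) ; % per-example reweighting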

How can I do binary classification?

I saw that Matconvnet allows me to choose errorType as 'binary'. Is that the only parameter that I need to change if I want to do binary classification (my labels are 1 or 2)?

Error using cnn_mnist, vl_nnconv

Hi!
In the MATLAB command window, after I typed cnn_mnist, an error happens; the outputs are shown below:

 cnn_mnist
resuming by loading epoch 24
training: epoch 25: processing batch   1 of 600 ...Error using vl_nnconv
DATA and FILTERS are not both CPU or GPU arrays.

Error in vl_simplenn (line 153)
      res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride',
      l.stride) ;

Error in cnn_train (line 138)
    res = vl_simplenn(net, im, one, res, ...

Error in cnn_mnist (line 75)
[net,info] = cnn_train(net, imdb, @getBatch, ...

I have successfully compiled MatConvNet on Linux using the command 'make ENABLE_GPU=y ENABLE_IMREADJPEG=y ARCH=glnxa64 MATLABROOT=/usr/local/MATLAB/R2014a CUDAROOT=/usr/local/cuda'. And my GPU device information is shown below:

  CUDADevice with properties:

                      Name: 'Tesla C1060'
                     Index: 1
         ComputeCapability: '1.3'
            SupportsDouble: 1
             DriverVersion: 6.5000
            ToolkitVersion: 5.5000
        MaxThreadsPerBlock: 512
          MaxShmemPerBlock: 16384
        MaxThreadBlockSize: [512 512 64]
               MaxGridSize: [65535 65535 1]
                 SIMDWidth: 32
               TotalMemory: 4.2948e+09
                FreeMemory: 3.9581e+09
       MultiprocessorCount: 30
              ClockRateKHz: 1296000
               ComputeMode: 'Default'
      GPUOverlapsTransfers: 1
    KernelExecutionTimeout: 0
          CanMapHostMemory: 1
           DeviceSupported: 1
            DeviceSelected: 1

Can anyone help me deal with this problem? Thank you so so much!

error on cnn_mnist

hi,
Thanks a lot for offering this great toolbox. I am trying to run cnn_mnist but I get this error:

learning rate changed (0.000000 --> 0.001000): resetting momentum
training: epoch 01: processing batch 1 of 600 ...Error using .*
Array dimensions must match for binary array op.

Error in vl_nnsoftmax (line 30)
Y = Y .* bsxfun(@minus, dzdY, sum(dzdY .* Y, 3)) ;

Error in vl_simplenn (line 211)
res(i).dzdx = vl_nnsoftmax(res(i).x, res(i+1).dzdx) ;

Error in cnn_train (line 140)
res = vl_simplenn(net, im, one, res, ...

Error in cnn_mnist (line 75)
[net, info] = cnn_train(net, imdb, @GetBatch, ...

I would really appreciate any help solving this error, because I am trying to use the library to classify my own data.

Can I disable the top-5 error?

If my number of possible labels is only 2 (binary classification), matconvnet will give me an error. I believe this is because I don't have at least 5 possible labels. How can I solve this?

Index exceeds matrix dimensions.

Error in cnn_train>updateError (line 271)
info.topFiveError(end) = info.topFiveError(end) + ...

Error in cnn_train (line 167)
info.train = updateError(opts, info.train, net, res, batch_time) ;

Error in stroke_nn_medicalData_011415 (line 76)
[net, info] = cnn_train(net, medical_record_imdb, @GetBatch, ...

error in GPU compilation (Mac+Matlab2014b)

Hi!
I cannot get rid of an error when compiling the GPU version as instructed on the home page.
(I first compiled the CPU version, all OK. Then I ran 'make clean' and 'make distclean' so that I could make again.) I was running on a MacBook Pro Retina with a newly installed OS X 10.9.5 and MATLAB 2014b.

what I did was:

make ENABLE_GPU=y ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014b.app CUDAROOT=/Developer/NVIDIA/CUDA-6.5

then the error was:

...
echo matlab/mex/vl_nnconv.mexmaci64
matlab/mex/vl_nnconv.mexmaci64
MW_NVCC_PATH='/Developer/NVIDIA/CUDA-6.5/bin/nvcc' /Applications/MATLAB_R2014b.app/bin/mex \
       -output "matlab/mex/vl_nnconv.mexmaci64" \
       "matlab/src/vl_nnconv.cu" matlab/src/bits/im2col.o matlab/src/bits/pooling.o matlab/src/bits/normalize.o matlab/src/bits/subsample.o matlab/src/bits/im2col_gpu.o matlab/src/bits/pooling_gpu.o matlab/src/bits/normalize_gpu.o matlab/src/bits/subsample_gpu.o \
       -DENABLE_GPU -f matlab/src/config/mex_CUDA_maci64.xml -largeArrayDims -lmwblas -L/Developer/NVIDIA/CUDA-6.5/lib -lcublas -lcudart \
       2> >( sed 's/^\(.*\)(\([0-9][0-9]*\)): \([ew].*\)/\1:\2: \3/g' >&2 )
No supported compiler or SDK was found. For options, visit http://www.mathworks.com/support/compilers/R2014b/maci64.html.
make: *** [matlab/mex/vl_nnconv.mexmaci64] Error 255
rm matlab/src/bits/pooling_gpu.o matlab/src/bits/im2col.o 
...

I did install Xcode, the Xcode command line tools, and gfortran.
Does anyone have any idea that could help? Big thanks in advance.

Boundary check in vl_nnpool.cu

Hi there,

Just want to make sure whether it should be

if (data.geom.height + (padTop+padBottom) < poolHeight || data.geom.width + (padLeft+padRight) < poolWidth) { 
    ... 
}

instead of

if (data.geom.height < poolHeight || data.geom.width < poolWidth) { 
    ... 
}

at Line 243 in matconvnet/matlab/src/vl_nnpool.cu .

Let me know if I'm correct.

CUDA Kernel error

hi,

I successfully compiled the matconvnet and installed nvidia driver (340) and cuda 5.5 toolkit.
However, when I run cnn_mnist.m with useGpu = true, I get the following error messages:

col2im: CUDA kernel error: invalid device function
im2col: CUDA kernel error: invalid device function

Error using gpuArray/subsref: an unexpected error occurred trying to launch a kernel; the CUDA error was: invalid device function

details:
Matlab R2014a,
GTX 750 ti,
ubuntu 12.04.
CUDA 5.5 toolkit
matconvnet, compiled with GPU enabled

'jpeglib.h' file not found when compile "vl_imreadjpeg" by make ENABLE_IMREADJPEG=y

Hi

I can compile MatConvNet successfully on my Mac OS X with the instruction:

make ENABLE_GPU=y ARCH=maci64 MATLABROOT=/Applications/MATLAB_R2014a.app CUDAROOT=/Developer/NVIDIA/CUDA-6.5

But I failed at the last step, when I tried to compile vl_imreadjpeg with

make ENABLE_IMREADJPEG=y

which shows a fatal error:

fatal error: 'jpeglib.h' file not found

#include <jpeglib.h>

This error happens at line 17 of the file vl_imreadjpeg.c. My platform is OS X 10.9 + Xcode 6.1.1 (Clang) + CUDA 6.5 + Matlab 2014a.

I checked my installed libraries, and Homebrew tells me that "jpeg-8d" is installed.

Thank you for your help!

CUDA kernel error (unspecified launch failure)

Hi, I hit an error when I want to pass my image through net on GPU (Tesla k20c / Ubuntu 1204/Matlab 2014a).
here's the error

im2col: CUDA kernel error (unspecified launch failure)
Error using gpuArray/max
An unexpected error occurred during CUDA execution. The CUDA error was:
unspecified launch failure

Error in vl_nnrelu (line 17)
  y = max(x, single(0)) ;

Error in vl_simplenn (line 165)
      res(i+1).x = vl_nnrelu(res(i).x) ;
...

Both net and input image were moved to gpu. Thanks for help!

Error using zeros Leading inputs must be numeric.

While running the mnist or cifar test on Win 7 x64, MATLAB R2012a, I get

Error using zeros
Leading inputs must be numeric.

Error in cnn_train (line 36)
net.layers{i}.filtersMomentum = zeros('like',net.layers{i}.filters) ;

Error in cnn_mnist (line 75)
[net,info] = cnn_train(net, imdb, @GetBatch, ...

Error in run (line 57)
evalin('caller', [s ';']);
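A hedged explanation: zeros(..., 'like', x) was only introduced in MATLAB R2013a, so it fails on R2012a. A workaround sketch is to allocate explicitly with the same size and class:

f = net.layers{i}.filters ;
net.layers{i}.filtersMomentum = zeros(size(f), class(f)) ;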

Error in filters groups

Hi,
I am doing binary classification for grayscale images of size 256x256; their structure is mostly small objects (cancer mammograms) in an empty background. For the first layer I chose a convolutional layer with 32 filters of size 4x4, but I get an error:
Error using vl_nnconv The number of filter groups does not divide the total number of filters.

Error in vl_simplenn (line 153)
res(i+1).x = vl_nnconv(res(i).x, l.filters, l.biases, 'pad', l.pad, 'stride', l.stride) ;

Maybe vl_nnconv has some implementation detail I am missing. Any help will be kindly appreciated. Thanks.
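A hedged guess at the cause: vl_nnconv derives the number of filter groups from the ratio between the data depth and the filter depth, so for a grayscale (depth-1) input the first-layer filter bank should have depth 1. A minimal sketch (initialisation values are illustrative):

net.layers{1}.filters = 0.01 * randn(4, 4, 1, 32, 'single') ; % h x w x depth x count
net.layers{1}.biases  = zeros(1, 32, 'single') ;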

Using vl_nnconv to perform large convolution

Hi all,

When the code does back-propagation through the conv layer, it reports an error that says "unexpected error during CUDA execution: CUDA_ERROR_LAUNCH_FAILED".

The matrix sizes involved in the back-propagation are as below:

vl_nnconv: mode gpu; backward
vl_nnconv: stride: [1 1], pad: [1 2 3 3], numGroups: 1, has bias: 1, has filters: 1, fully connected: 0
vl_nnconv: data: 63 x 84 x 512 x 1 [10.3 MB]
vl_nnconv: filters: 4 x 7 x 512 x 2 [0.1 MB]
vl_nnconv: biases: 1 x 2 x 1 x 1 [0.0 MB]
vl_nnconv: derOutput: 63 x 84 x 2 x 1 [0.0 MB]
vl_nnconv: derData: 63 x 84 x 512 x 1 [10.3 MB]
vl_nnconv: derFilters: 4 x 7 x 512 x 2 [0.1 MB]
vl_nnconv: derBiases: 1 x 2 x 1 x 1 [0.0 MB]
vl_nnconv: temp: 63 x 84 x 14336 x 1 [289.4 MB]
vl_nnconv: temp (cached): 63 x 84 x 14336 x 1 [289.4 MB]
vl_nnconv: allOnes: 63 x 84 x 1 x 1 [0.0 MB]
vl_nnconv: allOnes (cached): 375 x 500 x 1 x 1 [0.7 MB]

By attaching cuda-gdb with cuda-memcheck, we can detect the memory error:

Illegal access to address (@global)0xb063d7900 detected.

Program received signal CUDA_EXCEPTION_1, Lane Illegal Address.
[Switching focus to CUDA kernel 2, grid 167, block (72,0,0), thread (96,0,0), device 0, sm 7, warp 1, lane 0]
0x00007fff73e39b40 in ??<<<(74,20,1),(256,1,1)>>> ()

By stepping, I locate the error line: at line 130 in matlab/src/vl_nnconv.cu

cublasSgemm(...

To reproduce the problem, we could run

data = gpuArray(rand(63, 84, 512, 1, 'single'));
filters = gpuArray(rand(4, 7, 512, 2, 'single'));
biases = gpuArray(rand(1, 2, 1, 1, 'single'));
derOutput = gpuArray(rand(63, 84, 2, 1, 'single'));

[derData derFilters derBiases] = vl_nnconv(data, filters, biases, derOutput, 'pad', [1 2 3 3], 'stride', [1 1]);

I checked the memory usage, and I believe it does not exceed the limit. Here is my GPU info:

  CUDADevice with properties:

                      Name: 'Tesla K40c'
                     Index: 1
         ComputeCapability: '3.5'
            SupportsDouble: 1
             DriverVersion: 6.5000
            ToolkitVersion: 5.5000
        MaxThreadsPerBlock: 1024
          MaxShmemPerBlock: 49152
        MaxThreadBlockSize: [1024 1024 64]
               MaxGridSize: [2.1475e+09 65535 65535]
                 SIMDWidth: 32
               TotalMemory: 1.2079e+10
                FreeMemory: 1.1946e+10
       MultiprocessorCount: 15
              ClockRateKHz: 745000
               ComputeMode: 'Default'
      GPUOverlapsTransfers: 1
    KernelExecutionTimeout: 0
          CanMapHostMemory: 1
           DeviceSupported: 1
            DeviceSelected: 1

I'm not quite familiar with cuBLAS. My hunch is that some memory limitation might be violated during the execution of cublasSgemm.

May I have some suggestions from you guys? Thanks!

Peiyun

compilation problem

My computer is set up with MATLAB 2012b, gcc 4.4.6 and CUDA 5.5. When I modified the Makefile as instructed and ran make, an error appeared: /usr/local/MATLAB/R2012b/bin/glnxa64/libcudart.so.5.0 not found. I then modified mex_CUDA_glnx64.sh, changing libcudart.so.5.0 to libcudart.so.5.5, and made a soft link at /usr/local/MATLAB/R2012b/bin/glnxa64/libcudart.so.5.5. Then another error said: "matlab/src/vlconv.cu": file not recognized: file format not recognized. My nvcc is set up correctly and I successfully compiled the Caffe project. Can anybody help me solve this problem? Thanks a lot.

Numerical reproducibility

First of all, this library is really awesome. Kudos to you guys!

One weird issue I just found about numerical reproducibility: the GPU-mode backward pass in certain networks (more specifically, caffe-ref/-alex and vgg-f/-m) produces slightly different gradients (at least in the bottom-most input layers) every time it runs, but is fine in other networks (e.g. vgg-s and verydeep-16/-19). The GPU-mode forward pass, as well as the CPU-mode forward and backward passes, on the other hand, is fine in all networks. Any idea what may cause this? I'm using GTX Titan and Matlab R2014b with matconvnet compiled against CUDA 6.5 here.

Thanks a lot!

Impossible to launch the test

Hi,

I compiled matconvnet with Xcode 5.1.1, MATLAB R2014a and CUDA 5.5 (or 6.5, I tried both) and it said "Built Successfully". Some MEX files appeared:
vl_nnconv.mexmaci64
vl_nnnormalize.mexmaci64
vl_nnpool.mexmaci64.

But when I run your example (after the vl_setupnn) I have this error in matlab prompt:

"Attempt to execute SCRIPT vl_nnconv as a function:
/path_to_matconv/matconvnet-master/matlab/vl_nnconv.m

Error in vl_test_nnlayers (line 99)
y = vl_nnconv(x,[],b,'verbose') ;".

I am not sure where the problem could be.

Thank you,

Luc

Speed comparison

Has anybody made a speed comparison with cuda-convnet2 or Caffe? I am trying to reimplement a paper. Using matconvnet is really wonderful. However, I have run 5000 epochs on a GTX 550 GPU, about 1 million back-propagations, which cost me roughly 2 days. The authors of the paper ran 8*10^8 back-propagations in roughly 3 days using a GTX 770 GPU and cuda-convnet2. Is there anything wrong? I think maybe there is something wrong with my code, but I would still like to know how the speed compares with cuda-convnet2 or Caffe. Thanks!

Mex compilation of vl_nnconv.cu failed

When I tried to compile the toolbox, all was well until I got the following error (MATLAB R2013a, GeForce GT 610, Ubuntu 12.04, CUDA 6.5):

== Output from make ==
echo matlab/mex/vl_nnconv.mexa64
matlab/mex/vl_nnconv.mexa64
MW_NVCC_PATH='/usr/local/cuda-6.5/bin/nvcc' /usr/local/MATLAB/R2013a/bin/mex
-output "matlab/mex/vl_nnconv.mexa64"
"matlab/src/vl_nnconv.cu" matlab/src/bits/im2col.o matlab/src/bits/pooling.o matlab/src/bits/normalize.o matlab/src/bits/subsample.o matlab/src/bits/im2col_gpu.o matlab/src/bits/pooling_gpu.o matlab/src/bits/normalize_gpu.o matlab/src/bits/subsample_gpu.o
-DENABLE_GPU -f matlab/src/config/mex_CUDA_glnxa64.sh -largeArrayDims -lmwblas -L/usr/local/cuda-6.5/lib64 -lcublas -lcudart
2> >( sed 's/^(.)(([0-9][0-9])): ([ew].*)/\1:\2: \3/g' >&2 )
gcc-4.4: No such file or directory

mex: compile of ' "matlab/src/vl_nnconv.cu"' failed.

make: *** [matlab/mex/vl_nnconv.mexa64] Error 1
== End ==

Makefile options:
ENABLE_GPU=1
ENABLE_IMREADJPEG=1
DEBUG=1
ARCH=glnxa64
CUDAROOT=/usr/local/cuda-6.5
MATLABROOT=/usr/local/MATLAB/R2013a

Why is there a reference to gcc-4.4 even though gcc -v shows my system version as 4.8.2? Also, during compilation, mex displayed the following warning (not sure how relevant it is):
Warning: You are using gcc version "4.8.2". The version
currently supported with MEX is "4.4.x".
For a list of currently supported compilers see:
http://www.mathworks.com/support/compilers/current_release/

Euclidean Loss

Hi,
I was wondering what changes I would need to make to incorporate a Euclidean loss function in place of the standard log loss function.
Would simply passing the outputs of the softmax layer and calculating the difference suffice?
If I use this as the final layer, do I need to compute the derivative?
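A minimal sketch of such a layer, following the usual forward/backward calling convention (the function name vl_nneuclideanloss is hypothetical; X is the prediction and T the target):

function Y = vl_nneuclideanloss(X, T, dzdy)
% forward: half the summed squared error; backward: its derivative w.r.t. X
d = X - T ;
if nargin <= 2
  Y = 0.5 * sum(d(:) .^ 2) ;
else
  Y = dzdy * d ;
end

No softmax is required in front of it, and the derivative needed for back-propagation is just (X - T) scaled by the incoming dzdy, as above.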

mnist code example question

What does this mean?

imdb.images.set = [ones(1,numel(y1)) 3*ones(1,numel(y2))] ;
imdb.meta.sets = {'train', 'val', 'test'} ;
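For reference, judging by the cnn_mnist example, each image gets a set id that indexes into imdb.meta.sets:

imdb.images.set = [ones(1,numel(y1)) 3*ones(1,numel(y2))] ;
% 1 tags the y1 images as 'train'; 3 tags the y2 images as 'test'
% (indices into imdb.meta.sets = {'train', 'val', 'test'})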

Windows makefile for matconvnet

Hello
I am having trouble with the makefile for Windows. I am using the latest version of matconvnet, 1.0-beta7.

Thanks for the help

assertion failed in vl_simplenn.m

Hi,

After running cnn_mnist.m, I have a few net-epoch-n.mat models.

To predict one image on mnist, I follow the example at http://www.vlfeat.org/matconvnet/pretrained

When calling

net=load('./data/mnist-baseline/net-epoch-5.mat');
res=vl_simplenn(net.net, im); % im is just one mnist image whose size is 28*28

I got the error:

Error using vl_nnsoftmaxloss (line 42) Assertion failed.
Error in vl_simplenn (line 164)
res(i+1).x = vl_nnsoftmaxloss(res(i).x, l.class) ;
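A hedged workaround, mirroring the pretrained-models example: checkpoints produced by cnn_train end in a 'softmaxloss' layer, which asserts because it expects labels; swapping it for 'softmax' lets the network run on a bare image:

net = load('./data/mnist-baseline/net-epoch-5.mat') ;
net = net.net ;
net.layers{end}.type = 'softmax' ;             % was 'softmaxloss'
res = vl_simplenn(net, im) ;                   % im: single, 28x28
scores = squeeze(res(end).x) ;                 % class posteriors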

sprintf and fullfile

On Win 7 x64, MATLAB R2012a, I get the error
Warning: Escape sequence 'm' is not valid. See 'help sprintf' for valid
escape sequences.

In cnn_train at 94
In cnn_mnist at 75
In run at 57

modelPath = fullfile(opts.expDir, 'net-epoch-%d.mat') ;
because fullfile reverses the slashes, and then
filename= sprintf(modelPath, epoch);
doesn't work

Also, it would be more correct to use filesep:

opts.expDir = ['data' filesep 'exp' ];

(GPU) cannot run vl_test_nnlayers when GPU is enabled

I compiled the code with the GPU option enabled and there were no error messages; however, when I try to run vl_test_nnlayers, it gives me this:

vl_test_nnlayers(true)
test number 1
testing vl_nnsoftamxloss multiple images convolutional
test number 2
testing vl_nnloss multiple images convolutional
testing vl_nnloss multiple images
testing vl_nnloss
test number 3
testing vl_nnsoftmax
test number 4
testing vl_nnconv with fully connected identity filters
Invalid MEX-file
'~/matconvnet-master/matlab/mex/vl_nnconv.mexa64':
libcudart.so.6.5: cannot open shared object file: No such
file or directory

I am using CUDA 6.5 and gcc 4.4.

Thanks!

Error raised in vl_nnsoftmaxloss

Hi,

I launched a training run with my own images, but an error is raised in vl_nnsoftmaxloss.

The error is raised after this:
2.12 s (27.2 images/s) err 98.2 err5 93.9
training: epoch 01: processing batch 32 of 4403 ...
(sometimes it is raised earlier or later).

Here is the error:
Error in vl_nnsoftmaxloss (line 62)
t = Xmax + log(sum(ex,3)) - reshape(X(c_), [sz(1:2) 1 sz(4)]) ;

Error in vl_simplenn (line 163)
res(i+1).x = vl_nnsoftmaxloss(res(i).x, l.class) ;

Error in cnn_train (line 134)
res = vl_simplenn(net, im, one, res, ...

Error in cnn_test (line 80)
[net,info] = cnn_train(net, imdb, fn, opts.train, 'conserveMemory', true) ;

The error is raised in the indexing X(c_).

X is of size 5x5x912x64, hence X has 1459200 elements. c_ is of size 1600. When I monitored the elements of c_, the max is always near 1459200, but always under it, until the error is raised. At that point, max(c_) = 1460550. How is that possible?

Is there a way to avoid this case?

Thank you for your help,

Luc
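A hedged sanity check (my reading of the numbers above): X is 5x5x912x64, so numel(X) = 1459200, and any label outside 1..912 pushes the linear index c_ past the end of X. Verifying the labels before training would confirm this (labels stands for the vector passed as l.class):

assert(all(labels(:) >= 1 & labels(:) <= 912)) % every label must index a valid class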
