
practical-cnn's People

Contributors

vedaldi


practical-cnn's Issues

How to initialize the dzdy in practice?

For example, in exercise 3, why do you compute dzdx3 as follows?
dzdx3 = ...
- single(res.x3 < 1 & pos) / sum(pos(:)) + ...
+ single(res.x3 > 0 & neg) / sum(neg(:)) ;
In other words, how does one obtain the initial projection tensor p in practice?
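For context, when a network ends in a scalar loss z, backpropagation is seeded with dz/dz = 1, so the tensor fed in at the top is just the gradient of the loss itself; the dzdx3 expression above is the gradient of an averaged hinge-style loss z = mean(max(0, 1 - x[pos])) + mean(max(0, x[neg])). A pure-Python numerical check of that gradient (an illustrative sketch, not the practical's MATLAB code):

```python
# Gradient of z = mean over pos of max(0, 1 - x) + mean over neg of max(0, x):
# d z / d x = -(x < 1 & pos)/Npos + (x > 0 & neg)/Nneg, as in the dzdx3 line.
def loss(x, pos, neg):
    npos, nneg = sum(pos), sum(neg)
    z = sum(max(0.0, 1 - xi) for xi, p in zip(x, pos) if p) / npos
    z += sum(max(0.0, xi) for xi, n in zip(x, neg) if n) / nneg
    return z

def grad(x, pos, neg):
    npos, nneg = sum(pos), sum(neg)
    return [(-1.0 / npos if (p and xi < 1) else 0.0) +
            (1.0 / nneg if (n and xi > 0) else 0.0)
            for xi, p, n in zip(x, pos, neg)]

x = [0.5, 2.0, -0.3, 0.7]          # toy scores
pos = [1, 1, 0, 0]                 # positive-example mask
neg = [0, 0, 1, 1]                 # negative-example mask
eta = 1e-6
g = grad(x, pos, neg)
for i in range(len(x)):            # finite-difference check, one entry at a time
    xp = list(x)
    xp[i] += eta
    num = (loss(xp, pos, neg) - loss(x, pos, neg)) / eta
    assert abs(num - g[i]) < 1e-4
```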

Error with vl_simplenn_tidy

Hello everyone,

While trying to run initializeCharacterCNN, I'm getting this error:

Undefined function 'vl_simplenn_tidy' for input arguments of type 'struct'

Can anyone please help me?

Thanks.

Failed in GPU mode

When I use setup('useGpu',true) to compile, MATLAB R2014a displayed:

vl_compilenn: CUDA: MEX config file: '/usr/local/MATLAB/R2014a/toolbox/practical-cnn-2015a/matconvnet/matlab/src/config/mex_CUDA_glnxa64.xml'
Building with 'g++'.
MEX completed successfully.
Building with 'g++'.
MEX completed successfully.
Building with 'g++'.
MEX completed successfully.
Building with 'g++'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Building with 'nvcc'.
MEX completed successfully.
Warning: GPU support does not seem to be compiled in MatConvNet. Trying to compile it now.

When I use GPU mode in exercise4.m, it displayed:

Error using gpuArray
An unexpected error occurred during CUDA execution. The CUDA error was:
Unknown error code

Error in exercise4 (line 50)
imdb.images.data = gpuArray(imdb.images.data) ;

I really don't know what's wrong with it. When I use matconvnet-master from GitHub, GPU mode works fine.

Can you help me with the problem? Thank you!

How to run './extras/download.sh' and use vlfeat

I downloaded the files, but I've had trouble getting VLFeat to work. In particular, the instructions suggest:

0. Set the current directory to the practical base directory.
1. From Bash:
   1. Run `./extras/download.sh`. This will download the
      `imagenet-vgg-verydeep-16.mat` model as well as a binary
      copy of the VLFeat library and a copy of MatConvNet.
   2. Run `./extra/genfonts.sh`. This will download the Google Fonts
      and extract them as PNG files.
   3. Run `./extra/genstring.sh`. This will create
      `data/sentence-lato.png`.
2. From MATLAB run `addpath extra ; packFonts ;`. This will create
   `data/charsdb.mat`.
3. Test the practical: from MATLAB run all the exercises in order.

but it doesn't quite work, because the extra directory is nowhere to be found. What's going on?

In particular, I am trying to run:

% Visualize the output y
figure(2) ; clf ; vl_imarraysc(y) ; colormap gray ;

from the tutorial, but MATLAB throws errors.

Thanks!

Question about derivative - part 2

Hi,
in part 2 there is a calculation of dzdx_empirical.
Can someone explain the calculation? Why is there a sum, and why is there a division by eta rather than by eta*ex (as I expected)?

thanks!

% Check the derivative numerically
ex = randn(size(x), 'single') ;
eta = 0.0001 ;
xp = x + eta * ex ;
yp = vl_nnconv(xp, w, []) ;

dzdx_empirical = sum(dzdy(:) .* (yp(:) - y(:)) / eta) ;
dzdx_computed = sum(dzdx(:) .* ex(:)) ;

fprintf(...
'der: empirical: %f, computed: %f, error: %.2f %%\n', ...
dzdx_empirical, dzdx_computed, ...
abs(1 - dzdx_empirical/dzdx_computed)*100) ;
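The sum and the single division by eta come from the directional-derivative identity: (yp - y)/eta approximates the Jacobian of the layer applied to the direction ex, so sum(dzdy .* (yp - y)/eta) approximates the inner product of dzdx with ex. The direction ex enters through the perturbation xp itself, which is why there is no extra division by it. A pure-Python sketch of the same check with a hypothetical toy layer y = x.^2 (not the practical's convolution):

```python
import random
random.seed(0)

# Toy layer y_i = x_i^2, scalar z = <dzdy, y>; analytically dzdx_i = 2*x_i*dzdy_i.
def f(x):
    return [xi * xi for xi in x]

n = 5
x = [random.uniform(-1, 1) for _ in range(n)]
dzdy = [random.uniform(-1, 1) for _ in range(n)]
dzdx = [2 * xi * gi for xi, gi in zip(x, dzdy)]

ex = [random.uniform(-1, 1) for _ in range(n)]   # random direction, as in the practical
eta = 1e-6
xp = [xi + eta * ei for xi, ei in zip(x, ex)]
y, yp = f(x), f(xp)

# Directional derivative of z along ex, computed two ways:
dzdx_empirical = sum(g * (ypi - yi) / eta for g, ypi, yi in zip(dzdy, yp, y))
dzdx_computed = sum(d * e for d, e in zip(dzdx, ex))
assert abs(dzdx_empirical - dzdx_computed) < 1e-4
```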

exercise5, when repeated, gives different results each time: on Matlab R2016a, Ubuntu 14.04, gcc 4.7

Each time exercise5 is run, the results are different. Figure headings are

1st time: bell pepper (946), score 0.848
2nd time: bell pepper (946), score 0.303
3rd time: balloon (418), score 0.647
and varying answers after that.

Setting the vl_simplenn option 'disableDropout' to 'true' makes no difference; the results still vary. This is on Ubuntu 14.04, MATLAB R2016a, with gcc 4.7. There are no compiler warnings when setup is run (with default options).

When exercise5 is run on a Mac PowerBook with MATLAB R2015a, it is stable: the results are the same every time.
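For what it's worth, run-to-run variation like this is the signature of a stochastic layer (dropout) still being sampled at evaluation time; in later MatConvNet versions the relevant vl_simplenn option is 'mode','test' rather than 'disableDropout', so it is worth checking the help text of the version shipped with the practical. A pure-Python sketch of why train-mode dropout is nondeterministic while test mode is not:

```python
import random

# Inverted dropout: sample a random mask at train time, identity at test time.
def dropout(x, p_keep, mode, rng):
    if mode == 'train':
        return [xi / p_keep if rng.random() < p_keep else 0.0 for xi in x]
    return list(x)  # test mode: deterministic pass-through

x = [1.0, 2.0, 3.0, 4.0]
rng = random.Random(0)
a = dropout(x, 0.5, 'train', rng)   # train-mode calls generally differ...
b = dropout(x, 0.5, 'train', rng)
t1 = dropout(x, 0.5, 'test', rng)   # ...while test-mode calls never do
t2 = dropout(x, 0.5, 'test', rng)
assert t1 == t2 == x
```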

Question about ex3

Your code:
z = y .* (res.x3 - 1);
My code:
idx_pos = (pos==1);
idx_neg = (neg==1);
z = max(0,1-res.x3.*idx_pos)+max(0,res.x3.*idx_neg);
My Question: Is res.x3-1 suitable for computing the loss of negative examples?

Exercise 4 - using validation data in train

@vedaldi Thank you for the detailed tutorial, I learned a lot by playing with it.
In exercise 4 it seems that imageMean is calculated on all the data (train + validation). In a realistic scenario, the validation set only becomes available after the training phase.
What do you think about the following change:
exercise4.m on line 45:
imageMean = mean(imdb.images.data(:)) ;
could be replaced by (indexing the fourth dimension explicitly, since imdb.images.set selects images along the last dimension of the 4-D array):
imageMean = mean(reshape(imdb.images.data(:,:,:,imdb.images.set==1), [], 1)) ;
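The underlying point — estimate normalization statistics on the training split only, then apply them to every split — can be sketched in pure Python (toy scalar pixels; the real imdb data is a 4-D array):

```python
# split uses MatConvNet's imdb convention: 1 = train, 2 = validation.
data = [1.0, 2.0, 3.0, 10.0]   # hypothetical flattened pixel values
split = [1, 1, 1, 2]

train_vals = [d for d, s in zip(data, split) if s == 1]
image_mean = sum(train_vals) / len(train_vals)   # 2.0; the all-data mean would be 4.0
data = [d - image_mean for d in data]            # the statistic is applied to all splits
assert image_mean == 2.0
```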

mistake in the BP formulation

In the BP formulation (1), x(L) should be x(L-1), and the other formulations have the same problem.
The corresponding formulation in the MatConvNet manual is correct.
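For reference, assuming the practical's convention x_l = f_l(x_{l-1}; w_l) with z = x_L the scalar loss, the backward recursion with the corrected index follows from the chain rule:

```latex
% Backpropagation recursion, seeded with dz/dx_L = dz/dz = 1:
\[
  \frac{dz}{d\mathbf{x}_{l-1}}
  = \left( \frac{\partial f_l(\mathbf{x}_{l-1};\mathbf{w}_l)}{\partial \mathbf{x}_{l-1}} \right)^{\!\top}
    \frac{dz}{d\mathbf{x}_{l}},
  \qquad l = L, L-1, \ldots, 1 .
\]
```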

error rate.

Hello,

I was implementing exercise 3.4 and found that removing the smoothing step reduces the error significantly. This is counter-intuitive to me: since blobs are circular in shape, I expected smoothing to give a lower error. Any help is appreciated.

Thanks,
Aakash

Error in classification of other letters

Hello.
I'm using your library for Hebrew letter classification. After updating the imdb variable, today I got an "Index exceeds matrix dimensions." exception in vl_nnloss.

The new image/labels db is built identically to the first one (classifying chars).

Appreciate your help,

-------------------------------------------- CODE ---------------------------------------------------------------------
     layer|      0|      1|      2|      3|      4|      5|      6|      7|      8|
      type|  input|   conv|  mpool|   conv|  mpool|   conv|   relu|   conv|softmxl|
----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
      name|    n/a|       |       |       |       |       |       |       |       |
   support|    n/a|      5|      2|      5|      2|      4|      1|      2|      1|
  filt dim|    n/a|      1|    n/a|     20|    n/a|     50|    n/a|    500|    n/a|
 num filts|    n/a|     20|    n/a|     50|    n/a|    500|    n/a|     26|    n/a|
    stride|    n/a|      1|      2|      1|      2|      1|      1|      1|      1|
       pad|    n/a|      0|      0|      0|      0|      0|      0|      0|      0|
----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
   rf size|    n/a|      5|      6|     14|     16|     28|     28|     32|     32|
 rf offset|    n/a|      3|    3.5|    7.5|    8.5|   14.5|   14.5|   16.5|   16.5|
 rf stride|    n/a|      1|      2|      2|      4|      4|      4|      4|      4|
 data size|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|NaNxNaN|
data depth|    NaN|     20|     20|     50|     50|    500|    500|     26|      1|
  data num|    100|    100|    100|    100|    100|    100|    100|    100|      1|
----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
  data mem|    NaN|    NaN|    NaN|    NaN|    NaN|    NaN|    NaN|    NaN|    NaN|
 param mem|    n/a|    2KB|     0B|   98KB|     0B|    2MB|     0B|  203KB|     0B|

parameter memory: 2MB (4.8e+05 parameters)
data memory: NaN (for batch size 100)

train: epoch 01: 1/ 76: 132.9 Hz obj:3.2 top1err:0.96 top5err:0.84 [100/100]
train: epoch 01: 2/ 76: Index exceeds matrix dimensions.

Error in vl_nnloss (line 209)
t = Xmax + log(sum(ex,3)) - X(ci) ;

Error in vl_simplenn (line 293)
res(i+1).x = vl_nnloss(res(i).x, l.class) ;
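One plausible cause (an assumption, not a confirmed diagnosis): vl_nnloss indexes the score volume by the ground-truth class, so every label must be an integer in 1..num_classes — 26 here, per the architecture above. If the updated Hebrew imdb contains a label such as 0 or 27, indexing fails with exactly this error. A pure-Python sanity check over a hypothetical label vector:

```python
# Labels for vl_nnloss must be integers in 1..num_classes.
num_classes = 26
labels = [1, 5, 26, 27]   # hypothetical batch labels; 27 is out of range

bad = [c for c in labels
       if c != int(c) or not (1 <= c <= num_classes)]
assert bad == [27]        # the offending label(s) to fix in imdb
```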

Some questions in CNN output formula

Hi! Very glad to be the first person to start an issue!

I am a new learner of CNNs, and recently I ran into some problems while reading the MatConvNet documentation and the VGG Convolutional Neural Networks Practical.

In Part 1.1 of the practical there is a question "Note that x is indexed by i+i′ and j+j′, but that there is no plus sign between k and k′. Why?" This is also my question.
(screenshot of the formula omitted)

(1) In that formula, why does x have four dimensions but f only three? When x, y and f are defined, it is clear that x and y are 3-dimensional and f is 4-dimensional. Is there a typo in this formula?
(2) Why are the first two indices of x i+i′ and j+j′? Does that mean that position (row-1, col-1) of x is not involved in the convolution? If so, I don't think that is right.
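For what it's worth, the standard formula the practical implements is y[i,j,k] = Σ_{i′,j′,k′} f[i′,j′,k′,k] · x[i+i′, j+j′, k′]: x and y are 3-D (the k′ on x is the input-channel index, and there is no plus sign between k and k′ because channels are summed out, not slid over), while f is 4-D because there is one 3-D filter per output channel k. And since i′ and j′ start at zero, the first element of x is involved. A naive pure-Python check of this indexing (toy sizes, zero-based):

```python
# y[i,j,k] = sum_{i',j',k'} f[i'][j'][k'][k] * x[i+i'][j+j'][k']  (zero-based)
H, W, Cin, Cout, F = 3, 3, 2, 2, 2
x = [[[float(i + j + k) for k in range(Cin)] for j in range(W)] for i in range(H)]
# f copies input channel kp to output channel k when k == kp (identity-like bank)
f = [[[[1.0 if k == kp else 0.0 for k in range(Cout)] for kp in range(Cin)]
      for _jp in range(F)] for _ip in range(F)]

Ho, Wo = H - F + 1, W - F + 1        # "valid" convolution output size
y = [[[sum(f[ip][jp][kp][k] * x[i + ip][j + jp][kp]
           for ip in range(F) for jp in range(F) for kp in range(Cin))
       for k in range(Cout)]
      for j in range(Wo)]
     for i in range(Ho)]

# x[0][0][:] does participate: y[0][0][0] sums the 2x2 corner of channel 0
assert y[0][0][0] == 4.0
```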

Segmentation using "softmaxloss/vl_nnloss"

Hello,

I am trying to segment medical images using "softmaxloss", which refers to "vl_nnloss". I can segment my images using this example. But to use the "softmaxloss" layer, I replace the last layer (forward and backward pass) as follows:

%remove this
net = addCustomLossLayer(net, @l2LossForward, @l2LossBackward) ;

%add this
net.layers{end+1} = struct(...
'name', 'loss', ...
'type', 'softmaxloss') ;

But my objective function outputs an error of 0. I couldn't figure out what else I should do to make this work with "softmaxloss"/vl_nnloss instead of regression. I would appreciate any kind of help.

(output image omitted)

Sincerely
Hosna

error with vl_nnconv

MATLAB shows this message: "Attempt to execute SCRIPT vl_nnconv as a function". And I find there is no code in vl_nnconv.m. Is there some trick I don't know?

Tasks

Hello!
I tried to change the getBatchWithJitter() function to improve the CNN in exercise4.m but, unfortunately, the results are not noticeably different.
Can you give me a hint, please?
