medal's Introduction

-------------------------------------------------------------------------------
Matlab Environment for Deep Architecture Learning (MEDAL) - version 0.1
-------------------------------------------------------------------------------

   o   o
  / \ / \ EDAL
 o   o   o

Model Objects:
	mlnn.m        -- Multi-layer neural network
	mlcnn.m       -- Multi-layer convolutional neural network
	rbm.m         -- Restricted Boltzmann machine (RBM)
	mcrbm.m       -- Mean-covariance (3-way Factored) RBM
	drbm.m        -- Dynamic/conditional RBM 
	dbn.m         -- Deep Belief Network 
	crbm.m        -- Convolutional RBM
	ae.m          -- Shallow autoencoder 
	dae.m         -- Deep autoencoder 
	
-------------------------------------------------------------------------------
To begin, type:

>> startLearning

in the medal directory.

To get an idea of how the model objects work, check out the demo script:

>> deepLearningExamples('all')

These examples are by no means optimized; they are just for getting familiar
with the code. If you have any questions or bug reports, send them my way:
 
[email protected]
-------------------------------------------------------------------------------
References:

*Neural Networks/Backpropagation:
 Rumelhart, D. et al. "Learning representations by back-propagating errors".
 Nature 323 (6088): 533–536. 1986.

*Restricted Boltzmann Machines/Contrastive Divergence:
 Hinton, G. E. "Training Products of Experts by Minimizing Contrastive
 Divergence." Neural Computation 14 (8): 1771–1800. 2002.

*Deep Belief Networks:
 Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H. "Greedy Layer-Wise
 Training of Deep Networks." NIPS 2006.

*Deep & Denoising Autoencoders:
 Hinton, G. E. and Salakhutdinov, R. R. "Reducing the dimensionality of data
 with neural networks." Science, Vol. 313, no. 5786, pp. 504–507, 28 July 2006.

 Vincent, P. et al. "Stacked denoising autoencoders: Learning useful
 representations in a deep network with a local denoising criterion." The
 Journal of Machine Learning Research 11:3371–3408. 2010.

*Mean-Covariance/3-way Factored RBMs:
 Ranzato, M. et al. "Modeling Pixel Means and Covariances Using
 Factorized Third-Order Boltzmann Machines." CVPR 2010.

*Dynamic/Conditional RBMs:
 Taylor, G. et al. "Modeling Human Motion Using Binary Latent
 Variables." NIPS 2006.

*Convolutional MLNNs:
 LeCun, Y. et al. "Gradient-based learning applied to document recognition."
 Proceedings of the IEEE, 86(11), 2278–2324. 1998.

 Krizhevsky, A. et al. "ImageNet Classification with Deep Convolutional Neural
 Networks." NIPS 2012.

*Convolutional RBMs:
 Lee, H. et al. "Convolutional deep belief networks for scalable unsupervised
 learning of hierarchical representations." ICML 2009.

*Rectified Linear Units:
 Nair, V. and Hinton, G. E. "Rectified Linear Units Improve Restricted
 Boltzmann Machines." ICML 2010.

 Glorot, X., Bordes, A., and Bengio, Y. "Deep sparse rectifier neural
 networks." AISTATS 2011.

*Dropout Regularization:
 Hinton, G. E. et al. "Improving neural networks by preventing co-adaptation
 of feature detectors." Technical Report, Univ. of Toronto, 2012.
 
*General:
 Hinton, G. E. "A practical guide to training restricted Boltzmann machines."
 Technical Report, Univ. of Toronto, 2010.
-------------------------------------------------------------------------------


medal's Issues

Unexpected output of mlcnn

Why do the values of the property "mlcnn.netOut" not match the values of
"testLabels" after executing demoMLCNN.m? The elements of mlcnn.netOut are
between 0 and 1, but not exactly 0 or 1.
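
The output layer emits class probabilities rather than hard 0/1 values, so the
two should be compared by argmax, not by equality. A minimal sketch, where net
stands for the trained mlcnn object and, as an assumption, netOut holds one
column of probabilities per test example with testLabels one-hot coded in the
same orientation (transpose either if your dimensions differ):

% Hedged sketch: turn soft outputs into hard class predictions.
% The [nClasses x nExamples] orientation is an assumption; check your data.
[~, predicted] = max(net.netOut, [], 1);  % most probable class per example
[~, actual]    = max(testLabels, [], 1);  % index of the 1 in each one-hot column
fprintf('classification accuracy: %.2f%%\n', 100*mean(predicted == actual));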

Training a convolutional DBN

Hi, I was curious whether dbn can call crbm instead of rbm to build a
convolutional DBN, as in Lee's 2009 ICML paper.
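
One workaround, absent direct dbn support, is to do the greedy layer-wise
stacking yourself. A hedged sketch, assuming crbm follows the same
arch-struct/train pattern as rbm; the method for propagating data up to the
next layer is hypothetical, so check crbm.m for the real interface:

% Hypothetical greedy stacking of two crbm layers (Lee et al., ICML 2009).
% Constructor fields and the upward-propagation method below are assumptions.
c1 = crbm(archLayer1);                 % first convolutional RBM
c1 = c1.train(trainImages);
pooled1 = c1.propDataUp(trainImages);  % HYPOTHETICAL: pooled hidden activations
c2 = crbm(archLayer2);                 % second layer trains on layer-1 features
c2 = c2.train(pooled1);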

How to change the architecture of the RBM

The RBM gets initialized with a 2 x 25 weight matrix, and the bias matrices
are initialized similarly. Could you let me know where they get initialized so
that I can change the dimensions for my dataset?
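
In the demo scripts the dimensions come from the arch struct passed to the
constructor rather than from values hard-coded in rbm.m, so changing the
architecture means changing that struct. A minimal sketch, assuming the demos'
convention of a size field holding [nVisible nHidden] (the field name is an
assumption; check rbm.m for the exact spelling):

% Hedged sketch: the weight and bias matrices are sized from arch.
nVisible = 784;                                % e.g. one unit per MNIST pixel
nHidden  = 500;                                % pick to suit your dataset
arch = struct('size', [nVisible nHidden], ...  % 'size' field is an assumption
              'lRate', 0.1);
r = rbm(arch);                                 % W is sized from arch, not edited by hand
r = r.train(myData);                           % assumed: rows of myData are observations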

mlcnn does not train

"mlcnn.m" has a method named: "updateParams".
Unfortunately in this version, it does not update the output layer's weights, so the whole network is like a car with a working engine but without a mechanism to transfer engine's power to the tires.
I have a suggestion to make it work, to do so I have changed the body of "updateParams" method to:

function self = updateParams(self)
%net = updateParams()
%--------------------------------------------------------------------------
%Update network parameters based on states of network gradient; perform
%regularization such as weight decay and weight rescaling
%--------------------------------------------------------------------------
wPenalty = 0;
for lL = 2:self.nLayers % Changed from: for lL = 2:self.nLayers-1 by Masih Azad
    switch self.layers{lL}.type
    % CURRENTLY, ONLY UPDATE FILTERS AND FM BIASES
    % PERHAPS, IN THE FUTURE, WE'LL BE FANCY, AND DO FANCY UPDATES
    case {'conv'} % Changed from: case {'conv','output'} by Masih Azad
        lRate = self.layers{lL}.lRate;
        for jM = 1:self.layers{lL}.nFM
            % UPDATE FEATURE BIASES
            self.layers{lL}.b(jM) = self.layers{lL}.b(jM) - ...
                                    lRate*self.layers{lL}.db(jM);

            % UPDATE FILTERS
            for iM = 1:self.layers{lL-1}.nFM
                if self.wPenalty > 0 % L2 REGULARIZATION
                    wPenalty = self.layers{lL}.filter(:,:,iM,jM)*self.wPenalty;
                elseif self.wPenalty < 0 % L1 REGULARIZATION (SUB-GRADIENTS)
                    wPenalty = sign(self.layers{lL}.filter(:,:,iM,jM))*abs(self.wPenalty);
                end
                self.layers{lL}.filter(:,:,iM,jM) = ...
                    self.layers{lL}.filter(:,:,iM,jM) - ...
                    lRate*(self.layers{lL}.dFilter(:,:,iM,jM)+wPenalty);
            end
        end
    %% Added by Masih Azad: also update the output layer's weights and biases
    case {'output'}
        lRate = self.layers{lL}.lRate;
        % UPDATE OUTPUT BIASES
        self.layers{lL}.b = self.layers{lL}.b - ...
                            lRate*self.layers{lL}.db;
        if self.wPenalty > 0 % L2 REGULARIZATION
            wPenalty = self.layers{lL}.W*self.wPenalty;
        elseif self.wPenalty < 0 % L1 REGULARIZATION (SUB-GRADIENTS)
            wPenalty = sign(self.layers{lL}.W)*abs(self.wPenalty);
        end
        % UPDATE OUTPUT WEIGHTS
        self.layers{lL}.W = self.layers{lL}.W - lRate*(self.layers{lL}.dW+wPenalty);
    end
end
end

Is there a subtraction operation between deep learning layers?

Hi guys,
I'm trying to build a deep learning network that contains a subtraction
operation between two layers, and I can't find a way to do that, since MATLAB
only provides an addition operation (additionLayer). For clarification, the
equation below describes the step I need:
C = X - AvgPooling(X)
Any suggestions?
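
Since C = X - AvgPooling(X) is just X plus a negated pooled branch, one
workaround in MATLAB's Deep Learning Toolbox (R2021b or later; this is outside
MEDAL itself) is to negate the pooled path with a functionLayer and merge the
two paths with additionLayer. A minimal sketch with a hypothetical 28x28x1
input:

% Hedged sketch: implement C = X - AvgPooling(X) as X + (-AvgPooling(X)).
lgraph = layerGraph(imageInputLayer([28 28 1], 'Name','in'));  % input size is hypothetical
lgraph = addLayers(lgraph, averagePooling2dLayer(2, 'Stride',1, ...
                   'Padding','same', 'Name','avgpool'));       % 'same' keeps sizes equal
lgraph = addLayers(lgraph, functionLayer(@(X) -X, 'Name','negate'));
lgraph = addLayers(lgraph, additionLayer(2, 'Name','subtract'));
lgraph = connectLayers(lgraph, 'in',      'avgpool');
lgraph = connectLayers(lgraph, 'avgpool', 'negate');
lgraph = connectLayers(lgraph, 'in',      'subtract/in1');
lgraph = connectLayers(lgraph, 'negate',  'subtract/in2');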

error in demoGaussianRBM_Faces

When I run the code below, I get this error:

demoGaussianRBM_Faces

Here we train an RBM with Continuous inputs (Faces Dataset).
Undefined function 'notDefined' for input arguments of type 'char'.

Error in rbm (line 72)
if notDefined('arch')

Error in demoGaussianRBM_Faces (line 22)
r = rbm(arch)
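
"Undefined function 'notDefined'" usually just means that notDefined.m, a
utility used by rbm.m, is not on the MATLAB path; the startLearning step from
the README is what sets the path up. A minimal sketch of the manual
equivalent, assuming notDefined.m ships somewhere under your medal checkout:

% Hedged sketch: put every medal subdirectory on the path before the demos.
cd('/path/to/medal');    % hypothetical location of your medal clone
addpath(genpath(pwd));   % recursively adds medal's utilities, incl. notDefined.m
which notDefined         % sanity check: should print a path, not 'not found'
demoGaussianRBM_Faces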
