
Comments (5)

jotaf98 commented on May 8, 2024

Hi, I tried your code (using MATLAB 2017a) and it runs fine; but I guess it would hit that problem whenever a <= 0.

Although the error message is not great, this is more of a numerical issue -- the way to avoid it is to ensure that a > 0 at all times. An easy way to achieve this is to replace a with exp(a).

An added bonus is that a and b are then on the same (exponential) scale. But you can try other strategies to ensure a positive a.


damdaepark commented on May 8, 2024

Hi, thanks for your quick reply.

I found that the problem depends only on whether I comment out the net.useGpu(1) line or not.
I tried the exponentiation technique you suggested, as below:

clear all; clc; close all;
x_true = [-3:7e-5:3]'; % try x_true = [1:7e-5:3];
y_true = 3*x_true.^2;
x = Input('name', 'x');
y = Input('name', 'y');
a = Param('value', 1); a.diagnostics = true;
b = Param('value', 1); b.diagnostics = true;
y_hat = exp(a) * x.^exp(b);
loss = sum((y - y_hat).^2);

Layer.workspaceNames();
net = Net(loss);
net.useGpu(1); % try commenting out this line
stats = Stats({'loss'});
solver = solvers.Adam();
solver.learningRate = 1e-1;

for i = 1:1000
  net.eval({'x', x_true, 'y', y_true});
  solver.step(net);
  plotDiagnostics(net, 100);
  stats.update(net);
  stats.print();
  stats.push('train');
end

But I still got the following error:

Error using gpuArray/bsxfun
POWER: needs to return a complex result, but this is not supported for real input X and Y on the GPU. Use
POWER(COMPLEX(X),COMPLEX(Y,0)) instead.

If you get the same error as in my case, try commenting out the net.useGpu(1) line,
or constrain x_true to positive values by using x_true = [1:7e-5:3].
Both workarounds worked for me.

I'm currently using MATLAB R2018a with CUDA 9.2.


jotaf98 commented on May 8, 2024

Thanks, I was able to reproduce it. I just pushed an overload for complex() so that you can apply MATLAB's suggested solution, and it works, e.g.:

y_hat = complex(exp(a) * x) .^ complex(exp(b));

Of course, I don't advise actually using complex numbers in this sort of problem; ensuring that the power cannot produce them is the best fix :)


damdaepark commented on May 8, 2024

Did you update AutoNN recently? I tried it but just got this message:

Error using complex
Input A must be numeric and full.
Error in Untitled (line 8)
y_hat = complex(exp(a) * x) .^ complex(exp(b));

I also found that there is no complex method for the Layer class:

Methods for class Layer:

Layer                 displayCustom         ipermute              plotPDF               sqrt                  vl_nnmaxout           
abs                   double                ldivide               plus                  squeeze               vl_nnnoffset          
accumarray            end                   le                    power                 subsref               vl_nnnormalize        
acos                  eq                    log                   rand                  sum                   vl_nnnormalizelp      
all                   evalOutputSize        lt                    randi                 tan                   vl_nnpdist            
and                   exp                   max                   randn                 times                 vl_nnpool             
any                   eye                   mean                  randperm              transpose             vl_nnrelu             
asin                  find                  min                   rdivide               uminus                vl_nnsigmoid          
atan                  flip                  minus                 repelem               uplus                 vl_nnsoftmax          
atan2                 fliplr                mldivide              replace               vertcat               vl_nnsoftmaxloss      
bsxfun                flipud                mrdivide              repmat                vl_nnaffinegrid       vl_nnspnorm           
cat                   gather                mtimes                reshape               vl_nnbilinearsampler  vl_nnwsum             
circshift             ge                    nan                   rot90                 vl_nnbnorm            xor                   
colon                 getReceptiveFields    ne                    sequentialNames       vl_nnconv             zeros                 
copy                  gpuArray              nnz                   shiftdim              vl_nnconvt            
cos                   gt                    not                   sin                   vl_nncrop             
ctranspose            horzcat               ones                  single                vl_nndropout          
deepCopy              inf                   or                    size                  vl_nnloss             
display               inv                   permute               sort                  vl_nnlstm             

Static methods:

create                fromDagNN             setDiagnostics        
fromCompiledNet       fromFunction          workspaceNames 

By the way, I finally found what was wrong.
I changed bsxfun_der.m from

  elseif isequal(op, @power)
    da = dy .* a .^ (b - 1) .* b ;
    if nargout > 2
      % prevents error if log(a) becomes complex, but is not needed anyway
      % because b is constant
      db = dy .* (a .^ b) .* log(a) ;
    end

to

  elseif isequal(op, @power)
    da = dy .* a .^ (b - 1) .* b ;
    if nargout > 2
      % prevents error if log(a) becomes complex, but is not needed anyway
      % because b is constant
      db = dy .* (a .^ b) .* log(complex(a)) ; % here!
    end

And it worked.

The problem was taking the log of negative numbers; log of a negative number seems to work only on the CPU (where it returns a complex result) and not on the GPU.
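
For reference, a minimal sketch of that CPU/GPU difference (assuming a CUDA-capable GPU and the Parallel Computing Toolbox; the exact error text may differ between releases):

log(-2)                     % CPU: returns 0.6931 + 3.1416i (complex results are allowed)
log(gpuArray(-2))           % GPU: errors, since a real gpuArray cannot return a complex result
log(complex(gpuArray(-2)))  % GPU: works once the input is explicitly complex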


jotaf98 commented on May 8, 2024

Yes, I meant that I pushed an update that adds the complex() overload. To get it you'd need to pull the latest version of the repo.

Changing bsxfun_der.m is a nice solution though. Initially I hesitated to do it because it slows down CPU mode, but in practice we mostly use the GPU and the CPU mainly for debugging, so I see the value in it.
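
One way to keep the CPU path fast would be to promote to complex only when it is actually needed. A hedged sketch of that variant of the bsxfun_der.m change (just an illustration, not what is committed in the repo):

  elseif isequal(op, @power)
    da = dy .* a .^ (b - 1) .* b ;
    if nargout > 2
      % promote to complex only when a has non-positive entries, so the common
      % all-positive case keeps the original real-valued (faster) log
      if any(a(:) <= 0)
        db = dy .* (a .^ b) .* log(complex(a)) ;
      else
        db = dy .* (a .^ b) .* log(a) ;
      end
    end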

