Comments (6)
Implicitly, I think so, yes. The README has a detailed description of how autodiff works. It's all about Variables and Functions.
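Conceptually, every op on a Variable goes through a Function object that remembers its inputs, and backward() walks that chain of Functions in reverse. A toy sketch of the idea (illustrative only; Var and Fn are made-up names for this comment, not scorch's actual classes):

// Toy reverse-mode autodiff: each Var remembers the Fn that produced it,
// and backward() replays that chain in reverse, accumulating gradients.
class Var(val data: Double, val producer: Option[Fn] = None) {
  var grad: Double = 0.0
  def +(that: Var): Var = Add(this, that).forward()
  def *(that: Var): Var = Mul(this, that).forward()
  def backward(g: Double = 1.0): Unit = {
    grad += g
    producer.foreach(_.backward(g))
  }
}

trait Fn {
  def forward(): Var
  def backward(g: Double): Unit
}

case class Add(a: Var, b: Var) extends Fn {
  def forward(): Var = new Var(a.data + b.data, Some(this))
  def backward(g: Double): Unit = { a.backward(g); b.backward(g) }
}

case class Mul(a: Var, b: Var) extends Fn {
  def forward(): Var = new Var(a.data * b.data, Some(this))
  def backward(g: Double): Unit = { a.backward(g * b.data); b.backward(g * a.data) }
}

val x = new Var(3.0)
val y = new Var(2.0)
val z = x * y + x // z = x*y + x
z.backward()      // dz/dx = y + 1 = 3.0, dz/dy = x = 3.0

The recorded chain of producers is effectively the tape.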
There are a couple of reasons for this:
- Not all functions on Tensors are implemented on Variables.
- The backward pass is usually faster when you write the derivative explicitly.
For some functions I have implemented both, because I find autodiff more elegant than writing the derivative explicitly. For example: Dropout (the underlying arithmetic is sketched after the link below).
https://github.com/botkop/scorch/blob/master/src/main/scala/scorch/nn/Dropout.scala
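The autodiff version just builds the mask multiplication out of Variable ops, so the derivative comes for free. The arithmetic it expresses is plain inverted dropout (a minimal standalone sketch on arrays, independent of scorch):

import scala.util.Random

// Inverted dropout: zero each element with probability p at train time,
// and scale survivors by 1/(1-p) so the expected activation is unchanged
// and evaluation needs no rescaling.
def dropout(x: Array[Double], p: Double, train: Boolean,
            rng: Random = new Random): Array[Double] =
  if (!train) x
  else x.map(v => if (rng.nextDouble() < p) 0.0 else v / (1 - p))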
I tried a simple linear regression, but the printed loss never changes. Do I have to implement the backward pass myself?
def lrTest = {
  val nf1 = 2
  val nf2 = 2
  val numClasses = 2
  val fc1 = Linear(1, 1) // an affine operation: y = Wx + b
  val f: Tensor => Tensor = { t: Tensor =>
    t * 3 + ns.array(1d)
  }
  val optimizer = Adam(Seq(fc1) flatMap (_.parameters), lr = 0.0001)
  (10 to 10) foreach { i =>
    (1 to 10000) foreach { _ =>
      val xx = ns.array(i)
      val ypred = fc1.forward(Variable(xx))
      val y = f(xx)
      val loss = Variable(ns.mean(ns.square(ypred.data - y)))
      println(s"loss: ${loss.data}")
      println(s"y: $y, ypred: ${ypred.data}")
      optimizer.zeroGrad()
      loss.backward()
      optimizer.step()
    }
  }
}
This is more or less what you want, I think. In your version the loss is computed with raw tensor ops (ns.mean(ns.square(...))) and then wrapped in a fresh Variable, which detaches it from the computation graph, so loss.backward() never reaches fc1's parameters. Build the loss out of Variable ops instead:
val fc1 = Linear(1, 1) // an affine operation: y = Wx + b
val f: Tensor => Tensor = { t: Tensor =>
  t * 3 + 1.0
}
val optimizer = SGD(fc1.parameters, lr = 0.01)
(1 to 10) foreach { i =>
  val x = ns.array(i)
  val vx = Variable(x)
  val y = Variable(f(x))
  val ypred = fc1(vx)
  val loss = scorch.mean((ypred - y) ** 2)
  println(s"loss: ${loss.data}, x: $x, y: ${y.data}, ypred: ${ypred.data}")
  optimizer.zeroGrad()
  loss.backward()
  optimizer.step()
}
or simply:
val fc1 = Linear(1, 1) // an affine operation: y = Wx + b
def f(v: Variable): Variable = v * 3 + 1
val optimizer = SGD(fc1.parameters, lr = 0.01)
(1 to 10) foreach { i =>
  val x = Variable(i)
  val y = f(x)
  val ypred = fc1(x)
  val loss = scorch.mean((ypred - y) ** 2)
  println(s"loss: ${loss.data}, x: ${x.data}, y: ${y.data}, ypred: ${ypred.data}")
  optimizer.zeroGrad()
  loss.backward()
  optimizer.step()
}
If you want to backpropagate through a function, then you must use the functions defined in scorch, or write your own. In the latter case, you will have to define the backward method yourself.
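In terms of the toy Var/Fn sketch in the first comment, a hand-written op supplies its own derivative in backward (again illustrative only, not scorch's exact Function signature):

// A custom op with an explicit, hand-written derivative: d/dx exp(x) = exp(x).
case class Exp(a: Var) extends Fn {
  private lazy val out = math.exp(a.data)
  def forward(): Var = new Var(out, Some(this))
  def backward(g: Double): Unit = a.backward(g * out)
}

val x = new Var(1.0)
val y = Exp(x).forward()
y.backward() // x.grad == math.exp(1.0)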
Thanks, it works!
Is autodiff in this project implemented with a tape-based mechanism?