Texture Object example? · gpu.js (closed)

cazala commented on May 13, 2024
Texture Object example?

Comments (8)

fuzzie360 commented on May 13, 2024

I've just fixed this in git hash 66edda8

Here is a quick example. You can try this out by clearing the browser cache and opening a console on the front page.

var gpu = new GPU();

var mat_mult = gpu.createKernel(function(A, B) {
    var sum = 0;
    for (var i=0; i<512; i++) {
        sum += A[this.thread.y][i] * B[i][this.thread.x];
    }
    return sum;
}).dimensions([512, 512]).outputToTexture(true);

var mat_scale = gpu.createKernel(function(A, b) {
    return A[this.thread.y][this.thread.x] * b;
}).dimensions([512, 512]);

var C = mat_mult(A, B);
console.log(C); // A texture object

var D = C.toArray();
console.log(D[0][0]); // Some value

var E = mat_scale(C, 2);
console.log(E[0][0]); // Some value * 2
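
The snippet assumes A and B are already defined as 512 x 512 matrices. Purely as an illustration, assuming plain nested JavaScript arrays as inputs, they could be built like this:

var A = [];
var B = [];
for (var y = 0; y < 512; y++) {
    A.push([]);
    B.push([]);
    for (var x = 0; x < 512; x++) {
        A[y].push(Math.random()); // arbitrary test values
        B[y].push(Math.random());
    }
}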

I will add this to Getting Started eventually.

Have a nice holiday everyone!

robertleeplummerjr commented on May 13, 2024

Interested as well. Like a mega kernel.

PicoCreator commented on May 13, 2024

An example of outputToTexture usage (not made directly by the gpu.js team) can be found here: http://staceytay.com/raytracer/ (it does not demonstrate two-kernel usage, though).

However, from what I understand, having two gpu.js instances will, behind the scenes, set up two separate canvas/WebGL contexts.

It would be interesting to test whether the browser actually optimizes the transfer between the two WebGL contexts. Until that is tested, I am highly doubtful it does, and any reduction of the round-trip penalty may even be browser specific.
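
To make the contrast concrete, here is a hedged sketch of the single-instance approach (the kernel bodies and the someMatrix input are hypothetical, following the API used in the example earlier in this thread). With two GPU instances, the texture produced on one context would presumably have to be read back (e.g. via toArray()) before the other context could use it; with one shared instance it can be passed straight through.

// Single shared instance: both kernels use the same canvas/WebGL context.
var gpu = new GPU();
var first = gpu.createKernel(function(A) {
    return A[this.thread.y][this.thread.x] * 2;
}).dimensions([512, 512]).outputToTexture(true);
var second = gpu.createKernel(function(A) {
    return A[this.thread.y][this.thread.x] + 1;
}).dimensions([512, 512]);

// someMatrix is a 512 x 512 array; the intermediate result stays on the GPU.
var result = second(first(someMatrix));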

fuzzie360 commented on May 13, 2024

@PicoCreator sorry, that is not what outputToTexture is supposed to do.

@cazala you are exactly right. I will add a benchmark and a demo soon.

fuzzie360 commented on May 13, 2024

Sigh, I just realised the texture object's toArray() function is broken.

robertleeplummerjr commented on May 13, 2024

@cazala I think we can actually get much more performance out of the GPU if, rather than splitting up each matrix operation in each layer, we combine those steps into a single linear operation of the neuron through each matrix. So rather than this:

const addMatrix = gpu.createKernel(function(a, b) {
  return a + b;
});
const multiplyMatrix = gpu.createKernel(function(a, b) {
  return a * b;
});
const reluMatrix = gpu.createKernel(function(a) {
  return Math.max(a, 0); // ReLU
});

const output = reluMatrix(multiplyMatrix(multiplyMatrix(addMatrix(matrix1, matrix2), matrix3), matrix4));

The kernel would look more like this:

gpu
  .addFunction(addNeuron)
  .addFunction(multiplyNeuron)
  .addFunction(reluNeuron);
const layer = gpu.createKernel(function(matrix1, matrix2, matrix3, matrix4) {
  return reluNeuron(
    multiplyNeuron(
      multiplyNeuron(
        addNeuron(
          matrix1[this.thread.x],
          matrix2[this.thread.x]
        ),
        matrix3[this.thread.x]
      ),
      matrix4[this.thread.x]
    )
  );
});

const output = layer(matrix1, matrix2, matrix3, matrix4);

This is kind of pseudo-script, but the idea is that the GPU would be responsible for the entire linear operation of the neuron through the layer, rather than handling a single matrix of neurons at a time. In this way we can achieve much deeper concurrency and better utilize the GPU, because everything takes place in a single program rather than a bunch of them.

So in short, rather than having to deal with retrieving the texture and the overhead of multiple programs, the problem simply goes away.

robertleeplummerjr commented on May 13, 2024

One point is that there will be duplicate data when multiplying matrices compared to the linear operation, but that shouldn't be a problem, because we are talking about a single layer rather than the entire net, and we could probably find a means of sharing that data between linear operations if it did become a problem. Also, this is the GPU, so we are much freer to be inefficient: the end result is still faster because we have the processing room to do what is needed.

robertleeplummerjr commented on May 13, 2024

Via 323a4e1#diff-51c307fdac3d7534e8c6e2b52ed0d10aR1 we now have: https://github.com/gpujs/gpu.js#combining-kernels
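
A minimal sketch of the combining-kernels usage described in the linked README section (using the newer setOutput API rather than the dimensions()/outputToTexture() calls used earlier in this thread; the input arrays here are made up for illustration):

const gpu = new GPU();

const add = gpu.createKernel(function(a, b) {
  return a[this.thread.x] + b[this.thread.x];
}).setOutput([20]);

const multiply = gpu.createKernel(function(a, b) {
  return a[this.thread.x] * b[this.thread.x];
}).setOutput([20]);

// combineKernels chains the kernels so the intermediate result stays on the
// GPU as a texture instead of making a round trip through CPU memory.
const superKernel = gpu.combineKernels(add, multiply, function(a, b, c) {
  return multiply(add(a, b), c);
});

const a = Array.from({ length: 20 }, (_, i) => i);
const b = Array.from({ length: 20 }, (_, i) => i * 2);
const c = Array.from({ length: 20 }, (_, i) => i + 1);
console.log(superKernel(a, b, c)); // element-wise (a + b) * c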
