Comments (11)

qcolombet commented on May 29, 2024

Hi @jsubag,

  1. Yes, but the two are not mutually exclusive. When we build for training, we end up with variables that are used both by save nodes and elsewhere.
  2. At the instruction level, the variables visible from the outside are represented with the WeightVar class. Those are the inputs/outputs of the program. To know which ones are outputs, you would need to check which ones are written into. That should be doable by walking through the users of each variable and checking whether it is ever used as an output (@out in the textual representation).

Now, for figuring out what should be copied around and when, we are reworking the graph IR so that variables are accessed exclusively through save nodes and new load nodes. We would potentially lower them into new IR instructions that would make it easy to determine when such copies should occur (basically, exactly where you see such instructions).
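As a rough sketch of the user-walking approach described above (these are toy stand-in types for illustration, not Glow's actual classes):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-ins for instruction-level IR concepts; illustrative only.
enum class OperandKind { In, Out, InOut };

struct Use {
    OperandKind kind; // how an instruction accesses the weight (@in/@out/@inout)
};

struct WeightVar {
    std::string name;
    std::vector<Use> uses; // every instruction operand referring to this weight
};

// A weight is a program output if any instruction ever writes into it.
bool isOutput(const WeightVar &w) {
    for (const Use &u : w.uses) {
        if (u.kind == OperandKind::Out || u.kind == OperandKind::InOut)
            return true;
    }
    return false;
}
```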

from glow.

jsubag commented on May 29, 2024

Hi @qcolombet ,

Thanks for clarifying - that was pretty much what I understood.
I think all of these corner cases will be neatly resolved with Load/Save instructions.
Also, this makes it easier for backends to optimize data transfer (e.g. copy output tensors as soon as they're ready instead of at the end of the forward pass).

Btw, will this change be applied to all variables or just public ones? and will constants be treated differently (weights/biases)?

nadavrot commented on May 29, 2024

@jsubag Jacob, thanks for working on this. Input/output annotation is one solution to the problem, but I am not sure it is the only one, or the best one. We need to make sure that any new attributes match the design of the rest of the system. For example, how do new attributes interact with public/private, mutability, etc.?

Let's talk about the problem. What use case is the current system not handling well?

jsubag commented on May 29, 2024

@nadavrot Thanks for pointing out WeightVar::MutabilityKind, I missed that in the code :)
In that case, are the following categories correct?

  1. Private+Constant: constants known during compilation e.g. weights/biases/initial hidden states
  2. Public+Constant: constant during inference (not known during compilation) e.g. input tensors
  3. Private+Mutable: internal tensors
  4. Public+Mutable: tensors mutable during inference and exposed to the application e.g. output tensors
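Assuming this mapping is right, the four combinations could be summarized as follows (plain illustrative enums, not Glow's actual types):

```cpp
#include <cassert>
#include <string>

// Illustrative classification of the four visibility/mutability
// combinations listed above; not Glow's real API.
enum class Visibility { Private, Public };
enum class Mutability { Constant, Mutable };

std::string classify(Visibility v, Mutability m) {
    if (v == Visibility::Private && m == Mutability::Constant)
        return "compile-time constant (weights/biases)";
    if (v == Visibility::Public && m == Mutability::Constant)
        return "inference-time constant (input tensors)";
    if (v == Visibility::Private && m == Mutability::Mutable)
        return "internal tensor";
    return "externally visible mutable tensor (output tensors)";
}
```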

This however doesn't cover the case where a tensor is used as both input and output, e.g. weights during training.
Unless that situation is illegal in Glow, and the resulting function creates separate tensors for input and output?
Can you clarify this point, please?

Thanks.

jfix71 commented on May 29, 2024

@jsubag We actually do not have anything that can be Public+Constant currently; Public implies Mutable. So both input and output tensors are considered to be Public+Mutable.

Perhaps it makes more sense to allow input tensors to be Public+Constant, though, as they are still constant during execution even if modified/accessed before or after. If we made such a change, then if a tensor was used as both input and output then it would be Public+Mutable, as it currently is.

jsubag commented on May 29, 2024

@jfix71 I think the thing I'm missing is a clear way to figure out if a tensor is considered an input and/or output.
The motivation is figuring out what should be copied to/from memory resources of the backend before and after the forward pass.
Even though the common cases are easy to identify by traversing the graph, it probably makes more sense to either have an explicit annotation on them or a shared utility function that all backends can use.

nadavrot commented on May 29, 2024

@jsubag @jfix71 I think the description of the problem in the last post is great. The next step would be to post a proposal for a design that solves it. @qcolombet can help in brainstorming ideas for a design. What do you think?

qcolombet commented on May 29, 2024

Hi @jsubag,

I am not sure I get the problem. Outputs should be easy to identify: they are used by Save nodes. Having a backend utility function sounds reasonable, though.
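Under that rule, such a utility could look roughly like this (hypothetical minimal types for illustration, not Glow's real graph API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical minimal graph types; Glow's real classes differ.
struct Variable { std::string name; };
struct SaveNode { const Variable *dest; }; // a Save writes its input into dest

// If outputs are exactly the Save destinations, collecting them is
// a single pass over the function's Save nodes.
std::vector<std::string> collectOutputs(const std::vector<SaveNode> &saves) {
    std::vector<std::string> outs;
    for (const SaveNode &s : saves)
        outs.push_back(s.dest->name);
    return outs;
}
```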

Do you see cases where we would need to explicitly tag a variable?

jsubag commented on May 29, 2024

Hi @qcolombet ,

  1. Is it fair to say, then, that all public tensors targeted by Save nodes are outputs and all other public tensors are inputs?
    This implies inputs and outputs are mutually exclusive; does that fit with how weights are used during training?

  2. Is there something propagated from Save nodes down to the instruction/variable level?
    I.e. if a backend wants to execute a Function by going over its instructions, how can it determine whether an instruction's destination variable is a Function output?

qcolombet commented on May 29, 2024

> Btw, will this change be applied to all variables or just public ones? and will constants be treated differently (weights/biases)?

All variables, public or not, constant or not.

nadavrot commented on May 29, 2024

@jsubag This issue is related to #1334
