
Comments (7)

peterjc123 commented on July 4, 2024

I inspected the model you posted; the size of its buffers is only 36 KB. So the only way to shrink the model further is to rewrite the GRU operation using tfl.UnidirectionalGRU or subgraph-related ops like tfl.While and tfl.Call. Implementing separated_rnn_gate_calc=False for GRU may also help.


Juelianqvq commented on July 4, 2024

I inspected the model you posted; the size of its buffers is only 36 KB. So the only way to shrink the model further is to rewrite the GRU operation using tfl.UnidirectionalGRU or subgraph-related ops like tfl.While and tfl.Call. Implementing separated_rnn_gate_calc=False for GRU may also help.

  • In contrast to LSTM, since nt depends on rt, it seems unlikely that GRU's weights can all be packed together, because you need to split rt out of the output (maybe I'm wrong). Will it take effect? I'm worried about how much performance optimizing this part will actually gain.
  • It seems that implementing tfl.UnidirectionalGRU as a custom op requires a separate compilation, which means the new fused op cannot run on other people's machines. That is a heavy burden for us, since we only use TFLite as an intermediate format.
  • Given the above concerns, what would you suggest, and what do you think is the fastest approach? Looking forward to your reply.


peterjc123 commented on July 4, 2024

In contrast to LSTM, since nt depends on rt, it seems unlikely that GRU's weights can all be packed together, because you need to split rt out of the output (maybe I'm wrong). Will it take effect? I'm worried about how much performance optimizing this part will actually gain.

# separated_rnn_gate_calc=False: fused gate computation
rzt_left = FC_i{r,z}(x)        # one FC over the input covers both r and z
rzt_right = FC_h{r,z}(h)       # one FC over the hidden state covers both gates
rzt_sum = rzt_left + rzt_right
rzt = sigmoid(rzt_sum)
rt, zt = split(rzt, 2)

# separated_rnn_gate_calc=True: one FC pair per gate
rt_left = FC_ir(x)
rt_right = FC_hr(h)
rt_sum = rt_left + rt_right
rt = sigmoid(rt_sum)
zt_left = FC_iz(x)
zt_right = FC_hz(h)
zt_sum = zt_left + zt_right
zt = sigmoid(zt_sum)

So it will be optimized from 8 ops (10 tensors) to 5 ops (8 tensors) for each time step.
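For intuition, here is a minimal PyTorch sketch of the fused variant, assuming the weight layout of torch.nn.GRU (rows ordered [r, z, n]); the function name and shapes are illustrative, not TinyNeuralNetwork's actual code:

import torch

def gru_cell_fused_rz(x, h, w_ih, w_hh, b_ih, b_hh):
    # One GRU step with the r/z gates fused into a single FC per side.
    # Assumed shapes: x (B, I), h (B, H), w_ih (3H, I), w_hh (3H, H), b_* (3H,)
    H = h.shape[-1]
    # Fused gates: one FC over x and one over h compute both r and z
    rz = torch.sigmoid(
        x @ w_ih[:2 * H].t() + b_ih[:2 * H]
        + h @ w_hh[:2 * H].t() + b_hh[:2 * H]
    )
    r, z = rz.chunk(2, dim=-1)
    # n cannot join the fusion: its hidden-side term must be scaled by r first
    n = torch.tanh(
        x @ w_ih[2 * H:].t() + b_ih[2 * H:]
        + r * (h @ w_hh[2 * H:].t() + b_hh[2 * H:])
    )
    return (1 - z) * n + z * h

Note that only r and z are fused; n still needs its own FC pair because of its dependency on r.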

It seems that implementing tfl.UnidirectionalGRU as a custom op requires a separate compilation, which means the new fused op cannot run on other people's machines. That is a heavy burden for us, since we only use TFLite as an intermediate format.

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/core/kernels/register.cc

Unfortunately, it is still a custom op as of now (May 30, 2024); it does not appear among the builtin registrations linked above.

Given the above concerns, what would you suggest, and what do you think is the fastest approach? Looking forward to your reply.

I don't know; it depends on your needs. If a target model of around 80-100 KB is acceptable, I guess separated_rnn_gate_calc=False should be enough. But if you want something smaller than that, then the subgraph-related approach is your only hope.
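Once the option lands, usage would presumably look like the sketch below; separated_rnn_gate_calc is the flag proposed in this thread, not a released parameter, and MyGRUModel is a hypothetical PyTorch model containing an nn.GRU:

import torch
from tinynn.converter import TFLiteConverter

model = MyGRUModel().eval()
dummy_input = torch.randn(1, 32, 8)  # hypothetical (batch, seq, feature) shape

converter = TFLiteConverter(
    model,
    dummy_input,
    tflite_path='out/gru_model.tflite',
    separated_rnn_gate_calc=False,  # proposed flag: fuse the r/z gate FCs
)
converter.convert()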


Juelianqvq commented on July 4, 2024

So it will be optimized from 8 ops (10 tensors) to 5 ops (8 tensors) for each time step.

Taking a glance at the current implementation of AtenGRUOperator, it definitely has room for further optimization (6 FCs down to 4 FCs, or even 2). And it sounds quite easy for me to implement the separated_rnn_gate_calc=False option, because I have implemented it before (though it failed to pass the bidirectional test).

I don't know; it depends on your needs. If a target model of around 80-100 KB is acceptable, I guess separated_rnn_gate_calc=False should be enough. But if you want something smaller than that, then the subgraph-related approach is your only hope.

I'm quite interested in translating such challenging builtin operators. Given my current level of understanding, may I ask what the procedure for supporting tfl.While would be? I don't have an answer myself yet.


peterjc123 commented on July 4, 2024

I'm quite interested in translating such challenging builtin operators. Given my current level of understanding, may I ask what the procedure for supporting tfl.While would be? I don't have an answer myself yet.

Well, it is doable and not hard at all if only GRU is involved, but a proper design will take some time.
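For a rough picture of the target structure, here is a TensorFlow sketch of a per-time-step GRU loop that the TFLite converter lowers to a single WHILE op with cond/body subgraphs. It only illustrates the desired output format, not how TinyNeuralNetwork would generate it from PyTorch; all sizes and names are made up:

import tensorflow as tf

T, B, I, H = 32, 1, 8, 16  # hypothetical seq len, batch, input size, hidden size

class GRUWhile(tf.Module):
    def __init__(self):
        super().__init__()
        self.cell = tf.keras.layers.GRUCell(H)

    @tf.function(input_signature=[tf.TensorSpec([T, B, I], tf.float32)])
    def __call__(self, x):
        h0 = tf.zeros([B, H])

        def cond(t, h):
            return t < T  # becomes the cond subgraph

        def body(t, h):
            _, [h_next] = self.cell(x[t], [h])  # one GRU step; becomes the body subgraph
            return t + 1, h_next

        _, h_final = tf.while_loop(cond, body, [tf.constant(0), h0])
        return h_final

m = GRUWhile()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [m.__call__.get_concrete_function()], m
)
tflite_model = converter.convert()  # flatbuffer contains WHILE plus two extra subgraphs

A PyTorch-to-TFLite path would have to build those two subgraphs directly in the flatbuffer, which is the "new subgraph" work mentioned later in this thread.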


Juelianqvq commented on July 4, 2024

Well, it is doable and not hard at all if only GRU is involved, but a proper design will take some time.

Is one week enough? I'm available 24/7 as long as it can be implemented incrementally, lol. You can focus on your own work first; any guidance is helpful whenever you are free.

P.S. Not only the float structure but also quantized GRU with while could be supported.


peterjc123 commented on July 4, 2024

Is one week enough? I'm available 24/7 as long as it can be implemented incrementally, lol. You can focus on your own work first; any guidance is helpful whenever you are free.

Well, I cannot guarantee that. But I can do QA and guide you throughout the process.

P.S. Not only the float structure but also quantized GRU with while could be supported.

That part is just copy-paste, I think. The main difficulty is adding a new subgraph for the GRU operation, so it doesn't bother me.

