
ADD FEATURE [REQUEST] (ait, 10 comments, CLOSED)

Shaistrong commented on July 28, 2024
ADD FEATURE [REQUEST]

Comments (10)

hlky commented on July 28, 2024

Yes, I implemented ESRGAN for AITemplate.

Checklist for integration:

  • bring compilation code to this repo
  • improve inference code (alpha channel, tiled, etc.)
  • compile modules
  • document memory planning (AITemplate pre-plans the required VRAM usage according to the profile)
  • mapping for ESRGAN finetunes

I've tested RealESRGAN_x4plus and RealESRGAN_x4plus_anime_6B. If you use others, list them here and I'll test/create mappings for the weights.
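
The mapping item mostly comes down to renaming state-dict keys so a finetuned checkpoint lines up with whatever layout the compiled module expects. A minimal sketch of that idea in PyTorch, assuming a hypothetical key_map dict (the real key names depend on the checkpoint and on how the AIT module was built):

    import torch

    def remap_esrgan_keys(ckpt_path, key_map):
        # Load the finetuned checkpoint on the CPU.
        state = torch.load(ckpt_path, map_location="cpu")
        # Official Real-ESRGAN releases usually nest the weights under "params_ema" or "params".
        for nested in ("params_ema", "params"):
            if nested in state:
                state = state[nested]
                break
        remapped = {}
        for old_key, tensor in state.items():
            # key_map (hypothetical) translates a checkpoint key to the name the
            # compiled module expects; keys without an entry are kept as-is.
            new_key = key_map.get(old_key, old_key)
            remapped[new_key] = tensor.half()  # AITemplate modules generally run in fp16
        return remapped

An actual mapping may also need to handle finetunes trained on the older ESRGAN architecture, whose key names differ from the newer RRDBNet layout.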

Shaistrong commented on July 28, 2024

The AIT workflow I made is getting quite popular, so yeah.. I use an upscaler called "4x_UltraSharp"; it's an ESRGAN upscaler, a really good one. 4x_UltraSharp: https://openmodeldb.info/models/4x-UltraSharp

Shaistrong commented on July 28, 2024

Also, @hlky, many users with 3000-series GPUs report getting barely any speed boost when using this node. Could this be AIT being more efficient on 4000-series GPUs?

hlky commented on July 28, 2024

In my experience, the performance increase drops off as resolution goes past around 2048x2048; the same applies to other optimization libraries too: larger generations gain less. Still, I generally see 1.5x-2x or more. For example, earlier an XL 1032x1032 generation took 10s with AIT and 30s with Comfy.
Let me know some more details (GPUs, resolutions, models) so I can look into it further and provide a better answer.

Regarding the 4000 series: it would likely benefit more from modules profiled on 4000-series cards. I couldn't say whether AIT is more efficient on them; I'd need more data.

hlky commented on July 28, 2024

Also, @Shaistrong, I find that with AIT my speeds are lower than expected unless I lock my GPU clocks.
From an admin prompt on Windows:
nvidia-smi -lgc 1920

Without locking, I observe that the GPU clocks boost too slowly: speed increases as the generation progresses, and while performance is still faster than standard, the generation completes before reaching full speed.

Shaistrong commented on July 28, 2024

Huh, does this need to be specifically 1920, or is that value dependent on something? Also, by 30 seconds with Comfy you mean without the AIT node, right? @hlky EDIT: changed it to 2805, it got insanely fast, is this bad?

hlky commented on July 28, 2024

The maximum depends on the GPU; 1920 is the maximum for mine.

> without the AIT node

Yes

> it got insanely fast, is this bad?

Fast is good, no? :)
Maybe it will increase your temperatures though.
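
For reference, recent nvidia-smi builds can report a card's maximum supported graphics clock, and the lock can be undone afterwards (exact fields and output vary by driver; like -lgc, these may need an admin prompt):

nvidia-smi --query-gpu=clocks.max.graphics --format=csv
nvidia-smi -rgc

Locking at or below the reported maximum should be the safe range; -rgc returns the clocks to their default boost behaviour.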

Shaistrong commented on July 28, 2024

Will it get even faster if I compile my own modules and use them in place of the shipped ones?

hlky commented on July 28, 2024

> Will it get even faster if I compile my own modules and use them in place of the shipped ones?

It is possible that profiling could result in different kernel selection on other 30xx-series GPUs. That said, I wouldn't say there's a need for people to compile their own unless they want to experiment to see whether that's the case, and it's more likely to make a difference for the 40xx series.

Shaistrong commented on July 28, 2024

Yeah, I figured. So, as a user of a 4000-series card, should I try to compile the modules myself?
