
vapoursynth-rife-ncnn-vulkan's Introduction

RIFE


Real-Time Intermediate Flow Estimation for Video Frame Interpolation, based on rife-ncnn-vulkan.

Usage

rife.RIFE(vnode clip[, int model=5, int factor_num=2, int factor_den=1, int fps_num=None, int fps_den=None, string model_path=None, int gpu_id=None, int gpu_thread=2, bint tta=False, bint uhd=False, bint sc=False, bint skip=False, float skip_threshold=60.0, bint list_gpu=False])
  • clip: Clip to process. Only RGB format with a 32-bit float sample type (RGBS) is supported.

The models folder needs to be in the same folder as the compiled binary.
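Since a missing or misplaced models folder is a common cause of "Failed to load model" errors (see the issues below), a quick stand-alone layout check can help. This is a minimal sketch in plain Python; the helper name models_present is hypothetical and not part of the plugin API:

```python
from pathlib import Path

def models_present(plugin_dir):
    """Return True if a 'models' folder sits next to the plugin binary."""
    return (Path(plugin_dir) / "models").is_dir()

# e.g. on Windows, something like:
# models_present(r"C:\Program Files\VapourSynth\plugins")
```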

By default, models are exported with ensemble=False and fast=True.

  • model: Model to use.

    • 0 = rife
    • 1 = rife-HD
    • 2 = rife-UHD
    • 3 = rife-anime
    • 4 = rife-v2
    • 5 = rife-v2.3
    • 6 = rife-v2.4
    • 7 = rife-v3.0
    • 8 = rife-v3.1
    • 9 = rife-v3.9 (ensemble=False / fast=True)
    • 10 = rife-v3.9 (ensemble=True / fast=False)
    • 11 = rife-v4 (ensemble=False / fast=True)
    • 12 = rife-v4 (ensemble=True / fast=False)
    • 13 = rife-v4.1 (ensemble=False / fast=True)
    • 14 = rife-v4.1 (ensemble=True / fast=False)
    • 15 = rife-v4.2 (ensemble=False / fast=True)
    • 16 = rife-v4.2 (ensemble=True / fast=False)
    • 17 = rife-v4.3 (ensemble=False / fast=True)
    • 18 = rife-v4.3 (ensemble=True / fast=False)
    • 19 = rife-v4.4 (ensemble=False / fast=True)
    • 20 = rife-v4.4 (ensemble=True / fast=False)
    • 21 = rife-v4.5 (ensemble=False)
    • 22 = rife-v4.5 (ensemble=True)
    • 23 = rife-v4.6 (ensemble=False)
    • 24 = rife-v4.6 (ensemble=True)
    • 25 = rife-v4.7 (ensemble=False)
    • 26 = rife-v4.7 (ensemble=True)
    • 27 = rife-v4.8 (ensemble=False)
    • 28 = rife-v4.8 (ensemble=True)
    • 29 = rife-v4.9 (ensemble=False)
    • 30 = rife-v4.9 (ensemble=True)
    • 31 = rife-v4.10 (ensemble=False)
    • 32 = rife-v4.10 (ensemble=True)
    • 33 = rife-v4.11 (ensemble=False)
    • 34 = rife-v4.11 (ensemble=True)
    • 35 = rife-v4.12 (ensemble=False)
    • 36 = rife-v4.12 (ensemble=True)
    • 37 = rife-v4.12-light (ensemble=False)
    • 38 = rife-v4.12-light (ensemble=True)
    • 39 = rife-v4.13 (ensemble=False)
    • 40 = rife-v4.13 (ensemble=True)
    • 41 = rife-v4.13-lite (ensemble=False)
    • 42 = rife-v4.13-lite (ensemble=True)
    • 43 = rife-v4.14 (ensemble=False)
    • 44 = rife-v4.14 (ensemble=True)
    • 45 = rife-v4.14-lite (ensemble=False)
    • 46 = rife-v4.14-lite (ensemble=True)

    My experimental custom models (these only work with 2x interpolation)

    • 47 = sudo_rife4 (ensemble=False / fast=True)
    • 48 = sudo_rife4 (ensemble=True / fast=False)
    • 49 = sudo_rife4 (ensemble=True / fast=True)
  • factor_num, factor_den: Factor of the target frame rate. For example, factor_num=5 and factor_den=2 multiply the input clip's frame rate by 2.5. Only the rife-v4 models support custom frame rates.

  • fps_num, fps_den: Target frame rate. Only the rife-v4 models support custom frame rates. Supersedes factor_num/factor_den if specified.

  • model_path: RIFE model path. Supersedes model parameter if specified.

  • gpu_id: GPU device to use.

  • gpu_thread: Thread count for interpolation. Larger values may increase GPU usage and consume more GPU memory. If your GPU is underutilized, try increasing the thread count to achieve faster processing.

  • tta: Enable TTA (Test-Time Augmentation) mode.

  • uhd: Enable UHD mode.

  • sc: Avoid interpolating frames across scene changes. You must call misc.SCDetect on a YUV or Gray version of the input beforehand to set the required frame properties.

  • skip: Skip interpolating static frames. Requires VMAF plugin.

  • skip_threshold: PSNR threshold to determine whether the current frame and the next one are static.

  • list_gpu: Prints the list of available GPU devices on the frame and performs no interpolation.
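To make the rate and threshold parameters above concrete, here is a small stand-alone sketch of the arithmetic (plain Python, no VapourSynth required). The helper names target_fps and psnr are hypothetical, and the PSNR formula shown is the standard one, not necessarily byte-for-byte what the VMAF plugin computes:

```python
import math
from fractions import Fraction

def target_fps(src_num, src_den, factor_num=2, factor_den=1,
               fps_num=None, fps_den=None):
    """Output frame rate: an explicit fps_num/fps_den supersedes
    the factor_num/factor_den pair, as described above."""
    if fps_num is not None and fps_den is not None:
        return Fraction(fps_num, fps_den)
    return Fraction(src_num, src_den) * Fraction(factor_num, factor_den)

def psnr(a, b, peak=1.0):
    """PSNR in dB between two equally sized float-sample frames
    (flattened to plain sequences). Two frames would count as static
    when this value reaches skip_threshold (default 60.0)."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

# 23.976 fps source, factor 5/2 -> 59.94 fps
print(target_fps(24000, 1001, factor_num=5, factor_den=2))  # 60000/1001
# an explicit target frame rate wins over the factor
print(target_fps(24000, 1001, factor_num=5, factor_den=2,
                 fps_num=60, fps_den=1))                    # 60
```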

Compilation

Requires Vulkan SDK.

git submodule update --init --recursive --depth 1
meson build
ninja -C build
ninja -C build install

vapoursynth-rife-ncnn-vulkan's People

Contributors

elexor, holywu, krakow10, mafiosnik777, styler00dollar, tntwise


vapoursynth-rife-ncnn-vulkan's Issues

model=30

model=30 probably should be "/rife-v4.9_ensembleTrue" (now it's the same as model=29).

Bad (interlaced, overlay-like) inference output of `rife-4.5` and `rife-4.6` models

Using the same inference script as implemented by nihui, the following models produce different outputs:
(uhd_mode = False and tta_mode = False)

  • rife-v4_ensembleTrue_fastFalse or rife-v4_ensembleFalse_fastTrue: the output (image) is quite normal, while:

  • rife-4.5 and rife-4.6, whether ensemble is True or False: the output (image) is definitely wrong.

There might be a problem with the quantization or conversion of the ensemble feature.
Would it be convenient for you to share the scripts for model conversion (PyTorch model to ncnn .param and .bin files)? nihui says she converted the rife-v4 model using pnnx, which could make a difference, I guess.

Failed to load model

I am trying to get this running but it never seems to work. Windows, Python 3.11.7, VS R65.
I have all the model folders with the PARAM and BIN files inside "models" folder inside Vapoursynth\plugins, where librife.dll is.
This path is added to the environment variables. Everything loads correctly except the models. I have checked permissions, and even with admin privileges it doesn't work.
I have tried to set exact path to model with the "model_path=" parameter but it also gives the same error Failed to load model.
What am I doing wrong? In the past it worked perfectly for me with the HomeOfVapoursynthEvolution version, and with your first versions as well.

faster seek ?

This works very well together with mpv. However, when seeking in mpv (with RIFE enabled as a script), the video takes a few seconds to reach the new position. In the meantime, GPU usage rises to 100%. Is this some buffering effect?

Is there a way to make this "seek" faster?

guide to models

There are currently 44 models. Could you add a guide on which one to choose, or a link to such a guide? Should we generally always choose the latest model? Or should I ask on a different repo?

Usage?

Thanks for sharing this great work.
I'm used to running GitHub projects like this that use Python scripts, but I'm lost on how to set this one up. Do you know of any detailed instructions or tutorials for this?

vkDestroyCommandPool: Invalid device [VUID-vkDestroyCommandPool-device-parameter] starting with Release r9_mod_v7

I'm using r9_mod_v10 through:

from FillDuplicateFrames import FillDuplicateFrames
fdf = FillDuplicateFrames(clip=clip, method="RIFE")

which calls

class FillDuplicateFrames:
  # constructor
  def __init__(self, clip: vs.VideoNode, thresh: float=0.001, method: str='SVP', debug: bool=False, rifeSceneThr: float=0.15, device_index: int=0):
      self.clip = core.std.PlaneStats(clip, clip[0]+clip)
      self.thresh = thresh
      self.debug = debug
      self.method = method
      self.smooth = None
      self.rifeSceneThr = rifeSceneThr
      self.device_index = device_index
          
  def interpolate(self, n, f):
    out = self.get_current_or_interpolate(n)
    if self.debug:
      return out.text.Text(text="avg: "+str(f.props['PlaneStatsDiff']),alignment=8)            
    return out

  def interpolateWithRIFE(self, clip, n, start, end, rifeModel=22, rifeTTA=False, rifeUHD=False):
    if clip.format.id != vs.RGBS:
      raise ValueError(f'FillDuplicateFrames: "clip" needs to be RGBS when using \'{self.method}\'!')
      
    if self.rifeSceneThr != 0:
      clip = core.misc.SCDetect(clip=clip,threshold=self.rifeSceneThr)
    
    num = end - start
    self.smooth = core.rife.RIFE(clip, model=rifeModel, factor_num=num, tta=rifeTTA,uhd=rifeUHD,gpu_id=self.device_index)
    self.smooth_start = start
    self.smooth_end   = end
    return self.smooth[n-start]
   
  def interpolateWithMV(self, clip, n, start, end):   
    num = end - start
    sup = core.mv.Super(clip, pel=2, hpad=0, vpad=0)
    bvec = core.mv.Analyse(sup, blksize=16, isb=True, chroma=True, search=3, searchparam=1)
    fvec = core.mv.Analyse(sup, blksize=16, isb=False, chroma=True, search=3, searchparam=1)
    self.smooth = core.mv.FlowFPS(clip, sup, bvec, fvec, num=num, den=1, mask=2)
    self.smooth_start = start
    self.smooth_end   = end
    out = self.smooth[n-start]
    if self.debug:
      return out.text.Text(text="MV",alignment=9)
    return out

  def interpolateWithSVP(self, clip, n, start, end):   
    if clip.format.id != vs.YUV420P8:
      raise ValueError(f'FillDuplicateFrames: "clip" needs to be YUV420P8 when using \'{self.method}\'!')
    if self.method == 'SVP':
      super = core.svp1.Super(clip,"{gpu:1}")
    else: # self.method == 'SVPCPU':
      super = core.svp1.Super(clip,"{gpu:0}")
    vectors = core.svp1.Analyse(super["clip"],super["data"],clip,"{}")
    num = end - start
    self.smooth = core.svp2.SmoothFps(clip,super["clip"],super["data"],vectors["clip"],vectors["data"],f"{{rate:{{num:{num},den:1,abs:true}}}}")
    self.smooth_start = start
    self.smooth_end   = end
    out = self.smooth[n-start]
    if self.debug:
      return out.text.Text(text="SVP",alignment=9)
    return out
  
  def get_current_or_interpolate(self, n):
    if self.is_not_duplicate(n):
      #current non-duplicate frame selected
      if self.debug:
        return self.clip[n].text.Text(text="Input (1)", alignment=9)
      return self.clip[n]

    #duplicate frame; frame is interpolated
    for start in reversed(range(n+1)):
      if self.is_not_duplicate(start):
        break
    else: #all frames preceding n are duplicates; return the current frame n (executed when the for-loop does not end with a break)
      if self.debug:
        return self.clip[n].text.Text(text="Input (2)", alignment=9)
      return self.clip[n]
  
    for end in range(n, len(self.clip)):
      if self.is_not_duplicate(end):
        break
    else:
      #all frames up to the end are duplicates; return the current frame n
      if self.debug:
        return self.clip[n].text.Text(text="Input(3)", alignment=9)
      return self.clip[n]

    #does interpolated smooth clip exist for requested n frame? Use n frame from it.
    if self.smooth is not None and start >= self.smooth_start and end <= self.smooth_end:
      if self.debug:
        return self.smooth[n-start].text.Text(text=self.method, alignment=9)
      return self.smooth[n-start]

    #interpolating two frame clip  into end-start+1 fps
    clip = self.clip[start] + self.clip[end]
    clip = clip.std.AssumeFPS(fpsnum=1, fpsden=1)
    if self.method == 'SVP' or self.method == 'SVPCPU':
      return self.interpolateWithSVP(clip, n, start, end)
    elif self.method == 'RIFE':
      return self.interpolateWithRIFE(clip, n, start, end)
    elif self.method == 'MV':
      return self.interpolateWithMV(clip, n, start, end)
    else:
      raise ValueError(f'FillDuplicateFrames: "method" \'{self.method}\' is not supported atm.')

  def is_not_duplicate(self, n):
    return self.clip.get_frame(n).props['PlaneStatsDiff'] > self.thresh
  
  @property
  def out(self):
    self.clip = core.std.PlaneStats(self.clip, self.clip[0] + self.clip)
    return core.std.FrameEval(self.clip, self.interpolate, prop_src=self.clip)

which sadly throws the above error:

VSPipe.exe  c:\Users\Selur\Desktop\Testing.vpy -c y4m NUL
[0 NVIDIA GeForce RTX 4080]  queueC=2[8]  queueG=0[16]  queueT=1[2]
[0 NVIDIA GeForce RTX 4080]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[0 NVIDIA GeForce RTX 4080]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
[0 NVIDIA GeForce RTX 4080]  subgroup=32  basic/vote/ballot/shuffle=1/1/1/1
[0 NVIDIA GeForce RTX 4080]  fp16-matrix-16_8_8/16_8_16/16_16_16=1/1/1
[1 Intel(R) Arc(TM) A380 Graphics]  queueC=1[1]  queueG=0[1]  queueT=2[1]
[1 Intel(R) Arc(TM) A380 Graphics]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[1 Intel(R) Arc(TM) A380 Graphics]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
[1 Intel(R) Arc(TM) A380 Graphics]  subgroup=32  basic/vote/ballot/shuffle=1/1/1/1
[1 Intel(R) Arc(TM) A380 Graphics]  fp16-matrix-16_8_8/16_8_16/16_16_16=0/0/0
[2 AMD Radeon(TM) Graphics]  queueC=1[2]  queueG=0[1]  queueT=2[1]
[2 AMD Radeon(TM) Graphics]  bugsbn1=0  bugbilz=0  bugcopc=0  bugihfa=0
[2 AMD Radeon(TM) Graphics]  fp16-p/s/a=1/1/1  int8-p/s/a=1/1/1
[2 AMD Radeon(TM) Graphics]  subgroup=64  basic/vote/ballot/shuffle=1/1/1/1
[2 AMD Radeon(TM) Graphics]  fp16-matrix-16_8_8/16_8_16/16_16_16=0/0/0
FATAL ERROR! reclaim_blob_allocator get wild allocator 00000298BE73F700
FATAL ERROR! reclaim_staging_allocator get wild allocator 00000298BE73FC10
FATAL ERROR! reclaim_blob_allocator get wild allocator 00000298BE73FDC0
FATAL ERROR! reclaim_staging_allocator get wild allocator 00000298BE740240
ERROR:            vkDestroyCommandPool: Invalid device [VUID-vkDestroyCommandPool-device-parameter]
ERROR:            vkDestroyCommandPool: Invalid device [VUID-vkDestroyCommandPool-device-parameter]

I tested some older librife versions and with Release r9_mod_v6 it works, all versions after that one produce the crash.

Would be nice if this could be fixed.

Do the models: rife-v4.5 and rife-v4.6 have the fast=False parameter?

First of all, many thanks for updating VapourSynth-RIFE-ncnn-Vulkan with the newest models.

I see that the models from rife-v4 to rife-v4.4 have two versions: the fastest and the slowest (the highest quality).
For the models rife-v4.5 and rife-v4.6, the two versions differ in only one parameter: ensemble.
I have a question: do both versions of these two latest models use fast=False?

Convert model process

Hi, can somebody please explain how to convert the new RIFE model from hzwer/Practical-RIFE to ncnn format?
I've tried to use:

  1. torch.jit.trace() and then convert .pt file through pnnx to ncnn
  2. torch.onnx._export() and then onnx2ncnn from ncnn git

but none of them works.
The models I obtained do not run in the Vulkan app on my phone, but the models from this project that @styler00dollar converted do run.
Can someone explain how to convert models correctly?

vsedit closing with Release r9_mod_v4

Using

# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/FrameFilter/RIFE/librife.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# source: 'G:\TestClips&Co\files\test.avi'
# current color space: YUV420P8, bit depth: 8, resolution: 640x352, fps: 25, color matrix: 470bg, yuv luminance scale: limited, scanorder: progressive
# Loading G:\TestClips&Co\files\test.avi using LWLibavSource
clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/test.avi", format="YUV420P8", stream_index=0, cache=0, fpsnum=25, prefer_hw=0)
# Setting detected color matrix (470bg).
clip = core.std.SetFrameProps(clip, _Matrix=5)
# Setting color transfer info, when it is not set
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
# Setting color primaries info, when it is not set
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 25
clip = core.std.AssumeFPS(clip=clip, fpsnum=25, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)
clip = core.misc.SCDetect(clip=clip,threshold=0.300)
# adjusting color space from YUV420P8 to RGBS for vsRIFE
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
# adjusting frame count&rate with RIFE, target fps: 60fps
#clip = core.rife.RIFE(clip, model=22, fps_num=60, fps_den=1, sc=True) # new fps: 60
# adjusting output color from: RGBS to YUV420P8 for x264Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
# set output frame rate to 60fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=60, fpsden=1)
# Output
clip.set_output()

with the new Release r9_mod_v4 causes all editors I tried:
https://github.com/YomikoR/VapourSynth-Editor
https://github.com/Selur/vsViewer
https://github.com/AmusementClub/VapourSynth-Editor
to crash (they simply close, with no error message).

Using;
VSPipe.exe c:\Users\Selur\Desktop\test_2.vpy -c y4m - | i:\Hybrid\64bit\x264.exe --demuxer y4m -o g:\Output\script.mkv
works fine.
With Release r9_mod_v3 the viewers work fine.

=> Any idea what might cause all the viewers I tried to crash?

I'm on Windows 11, with a Geforce RTX 4080 with v528.49 Studio drivers.

librife.dylib is broken

Its file size is too small, and it looks like a shortcut. I think only the original builder can load it, on his own machine.

librife.dylib is Arm64 only

Trying to run this on an Intel mac. I get:
Failed to load /Library/Frameworks/VapourSynth.Framework/lib/vapoursynth/librife.dylib. Error given: dlopen(/Library/Frameworks/VapourSynth.framework/lib/vapoursynth/librife.dylib, 0x0001): tried: '/Library/Frameworks/VapourSynth.framework/lib/vapoursynth/librife.dylib' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64h' or 'x86_64'))

Looks like it's Apple Silicon only. Is there any Intel binary? Or any tip on how to compile for Intel?

pipe:: Invalid data found when processing input

It's giving me an error on my PC

pipe:: Invalid data found when processing input

All versions give this error.

The version from nihui/rife-ncnn-vulkan works with all models.

Could it be my GPU?
Device Name GeForce 920M
1gb vram
Vulkan 1.1.95

Or could it be some DLL?

I used this code

"clip=core.rife.RIFE(clip, model=9, gpu_thread=16, tta=False, uhd=False, sc=True, factor_num=5, factor_den=2)"

[
import vapoursynth as vs
from vapoursynth import core
core.num_threads=4
core.max_cache_size=4096
clip = core.raws.Source(r"-")
sup = core.mv.Super(clip)
bw = core.mv.Analyse(sup,isb=True)
clip = core.mv.SCDetection(clip,bw,thscd1=600,thscd2=130)
clip = core.resize.Bicubic(clip,format=vs.RGBS,matrix_in=1)
clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, width=1280, height=720)
clip=core.rife.RIFE(clip, model=9, gpu_thread=16, tta=False, uhd=False, sc=True, factor_num=5, factor_den=2)
clip=core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg")
clip=core.fmtc.bitdepth(clip, bits=8)

#sup = core.mv.Super(clip)
#fw = core.mv.Analyse(sup)
#bw = core.mv.Analyse(sup,isb=True)
#clip = core.mv.FlowBlur(clip,sup,bw,fw,blur=50.0)

#sup = core.mv.Super(clip)
#fw = core.mv.Analyse(sup)
#bw = core.mv.Analyse(sup,isb=True)
#clip = core.mv.FlowFPS(clip,sup,bw,fw,60,1,blend=False)

clip.set_output()
]
