
facefusion / facefusion

Next generation face swapper and enhancer

Home Page: https://join.facefusion.io

License: Other

Python 99.66% CSS 0.34%
deepfake faceswap ai webcam deep-fake face-swap lip-sync lipsync

facefusion's Introduction

FaceFusion

Next generation face swapper and enhancer.


Preview

(preview image)

Installation

Be aware, the installation requires technical skill and is not for beginners. Please do not open platform or installation related issues on GitHub. We have a very helpful Discord community that will guide you through completing the installation.

Get started with the installation guide.

Usage

Run the command:

python run.py [options]

options:
  -h, --help                                                                                                                                            show this help message and exit
  -s SOURCE_PATHS, --source SOURCE_PATHS                                                                                                                choose single or multiple source images or audios
  -t TARGET_PATH, --target TARGET_PATH                                                                                                                  choose single target image or video
  -o OUTPUT_PATH, --output OUTPUT_PATH                                                                                                                  specify the output file or directory
  -v, --version                                                                                                                                         show program's version number and exit

misc:
  --force-download                                                                                                                                      force automate downloads and exit
  --skip-download                                                                                                                                       omit automate downloads and remote lookups
  --headless                                                                                                                                            run the program without a user interface
  --log-level {error,warn,info,debug}                                                                                                                   adjust the message severity displayed in the terminal

execution:
  --execution-providers EXECUTION_PROVIDERS [EXECUTION_PROVIDERS ...]                                                                                   accelerate the model inference using different providers (choices: cpu, ...)
  --execution-thread-count [1-128]                                                                                                                      specify the amount of parallel threads while processing
  --execution-queue-count [1-32]                                                                                                                        specify the amount of frames each thread is processing

memory:
  --video-memory-strategy {strict,moderate,tolerant}                                                                                                    balance fast frame processing and low VRAM usage
  --system-memory-limit [0-128]                                                                                                                         limit the available RAM that can be used while processing

face analyser:
  --face-analyser-order {left-right,right-left,top-bottom,bottom-top,small-large,large-small,best-worst,worst-best}                                     specify the order in which the face analyser detects faces
  --face-analyser-age {child,teen,adult,senior}                                                                                                         filter the detected faces based on their age
  --face-analyser-gender {female,male}                                                                                                                  filter the detected faces based on their gender
  --face-detector-model {many,retinaface,scrfd,yoloface,yunet}                                                                                          choose the model responsible for detecting the face
  --face-detector-size FACE_DETECTOR_SIZE                                                                                                               specify the size of the frame provided to the face detector
  --face-detector-score [0.0-1.0]                                                                                                                       filter the detected faces based on the confidence score
  --face-landmarker-score [0.0-1.0]                                                                                                                     filter the detected landmarks based on the confidence score

face selector:
  --face-selector-mode {many,one,reference}                                                                                                             use reference based tracking or simple matching
  --reference-face-position REFERENCE_FACE_POSITION                                                                                                     specify the position used to create the reference face
  --reference-face-distance [0.0-1.5]                                                                                                                   specify the desired similarity between the reference face and target face
  --reference-frame-number REFERENCE_FRAME_NUMBER                                                                                                       specify the frame used to create the reference face

face mask:
  --face-mask-types FACE_MASK_TYPES [FACE_MASK_TYPES ...]                                                                                               mix and match different face mask types (choices: box, occlusion, region)
  --face-mask-blur [0.0-1.0]                                                                                                                            specify the degree of blur applied to the box mask
  --face-mask-padding FACE_MASK_PADDING [FACE_MASK_PADDING ...]                                                                                         apply top, right, bottom and left padding to the box mask
  --face-mask-regions FACE_MASK_REGIONS [FACE_MASK_REGIONS ...]                                                                                         choose the facial features used for the region mask (choices: skin, left-eyebrow, right-eyebrow, left-eye, right-eye, glasses, nose, mouth, upper-lip, lower-lip)

frame extraction:
  --trim-frame-start TRIM_FRAME_START                                                                                                                   specify the start frame of the target video
  --trim-frame-end TRIM_FRAME_END                                                                                                                       specify the end frame of the target video
  --temp-frame-format {bmp,jpg,png}                                                                                                                     specify the temporary resources format
  --keep-temp                                                                                                                                           keep the temporary resources after processing

output creation:
  --output-image-quality [0-100]                                                                                                                        specify the image quality which translates to the compression factor
  --output-image-resolution OUTPUT_IMAGE_RESOLUTION                                                                                                     specify the image output resolution based on the target image
  --output-video-encoder {libx264,libx265,libvpx-vp9,h264_nvenc,hevc_nvenc,h264_amf,hevc_amf}                                                           specify the encoder used for the video compression
  --output-video-preset {ultrafast,superfast,veryfast,faster,fast,medium,slow,slower,veryslow}                                                          balance fast video processing and video file size
  --output-video-quality [0-100]                                                                                                                        specify the video quality which translates to the compression factor
  --output-video-resolution OUTPUT_VIDEO_RESOLUTION                                                                                                     specify the video output resolution based on the target video
  --output-video-fps OUTPUT_VIDEO_FPS                                                                                                                   specify the video output fps based on the target video
  --skip-audio                                                                                                                                          omit the audio from the target video

frame processors:
  --frame-processors FRAME_PROCESSORS [FRAME_PROCESSORS ...]                                                                                            load a single or multiple frame processors (choices: face_debugger, face_enhancer, face_swapper, frame_colorizer, frame_enhancer, lip_syncer, ...)
  --face-debugger-items FACE_DEBUGGER_ITEMS [FACE_DEBUGGER_ITEMS ...]                                                                                   choose the items displayed by the face debugger (choices: bounding-box, face-landmark-5, face-landmark-5/68, face-landmark-68, face-landmark-68/5, face-mask, face-detector-score, face-landmarker-score, age, gender)
  --face-enhancer-model {codeformer,gfpgan_1.2,gfpgan_1.3,gfpgan_1.4,gpen_bfr_256,gpen_bfr_512,gpen_bfr_1024,gpen_bfr_2048,restoreformer_plus_plus}     choose the model responsible for enhancing the face
  --face-enhancer-blend [0-100]                                                                                                                         blend the enhanced into the previous face
  --face-swapper-model {blendswap_256,inswapper_128,inswapper_128_fp16,simswap_256,simswap_512_unofficial,uniface_256}                                  choose the model responsible for swapping the face
  --frame-colorizer-model {ddcolor,ddcolor_artistic,deoldify,deoldify_artistic,deoldify_stable}                                                         choose the model responsible for colorizing the frame
  --frame-colorizer-blend [0-100]                                                                                                                       blend the colorized into the previous frame
  --frame-colorizer-size {192x192,256x256,384x384,512x512}                                                                                              specify the size of the frame provided to the frame colorizer
  --frame-enhancer-model {lsdir_x4,nomos8k_sc_x4,real_esrgan_x2,real_esrgan_x2_fp16,real_esrgan_x4,real_esrgan_x4_fp16,real_hatgan_x4,span_kendata_x4}  choose the model responsible for enhancing the frame
  --frame-enhancer-blend [0-100]                                                                                                                        blend the enhanced into the previous frame
  --lip-syncer-model {wav2lip_gan}                                                                                                                      choose the model responsible for syncing the lips

uis:
  --ui-layouts UI_LAYOUTS [UI_LAYOUTS ...]                                                                                                              launch a single or multiple UI layouts (choices: benchmark, default, webcam, ...)
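
For example, a headless run that swaps and then enhances the face could look like this (file names are placeholders; every flag is taken from the option list above):

python run.py --headless -s source.jpg -t target.mp4 -o output.mp4 --frame-processors face_swapper face_enhancer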

Documentation

Read the documentation for a deep dive.

facefusion's People

Contributors

henryruhs


facefusion's Issues

can't run with directml

When running with directml I got this error:

python run.py --execution-providers dml

run.py: error: argument --execution-providers: invalid choice: 'dml' (choose from 'tensorrt', 'cuda', 'cpu')
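
Note that onnxruntime only exposes the execution providers that the installed package was built with, so the dml choice presumably only appears with the DirectML build of onnxruntime installed (a guess based on how the provider list is populated, not a verified fix):

pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-directml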

Can't we use more RAM in this program?

Hi,
I'm using FaceFusion and it's only using 3 GB of RAM, despite my machine having 20 GB.

Can you tell me how to make Python use more RAM? Any idea or tweak?

Thanks

memory not freed on cuda?

I haven't tested with the other processors, but it seems that when the preview is generated it takes up a few gigabytes of VRAM, and when I run the generation it takes up roughly 1 GB per thread, or maybe a little more. If I generate another video without closing the entire app, usage climbs to 16 GB (VRAM plus shared RAM). I think the models may not be unloaded from VRAM after use, or another copy is loaded without checking whether one is already loaded.
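
For what it's worth, the option list earlier on this page documents --video-memory-strategy {strict,moderate,tolerant} for balancing frame-processing speed against VRAM usage; the strict setting may curb the growth (a guess based on the documented flag, not a confirmed fix):

python run.py [options] --video-memory-strategy strict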

Swap Image

When trying to do a swap between two images (instead of image to video) on Windows (with and without CUDA), I get the error below. On my Mac, however, it works...

[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:1237 open Could not find decoder for codec_id=61
[ERROR:[email protected]] global cap_ffmpeg_impl.hpp:1286 open VIDEOIO/FFMPEG: Failed to initialize VideoCapture
[ERROR:[email protected]] global cap.cpp:166 cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): C:\Users\me\AppData\Local\Temp\gradio\9d41fb2ad430c0ef4eb9389469bfed0e06daa250\photo.png in function 'cv::icvExtractPattern'

trim sliders not showing for big files

Sometimes the trim sliders (and also the preview window) do not show. It may have something to do with the file sizes.

Example - same video but one is 720p and one is 1080p:
(screenshots: trim_slider, no_trim_slider)

OUTPUT VIDEO QUALITY value range is wrong!

This value goes from 0 to 100. It's misleading because it makes you think it goes from 0% quality to 100%, and some people will be happy with a quality of 85 because of the reduced filesize compared to 100, but 85 is still a very high number, because this is not how it works. The value is mapped to ffmpeg's crf parameter, which goes from 0-51. But it's NOT linear, and this is where the problem lives. A crf of 25 is not 50% perceived quality.

The formula that maps the values is this:

output_video_quality = round(51 - (facefusion.globals.output_video_quality * 0.5))

So this mapping produces:

output_video_quality 0 -------> crf: 51... ok?
output_video_quality 100 ---> crf: 1.... ok too? (can't be lossless?)

But other values are misleading!

If someone uses output_video_quality=85 thinking it will be a good tradeoff between quality and size, that's insanely false! The crf value will be around 8.5, and that is a waste! It's not even near 85% quality; it's near-perfect quality with huge disk space waste xD

ffmpeg's default crf value is 23! And this number is probably what 85% quality should be.

Why create another scale from 0 to 100? Why not have the slider with the range of 0-51 and explain that this is ffmpeg's crf?
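
To see the nonlinearity complaint at a glance, here is the quoted mapping evaluated at a few sample points (a quick sketch of the formula above; note Python's round() gives 8 rather than 8.5 for quality 85):

# map the 0-100 quality slider to ffmpeg's crf, as quoted above
for quality in (0, 50, 85, 100):
    crf = round(51 - quality * 0.5)
    print(f'quality {quality} -> crf {crf}') # 51, 26, 8, 1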

gpu is not used

my machine has a 4090
the execute line: python run.py -s ./src/11.png -t ./dst/dat111.mp4 -o ./out/docker_target.mp4 --execution-provider cuda
and the speed is the same as with the CPU
I ran this with the Docker setup you provide.

Reference face position

How are we actually supposed to provide the position of the reference face in this code (--reference-face-position REFERENCE_FACE_POSITION)? Please provide an example.
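
A plausible invocation, assuming the position is an index into the faces the analyser detects in the chosen reference frame (ordered by --face-analyser-order); this is an illustration built from the option list above, not confirmed documentation:

python run.py -s source.jpg -t target.mp4 -o output.mp4 --face-selector-mode reference --reference-frame-number 120 --reference-face-position 1 --reference-face-distance 0.6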

Videos are copied twice to RAM

When a new video is uploaded to the UI, it appears two times (with different folder names) inside /tmp/gradio, which is mounted to RAM on my system, so the RAM usage is doubled. Any idea why this happens? I can't find where the problem is.

Support for Gradio Sharing/Tunneling

I am creating a Google Colab notebook and want to enable a public Gradio tunnel. This is done by passing share=True to launch(). I propose adding a flag --public-url.

I can add this feature myself if you assign me to it.
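
A minimal sketch of the idea; the --public-url flag is the proposal above and therefore hypothetical, while share=True is Gradio's real tunneling switch:

import gradio as gr

with gr.Blocks() as ui:
    pass # existing layout components would go here

# share = True asks Gradio to open a public *.gradio.live tunnel
ui.launch(share = True)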

Typo in FAQ page

There is a missing space in the word whereas:

For further instruction:

(screenshot)

Solution: just add a space between the two words: where as

FPS detection not working properly

Title

Incorrect FPS Detection with --keep-frames Flag When ffprobe Outputs Multiple Lines

Description

When using the --keep-frames flag (also known as "Keep FPS" in the GUI), the detect_fps function in utilities.py fails to accurately determine the video frame rate. This happens if ffprobe returns multiple lines of FPS information. The function expects a single line and does not handle multiple lines, leading to a parsing error.

Steps to Reproduce

  1. Run the Docker container using docker-compose -f docker-compose.cuda.yml up.
  2. Run the application with a video file and use the --keep-frames flag or "Keep FPS" in the GUI.
  3. Observe the error message or incorrect FPS detection in the console.

Expected Behavior

The detect_fps function should accurately detect the FPS of the video, even when the --keep-frames flag is used and ffprobe returns multiple lines of output.

Actual Behavior

The function fails to parse multiple lines and throws a parsing error when converting the FPS information to integers.

Debug Information

  • Debug logs show that the raw ffprobe output contains multiple lines, which is not handled by the existing code.
  • Error Message: Error in calculating FPS: invalid literal for int() with base 10: '1001\n30000'

Suggested Fix

Modify the detect_fps function to only consider the first line of the ffprobe output for FPS calculation. Add a line to split the raw output by newline characters and use only the first element for further processing.

# Suggested replacement, with the imports it needs to run standalone
from typing import Optional
import logging
import subprocess

def detect_fps(target_path : str) -> Optional[float]:
    logging.basicConfig(level = logging.DEBUG)

    commands = [ 'ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path ]
    raw_output = subprocess.check_output(commands).decode().strip()

    # consider only the first line of the output
    first_line_output = raw_output.split('\n')[0]

    output = first_line_output.split('/')
    try:
        numerator, denominator = map(int, output)
        calculated_fps = numerator / denominator
        logging.debug(f"Calculated FPS: {calculated_fps}")
        return calculated_fps
    except Exception as e:
        logging.error(f"Error in calculating FPS: {e}")
        return None

Environment

  • Operating System: Docker container (based on the image specified in docker-compose.cuda.yml)
  • Python Version: 3.10.12
  • Application Version: 1.0

Error Handling

Hey guys, although the get_many_faces function captures AttributeError and ValueError exceptions, it doesn't provide any meaningful error handling. It's important to handle exceptions properly to provide clear feedback when something goes wrong. For example:

except AttributeError as attr_error:
    raise FaceAnalysisError(f"Attribute error in get_many_faces: {attr_error}")
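
A fuller sketch of this suggestion; FaceAnalysisError is a hypothetical custom exception, and get_face_analyser is the existing helper referenced in the tracebacks elsewhere on this page:

from facefusion.face_analyser import get_face_analyser # existing helper, per the tracebacks above

class FaceAnalysisError(Exception):
    pass

def get_many_faces(frame):
    try:
        # existing call that can raise AttributeError / ValueError
        return get_face_analyser().get(frame)
    except (AttributeError, ValueError) as error:
        raise FaceAnalysisError(f"Face analysis failed in get_many_faces: {error}") from error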


Feature request : GPU selection

It would be great to have a command line parameter to choose which GPU to use (in case we have multiple NVIDIA GPUs).
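
As a workaround in the meantime, the standard CUDA environment variable (a CUDA mechanism, not a FaceFusion flag) restricts which GPU onnxruntime can see:

CUDA_VISIBLE_DEVICES=1 python run.py [options]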

When running in silent mode, and using --frame-processors face_enhancer, the -s option is required even though it doesn't do anything

I like to run facefusion in silent mode on the command line to do my swaps. But when I enhance the frames using "--frame-processors face_enhancer", I find that if I leave out the -s option, it opens facefusion in interactive UI mode. But if I specify a dummy parameter, say "-s dummy", it remains in silent mode and works as expected.

I guess I can always use the workaround and specify "-s dummy", but it would be nice if the -s option were not required for face enhancement.

changing default port and listen on all interfaces

I want to run FaceFusion on a remote server on a specific port, and I can't find a way to do this.

Is it possible to add parameters to select which port the interface should open on, and also to define which IP it should listen on?

Thanks a lot.
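
For reference, Gradio's launch() already exposes both knobs (server_name and server_port are real Gradio parameters; surfacing them through FaceFusion would still need a small code change):

import gradio as gr

with gr.Blocks() as ui:
    pass # existing layout components would go here

# listen on all interfaces, on a custom port
ui.launch(server_name = '0.0.0.0', server_port = 7870)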

Frame processor face_enhancer could not be loaded

Error

Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CUDAExecutionProvider': {'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'device_id': '0', 'gpu_external_alloc': '0', 'enable_cuda_graph': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'cudnn_conv_use_max_workspace': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'tunable_op_enable': '0', 'tunable_op_tuning_enable': '0', 'enable_skip_layer_norm_strict_mode': '0'}, 'CPUExecutionProvider': {}}
Traceback (most recent call last):
File "F:\ML\facefusion\facefusion\processors\frame\core.py", line 31, in load_frame_processor_module
frame_processor_module = importlib.import_module('facefusion.processors.frame.modules.' + frame_processor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\importlib_init_.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in load_unlocked
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "F:\ML\facefusion\facefusion\processors\frame\modules\face_enhancer.py", line 4, in
from gfpgan.utils import GFPGANer
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gfpgan_init
.py", line 2, in
from .archs import *
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gfpgan\archs_init
.py", line 2, in
from basicsr.utils import scandir
ModuleNotFoundError: No module named 'basicsr'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\utils.py", line 706, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "F:\ML\facefusion\facefusion\uis\components\processors.py", line 34, in update_frame_processors
frame_processor_module = load_frame_processor_module(frame_processor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ML\facefusion\facefusion\processors\frame\core.py", line 36, in load_frame_processor_module
sys.exit(wording.get('frame_processor_not_loaded').format(frame_processor = frame_processor))
SystemExit: Frame processor face_enhancer could not be loaded
inswapper-shape: [1, 3, 128, 128]
Traceback (most recent call last):
File "F:\ML\facefusion\facefusion\processors\frame\core.py", line 31, in load_frame_processor_module
frame_processor_module = importlib.import_module('facefusion.processors.frame.modules.' + frame_processor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\importlib_init_.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in load_unlocked
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "F:\ML\facefusion\facefusion\processors\frame\modules\face_enhancer.py", line 4, in
from gfpgan.utils import GFPGANer
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gfpgan_init
.py", line 2, in
from .archs import *
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gfpgan\archs_init
.py", line 2, in
from basicsr.utils import scandir
ModuleNotFoundError: No module named 'basicsr'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\lodhi\miniconda3\envs\facefusion\Lib\site-packages\gradio\utils.py", line 706, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "F:\ML\facefusion\facefusion\uis\components\preview.py", line 83, in update
preview_frame = extract_preview_frame(target_frame)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ML\facefusion\facefusion\uis\components\preview.py", line 105, in extract_preview_frame
frame_processor_module = load_frame_processor_module(frame_processor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ML\facefusion\facefusion\processors\frame\core.py", line 36, in load_frame_processor_module
sys.exit(wording.get('frame_processor_not_loaded').format(frame_processor = frame_processor))
SystemExit: Frame processor face_enhancer could not be loaded

Description

I am getting this error when I press face_enhancer or frame_enhancer.
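
The root cause in both tracebacks is ModuleNotFoundError: No module named 'basicsr', which gfpgan imports at startup; installing the missing dependency into the same environment would presumably resolve it (an untested suggestion):

pip install basicsr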

Error while installing dependencies on Arch Linux with a virtual environment

Context:
I am attempting to set up the facefusion project on my Dell laptop running Arch Linux. My system specifications are as follows:

  • Laptop: Dell laptop
  • CPU: Intel i7 (older generation)
  • RAM: 8GB with swap
  • Operating System: Arch Linux

Steps Taken:

  1. Installed ffmpeg using pacman: pacman -S ffmpeg
  2. Created a virtual environment named myenv
    virtualenv myenv
    source myenv/bin/activate
  3. Installed packages using pip:
    pip install --no-cache-dir -r requirements.txt

Issue:
During the installation of the required packages, I encountered an error that prevented successful installation. Here is the error message I received:

pip install --no-cache-dir -r requirements.txt

Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu118
Ignoring onnxruntime: markers 'python_version != "3.9" and sys_platform == "darwin" and platform_machine != "arm64"' don't match your environment
Ignoring onnxruntime-coreml: markers 'python_version == "3.9" and sys_platform == "darwin" and platform_machine != "arm64"' don't match your environment
Ignoring onnxruntime-silicon: markers 'sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment
Collecting gfpgan==1.3.8
  Downloading gfpgan-1.3.8-py3-none-any.whl (52 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.2/52.2 kB 1.2 MB/s eta 0:00:00
Collecting gradio==3.40.1
  Downloading gradio-3.40.1-py3-none-any.whl (20.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.0/20.0 MB 2.5 MB/s eta 0:00:00
Collecting insightface==0.7.3
  Downloading insightface-0.7.3.tar.gz (439 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 439.5/439.5 kB 2.7 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting numpy==1.24.3
  Downloading numpy-1.24.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 1.6 MB/s eta 0:00:00
Collecting onnx==1.14.0
  Downloading onnx-1.14.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (14.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.6/14.6 MB 2.6 MB/s eta 0:00:00
Collecting onnxruntime-gpu==1.15.1
  Downloading onnxruntime_gpu-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (121.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.6/121.6 MB 2.6 MB/s eta 0:00:00
Collecting opencv-python==4.8.0.74
  Downloading opencv_python-4.8.0.74-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (61.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.7/61.7 MB 2.3 MB/s eta 0:00:00
Collecting opennsfw2==0.10.2
  Downloading opennsfw2-0.10.2-py3-none-any.whl (12 kB)
Collecting pillow==10.0.0
  Downloading Pillow-10.0.0-cp311-cp311-manylinux_2_28_x86_64.whl (3.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 2.6 MB/s eta 0:00:00
Collecting protobuf==4.23.4
  Downloading protobuf-4.23.4-cp37-abi3-manylinux2014_x86_64.whl (304 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 304.5/304.5 kB 2.8 MB/s eta 0:00:00
Collecting psutil==5.9.5
  Downloading psutil-5.9.5-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 282.1/282.1 kB 2.6 MB/s eta 0:00:00
Collecting realesrgan==0.3.0
  Downloading realesrgan-0.3.0-py3-none-any.whl (26 kB)
Collecting tensorflow==2.13.0
  Downloading tensorflow-2.13.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (524.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 524.2/524.2 MB 1.7 MB/s eta 0:00:00
Collecting tqdm==4.65.0
  Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 10.8 MB/s eta 0:00:00
Collecting basicsr>=1.4.2
  Downloading basicsr-1.4.2.tar.gz (172 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 172.5/172.5 kB 2.8 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [105 lines of output]
      /home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/__init__.py:85: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`.
        dist.fetch_build_eggs(dist.setup_requires)
      ERROR: Exception:
      Traceback (most recent call last):
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
          status = run_func(*args)
                   ^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
          return func(self, options, args)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/commands/wheel.py", line 170, in run
          requirement_set = resolver.resolve(reqs, check_supported_wheels=True)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
          result = self._result = resolver.resolve(
                                  ^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 481, in resolve
          state = resolution.resolve(requirements, max_rounds=max_rounds)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 348, in resolve
          self._add_to_criteria(self.state.criteria, r, parent=None)
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _add_to_criteria
          if not criterion.candidates:
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
          return bool(self._sequence)
                 ^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
          return any(self)
                 ^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in <genexpr>
          return (c for c in iterator if id(c) not in self._incompatible_ids)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
          candidate = func()
                      ^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
          self._link_candidate_cache[link] = LinkCandidate(
                                             ^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in __init__
          super().__init__(
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in __init__
          self.dist = self._prepare()
                      ^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
          dist = self._prepare_distribution()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
          return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
          return self._prepare_linked_requirement(req, parallel_builds)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 536, in _prepare_linked_requirement
          local_file = unpack_url(
                       ^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 166, in unpack_url
          file = get_http_url(
                 ^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/operations/prepare.py", line 107, in get_http_url
          from_path, content_type = download(link, temp_dir.path)
                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pip/_internal/network/download.py", line 148, in __call__
          content_file.write(chunk)
      OSError: [Errno 28] No space left on device
      Traceback (most recent call last):
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/installer.py", line 97, in _fetch_build_egg_no_warn
          subprocess.check_call(cmd)
        File "/usr/lib/python3.11/subprocess.py", line 413, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['/home/lo/App/facefusion/myenv/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpjpya2r53', '--quiet', 'nvidia-cublas-cu11==11.10.3.66']' returned non-zero exit status 2.
      
      The above exception was the direct cause of the following exception:
      
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-j0rokjfj/basicsr_6687d44989f944a1a7b489382d237466/setup.py", line 147, in <module>
          setup(
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/__init__.py", line 107, in setup
          _install_setup_requires(attrs)
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/__init__.py", line 80, in _install_setup_requires
          _fetch_build_eggs(dist)
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/__init__.py", line 85, in _fetch_build_eggs
          dist.fetch_build_eggs(dist.setup_requires)
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/dist.py", line 894, in fetch_build_eggs
          return _fetch_build_eggs(self, requires)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/installer.py", line 39, in _fetch_build_eggs
          resolved_dists = pkg_resources.working_set.resolve(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pkg_resources/__init__.py", line 815, in resolve
          dist = self._resolve_dist(
                 ^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pkg_resources/__init__.py", line 851, in _resolve_dist
          dist = best[req.key] = env.best_match(
                                 ^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pkg_resources/__init__.py", line 1123, in best_match
          return self.obtain(req, installer)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/pkg_resources/__init__.py", line 1135, in obtain
          return installer(requirement)
                 ^^^^^^^^^^^^^^^^^^^^^^
        File "/home/lo/App/facefusion/myenv/lib/python3.11/site-packages/setuptools/installer.py", line 99, in _fetch_build_egg_no_warn
          raise DistutilsError(str(e)) from e
      distutils.errors.DistutilsError: Command '['/home/lo/App/facefusion/myenv/bin/python', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmpjpya2r53', '--quiet', 'nvidia-cublas-cu11==11.10.3.66']' returned non-zero exit status 2.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Error Analysis:
The error seems to be related to package installation within the virtual environment. Specifically, it reports a subprocess error and mentions a lack of space on the device.

Expected Behavior:
I expected the installation process to complete successfully and the required packages to be installed within the virtual environment.

Additional Information:

  • I'm using Arch Linux, and my system has enough free disk space.
  • I have ensured that the virtual environment is activated before running the installation command.
  • The error message suggests the possibility of an issue with the nvidia-cublas-cu11 package, but I'm not sure how it relates to my environment, because I don't have a GPU and followed the basic guide

Steps Tried:

  • Checked available disk space using df -h, and there is enough space.
  • Verified that virtualenv and pip are up to date.

Screenshots:
(screenshot of df -h output)

Reproducibility:
This issue is reproducible on my system every time I try to install the required packages using the given steps.

Environment:

  • Python version: [3.11.3]
  • Output of pip list within the virtual environment:
    pip 23.0.1
    setuptools 67.4.0
    wheel 0.38.4

Note:
I'm relatively new to Arch Linux and may not be aware of specific system configurations that could contribute to this issue. Any guidance or suggestions to resolve this issue would be greatly appreciated.
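
A likely explanation, given that the ENOSPC is raised while pip writes downloads under /tmp: on Arch Linux /tmp is typically a RAM-backed tmpfs sized at half of RAM (about 4 GB on this machine), which the large tensorflow and onnxruntime wheels can exhaust even when the disk itself has plenty of space. Pointing pip's temp directory at real disk is a possible workaround (untested here):

mkdir -p $HOME/tmp
TMPDIR=$HOME/tmp pip install --no-cache-dir -r requirements.txt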

Output Timeout

Is there a timeout setting on the output? When I try to face swap a 10 s video (300 frames), the process needs more than 60 s, and when the output time reaches 60 s it returns an error, even though the CLI says that "Processing to video succeed".

It failed with an error

The preview works, but when I click render it fails with the following error:

inswapper-shape: [1, 3, 128, 128]
100%|███████████████████████████████████████████████████████████████████████████▉| 12038/12040 [06:10<00:00, 26.09it/s]Traceback (most recent call last):
File "C:\Users\Z5050\miniconda3\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\Z5050\miniconda3\lib\site-packages\gradio\blocks.py", line 1435, in process_api
result = await self.call_function(
File "C:\Users\Z5050\miniconda3\lib\site-packages\gradio\blocks.py", line 1107, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\Z5050\miniconda3\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\Z5050\miniconda3\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\Z5050\miniconda3\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\Z5050\miniconda3\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Users\Z5050\Desktop\facefusion\facefusion\uis\components\output.py", line 44, in update
conditional_process()
File "C:\Users\Z5050\Desktop\facefusion\facefusion\core.py", line 210, in conditional_process
process_video()
File "C:\Users\Z5050\Desktop\facefusion\facefusion\core.py", line 151, in process_video
if predict_video(facefusion.globals.target_path):
File "C:\Users\Z5050\Desktop\facefusion\facefusion\predictor.py", line 42, in predict_video
_, probabilities = opennsfw2.predict_video_frames(video_path = target_path, frame_interval = 100)
File "C:\Users\Z5050\miniconda3\lib\site-packages\opennsfw2_inference.py", line 178, in predict_video_frames
cv2.destroyAllWindows() # pylint: disable=no-member
cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:1266: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvDestroyAllWindows'

100%|████████████████████████████████████████████████████████████████████████████| 12040/12040 [06:23<00:00, 26.09it/s]

Mostly works, but occasionally failed

I got everything working on my M2 Mac Mini, but I think that sometimes, if the video does not show a face in the first few frames, it will not run the processing. Or it could be something else. But it fails sometimes.

[FACEFUSION.CORE] Creating temporary resources
[FACEFUSION.CORE] Extracting frames with 25.0 FPS
python(38465) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
[FACEFUSION.CORE] Temporary frames not found
  0%|                                                                                          | 0/822 [00:00<?, ?it/s]WARNING:tensorflow:5 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x287e59480> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
100%|███████████████████████████████████████████████████████████████████████████████| 822/822 [00:03<00:00, 258.04it/s]
[FACEFUSION.CORE] Creating temporary resources
[FACEFUSION.CORE] Extracting frames with 25.0 FPS
python(38529) MallocStackLogging: can't turn off malloc stack logging because it was not enabled.
[FACEFUSION.CORE] Temporary frames not found
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /Users/jimmygunawan/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /Users/jimmygunawan/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /Users/jimmygunawan/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /Users/jimmygunawan/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: /Users/jimmygunawan/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
  0%|                                                             

CUDA_PATH is set but CUDA wasn't able to be loaded.

Python 3.10.11
RTX 3090 24G

my install steps:
git clone facefusion
pip install -r requirements.txt
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.15.1
python run.py --execution-providers cuda


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\blocks.py", line 1435, in process_api
    result = await self.call_function(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\blocks.py", line 1107, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "F:\__4__code\gc_python\facefusion\facefusion\uis\components\preview.py", line 89, in update
    preview_frame = extract_preview_frame(temp_frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\uis\components\preview.py", line 97, in extract_preview_frame
    source_face = get_one_face(cv2.imread(facefusion.globals.source_path)) if facefusion.globals.source_path else None
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 30, in get_one_face
    many_faces = get_many_faces(frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 41, in get_many_faces
    faces = get_face_analyser().get(frame)
  File "F:\__4__code\gc_python\facefusion\facefusion\face_analyser.py", line 18, in get_face_analyser
    FACE_ANALYSER = insightface.app.FaceAnalysis(name = 'buffalo_l', providers = facefusion.globals.execution_providers)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
    model = model_zoo.get_model(onnx_file, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
    model = router.get_model(providers=providers, provider_options=provider_options)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
    session = PickableInferenceSession(self.onnx_file, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
    super().__init__(model_path, **kwargs)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 394, in __init__
    raise fallback_error from e
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 389, in __init__
    self._create_inference_session(self._fallback_providers, None)
  File "F:\__4__code\gc_python\facefusion\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:636 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.


Too much enhancement

Is there any way of lessening the amount of enhancement? It's just TOO good, too sharp, unless it's on a 4K target source.
A slider arrangement would be just brilliant of course, but failing that, is there anywhere in the code it could be edited?
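
For what it's worth, the option list earlier on this page documents a blend control that acts much like the requested slider; something along these lines may soften the result (file names are placeholders):

python run.py -s source.jpg -t target.mp4 -o output.mp4 --frame-processors face_swapper face_enhancer --face-enhancer-blend 50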

Invalid NAL unit size

I don't know if it's because the video is very large; my video is about 4.5 GB in size. My command is: python run.py -s ~/autodl-fs/gyy.jpg -t ~/autodl-tmp/722-3.mp4 -o ~/autodl-tmp/722-3-output.mp4 --execution-provider cuda --frame-processor face_swapper face_enhancer --keep-fps --face-recognition many --reference-face-distance 1.5 --temp-frame-format jpg --temp-frame-quality 70 --output-video-encoder libx264 --output-video-quality 80 --execution-thread-count 20

set det-size: (640, 640)
 46%|█████████████████████████████████████████████████████████████████████████████████▋                                                                                                | 46628/101619 [13:05<13:58, 65.56it/s][NULL @ 0xc859cb40] Invalid NAL unit size (0 > 44204).
[NULL @ 0xc859cb40] missing picture in access unit with size 44208
[h264 @ 0xcb248880] Invalid NAL unit size (0 > 44204).
[h264 @ 0xcb248880] Error splitting the input into NAL units.
[NULL @ 0xc859cb40] Invalid NAL unit size (0 > 23386).
[NULL @ 0xc859cb40] missing picture in access unit with size 23390
[h264 @ 0xcb25d200] Invalid NAL unit size (0 > 23386).
[h264 @ 0xcb25d200] Error splitting the input into NAL units.
[h264 @ 0xcb233f00] error while decoding MB 97 104, bytestream -13
[... dozens of similar "Invalid NAL unit size" / "missing picture in access unit" / "Error splitting the input into NAL units" messages omitted ...]
 46%|█████████████████████████████████████████████████████████████████████████████████▋                                                                                                | 46646/101619 [13:06<15:26, 59.33it/s]
[FACEFUSION.CORE] Creating temporary resources
[FACEFUSION.CORE] Extracting frames with 59.94005994005994 FPS
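
The "Invalid NAL unit size" and "Error splitting the input into NAL units" messages come from FFmpeg's H.264 decoder and point to a corrupted bitstream in the target video rather than a FaceFusion bug. One possible workaround, offered as a suggestion rather than a confirmed fix, is to re-encode the target first so that damaged NAL units are skipped (the output file name is illustrative):

```
ffmpeg -err_detect ignore_err -i 722-3.mp4 -c:v libx264 -c:a copy 722-3-fixed.mp4
```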

Error reported after update

I need help. After updating today, when I use:

python run.py --execution_provider cuda

the following error occurs:

run.py: error: unrecognized arguments: --execution_provider gpu

Now I can only run:

python run.py --execution_provider cpu
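
The argument appears to have been renamed in the update to the hyphenated, plural form used in the next report below, so the working invocation should be:

```
python run.py --execution-providers cuda
```

If 'cuda' is then rejected as an invalid choice, see the note on onnxruntime builds below.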

error on cuda

Running

python run.py --execution-providers cuda

returns:

run.py: error: argument --execution-providers: invalid choice: 'cuda' (choose from 'cpu')

I installed CUDA Toolkit 12.2 and cuDNN for CUDA 12.x; I don't know whether they are compatible.
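
The choices offered for `--execution-providers` appear to reflect whatever providers the installed onnxruntime build reports, so 'cuda' is only listed when the GPU package is present. A quick way to check, using the standard onnxruntime API:

```python
import onnxruntime

# Lists the execution providers the installed onnxruntime build supports.
# If 'CUDAExecutionProvider' is missing, only the CPU package is installed;
# swapping it for the GPU build (pip install onnxruntime-gpu) may help.
print(onnxruntime.get_available_providers())
```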

When GPU (Metal) fails, use CPU

I occasionally experience GPU failures such as:
warnings.warn('resource_tracker: There appear to be %d '

So switching to CPU seems like the safe bet.
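
A minimal sketch of that fallback idea using plain onnxruntime on Apple hardware; the model path is a placeholder and this assumes the CoreML execution provider is available:

```python
import onnxruntime

# Providers are tried in the given order; operators the CoreML backend
# cannot handle are assigned to the CPU provider, giving a built-in fallback.
# 'model.onnx' is a placeholder path, not a FaceFusion asset name.
session = onnxruntime.InferenceSession(
	'model.onnx',
	providers = ['CoreMLExecutionProvider', 'CPUExecutionProvider']
)
print(session.get_providers())  # shows which providers were actually loaded
```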

Any improvements over roop?

Is the core of this based on roop? Have any improvements been made to the algorithm, or is the focus just on enhancing the UI? The code structure appears similar.

Why do only a few videos reach processing mode?

Hey, I have multiple target videos, but only a few of them reach processing mode (the second step, after calculating frames). I can't figure out what determines it: the video size, frame count, aspect ratio, or something else. Sometimes it works, but mostly it doesn't.
I have 20 GB of RAM, so it isn't a system issue; the same thing happened back when I ran roop.
Also, the preview only works about one time in four; otherwise I have to keep sliding the preview bar to get a preview.

Can you tell me what's going on?

Colab error

[FACEFUSION.FRAME_PROCESSOR.FACE_SWAPPER] Processing
2023-08-25 19:08:18.923557831 [E:onnxruntime:, inference_session.cc:1644 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 4718592

Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/utils.py", line 706, in wrapper
response = f(*args, **kwargs)
File "/content/facefusion/facefusion/uis/components/output.py", line 44, in update
conditional_process()
File "/content/facefusion/facefusion/core.py", line 196, in conditional_process
process_image()
File "/content/facefusion/facefusion/core.py", line 141, in process_image
frame_processor_module.process_image(facefusion.globals.source_path, facefusion.globals.output_path, facefusion.globals.output_path)
File "/content/facefusion/facefusion/processors/frame/modules/face_swapper.py", line 92, in process_image
result_frame = process_frame(source_face, reference_face, target_frame)
File "/content/facefusion/facefusion/processors/frame/modules/face_swapper.py", line 68, in process_frame
temp_frame = swap_face(source_face, similar_face, temp_frame)
File "/content/facefusion/facefusion/processors/frame/modules/face_swapper.py", line 60, in swap_face
return get_frame_processor().get(temp_frame, target_face, source_face, paste_back = True)
File "/content/facefusion/facefusion/processors/frame/modules/face_swapper.py", line 26, in get_frame_processor
FRAME_PROCESSOR = insightface.model_zoo.get_model(model_path, providers = facefusion.globals.execution_providers)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 25, in init
super().init(model_path, kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void* onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool, onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 4718592
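
This is the GPU running out of memory while the face swapper session is created on Colab. A plausible mitigation, not a confirmed fix, is to lower concurrency via the documented options (availability depends on the installed version), e.g.:

```
python run.py --execution-providers cuda --execution-thread-count 1
```

Restarting the Colab runtime to free VRAM held by earlier sessions may also help.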
