
in03 / proxima

Transcode source media directly from DaVinci Resolve using multiple machines for encoding. Great for creating proxies quickly.

License: MIT License

Python 100.00%
Topics: python, davinci-resolve, distributed-computing, parallel-computing, video-processing, celery, redis, cli-app

proxima's Introduction

Proxima ✨

proxima logo



Check out the Docs! https://in03.github.io/proxima/


Proxima makes queuing proxies from DaVinci Resolve a breeze.

Launch the worker on as many computers as you have free and it'll pool all the free CPU threads together to encode multiple proxies at once. Only have the one computer? Encoding runs entirely on the CPU, leaving GPU-heavy Resolve with plenty of resources to continue editing while you pump out proxies. Once they're finished, they're automatically linked.

queuer demo

Why make proxies?

DaVinci Resolve greatly benefits from all-intra media, like ProRes or DNxHD. If you shoot in H.264 or H.265 like many do, you're likely to see great performance improvements using proxies. This application makes queuing, encoding and linking them quick and easy.

Features

  • Resolve 18 + legacy Resolve 17 support
  • Distributed encoding on multiple machines
  • Utilizes all CPU threads for encoding
  • Queue only used proxies straight from a DaVinci Resolve timeline
  • Automatically link proxies after encoding
  • Flexible queuer/worker arrangement allows background encoding on the same machine
  • Version constraint prevents incompatibilities between different machines
  • Easy YAML based user configuration
  • Advanced configuration validation
  • Automatically checks for updates
  • Control Celery with Celery commands from CLI, even when installed with pipx
  • Multi progress-bar #190
  • Faster config parsing and validation with TOML #258
  • Manage configuration from CLI #258

Roadmap

  • Better Apple silicon support
  • Interactive timeline selection #161
  • Better resource-utilization with encode-chunking
  • Multiple, customizable proxy-presets with rule-based matching #201
  • Easier cross-platform paths via path-mapping
  • Pre-flight checks - encoding settings validation
  • UI improvements
  • Monitor Web App

What about Blackmagic's Proxy Generator?

I started this for the company I work for, well before BPG was on the scene. If BPG works better for you, go for it! As it stands BPG won't do any all-intra codecs on Windows, which is a dealbreaker for us. It also works on a watch-folder basis with no filename whitelisting or path-filtering rules. That means EVERY video file becomes a proxy, whether you need it or not. Our workflow often sees the shooter doing a rough assembly of chosen takes as an exported timeline. We simply import this timeline and queue proxies from it. If you work with chronic-overshooters, you'll save a heap of disk space and encoding time queuing proxies from a roughly-organised timeline.

Ooh, I want it!

Prerequisites

  • One or more editing computers with DaVinci Resolve Studio installed and scripting enabled
  • An always-on computer to run the broker (e.g. server, NAS, primary desktop, Raspberry Pi)
  • Worker computers (decent resources, can overlap with editing computers)
  • All of the above machines on the same LAN, able to access the same files via the same file paths

Note

Only DaVinci Resolve 18 is supported

Please check out the "resolve-17" branch if you need Resolve 17 support. Prior versions are untested, but may work depending on Resolve's API feature set for that version.

Installation

Proxima is composed of three major parts:

  • the 'queuer', responsible for interfacing with DaVinci Resolve and sending tasks to the broker
  • the 'broker' (Redis), which distributes jobs to the workers
  • the 'worker', one of potentially many, responsible for doing the actual encoding

CLI

The 'queuer' and 'worker' are bundled together in the CLI app. They are both installed from the same source, called from the same command and share the same configuration file. As such, any computer that has the CLI app installed can both queue proxies (so long as Resolve is set up) and run workers. Install it with pipx:

pipx install git+https://github.com/in03/proxima

Broker

The broker is best installed on an always-on computer or server. If it's not running, neither queuers nor workers can communicate. Very little configuration is required; just make sure it's accessible over LAN. Redis and RabbitMQ are Celery's best-supported broker backends, but only Redis has been tested extensively with Proxima.

Install Redis with docker:

docker run -d --name some-redis -p 6379:6379 redis redis-server --appendonly yes

Monitor

An in-house web-app monitor for Proxima jobs is coming and will include some really helpful, specific features. Until it's ready, if you want to monitor your jobs, it's a good idea to install Flower - an official monitor for Celery tasks and workers. Install it with docker alongside your broker:

docker run -d --name flower -p 5555:5555 -e CELERY_BROKER_URL=redis://192.168.1.171:6379/0 -e FLOWER_PURGE_OFFLINE_WORKERS=300 mher/flower

Set the IP address and port to match your broker container, and don't forget the redis:// protocol prefix. Consider setting FLOWER_PURGE_OFFLINE_WORKERS if you don't have a well-defined set of computers running workers, or there's potential for worker hostnames to change. A value of 300, for example, clears workers from the list once they've been offline for 5 minutes. This just keeps things tidy.

How do I use it?

Usage: proxima [OPTIONS] COMMAND [ARGS]...

Options:
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.

  --help                Show this message and exit.

Commands:
  link   Manually link proxies from directory to source media in open...
  mon    Launch Flower Celery monitor in default browser new window
  purge  Purge all proxy jobs from all queues
  queue  Queue proxies from the currently open DaVinci Resolve timeline
  work   Prompt to start Celery workers on local machine
  celery Inject Celery commands to control tasks and workers

Configuration

On first run, you'll be prompted to alter your settings. The app will copy the default settings to the OS user configuration folder.

  • Linux/Mac: $XDG_CONFIG_HOME/proxima/user_settings.yml (may not open settings automatically)
  • Windows: %homepath%/proxima/user_settings.yml

Some Key Settings

proxy_path_root

All proxies will be encoded to this directory. They'll retain the source media's directory structure:

paths:
  proxy_path_root: R:/ProxyMedia
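
For example (hypothetical paths; exactly how the source drive root is represented may differ), a source clip at E:/Footage/Day_01/A001_C002.mov would be encoded to something like R:/ProxyMedia/Footage/Day_01/A001_C002.mov, mirroring Footage/Day_01 under the proxy root.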

celery

Celery runs all queuer/worker/broker communications. Make sure you set all of the below addresses as per your environment!

broker:
  url:  redis://192.168.1.19:6379/0
  job_expires: 3600 # 1 hour (cleared if not received by worker)
  result_expires: 86400 # 1 day (Needed for webapp monitor)

Warning

Make sure you set result_expires to a reasonable value, otherwise Redis may run out of memory! If you need persistent results, consider configuring Redis for persistence.
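
As an extra safety net, Redis itself can enforce a memory cap. A sketch (values are illustrative; volatile-lru only evicts keys that have an expiry set, like the results above):

docker run -d --name some-redis -p 6379:6379 redis redis-server --appendonly yes --maxmemory 2gb --maxmemory-policy volatile-lru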

worker

worker:
  loglevel: INFO
  terminal_args: [] # use alternate shell? Recommend windows terminal ("wt") on Windows.
  celery_args: [-l, INFO, -P, solo, --without-mingle, --without-gossip]

Queuer and worker have separate loglevels to make debugging a little easier if you've got just one worker playing up.
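
For reference, those celery_args end up on the worker's Celery command line more or less verbatim. A single worker launch looks roughly like this sketch (the app module and routing key shown are illustrative, not exact):

celery -A proxima.celery worker -l INFO -P solo --without-mingle --without-gossip -n worker1@myhost -Q 3a7f21b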

A Note on Concurrency

Some pretty dangerous concurrency settings were moved out of the configuration to prevent accidents. Windows doesn't support pre-forking for concurrency; in fact, Celery no longer officially supports Windows at all. Mac and Linux can pre-fork, but the worker launcher invoked with the proxima work command gets around Windows' limitation by launching separate processes, with workers named worker1@host, worker2@host, etc. We're going for the lowest common denominator here: this works fine on Mac and Linux too, makes monitoring easier, and keeps cross-platform behaviour consistent. If you're concerned about the extra overhead and want to squeeze every last bit of performance out of Mac or Linux, consider injecting the celery multi command with proxima celery. See Celery's daemonization docs.
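
For example, a sketch of that injection (arguments are assumptions based on Celery's multi syntax, not a documented Proxima invocation):

proxima celery multi start 2 -A proxima.celery -l INFO -P solo -Q 3a7f21b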

How can I contribute?

Hey! Thanks! Any help is appreciated. Please check the Contribution Guide.

Star History

Thanks for your interest! Every star helps :) Star History Chart

proxima's People

Contributors

actions-user, dependabot[bot], github-actions[bot], in03, pre-commit-ci[bot], youtourdev


proxima's Issues

Stop relying on calling process for post-render actions

As convenient as keeping the calling script open is, we should look into restructuring our post-render actions.

Currently the script responsible for queuing the proxy tasks stays open and waits for all the jobs queued in that instance to finish before performing post-render actions (native OS notification, logging failed jobs, linking successfully encoded proxies, etc.). As currently implemented, we get a new console window for every round of proxies we queue. These windows stay open for the duration of the long-running encode jobs, and can sometimes become unreliable and continue waiting after all tasks have finished.

Ideally we would decouple the post-render actions from the calling script altogether.

We could always have a separate worker with a separate task queue specifically for post-render actions. These workers would have to be guaranteed to have access to a running copy of Resolve for Resolve's API. Given that Resolve remote-rendering is often used by post-production houses, it may be a viable workflow. By passing the entire project object to this worker, we'd be able to link proxies for a project even if the original queuing computer was turned off or had closed Resolve.

We could further decouple notifications from the Celery/worker ecosystem and use webhooks with a cross-platform app capable of desktop notifications (Chrome or Firefox?) or use emails instead of native OS notifications. If we pass the queuing computer's hostname, we can choose to filter notifications for encodes that weren't queued on the active system.

Occasionally job is sent, but no worker receives it or works on it?

Sometimes RESOLVE_queue_proxies reports the job as sent and waits on it to finish, but the job is never received by any Celery worker.

Possibly an issue with a worker losing sync with the other workers when its host machine returns from sleep?
We need a more robust check that progress is being made, with a timeout or retry. Maybe some function to request a different Celery worker, or restart one that's playing up if the worker is at fault?

Knowing why this happens would certainly bring peace of mind.

Add the Resolve API media pool item object so we can call it directly to link

Add the Resolve API media pool item object so we can call it directly to link

source_metadata.update({'media_pool_item_object':media_pool_item})

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/resolve_queue_proxies.py#L423

                )
                continue

        # TODO: Add the Resolve API media pool item object so we can call it directly to link
        # source_metadata.update({'media_pool_item_object':media_pool_item})
        filtered_metadata.append(source_metadata)

    print(f"[green]Total queuable clips on timeline: {len(filtered_metadata)}[/]")

6d6b0945ae89a09d2a9d212fc5db8146332192d4

`git_sha` slice returns as 5 characters, not standard 7

git_sha slice returns as 5 characters, not standard 7

https://github.com/in03/Resolve-Proxy-Encoder/blob/ed720e68a961652adfb5e2b262771e688442f33a/resolve_proxy_encoder/worker/helpers.py#L70

        return windows_path

    return wsl_path


def get_queue():

    """Get Celery queue name (routing key) from package git commit short SHA

    Allows constraining tasks and workers to exact same version and prevent breaking changes.

    """

    # Add git SHA Celery queue to prevent queuer/worker incompatibilities
    git_full_sha = get_package_current_commit("resolve_proxy_encoder")

    if config["app"]["disable_version_constrain"]:
        logger.warning(
            "[yellow]Version constrain is disabled! Thar be dragons :dragon_face:[/]"
        )
        return "celery"

    if not git_full_sha:

        logger.error(
            "[red]Couldn't get local package commit SHA!\n"
            + "Necessary to maintain version constrain.[/]"
        )
        app_exit(1, -1)

    # TODO: `git_sha` slice returns as 5 characters, not standard 7
    # labels: bug

    # Use git standard 7 character short SHA
    return git_full_sha[::8]

2f4395fb65b4eb8eceb61a5ece4a21d33e580f82
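
For the record, [::8] is a step slice, not a truncation: it takes every eighth character, so a 40-character SHA yields the characters at indices 0, 8, 16, 24 and 32, i.e. 5 characters. A quick sketch of the difference:

git_full_sha = "0123456789abcdef0123456789abcdef01234567"  # example 40-char SHA
git_full_sha[::8]  # every 8th character -> 5 chars
git_full_sha[:7]   # first 7 characters -> standard git short SHA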

Legacy Link - Multiple link attempts for a malformed proxy

All of the proxy linking functions are due for a rewrite...

The legacy linking function runs when existing unlinked proxies are detected and the user chooses to link them. It doesn't receive a list of references to Resolve's original media pool objects like a potential post-encode linking implementation might, so it needs to iterate over all timeline objects and retrieve those items from scratch. By default, this implementation retrieves ALL clips from ALL timelines and checks the list of proxies against all of them.

Proxy files that fail to link are still checked against all other matching source-media files even though they failed once. A match should only fail if the proxy media is malformed; one failure is enough to warrant skipping. Given that a proxy file should only link to one source-media file, additional matches are likely to be the same source media in a different timeline. Having the same clip in multiple timelines is highly likely when doing timeline revisions.

Proxies can be manually linked when still encoding

Problem

It seems that proxies can be linked when they haven't finished encoding yet. I don't know why this hasn't come up sooner.
Maybe the metadata is being written before encoding has finished and so passes Resolve's timecode/duration check when calling to link through the API?

Potential solution

Ideally linking would only be done automatically post-encode. Since the chance of a user performing a manual link before files are finished is high, we should prevent these files from being linkable. A simple solution would be to use a different filename while the file is encoding and rename it to its final name once finished. If ffmpeg will allow us to export the file to the same format with a modified extension, we could make this extension .part.

Otherwise we could add _part_ as a suffix to the filename just before the extension.
This will still be required once split and stitch encoding is implemented, pre/post joining segments.
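
A minimal sketch of the rename approach (paths and codec are illustrative; -f mov forces the container since FFmpeg can't infer it from a .part extension):

import os
import subprocess

src = "A001_C002.mov"        # hypothetical source clip
dst = "A001_C002_proxy.mov"  # final, linkable name
tmp = dst + ".part"          # Resolve won't match this while encoding

# Encode to the temporary name, forcing the MOV muxer
subprocess.run(["ffmpeg", "-y", "-i", src, "-c:v", "prores", "-f", "mov", tmp], check=True)

# Promote the finished file to its linkable name
os.rename(tmp, dst)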

FFMpeg not failing properly

After switching to better_ffmpeg_progress, whenever FFmpeg fails it logs the failure, but Python doesn't catch the exception in the script. Need to fix this.

Figure out how to pass optional celery args to Typer

Figure out how to pass optional celery args to Typer

https://github.com/in03/Resolve-Proxy-Encoder/blob/979d02f2f285e02170b2aa075995495116efe47c/resolve_proxy_encoder/cli.py#L62

@cli_app.command()

# TODO: Figure out how to pass optional celery args to Typer
def work(
    workers_to_launch: Optional[int] = typer.Argument(
        0, help="How many workers to start"
    )
):
    """Prompt to start Celery workers on local machine"""

    if workers_to_launch > 0:
        print(f"[green]Starting workers! :construction_worker:[/]")
    print(f"[cyan]Starting worker launcher prompt :construction_worker:[/]")

    from resolve_proxy_encoder import start_workers

    start_workers.main(workers_to_launch)


@cli_app.command()

41e9da4e380aec42333fe8f48124359785ae50d1

Can't link files if source media has timecode data

Problem

It seems that Resolve is very fussy about timecode matching in order to link to source media. 5bf51ca started to explore copying the starting timecode to the proxy's file metadata, but it seems that Resolve is looking for Quicktime timecode data ONLY, despite accepting the MXF if there is no timecode metadata to match.

Potential Solutions

  1. Somehow force FFMpeg to write Quicktime timecode metadata
  2. Find a Quicktime DNxHD encoder that can take FFMpeg's place
  3. Strip timecode metadata from the source files to keep Resolve happy
  4. Complain and wait for BlackMagic to change Resolve's proxy timecode matching implementation
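
For option 1, FFmpeg can write a QuickTime tmcd track into MOV output via its -timecode flag; whether Resolve then accepts it for matching is exactly the open question here. A sketch:

ffmpeg -i source.mxf -c:v prores -timecode 01:02:03:04 proxy.mov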

Test that celery binary can be started from pipx install

Test that celery binary can be started from pipx install

https://github.com/in03/Resolve-Proxy-Encoder/blob/979d02f2f285e02170b2aa075995495116efe47c/resolve_proxy_encoder/start_workers.py#L120

        invalid_answer()

    if answer == 0:
        print(f"[yellow]Using suggested amount: {safe_cores_suggestion}[/]")
        answer = safe_cores_suggestion

    return answer


def get_routing_key_from_version():

    # Add git SHA Celery queue to prevent queuer/worker incompatibilities
    git_full_sha = get_package_current_commit("resolve_proxy_encoder")

    if not git_full_sha:

        logger.error(
            "[red]Couldn't get local package commit SHA!\n"
            + "Necessary to prevent version mismatches between queuer and worker.[/]"
        )
        app_exit(1, -1)

    # Use git standard 7 character short SHA
    return git_full_sha[::8]


def new_worker(id=None):
    """Start a new celery worker in a new process

    Used to start workers even when the script binaries are buried
    in a virtual env like in pipx.

    Args:
        - id: Used to differentiate multiple workers on the same host

    Returns:
        - none

    Raises:
        - none
    """

    # Get worker name
    worker_name = []

    if id:
        worker_name = [
            "-n",
            f"worker{id}",
            "@h",  # hostname
        ]

    # Get worker queue
    routing_key = get_routing_key_from_version()
    queue = [" -Q " + routing_key]

    # TODO: Test that celery binary can be started from pipx install

    # Get celery binary
    if which("celery"):

        celery_bin = "celery"

    else:

        # Celery bin not on path. Maybe virtual env. Find.
        site_packages_dir = Path(get_python_lib()).resolve()
        celery_bin = get_script_from_package("Celery")
        print(celery_bin)
        package_root = os.path.join(site_packages_dir, "resolve_proxy_encoder")

        # Change dir to package root
        os.chdir(package_root)

    launch_cmd = [
        *config["celery_settings"]["worker_terminal_args"],
        celery_bin,
        "-A",
        "resolve_proxy_encoder.worker",
        "worker",
        *worker_name,
        *queue,
        *config["celery_settings"]["worker_celery_args"],
    ]

    # TODO: Figure out why celery worker is failing to start
    # "Error: Unable to parse extra configuration from command line."
    # Not sure why, copying the cmd and pasting runs the worker fine...
    # labels: bug

    logger.info(launch_cmd)
    print(launch_cmd)
    process = subprocess.Popen(launch_cmd)


def launch_workers(workers_to_launch: int):

    # Start launching

    worker_id = 0
    with Progress() as progress:

        launch = progress.add_task(
            "[green]Starting workers[/]", total=workers_to_launch
        )

        while not progress.finished:

            worker_id += 1
            progress.update(launch, advance=1)

            # logger.info(launch_cmd)

            new_worker(id=worker_id)
            time.sleep(0.05)

        print()
        return


def main(workers: int = 0):
    """Main function"""

    os_ = platform.system()
    cpu_cores = multiprocessing.cpu_count()

    print(f"[green]Running on {os_} with {cpu_cores} cores.[/]\n")

    # Check OS is Windows (this multi-worker launcher is Windows-only)
    if not platform.system() == "Windows":
        print(
            f"[red]This utility is for Windows only!\n"
            + "To start multiple workers on Linux or WSL, setup a systemd service."
        )
        sys.exit(1)

9608ac7d0f8f9ef7a9f6b8269fdfbaf224956515

Remove unused dependencies

There's a heap of unused dependencies at the moment.
Can we potentially find an automated way of doing this? Not sure how it plays with Poetry.
I know pip-tools works nicely with requirements.txt.

'Split and Stitch' Encoding

I've been messing around with running separate FFmpeg processes on segments of the same video file.
There are a bunch of benefits here:

  • Encoding speed is only limited by chunk-duration and worker-pool size
  • Greater resource utilisation
  • Chunked job structure lends to more reliable progress-metrics

I've got this working reliably locally - with no performance gains obviously since I'm running all the FFmpeg processes on the same machine.

To get this working here we'll need a few things:

  • A new task to parse the original job into segments
  • The encoding task's group will become a chord so we can join the segments after encode
  • Job must be pickled, not sent as JSON. We need to transport Resolve's PyRemoteObj MediaPoolItem as task results
  • Additional cleanup required for temporary files (original segments, temp folder structure, etc.)
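
Roughly, the FFmpeg side could look like this sketch (chunk duration and filenames are illustrative, and stream-copied splits land on keyframes rather than exact timestamps):

# Split the source into ~60 second chunks without re-encoding
ffmpeg -i source.mov -c copy -f segment -segment_time 60 chunk_%03d.mov

# ...each chunk gets encoded to a proxy by a different worker...

# Rejoin the encoded chunks with the concat demuxer; chunks.txt lists
# one "file 'chunk_000_proxy.mov'" line per encoded segment
ffmpeg -f concat -safe 0 -i chunks.txt -c copy proxy.mov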

Handle missing config keys

When RPE first starts, it checks for pre-existing user settings and will only copy the whole file if pre-existing user settings are not found. When new features are released that require new settings, default_settings.yml is updated with those new keys and default values.

We should check that all keys in default_settings.yml exist in the user configuration and if not, copy the keys and their default values to the user configuration. Maybe a quick console warning with the missing keys listed.
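
A sketch of that top-level check (nested keys would need a recursive merge):

import yaml

with open("default_settings.yml") as f:
    defaults = yaml.safe_load(f)
with open("user_settings.yml") as f:
    user = yaml.safe_load(f)

missing = [key for key in defaults if key not in user]
if missing:
    print(f"Adding missing settings with default values: {missing}")
    user = {**defaults, **user}  # defaults fill gaps, user values win
    with open("user_settings.yml", "w") as f:
        yaml.safe_dump(user, f)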

Proxy metadata doesn't use original timecode?

Will have to check up on this because I can't remember how it was implemented. Just had an error with an old project not linking rendered proxies because of mismatched starting timecode. Proxies all had starting timecode set to zero. Is ffmpeg disregarding timecode and always setting to zero?

Swap win term and start min cmd for custom start args

Swap win term and start min cmd for custom start args

This is silly and messy. Win term doesn't support min anyway.

Something like this is better: run_with: ["start /min", "bash", "wt"]

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/start_workers.py#L105

        queue_from_sha = " -Q " + git_full_sha[::8]
        launch_cmd = START_WIN_WORKER + multi_worker_fmt + queue_from_sha

        # TODO: Swap win term and start min cmd for custom start args
        # This is silly and messy. Win term doesn't support min anyway.
        # Like so is better: `run_with: ["start /min" , "bash", "wt"]`
        # labels: bug

        # Use windows terminal?
        if config["celery_settings"]["worker_use_win_terminal"]:
            start = "wt "

7b19328d8685158e0472a86ee4d161c33d913845

Implement remote worker shutdown CLI command

Look into remotely shutting down workers gracefully here.
Maybe consider restructuring commands a bit if we'll have multiple worker commands, etc.
Perhaps something like this?

$ rprox worker stop all
>> Stopped all workers

$ rprox worker stop worker1@EMC
>> Stopped worker1@EMC

'rprox work' command is incompatible with pipx!

'rprox work' command is incompatible with pipx!

Because we call our worker from celery as a subprocess, we need to either call it programmatically somehow, or create a "rprox start worker" command that takes celery args but wraps the first part of the cmd.

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/start_workers.py#L29

config = settings.user_settings
logger = get_rich_logger(loglevel=config["celery_settings"]["worker_loglevel"])

# TODO: 'rprox work' command is incompatible with pipx!
# Because we call our worker from celery as a subprocess we need to either call it
# programmatically somehow or create a "rprox start worker" command that takes celery args,
# but wraps the first part of the cmd.
# labels: bug, critical

# Make sure the module path in the command below is up to date!
START_WIN_WORKER = """celery -A resolve_proxy_encoder.worker worker -l INFO -P solo"""

c3ef4997cd63208244e51bbc5e1e62777024c761

Unhandled overwrite error

I'm warned that a proxy can't be linked (assuming it's corrupt or incomplete). Upon attempting to re-render by clicking "No" at the "Unlinked proxy media found, would you like to link?" dialog, the job is sent, instantly completed, and then fails to link again. If I try to delete the file manually and refresh the file manager, I can see it reappears. This is an occasional glitch with QNAP NAS systems. A potential fix (pending testing) would be to explicitly try to delete the file marked for overwriting and, if unsuccessful, explicitly warn the user that the file could not be deleted. I'm not sure if "trying" and "excepting" a delete is enough, or if an explicit os.path.exists() check is necessary after deletion, since File Explorer reports no errors upon delete.

Temporary fix is to delete the file from the QNAP web interface.
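
A sketch of the "try, except, then verify" delete; the explicit existence check guards against the NAS reporting success while the file silently reappears:

import os

def ensure_deleted(path: str) -> bool:
    """Delete a file and confirm it's really gone."""
    try:
        os.remove(path)
    except FileNotFoundError:
        return True
    except OSError as e:
        print(f"Couldn't delete '{path}': {e}")
        return False
    # Belt and braces for flaky network shares
    return not os.path.exists(path)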

Proxy resolution settings assume consistent aspect ratio

Problem

Since we're providing absolute resolution settings to FFMpeg, we're stretching any clips that are filmed in different aspect ratios to the settings provided.

Potential Solutions

  • Provide only the vertical resolution as an absolute value, and derive the horizontal resolution by scaling the source width by the same factor (source width × proxy height ÷ source height)
  • Provide percentages of the original source media instead of absolute values (inconsistent resolutions of high-res media)
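
FFmpeg can already do the first option's arithmetic itself; for example (a sketch), a scale filter with a fixed height and an automatic even-rounded width preserves the source aspect ratio:

ffmpeg -i source.mov -vf scale=-2:720 -c:v prores proxy.mov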

Fix weird double app_exit issue on failed link

Fix weird double app_exit issue on failed link

Currently a failed link brings up two app_exit 'Press ENTER to exit' prompts. Probably an app_exit prompt inside the legacy_link func exiting with status 1 being caught by the next app_exit. Perhaps app_exits should only run if __name__ == "__main__"?

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/resolve_queue_proxies.py#L946

    job = queue_job(tasks)
    wait_encode(job)

    # TODO: Fix weird double app_exit issue on failed link
    # Currently a failed link brings up two app_exit 'Press ENTER to exit'
    # prompts. Probably an app_exit prompt inside legacy_link func exiting with status 1
    # being caught by next app_exit. Perhaps app_exits should only run if __name__ == "__main__"?
    # labels: bug

    # ATTEMPT POST ENCODE LINK
    try:

4c4e92824d01965f4baec8918d9a37dcef8eea76

Ensure partial matches work here too

Ensure partial matches work here too

Make sure no starting args exist

https://github.com/in03/Resolve-Proxy-Encoder/blob/a37c615e4f9a37dd559add17113ab986400d4cb2/resolve_proxy_encoder/start_workers.py#L153

        - none
    """

    def get_worker_name(id):

        # @h for 'hostname'
        return f"-n worker{id}@h"

    def get_worker_queue():

        return " -Q " + get_routing_key_from_version()

    def get_celery_binary_path():

        # Check if in virtual env. Find.
        celery_bin = get_script_from_package("Celery")
        if celery_bin:
            return celery_bin

        logger.warning("[yellow]Can't get Celery from package.[/]")

        # Assume global
        celery_bin = which("celery")
        if celery_bin:
            return celery_bin
        logger.warning(
            "[yellow]Using Celery on path." + "Please ensure version compatability![/]"
        )

        logger.error("[red]Couldn't find celery binary! Is it installed?[/]")
        app_exit(1, -1)

    def get_module_path():
        """Get absolute module path to pass to Celery worker"""

        # Change dir to package root
        module_path = os.path.dirname(os.path.abspath(__file__))

        logger.debug(f"Worker path: {module_path}")
        assert os.path.exists(module_path)
        return os.path.abspath(module_path)

    def get_new_console():
        """Get os command to spawn process in a new console window"""

        # Get new terminal
        worker_terminal_args = config["celery_settings"]["worker_terminal_args"]

        # Check if any args are on path. Probably terminal executable.
        executable_args = [which(x) for x in worker_terminal_args]
        if any(executable_args):  # a bare non-empty list of Nones would be truthy
            return ""

        # TODO: Ensure partial matches work here too
        # Make sure no starting args exist
        start_commands = ["open", "start"]
        if any(x in start_commands for x in worker_terminal_args):
            return ""

        # TODO: Need to test starting workers on other platforms
        # Get new terminal cmd
        os_ = platform.system()

        if os_ == "Windows":
            return 'start "RPROX Worker"'  # First double quotes as title

        elif os_ == "Darwin":  # platform.system() reports macOS as "Darwin"
            return "open"

        elif os_ == "Linux":
            logger.error(
                "Cannot guess installed Linux terminal. Too many distros."
                + "Please provide a terminal executable in 'worker_terminal_args' settings."
            )
            app_exit(1, -1)
        else:
            logger.error(
                f"Could not determine terminal executable for OS: {os_}."
                + "Please provide a terminal executable in 'worker_terminal_args' settings."
            )
            app_exit(1, -1)

    launch_cmd = [
        get_new_console(),
        *config["celery_settings"]["worker_terminal_args"],
        f'"{get_celery_binary_path()}"',
        "-A resolve_proxy_encoder.worker",
        "worker",
        get_worker_name(id),
        get_worker_queue(),
        *config["celery_settings"]["worker_celery_args"],
    ]

    logger.info(" ".join(launch_cmd))
    # print(" ".join(launch_cmd))

    subprocess.Popen(
        cwd=get_module_path(),
        args=" ".join(launch_cmd),
        shell=True,
    )


def launch_workers(workers_to_launch: int):

8f830ea439e61f66c17e498dc5c4aacac0da6c0f

Worker-side Checks

Before receiving any tasks (which are darn hard to debug with all the Celery fluff and bubble), let's run a bunch of quick permissions / access tests. E.g:

  • Can we access the proxy_media path? Do we have R/W permissions?
  • Are there any configuration conflicts?

Then we can pop up some user-friendly warnings before attempting any encoding.
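
A sketch of the simplest such check (proxy_path_root as per user settings; os.access can be unreliable on network shares, so treat the result as a hint):

import os

proxy_path_root = "R:/ProxyMedia"  # illustrative, from user settings

if not os.path.isdir(proxy_path_root):
    print(f"Can't access proxy path: {proxy_path_root}")
elif not os.access(proxy_path_root, os.R_OK | os.W_OK):
    print(f"No read/write permission on: {proxy_path_root}")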

Extension check matches using TimelineItem.GetName() instead of source media filepath

The extension check should not use TimelineItem.GetName(). Although the default clip name is the file name of the source media, a user can give a clip a custom name, and then the script will fail to find the clip's extension in acceptable_exts. Since it's a basic string "in" check, no attempt is made to confirm the filepath exists. TBH, I'm not entirely sure we need an extension check at all. By "trying" .GetClipProperty(), anything that doesn't have that method will fail. If that method fails, then we also can't retrieve the "File Path" attribute. If we just go off the "File Path" attribute from the start, we know we're receiving a real file path from Resolve and that the name is up to date.
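
A sketch of going off "File Path" from the start (media_pool_item and acceptable_exts as used in the existing script; the error handling is illustrative):

import os

def get_source_extension(media_pool_item):
    """Return the source extension via Resolve's 'File Path' clip
    property, or None if the item isn't clip-backed."""
    try:
        file_path = media_pool_item.GetClipProperty("File Path")
    except AttributeError:
        return None  # e.g. compound clip or generator
    if not file_path:
        return None
    return os.path.splitext(file_path)[1].lower()

# The extension check then becomes:
# if get_source_extension(item) in acceptable_exts: ...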

Improve linking logic

There are currently three implementations of linking going on:

legacy_link()

Original throw-together script since before rprox
This is currently just the old script reworked as a module. It gets the project -> all timelines -> all tracks -> all clips -> all source media, and compares lists of source media to lists of proxy media. This is done very inefficiently: any corrupted proxies will attempt linking multiple times, and it uses its own, worse implementations of functions that already exist elsewhere in the package. Recent attempts at rewrites didn't go so well, so this has remained the de facto implementation for now.

search_and_link()

Currently working on this
This is a rewrite of the legacy link function that shares functions with the rest of the package and hopes to overcome some of the weirder glitchy behaviour.

post_encode_link()

This doesn't walk all the project objects again, but instead relies on the returned results from Celery. This comes with these ideal benefits:

  • only clips that were queued and are successfully encoded will be linked (not ALL clips from all timelines)
  • if we can successfully pass Resolve's PyRemoteObject MediaPoolItem() for each source media item, we can simply pass the original object reference to each exported proxy. No matching source filenames or slowly comparing lists required.
  • if the worker has Resolve open and running, it would also be possible to link proxies from the worker without returning results back to the caller

Tone down output frills if not TTY

Currently all colours and markup available in rich are both printed and logged, regardless of whether the output is piped or not.
Best add a series of checks to disable the extras, as mentioned here:

- sys.stdout.isatty() returns false
- The NO_COLOR environment variable is set.
- The TERM environment variable has the value dumb.
- The user passes the option --no-color.
- You may also want to add a MYAPP_NO_COLOR environment variable

DON'T FORGET ANIMATIONS!
Any progress bars will throw craziness everywhere if logged to file.
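
A sketch of the combined check (PROXIMA_NO_COLOR is the hypothetical app-specific variable; the --no-color flag would be handled by the CLI parser):

import os
import sys

def frills_allowed() -> bool:
    """True if colour, markup and animations are safe to emit."""
    if not sys.stdout.isatty():
        return False  # piped or redirected
    if "NO_COLOR" in os.environ or "PROXIMA_NO_COLOR" in os.environ:
        return False
    if os.environ.get("TERM") == "dumb":
        return False
    return True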

Rework CLI commands

Might need to consider restructuring CLI commands to accommodate more complicated combinations:

rprox work -> rprox worker start (start worker prompt)
           -> rprox worker start 3 (start three workers)
           -> rprox worker start 0 (start default amount of workers)

rprox purge -> rprox tasks purge (remove all tasks from Celery queues)
            -> rprox tasks info (get info on all tasks - active, successful, failed)

Stop console status spinner from breaking prompts and console logging

Stop console status spinner from breaking prompts and console logging

Maybe the spinner doesn't expect console output until we've exited the 'with'?

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/checks.py#L81

        time.sleep(2)
        return

    # TODO: Stop console status spinner from breaking prompts and console logging
    # Maybe the spinner doesn't expect console output until we've exited the 'with'?
    # labels: bug
    console = Console()
    with console.status(
        "\n[cyan]Fetching online workers for compatability check...[/]\n"

3ea8743e86db17d8876db3ed0ed4b82003646e7b

Figure out why celery worker is failing to start

Figure out why celery worker is failing to start

"Error: Unable to parse extra configuration from command line."

Not sure why, copying the cmd and pasting runs the worker fine...

https://github.com/in03/Resolve-Proxy-Encoder/blob/979d02f2f285e02170b2aa075995495116efe47c/resolve_proxy_encoder/start_workers.py#L149


6bc15a73f799dc4093b66584d91b3efac9067513

Fix terminal progress output on newline

Fix terminal progress output on newline

Currently BetterFFMpegProgress is logging progress increments to newlines. It's better than nothing, but it looks awful. There are other issues with BFP too; logging to files is messy, etc. Maybe steal the FFmpeg progress parsing func and use rich.progress instead.

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/worker/tasks/standard/tasks.py#L177

    logger.info(f"FFmpeg command:\n{' '.join(ffmpeg_command)}")

    # TODO: Fix terminal progress output on newline
    # Currently BetterFFMpegProgress is logging progress increments to newlines.
    # It's better than nothing, but it looks awful. There are other issues with BFP too.
    # Logging to files is messy, etc. Maybe steal the FFMpeg progress parsing func and
    # use rich.progress instead.
    # labels: bug

    process = FfmpegProcess(
        command=[*ffmpeg_command], ffmpeg_loglevel=proxy_settings["ffmpeg_loglevel"]
    )

0c5e3806e3ceade8ff0c10ebd94a24a37e891a33

Need to get new search and link working

Need to get new search and link working

Legacy link is the original code and it's soooo bad. I haven't looked at it in ages. But then I bit off more than I could chew working on this. Find a middle ground!

https://github.com/in03/Resolve-Proxy-Encoder/blob/a37c615e4f9a37dd559add17113ab986400d4cb2/resolve_proxy_encoder/resolve_queue_proxies.py#L814

    return media_list


# TODO: Need to get new search and link working
# Legacy link is the original code and it's soooo bad.
# I haven't looked at it in ages. But then I bit off more than I could
# chew working on this. Find a middle ground!
# labels: bug


def search_and_link():
    """Search through all existing media in active project and
    attempt to find linkable proxies in expected directory

71cf960e9ac2495f526a8b17737d589a62158bfb

Need to test starting workers on other platforms

Need to test starting workers on other platforms

Get new terminal cmd

https://github.com/in03/Resolve-Proxy-Encoder/blob/a37c615e4f9a37dd559add17113ab986400d4cb2/resolve_proxy_encoder/start_workers.py#L159


bc403778568dd46dca9e10fd5bae065124bba9d8

Proxy encode settings are being pulled from local machine config instead of from task

Problem

Currently the worker encode task is pulling the proxy encoding settings locally from the user configuration yaml. This means queuer-side settings have no effect, which is totally unintuitive.

Potential Solution

Pull encoding settings from local configuration queuer-side and send them with the task along with all source-media metadata.

Add more app level settings.

Add more app level settings.

Some more app or CLI level settings are probably a good idea, e.g:

  • control app_exit prompt before exit

  • no coloured output

  • warn updates (move)

  • loglevel (move)

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/settings/default_settings.yml#L3

---  # Resolve Proxy Encoder User configuration file

# TODO: Add more app level settings.
# Some more app or CLI level settings is probably a good idea, e.g:
# - control app_exit prompt before exit
# - no coloured output
# - warn updates (move)
# - loglevel (move)
# labels: enhancement

loglevel: WARNING

ab763bc110c2e67ae4b45360879375bba5bb7478

Link proxy

This is not a real issue, just a question: how can I use the script just to link proxies? I don't have any knowledge of DaVinci Resolve scripting.

Easy to pass invalid FFMpeg arguments in configuration

Problem

It's quite easy to pass invalid FFMpeg arguments in user configuration.

Possible Solutions

  • Add lambdas to the settings schema to reject whitespace within list arguments in the user config
  • Validate FFMpeg command on startup with a dryrun
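
A sketch of the dry-run idea: run the user's arguments against a tiny synthetic input and discard the output (testsrc2 and the null muxer are stock FFmpeg):

import subprocess

def ffmpeg_args_valid(user_args: list) -> bool:
    """Dry-run encode settings against a 0.1 s generated clip."""
    cmd = [
        "ffmpeg", "-v", "error",
        "-f", "lavfi", "-i", "testsrc2=duration=0.1:size=256x144:rate=24",
        *user_args,
        "-f", "null", "-",
    ]
    return subprocess.run(cmd).returncode == 0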

Fix too much newline padding when all checks pass

Fix too much newline padding when all checks pass

Move the newline padding from these 'success prints' to the warning and error logs. Make sure newline padding is consistent.

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/checks.py#L63

        logger.info(f"Current: {package_latest_commit}")

    else:
        # TODO: Fix too much newline padding when all checks pass
        # Move the newline padding from these 'success prints' to
        # the warning and error logs. Make sure newline padding is consistent.
        # labels: bug
        print("\n[green]Installation up-to-date :white_check_mark:[/]\n")

    return

f6d00037b5f4f2641d6651675dd3566b7ad69884

Remove. Not necessary anymore

Remove. Not necessary anymore

source_metadata = remove_duplicate_elements(source_metadata)

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/resolve_queue_proxies.py#L923

    track_items = get_video_track_items(timeline)
    media_pool_items = get_media_pool_items(track_items)
    source_metadata = get_source_metadata(media_pool_items)
    # TODO: Remove. Not necessary anymore
    # source_metadata = remove_duplicate_elements(source_metadata)

    print("\n")

d5bc4cf04b562976e2f4b344c76caf6f247603aa

Add user settings switch for queuer/worker version constrain

Something that just allows for quick debugging across machines without having to update all machines to the same commit.
Should be fine for testing so long as there are no breaking changes between queuers/workers.
We'll throw up a heap of scary warnings.

Offline proxies not handled

Issue

If Resolve links to proxies but they then become unavailable, they're considered 'offline' by Resolve. This usually means Resolve has lost access (maybe disconnected network drive or removable media) or the proxies were deleted or moved elsewhere. Currently they're seen as 'linked' and so if all proxies are offline, the script will say "All proxies are linked" and exit.

Proposal

Create a handler specifically for offline proxies. Prompt the user if offline proxies are detected:

[Title] Offline proxies detected
[Body] 5 proxies are linked, but offline. Would you like to queue the offline clips anyway?
[Yes] [No] [Cancel]

Fix progress dots still showing at more verbose loglevels

Fix progress dots still showing at more verbose loglevels

The dots shouldn't show when we're outputting each worker start cmd. Probs get logger loglevel programmatically instead of from config. Probs just swap dots for rich.progress.

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/start_workers.py#L130

            shell=True,
        )

        # TODO: Fix progress dots still showing at more verbose loglevels
        # The dots shouldn't show when we're outputting each worker start cmd.
        # Probs get logger loglevel programmatically instead of from config.
        # Probs just swap dots for rich.progress.
        # labels: bug, enhancement

        if config["celery_settings"]["worker_loglevel"] == "WARNING":

            sys.stdout.write(dots)

f279e8881837ff5ccfdabfb2f402cb3b8b52d8d6

Post-encode proxy link isn't working

This should've linked, but it didn't. Not sure why.

Should've been able to link this

Link_proxies module is definitely working when run manually. Might just need to check to see if the link function is handing the link_proxies module a correctly populated object.

Queuer / Worker versioning incompatibilties

Issue

Currently any workers subscribed to the broker will take supported jobs. If there are any changes to the project structure or task implementations, the worker will take the job and do what it thinks it should do. This can lead to unregistered task errors, incomplete jobs or even jobs that are completed but done wrong without any indication.

Proposal

Easy

  • Automatically update to the newest release on both queuer and worker startup (if auto-update is set in settings)

  • Warn user (if auto-update is not set) when new version is available (like pip does for Python) and briefly explain potential incompatibility issues

Hard

  • Warn the user before queuing if multiple versions of workers are detected as connected to the broker

  • Implement a versioning system that prevents workers from taking jobs from newer versions of the software. Log to the console when a job has been ignored and warn the user they should update to the latest version.

Properly catch Schema exceptions as SchemaWrongKeyError, etc.

Properly catch Schema exceptions as SchemaWrongKeyError, etc.

It's just a generic SchemaError for now. If we can catch them, we don't need this func. We can also use the default option in Schema to add default keys. Then we can get rid of the default_settings.yml file.

https://github.com/in03/Resolve-Proxy-Encoder/blob/dab58394e1b8685f085c56227a5b8a40af02287c/resolve_proxy_encoder/settings/app_settings.py#L121

    def _ensure_user_keys(self):
        """Ensure user settings have all keys in default settings"""

        # TODO: Properly catch Schema exceptions as SchemaWrongKeyError, etc.
        # It's just generic SchemaError for now. If we can catch them, we don't need this func.
        # We can also use the default option in Schema to add default keys.
        # Then we can get rid of the default_settings.yml file.

        diffs = DeepDiff(self.default_settings, self.user_settings)

        # Check for unknown settings

b723b482d277795040f660859295ff158a31fb60
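
A sketch of both ideas with the schema library (the keys shown are illustrative): catch the specific subclasses before the generic SchemaError, and let Optional defaults supply missing keys:

from schema import Optional, Schema, SchemaError, SchemaMissingKeyError, SchemaWrongKeyError

settings_schema = Schema({
    Optional("loglevel", default="WARNING"): str,  # default fills a missing key
    "paths": {"proxy_path_root": str},
})

user_settings = {"paths": {"proxy_path_root": "R:/ProxyMedia"}}

try:
    validated = settings_schema.validate(user_settings)
except SchemaWrongKeyError as e:
    print(f"Unknown setting: {e}")
except SchemaMissingKeyError as e:
    print(f"Missing setting: {e}")
except SchemaError as e:
    print(f"Invalid settings: {e}")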

Don't force framerate.

Currently Resolve Proxy Encoder will always encode to whatever framerate is set in proxy_settings. This is a bit silly, since Resolve will only link to proxy media with the same timecode and framerate.

Instead, let's add a criterion to ignore source media with a mismatched framerate, and inherit the proxy framerate from the source media by default.
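
Inheriting the framerate could start with a probe of the source (a sketch using stock ffprobe):

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=nw=1:nk=1 source.mov
# prints e.g. "25/1", which can then inform the proxy encode's -r argument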
