in03 / proxima Goto Github PK
View Code? Open in Web Editor NEWTranscode source media directly from DaVinci Resolve using multiple machines for encoding. Great for creating proxies quickly.
License: MIT License
Transcode source media directly from DaVinci Resolve using multiple machines for encoding. Great for creating proxies quickly.
License: MIT License
Currently all colours and markup available in rich is both printed and logged regardless of if the output is piped or not.
Best add a series of check to disable the extras, as mentioned here
- if sys.stdout.isatty(): returns false
- The NO_COLOR environment variable is set.
- The TERM environment variable has the value dumb.
- The user passes the option --no-color.
- You may also want to add a MYAPP_NO_COLOR environment variable
DON'T FORGET ANIMATIONS!
Any progress bars will throw craziness everywhere if logged to file.
Since we're providing absolute resolution settings to FFMpeg, we're stretching any clips that are filmed in different aspect ratios to the settings provided.
Might need to consider restructuring CLI commands to accommodate more complicated combinations:
rprox work -> rprox worker start (start worker prompt)
-> rprox worker start 3 (start three workers)
-> rprox worker start 0 (start default amount of workers)
rprox purge -> rprox tasks purge (remove all tasks from Celery queues)
-> rprox tasks info (get info on all tasks - active, successful, failed)
Get this working. R
linked = []
# failed = []
# TODO: Get this working. R
timelines = get_resolve_timelines()
if not timelines:
raise Exception("No timelines exist in current project.")
93d04c5aea266cd87c066e4614f8272385edd9e1
Fix weird double app_exit issue on failed link
Currently a failed link brings up two app_exit 'Press ENTER to exit'
prompts. Probably an app_exit prompt inside legacy_link func exiting with status 1
being caught by next app_exit. Perhaps app_exits should only run if name == "main"?
job = queue_job(tasks)
wait_encode(job)
# TODO: Fix weird double app_exit issue on failed link
# Currently a failed link brings up two app_exit 'Press ENTER to exit'
# prompts. Probably an app_exit prompt inside legacy_link func exiting with status 1
# being caught by next app_exit. Perhaps app_exits should only run if __name__ == "__main__"?
# labels: bug
# ATTEMPT POST ENCODE LINK
try:
4c4e92824d01965f4baec8918d9a37dcef8eea76
As convenient as keeping the calling script open is, we should look into restructuring our post-render actions.
Currently the script responsible for queuing the proxy tasks stays open and waits for all the jobs queued in that instance to finish before performing post-render actions (native OS notification, log failed jobs, link successfully encoded proxies, etc.). The way this is currently implemented means we get a new console window for every round of proxies we queue. These stay open for these long-running encode jobs and can sometimes be unreliable and continue waiting after all tasks have finished.
Ideally we would decouple the post-render actions from the calling script altogether.
We could always have a separate worker with a separate task queue specifically for post-render actions. These workers would have to be guaranteed to have access to a running copy of Resolve for Resolve's API. Given that Resolve remote-rendering is often used by post-production houses, it may be a viable workflow. By passing the entire project object to this worker, we'd be able to link proxies for a project even if the original queuing computer was turned off or had closed Resolve.
We could further decouple notifications from the Celery/worker ecosystem and use webhooks with a cross-platform app capable of desktop notifications (Chrome or Firefox?) or use emails instead of native OS notifications. If we pass the queuing computer's hostname, we can choose to filter notifications for encodes that weren't queued on the active system.
Test that celery binary can be started from pipx install
invalid_answer()
if answer == 0:
print(f"[yellow]Using suggested amount: {safe_cores_suggestion}[/]")
answer = safe_cores_suggestion
return answer
def get_routing_key_from_version():
# Add git SHA Celery queue to prevent queuer/worker incompatibilities
git_full_sha = get_package_current_commit("resolve_proxy_encoder")
if not git_full_sha:
logger.error(
"[red]Couldn't get local package commit SHA!\n"
+ "Necessary to prevent version mismatches between queuer and worker.[/]"
)
app_exit(1, -1)
# Use git standard 7 character short SHA
return git_full_sha[::8]
def new_worker(id=None):
"""Start a new celery worker in a new process
Used to start workers even when the script binaries are buried
in a virtual env like in pipx.
Args:
- id: Used to differentiate multiple workers on the same host
Returns:
- none
Raises:
- none
"""
# Get worker name
worker_name = []
if id:
worker_name = [
"-n",
f"worker{id}",
"@h", # hostname
]
# Get worker queue
routing_key = get_routing_key_from_version()
queue = [" -Q " + routing_key]
# TODO: Test that celery binary can be started from pipx install
# Get celery binary
if which("celery"):
celery_bin = "celery"
else:
# Celery bin not on path. Maybe virtual env. Find.
site_packages_dir = Path(get_python_lib()).resolve()
celery_bin = get_script_from_package("Celery")
print(celery_bin)
package_root = os.path.join(site_packages_dir, "resolve_proxy_encoder")
# Change dir to package root
os.chdir(package_root)
launch_cmd = [
*config["celery_settings"]["worker_terminal_args"],
celery_bin,
"-A",
"resolve_proxy_encoder.worker",
"worker",
*worker_name,
*queue,
*config["celery_settings"]["worker_celery_args"],
]
# TODO: Figure out why celery worker is failing to start
# "Error: Unable to parse extra configuration from command line."
# Not sure why, copying the cmd and pasting runs the worker fine...
# labels: bug
logger.info(launch_cmd)
print(launch_cmd)
process = subprocess.Popen(launch_cmd)
def launch_workers(workers_to_launch: int):
# Start launching
worker_id = 0
with Progress() as progress:
launch = progress.add_task(
"[green]Starting workers[/]", total=workers_to_launch
)
while not progress.finished:
worker_id += 1
progress.update(launch, advance=1)
# logger.info(launch_cmd)
new_worker(id=worker_id)
time.sleep(0.05)
print()
return
def main(workers: int = 0):
"""Main function"""
os_ = platform.system()
cpu_cores = multiprocessing.cpu_count()
print(f"[green]Running on {os_} with {cpu_cores} cores.[/]\n")
# Check OS isn't Linux
if not platform.system() == "Windows":
print(
f"[red]This utility is for Windows only!\n"
+ "To start multiple workers on Linux or WSL, setup a systemd service."
)
sys.exit(1)
9608ac7d0f8f9ef7a9f6b8269fdfbaf224956515
Sometimes RESOLVE_queue_proxies reports job sent, waiting on job to finish, but the job does not get received by any Celery workers?
Possibly an issue with a worker that loses sync with other workers when it's host machine returns from sleep?
Need to add a more robust check to see if progress is being made and timeout, or retry. Maybe some function to request a different Celery worker or restart one that is playing up if the worker is at fault?
Knowing why this happens would certainly bring peace of mind.
If Resolve links to proxies but they then become unavailable, they're considered 'offline' by Resolve. This usually means Resolve has lost access (maybe disconnected network drive or removable media) or the proxies were deleted or moved elsewhere. Currently they're seen as 'linked' and so if all proxies are offline, the script will say "All proxies are linked" and exit.
Create a handler specifically for offline proxies. Prompt the user if offline proxies are detected:
[Title] Offline proxies detected
[Body] 5 proxies are linked, but offline. Would you like to queue the offline clips anyway?
[Yes] [No] [Cancel]
Remove. Not necessary anymore
source_metadata = remove_duplicate_elements(source_metadata)
track_items = get_video_track_items(timeline)
media_pool_items = get_media_pool_items(track_items)
source_metadata = get_source_metadata(media_pool_items)
# TODO: Remove. Not necessary anymore
# source_metadata = remove_duplicate_elements(source_metadata)
print("\n")
d5bc4cf04b562976e2f4b344c76caf6f247603aa
Add the Resolve API media pool item object so we can call it directly to link
source_metadata.update({'media_pool_item_object':media_pool_item})
)
continue
# TODO: Add the Resolve API media pool item object so we can call it directly to link
# source_metadata.update({'media_pool_item_object':media_pool_item})
filtered_metadata.append(source_metadata)
print(f"[green]Total queuable clips on timeline: {len(filtered_metadata)}[/]")
6d6b0945ae89a09d2a9d212fc5db8146332192d4
Fix too much newline padding when all checks pass
Move the newline padding from these 'success prints' to
the warning and error logs. Make sure newline padding is consistent.
logger.info(f"Current: {package_latest_commit}")
else:
# TODO: Fix too much newline padding when all checks pass
# Move the newline padding from these 'success prints' to
# the warning and error logs. Make sure newline padding is consistent.
# labels: bug
print("\n[green]Installation up-to-date :white_check_mark:[/]\n")
return
f6d00037b5f4f2641d6651675dd3566b7ad69884
When RPE first starts, it checks for pre-existing user settings and will only copy the whole file if pre-existing user settings are not found. When new features are released that require new settings, default_settings.yml
is updated with those new keys and default values.
We should check that all keys in default_settings.yml
exist in the user configuration and if not, copy the keys and their default values to the user configuration. Maybe a quick console warning with the missing keys listed.
Before receiving any tasks (which are darn hard to debug with all the Celery fluff and bubble), let's run a bunch of quick permissions / access tests. E.g:
Then we can pop up some user-friendly warnings before attempting any encoding.
It seems that proxies can be linked when they haven't finished encoding yet. I don't know why this hasn't come up sooner.
Maybe the metadata is being written before encoding has finished and so passes Resolve's timecode/duration check when calling to link through the API?
Ideally linking would only be done automatically post-encode. Since the chance of a user performing a manual link before files are finished is high, we should prevent this files from being linkable. A simple solution would be to use a different filename while the file is encoding and rename it to its final name once finished. If ffmpeg will allow us to export the file to the same format with a modified extension, we could make this extension .part
.
Otherwise we could add _part_
as a suffix to the filename just before the extension.
This will still be required once split and stitch encoding is implemented, pre/post joining segments.
There are currently three implementations of linking going on:
Original throw-together script since before rprox
This is currently just the old script reworked as a module. It gets the project -> all timelines -> all tracks -> all clips -> all source media and compares lists of source media to lists of proxy media. This is done very inefficiently. Any corrupted proxies will attempt linking multiple times. It uses different, worse implemented functions that exist the rest of the package anyway. Recent attempts at rewrites didn't go so well, so this has remained the defacto for now.
Currently working on this
This is a rewrite of the legacy link function that shares functions and hopes to overcome some of the weirder behaviour on glitches.
This doesn't walk all the project objects again, but instead relies on the returned results from Celery. This comes with these ideal benefits:
Figure out how to pass optional celery args to Typer
@cli_app.command()
# TODO: Figure out how to pass optional celery args to Typer
def work(
workers_to_launch: Optional[int] = typer.Argument(
0, help="How many workers to start"
)
):
"""Prompt to start Celery workers on local machine"""
if workers_to_launch > 0:
print(f"[green]Starting workers! :construction_worker:[/]")
print(f"[cyan]Starting worker launcher prompt :construction_worker:[/]")
from resolve_proxy_encoder import start_workers
start_workers.main(workers_to_launch)
@cli_app.command()
41e9da4e380aec42333fe8f48124359785ae50d1
Stop console status spinner from breaking prompts and console logging
Maybe the spinner doesn't expect console output until we've exited the 'with'?
time.sleep(2)
return
# TODO: Stop console status spinner from breaking prompts and console logging
# Maybe the spinner doesn't expect console output until we've exited the 'with'?
# labels: bug
console = Console()
with console.status(
"\n[cyan]Fetching online workers for compatability check...[/]\n"
3ea8743e86db17d8876db3ed0ed4b82003646e7b
Let's look into automatically documenting module functions and such using docstrings.
Maybe we can use pre-commit hooks with mkgendocs like this article suggests?
Or use mkdocs supported plugin?
Swap win term and start min cmd for custom start args
This is silly and messy. Win term doesn't support min anyway.
Like so is better: run_with: ["start /min" , "bash", "wt"]
queue_from_sha = " -Q " + git_full_sha[::8]
launch_cmd = START_WIN_WORKER + multi_worker_fmt + queue_from_sha
# TODO: Swap win term and start min cmd for custom start args
# This is silly and messy. Win term doesn't support min anyway.
# Like so is better: `run_with: ["start /min" , "bash", "wt"]`
# labels: bug
# Use windows terminal?
if config["celery_settings"]["worker_use_win_terminal"]:
start = "wt "
7b19328d8685158e0472a86ee4d161c33d913845
Properly catch Schema exceptions as SchemaWrongKeyError, etc.
It's just generic SchemaError for now. If we can catch them, we don't need this func.
We can also use the default option in Schema to add default keys.
Then we can get rid of the default_settings.yml file.
def _ensure_user_keys(self):
"""Ensure user settings have all keys in default settings"""
# TODO: Properly catch Schema exceptions as SchemaWrongKeyError, etc.
# It's just generic SchemaError for now. If we can catch them, we don't need this func.
# We can also use the default option in Schema to add default keys.
# Then we can get rid of the default_settings.yml file.
diffs = DeepDiff(self.default_settings, self.user_settings)
# Check for unknown settings
b723b482d277795040f660859295ff158a31fb60
Need to get new search and link working
Legacy link is the original code and it's soooo bad.
I haven't looked at it in ages. But then I bit off more than I could
chew working on this. Find a middle ground!
return media_list
# TODO: Need to get new search and link working
# Legacy link is the original code and it's soooo bad.
# I haven't looked at it in ages. But then I bit off more than I could
# chew working on this. Find a middle ground!
# labels: bug
def search_and_link():
"""Search through all existing media in active project and
attempt to find linkable proxies in expected directory
71cf960e9ac2495f526a8b17737d589a62158bfb
Something that just allows for quick debugging across machines with out having to update all machines to the same commit.
Should be fine for testing so long as there are no breaking changes between queuers/workers.
We'll throw up a heap of scary warnings.
I've been messing around with running separate FFmpeg processes on segments of the same video file.
There are a bunch of benefits here:
I've got this working reliably locally - with no performance gains obviously since I'm running all the FFmpeg processes on the same machine.
To get this working here we'll need a few things:
This is not a real issue, is juste a question : how I can juste use the script for link proxy ? I don't have any knowledges in da vinci scripting
Ensure partial matches work here too
Make sure no starting args exist
- none
"""
def get_worker_name(id):
# @h for 'hostname'
return f"-n worker{id}@h"
def get_worker_queue():
return " -Q " + get_routing_key_from_version()
def get_celery_binary_path():
# Check if in virtual env. Find.
celery_bin = get_script_from_package("Celery")
if celery_bin:
return celery_bin
logger.warning("[yellow]Can't get Celery from package.[/]")
# Assume global
celery_bin = which("celery")
if celery_bin:
return celery_bin
logger.warning(
"[yellow]Using Celery on path." + "Please ensure version compatability![/]"
)
logger.error("[red]Couldn't find celery binary! Is it installed?[/]")
app_exit(1, -1)
def get_module_path():
"""Get absolute module path to pass to Celery worker"""
# Change dir to package root
module_path = os.path.dirname(os.path.abspath(__file__))
logger.debug(f"Worker path: {module_path}")
assert os.path.exists(module_path)
return os.path.abspath(module_path)
def get_new_console():
"""Get os command to spawn process in a new console window"""
# Get new terminal
worker_terminal_args = config["celery_settings"]["worker_terminal_args"]
# Check if any args are on path. Probably terminal executable.
executable_args = [which(x) for x in worker_terminal_args]
if executable_args:
return ""
# TODO: Ensure partial matches work here too
# Make sure no starting args exist
start_commands = ["open", "start"]
if any(x in start_commands for x in worker_terminal_args):
return ""
# TODO: Need to test starting workers on other platforms
# Get new terminal cmd
os_ = platform.system()
if os_ is "Windows":
return 'start "RPROX Worker"' # First double quotes as title
elif os_ is "Mac":
return "open"
elif os_ is "Linux":
logger.error(
"Cannot guess installed Linux terminal. Too many distros."
+ "Please provide a terminal executable in 'worker_terminal_args' settings."
)
app_exit(1, -1)
else:
logger.error(
f"Could not determine terminal executable for OS: {os_}."
+ "Please provide a terminal executable in 'worker_terminal_args' settings."
)
app_exit(1, -1)
launch_cmd = [
get_new_console(),
*config["celery_settings"]["worker_terminal_args"],
f'"{get_celery_binary_path()}"',
"-A resolve_proxy_encoder.worker",
"worker",
get_worker_name(id),
get_worker_queue(),
*config["celery_settings"]["worker_celery_args"],
]
logger.info(" ".join(launch_cmd))
# print(" ".join(launch_cmd))
subprocess.Popen(
cwd=get_module_path(),
args=" ".join(launch_cmd),
shell=True,
)
def launch_workers(workers_to_launch: int):
8f830ea439e61f66c17e498dc5c4aacac0da6c0f
There's a heap of unused dependencies at the moment.
Can we potentially find an automated way of doing this? Not sure how it plays with Poetry.
I know pip tools works with requirements.txt nicely.
Fix progress dots still showing at more verbose loglevels
The dots shouldn't show when we're outputting each worker start cmd.
Probs get logger loglevel programmatically instead of from config.
Probs just swap dots for rich.progress.
shell=True,
)
# TODO: Fix progress dots still showing at more verbose loglevels
# The dots shouldn't show when we're outputting each worker start cmd.
# Probs get logger loglevel programmatically instead of from config.
# Probs just swap dots for rich.progress.
# labels: bug, enhancement
if config["celery_settings"]["worker_loglevel"] == "WARNING":
sys.stdout.write(dots)
f279e8881837ff5ccfdabfb2f402cb3b8b52d8d6
Look into remotely shutting down workers gracefully here.
Maybe consider restructuring commands a bit if we'll have multiple worker commands, etc.
Perhaps something like this?
$ rprox worker stop all
>> Stopped all workers
$ rprox worker stop worker1@EMC
>> Stopped worker1@EMC
The extension check should not use TimelineItem.GetName(). Although the default clip name is the file name of the source media, a user can give something a custom clip name, and then the script will fail to find the clip's extension in acceptable_exts. Since it's a basic string "in" check, no attempt is made to confirm the filepath exists. TBH, I'm not entirely sure if we need an extension check? By "trying" .GetClipProperty(), anything that doesn't have that method will fail. If that method fails, then we also can't retrieve the "File Path" attribute. If we just go off the "File Path" attribute from the start, we know we're receiving a real file path from Resolve and that the name is up to date.
git_sha
slice returns as 5 characters, not standard 7
return windows_path
return wsl_path
def get_queue():
"""Get Celery queue name (routing key) from package git commit short SHA
Allows constraining tasks and workers to exact same version and prevent breaking changes.
"""
# Add git SHA Celery queue to prevent queuer/worker incompatibilities
git_full_sha = get_package_current_commit("resolve_proxy_encoder")
if config["app"]["disable_version_constrain"]:
logger.warning(
"[yellow]Version constrain is disabled! Thar be dragons :dragon_face:[/]"
)
return "celery"
if not git_full_sha:
logger.error(
"[red]Couldn't get local package commit SHA!\n"
+ "Necessary to maintain version constrain.[/]"
)
app_exit(1, -1)
# TODO: `git_sha` slice returns as 5 characters, not standard 7
# labels: bug
# Use git standard 7 character short SHA
return git_full_sha[::8]
2f4395fb65b4eb8eceb61a5ece4a21d33e580f82
It seems that Resolve is very fussy about timecode matching in order to link to source media. 5bf51ca started to explore copying the starting timecode to the proxy's file metadata, but it seems that Resolve is looking for Quicktime timecode data ONLY, despite accepting the MXF if there is no timecode metadata to match.
I'm warned that a proxy can't be linked (assuming it's corrupt or incomplete). Upon attempting to render by clicking "No" at the "Unlinked proxy media found, would you like to link?" dialog, the job is sent, instantly completed and then fails to link again. If I try to delete the file manually and refresh the file manager, I can see it reappears. This is an occasional glitch with QNAP NAS systems. A potential fix (pending testing) would be to explicitly try to delete the file marked for overwriting, and if unsuccessful, warn the user explicitly that the file could not be deleted. I'm not sure if "trying" and "excepting" a delete is enough, or if an explicit if os.path.exists()
is necessary after deletion, since File Explorer reports no errors upon delete?
Temporary fix is to delete the file from the QNAP web interface.
Add more app level settings.
Some more app or CLI level settings is probably a good idea, e.g:
control app_exit prompt before exit
no coloured output
warn updates (move)
loglevel (move)
--- # Resolve Proxy Encoder User configuration file
# TODO: Add more app level settings.
# Some more app or CLI level settings is probably a good idea, e.g:
# - control app_exit prompt before exit
# - no coloured output
# - warn updates (move)
# - loglevel (move)
# labels: enhancement
loglevel: WARNING
ab763bc110c2e67ae4b45360879375bba5bb7478
'rprox work' command is incompatible with pipx!
Because we call our worker from celery as a subprocess we need to either call it
programmatically somehow or create a "rprox start worker" command that takes celery args,
but wraps the first part of the cmd.
config = settings.user_settings
logger = get_rich_logger(loglevel=config["celery_settings"]["worker_loglevel"])
# TODO: 'rprox work' command is incompatible with pipx!
# Because we call our worker from celery as a subprocess we need to either call it
# programmatically somehow or create a "rprox start worker" command that takes celery args,
# but wraps the first part of the cmd.
# labels: bug, critical
# Make sure the module path in the command below is up to date!
START_WIN_WORKER = """celery -A resolve_proxy_encoder.worker worker -l INFO -P solo"""
c3ef4997cd63208244e51bbc5e1e62777024c761
Fix terminal progress output on newline
Currently BetterFFMpegProgress is logging progress increments to newlines.
It's better than nothing, but it looks awful. There are other issues with BFP too.
Logging to files is messy, etc. Maybe steal the FFMpeg progress parsing func and
use rich.progress instead.
logger.info(f"FFmpeg command:\n{' '.join(ffmpeg_command)}")
# TODO: Fix terminal progress output on newline
# Currently BetterFFMpegProgress is logging progress increments to newlines.
# It's better than nothing, but it looks awful. There are other issues with BFP too.
# Logging to files is messy, etc. Maybe steal the FFMpeg progress parsing func and
# use rich.progress instead.
# labels: bug
process = FfmpegProcess(
command=[*ffmpeg_command], ffmpeg_loglevel=proxy_settings["ffmpeg_loglevel"]
)
0c5e3806e3ceade8ff0c10ebd94a24a37e891a33
Will have to check up on this because I can't remember how it was implemented. Just had an error with an old project not linking rendered proxies because of mismatched starting timecode. Proxies all had starting timecode set to zero. Is ffmpeg disregarding timecode and always setting to zero?
TBH attempting to create a proper README will help bring to mind some dumb ways of doing things.
Figure out why celery worker is failing to start
"Error: Unable to parse extra configuration from command line."
Not sure why, copying the cmd and pasting runs the worker fine...
invalid_answer()
if answer == 0:
print(f"[yellow]Using suggested amount: {safe_cores_suggestion}[/]")
answer = safe_cores_suggestion
return answer
def get_routing_key_from_version():
# Add git SHA Celery queue to prevent queuer/worker incompatibilities
git_full_sha = get_package_current_commit("resolve_proxy_encoder")
if not git_full_sha:
logger.error(
"[red]Couldn't get local package commit SHA!\n"
+ "Necessary to prevent version mismatches between queuer and worker.[/]"
)
app_exit(1, -1)
# Use git standard 7 character short SHA
return git_full_sha[::8]
def new_worker(id=None):
"""Start a new celery worker in a new process
Used to start workers even when the script binaries are buried
in a virtual env like in pipx.
Args:
- id: Used to differentiate multiple workers on the same host
Returns:
- none
Raises:
- none
"""
# Get worker name
worker_name = []
if id:
worker_name = [
"-n",
f"worker{id}",
"@h", # hostname
]
# Get worker queue
routing_key = get_routing_key_from_version()
queue = [" -Q " + routing_key]
# TODO: Test that celery binary can be started from pipx install
# Get celery binary
if which("celery"):
celery_bin = "celery"
else:
# Celery bin not on path. Maybe virtual env. Find.
site_packages_dir = Path(get_python_lib()).resolve()
celery_bin = get_script_from_package("Celery")
print(celery_bin)
package_root = os.path.join(site_packages_dir, "resolve_proxy_encoder")
# Change dir to package root
os.chdir(package_root)
launch_cmd = [
*config["celery_settings"]["worker_terminal_args"],
celery_bin,
"-A",
"resolve_proxy_encoder.worker",
"worker",
*worker_name,
*queue,
*config["celery_settings"]["worker_celery_args"],
]
# TODO: Figure out why celery worker is failing to start
# "Error: Unable to parse extra configuration from command line."
# Not sure why, copying the cmd and pasting runs the worker fine...
# labels: bug
logger.info(launch_cmd)
print(launch_cmd)
process = subprocess.Popen(launch_cmd)
def launch_workers(workers_to_launch: int):
# Start launching
worker_id = 0
with Progress() as progress:
launch = progress.add_task(
"[green]Starting workers[/]", total=workers_to_launch
)
while not progress.finished:
worker_id += 1
progress.update(launch, advance=1)
# logger.info(launch_cmd)
new_worker(id=worker_id)
time.sleep(0.05)
print()
return
def main(workers: int = 0):
"""Main function"""
os_ = platform.system()
cpu_cores = multiprocessing.cpu_count()
print(f"[green]Running on {os_} with {cpu_cores} cores.[/]\n")
# Check OS isn't Linux
if not platform.system() == "Windows":
print(
f"[red]This utility is for Windows only!\n"
+ "To start multiple workers on Linux or WSL, setup a systemd service."
)
sys.exit(1)
6bc15a73f799dc4093b66584d91b3efac9067513
Currently Resolve Proxy Encoder will always encode to whatever the set framerate is in proxy_settings
. This is a bit silly since Resolve will only link to proxy media with the same timecode and framerate.
instead lets add a criteria to ignore source media with the wrong framerate and inherit proxy framerate from source media by default.
Need to test starting workers on other platforms
Get new terminal cmd
- none
"""
def get_worker_name(id):
# @h for 'hostname'
return f"-n worker{id}@h"
def get_worker_queue():
return " -Q " + get_routing_key_from_version()
def get_celery_binary_path():
# Check if in virtual env. Find.
celery_bin = get_script_from_package("Celery")
if celery_bin:
return celery_bin
logger.warning("[yellow]Can't get Celery from package.[/]")
# Assume global
celery_bin = which("celery")
if celery_bin:
return celery_bin
logger.warning(
"[yellow]Using Celery on path." + "Please ensure version compatability![/]"
)
logger.error("[red]Couldn't find celery binary! Is it installed?[/]")
app_exit(1, -1)
def get_module_path():
"""Get absolute module path to pass to Celery worker"""
# Change dir to package root
module_path = os.path.dirname(os.path.abspath(__file__))
logger.debug(f"Worker path: {module_path}")
assert os.path.exists(module_path)
return os.path.abspath(module_path)
def get_new_console():
"""Get os command to spawn process in a new console window"""
# Get new terminal
worker_terminal_args = config["celery_settings"]["worker_terminal_args"]
# Check if any args are on path. Probably terminal executable.
executable_args = [which(x) for x in worker_terminal_args]
if executable_args:
return ""
# TODO: Ensure partial matches work here too
# Make sure no starting args exist
start_commands = ["open", "start"]
if any(x in start_commands for x in worker_terminal_args):
return ""
# TODO: Need to test starting workers on other platforms
# Get new terminal cmd
os_ = platform.system()
if os_ is "Windows":
return 'start "RPROX Worker"' # First double quotes as title
elif os_ is "Mac":
return "open"
elif os_ is "Linux":
logger.error(
"Cannot guess installed Linux terminal. Too many distros."
+ "Please provide a terminal executable in 'worker_terminal_args' settings."
)
app_exit(1, -1)
else:
logger.error(
f"Could not determine terminal executable for OS: {os_}."
+ "Please provide a terminal executable in 'worker_terminal_args' settings."
)
app_exit(1, -1)
launch_cmd = [
get_new_console(),
*config["celery_settings"]["worker_terminal_args"],
f'"{get_celery_binary_path()}"',
"-A resolve_proxy_encoder.worker",
"worker",
get_worker_name(id),
get_worker_queue(),
*config["celery_settings"]["worker_celery_args"],
]
logger.info(" ".join(launch_cmd))
# print(" ".join(launch_cmd))
subprocess.Popen(
cwd=get_module_path(),
args=" ".join(launch_cmd),
shell=True,
)
def launch_workers(workers_to_launch: int):
bc403778568dd46dca9e10fd5bae065124bba9d8
Currently the worker encode task is pulling the proxy encoding settings locally from the user configuration yaml. This means queuer-side settings have no effect, which is totally unintuitive.
Pull encoding settings from local configuration queuer-side and send them with the task along with all source-media metadata.
All of the proxy linking functions are due for a rewrite...
The legacy linking function runs when existing unlinked proxies are detected and the user chooses to link them. It doesn't receive a list of references to Resolve's original media pool objects like a potential post-encode linking implementation might, so it needs to iterate over all timeline objects and retrieve those items from scratch. By default, this implementation retrieves ALL clips from ALL timelines and checks the list of proxies against all of them.
Proxy files that fail to link are still checked against all other matching source-media files even though they failed once. A match should only fail if the proxy-media is malformed; one fail is enough to warrant skipping. GIven that a proxy file should only link to one source-media file, additional matches are likely to be the same source media in a different timeline. Having the same clip in multiple different timelines is highly likely when doing timeline revisions.
After switching to better_ffmpeg_progress whenever FFMpeg fails it logs the fail, but Python doesn't catch the exception in script. Need to fix this.
We can do this by adding verbosity flag count support in Typer, like so.
It's quite easy to pass invalid FFMpeg arguments in user configuration.
Currently any workers subscribed to the broker will take supported jobs. If there are any changes to the project structure or task implementations, the worker will take the job and do what it thinks it should do. This can lead to unregistered task
errors, incomplete jobs or even jobs that are completed but done wrong without any indication.
Automatically update to the newest release on both queuer and worker startup (if auto-update is set in settings)
Warn user (if auto-update is not set) when new version is available (like pip does for Python) and briefly explain potential incompatibility issues
Warn the user before queuing If multiple versions of workers are detected as connected to the broker
Implement a versioning system that prevents workers from taking jobs from newer versions of the software. Log to the console when a job has been ignored and warn the user they should update to the latest version.
We need to run a GH action to convert docstrings into markdown
A declarative, efficient, and flexible JavaScript library for building user interfaces.
๐ Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.
TypeScript is a superset of JavaScript that compiles to clean JavaScript output.
An Open Source Machine Learning Framework for Everyone
The Web framework for perfectionists with deadlines.
A PHP framework for web artisans
Bring data to life with SVG, Canvas and HTML. ๐๐๐
JavaScript (JS) is a lightweight interpreted programming language with first-class functions.
Some thing interesting about web. New door for the world.
A server is a program made to process requests and deliver data to clients.
Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.
Some thing interesting about visualization, use data art
Some thing interesting about game, make everyone happy.
We are working to build community through open source technology. NB: members must have two-factor auth.
Open source projects and samples from Microsoft.
Google โค๏ธ Open Source for everyone.
Alibaba Open Source for everyone
Data-Driven Documents codes.
China tencent open source team.