
intro-outro-generator's Introduction

Frab-Based Intro- and Outro-Generator

This is a scripted pre-, postroll and pause-clip generator. It takes a frab schedule-xml and artwork as svg and generates .dv- or .ts-clips ready to use with the VOC CRS (Conference Recording System) or any other system. It can also be modified to generate lossless h264 or something different if required.

Following the requirements of the CRS setup it generates one postroll, one pause-sequence and multiple prerolls - one per talk in your schedule-xml - but it should be simple to modify this if your setup needs it.

Yes! That's what I want!

Okay, let's go.

  • Install python3, python3-lxml, python3-cssutils, python3-wand (or use virtualenv, see below), inkscape and libav-tools
  • Fork this repo on github and clone your personal fork to your local system.
  • Copy one of the existing setups: 00_example_render_byid
    • If you are using a newer version of intro-outro-generator, don't copy any of the other projects, as their __init__.py may not contain all mandatory parameters in the tasks function.
  • Open artwork/intro.svg (preroll template) in inkscape and modify it. You can also just create a new one. For the VOC-Setup you should use a Pixel-Resolution of 1920×1080 (or for the legacy SD/.dv-Pipeline 1024×576).
  • Group things together that should be animated together (like subtitle and speaker-text)
  • Use Flow-Text (in Inkscape drag an Area of Text instead of just placing a single line). This way the text will automatically wrap inside the specified area if it gets too long.
  • Type Placeholder-Texts where the script should substitute content from your schedule.xml. By default the following placeholders are substituted
    • $id - Talk-ID (useful in links to the Frab-Page)
    • $title - Title of the Talk
    • $subtitle - You guessed it...
    • $personnames - Comma-Separated list of Speaker-Names
  • Give IDs to the Objects and Groups you want to animate (Inkscape Shift-Ctrl-O)
  • Edit your copy of __init__.py - this is your project configuration (a minimal sketch is shown after this list)
    • set scheduleUrl to the url of your schedule.xml-file
    • modify introFrames (preroll) - see section about the frame-generators below
    • search for def debug() and comment out the sections about outro (postroll) and pause
    • run ./make.py yourproject/ --debug to generate your first intro
    • if it looks good, duplicate intro.svg to outro.svg (postroll) and pause.svg (pause-loop) and modify them according to your needs. You can use different IDs in your SVG if required
    • modify outroFrames and pauseFrames like before and test them using ./make.py yourproject/ --debug
    • if everything looks like you want it to, run ./make.py yourproject/.
    • You can use any debianesque linux (can be headless) to generate the videos. More cores help more.
  • Run ./make-snapshots.sh yourproject/ to generate a png from a specific time-index of your .ts or .dv-files. You can run ./make-snapshots.sh yourproject/ 5 to get a png for the frame at the 5th second of all your clips. Default is 3 seconds.
    • Look through those pngs to check whether all intros look good with the real-world titles and person names
    • Looking through the pngs is faster than opening each clip and waiting 5 seconds.
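
For orientation, a minimal project configuration could look roughly like the sketch below. This is only an illustration using the names from this walkthrough (scheduleUrl, introFrames); copy 00_example_render_byid for the real, complete set of required parameters.

# yourproject/__init__.py - illustrative sketch, not a complete configuration
from renderlib import *   # assumption: fps and the easing helpers are provided by the project

# url of your schedule.xml file
scheduleUrl = 'https://example.org/yourconference/schedule.xml'

# preroll animation; take the exact signature from the copied example project
def introFrames(parameters):
	# fade the element with id 'logo' in over one second ...
	frames = 1*fps
	for i in range(0, frames):
		yield (
			('logo', 'style', 'opacity', "%.4f" % easeInCubic(i, 0, 1, frames)),
		)
	# ... and hold it fully visible for three more seconds
	for i in range(0, 3*fps):
		yield (
			('logo', 'style', 'opacity', '1'),
		)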

Python3 virtualenv

Create virtualenv and fetch python deps:

$ virtualenv -p python3 env  
$ . ./env/bin/activate
$ pip3 install -r requirements.txt

Debian

On Debian you need to install the ImageMagick and Python lxml dependencies:

sudo apt-get install python3-pil libmagickwand-dev libmagickcore5-extra libxml2-dev libxslt1-dev

Quick start

Start your own project by copying the "00_example_render_byid" folder, which contains all the changes needed to use the new features.

Just type ./make.py or ./make.py -h in the main directory and you'll get the following help information.

usage: see help with option -h

C3VOC Intro-Outro-Generator

positional arguments:

  yourproject/
    Path to your project is a required argument.
    Usage: ./make.py yourproject/
    Without any further argument(s) given, your whole project will be rendered.
                            

optional arguments:

  -h, --help

    show this help message and exit

  --debug

    Run script in debug mode and just render the debug values
    given in your projects __init.py__
    This argument must not be used together with --id
    Usage: ./make.py yourproject/ --debug
                            
  --id ID [ID ...]

    Only render the given ID(s) from your projects schedule.
    This argument must not be used together with --debug
    Usage: ./make.py yourproject/ --id 4711 0815 4223 1337
    To skip all IDs (just generate intro/outro/background files) use it with --id 000000
                            
  --skip SKIP [SKIP ...]

    Skip outro, pause and/or background files in rendering if not needed.
    This argument must not be used together with --debug
    Usage: ./make.py yourproject/ --skip pause out bg
    Example - only generate outro: ./make.py yourproject/ --skip pause bg
    Example - only generate pause and background: ./make.py yourproject/ --skip out

The Frame-Generators

The animation sequence is controlled by the three frame-generator routines introFrames, outroFrames and pauseFrames. Each of them yields one tuple per frame. This frame tuple contains one sub-tuple per animated element, which has one of two forms:

CSS-Style-Modifications

('logo', 'style', 'opacity', 1), - locate the object with id logo in the svg, parse its style-attribute as a css inline string and change the value of the css property opacity to 1. The tuple element 'style' is fixed and declares the type of action which is applied to the specified element. All other tuple members can be modified to suit your needs.

To form a fade-in-opacity-animation, the frame-generator could look like this:

# three seconds of animation
frames = 3*fps
for i in range(0, frames):
	yield (
		('logo',  'style',    'opacity', "%.4f" % easeInCubic(i, 0, 1, frames)),
	)

easeInCubic is an easing function stolen from the jQuery easing plugin (see the easing cheat-sheet). These functions take 4 parameters:

  • t: current time
  • b: beginning value
  • c: change In value
  • d: duration (of time)
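
For reference, these jQuery-style easing functions boil down to a few lines of Python. The following is only a sketch of what they compute; the generator already provides them, so you normally don't define them yourself.

def easeInCubic(t, b, c, d):
    # cubic ease-in: starts slowly and accelerates; t runs from 0 to d
    t = t / d
    return c * t * t * t + b

def easeOutQuad(t, b, c, d):
    # quadratic ease-out: starts fast and decelerates towards the end
    t = t / d
    return -c * t * (t - 2) + b

# easeInCubic(0, 0, 1, 75) == 0.0 and easeInCubic(75, 0, 1, 75) == 1.0, i.e. a fade from 0 to 1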

So to fade the logo out, the generator would look like this:

# three seconds of animation
frames = 3*fps
for i in range(0, frames):
	yield (
		('logo',  'style',    'opacity', "%.4f" % easeInCubic(i, 1, -1, frames)),
	)

By yielding multiple sub-tuples, you can animate multiple elements at the same time using different easings. It's up to you to find a combination that looks nice with your artwork.
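
For example, to fade in the logo and a speaker-name group at the same time with two different easings (a sketch assuming both IDs exist in your artwork):

# three seconds of animation
frames = 3*fps
for i in range(0, frames):
	yield (
		('logo',        'style', 'opacity', "%.4f" % easeInCubic(i, 0, 1, frames)),
		('personnames', 'style', 'opacity', "%.4f" % easeOutQuad(i, 0, 1, frames)),
	)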

XML-Attribute-Modifications

The other form a sub-tuple can have is ('box', 'attr', 'transform', 'translate(0,0)') - locate the object with id box in the svg and set its transform-attribute to translate(0,0). This can be used to animate things not specifiable by css - like the spatial translation of an object. A suitable generator, that animates the element box in an upward movement, could look like this:

# three seconds of animation
frames = 3*fps
for i in range(0, frames):
	yield (
		('box',   'attr',     'transform', 'translate(0,%.4f)' % easeOutQuad(i, 100, -100, frames) ),
	)
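
Both forms can be combined in the same frame tuple, e.g. sliding the box up while fading the logo in (a sketch reusing the IDs from the examples above):

# three seconds of animation
frames = 3*fps
for i in range(0, frames):
	yield (
		('logo', 'style', 'opacity',   "%.4f" % easeInCubic(i, 0, 1, frames)),
		('box',  'attr',  'transform', 'translate(0,%.4f)' % easeOutQuad(i, 100, -100, frames)),
	)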

FEM/VOC-Tracker-Integration (that script-Z-thingy)

The FEM and the VOC use a special ticket tracker to keep track of the talks at an event. Various tasks are performed around the recorded video material (preparing, cutting, encoding, releasing), synchronized by the tracker. The files starting with script-Z are experiments to integrate the intro rendering into this process. At some events the schedule is very fluid, with talks being added or names changing over the whole conference. Using the scripts to render the prerolls when they are actually needed (and not some days before the conference) would help to always get the freshest prerolls, but it would add an additional (computationally intensive) task to the publishing process.

Generating a Live-Stream-Overlay

While you're working on your video artwork you can create another required asset: the stream overlay. When we live-stream your talks we can't send prerolls over the live stream. To let your viewers know what program they are watching, we usually overlay a transparent image over the live stream, like most television programs do, too. Just create another SVG of the size 1920×1080 (or 1024×576 if you're only targeting the legacy SD pipeline) and throw your logo into your preferred corner. To have it looking good we would suggest

  • to test it on a dark as well as a bright background and add a glow or a background box if necessary
  • to avoid thin lines or small text that will not be visible in the final size
  • to set an opacity of 0.8 to 1.0 (below 0.8 it usually won't be recognizable on a bumpy background)

Save your file as artwork/overlay.svg.

When you're done, call ./make-overlay.sh yourproject/ which will generate three .pngs in your artwork directory. One of them looks squeezed - don't worry, that is correct.

It works! It doesn't work!

If it works, push your code to GitHub. This way everybody can see which beautiful animations you created and we can all learn from each other. If it doesn't work, ask on IRC or on the mailing list and we'll see what we can do to solve your problem. If you think you found a bug, file an issue. Or even better, fix it and send a pull request.

intro-outro-generator's Issues

setting cwd for inkscape subprocess uses wrong directory

While trying to do the work on #39 I tried to use the default inkscape-based compilation of the videos.
Unfortunately the subprocess call in renderlib.py sets cwd to task.workdir:

errorReturn = subprocess.check_output(cmd, shell=True, universal_newlines=True, stderr=subprocess.STDOUT, cwd=task.workdir)

This leads to inkscape not being able to open the generated SVG files, because the path is concatenated to something like /absolute/path/to/<project_dir>/artwork/<project_dir>/artwork/.frames/.
Moreover, this does not lead to make.py (or rather renderlib.py) exiting with a non-zero exit code. The script tries to render all frames, but ultimately fails to finish any of them and is therefore also unable to render the video (the process should exit early).

For me it worked to just remove the cwd setting.
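
A minimal sketch of the adjusted call, simply dropping the cwd argument as described above (whether this is the right fix for every setup is untested):

errorReturn = subprocess.check_output(cmd, shell=True, universal_newlines=True, stderr=subprocess.STDOUT)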

add support for multiple licenses in one project

It should be possible to render outros with different licenses, one for each CC license type used in the schedule.

It could be done by labeling the logo areas in the svg and replacing the image before rendering the outro.
Therefore we also need a new naming scheme for the outro files. There are two options:

a) render one outro per license e.g. cc-by-sa-outro.ts
b) render one outro per event e.g. 2342-outro.ts

If we do a), the tracker / encoding worker needs to take care to map a license to a talk.
If we do b), we render more files but would also have the option to change more things in the outro, like adding the talk title and speaker name again.

We have a ticket in the vcfb schedule that would need this feature.

Optimisation: Pipe input svg directly into inkscape instead of using a temporary file

# open the output-file (named ".gen.svg" in the workdir)
with open(os.path.join(task.workdir, '.gen.svg'), 'w') as fp:
	# write the generated svg-text into the output-file
	fp.write( etree.tostring(svg, encoding='unicode') )

if task.outfile.endswith('.ts'):
	width = 1920
	height = 1080
else:
	width = 1024
	height = 576

# invoke inkscape to convert the generated svg-file into a png inside the .frames-directory
cmd = 'cd {0} && inkscape --export-background=white --export-width={2} --export-height={3} --export-png=$(pwd)/.frames/{1:04d}.png $(pwd)/.gen.svg 2>&1 >/dev/null'.format(task.workdir, frameNr, width, height)
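
A rough sketch of what the piped variant could look like, reusing the variables from the snippet above. This assumes Inkscape 1.x, where --pipe reads the SVG from stdin and --export-filename replaces --export-png; older Inkscape versions would need a different approach.

import subprocess
from lxml import etree

# serialize the generated svg and feed it to inkscape via stdin instead of writing .gen.svg
svg_text = etree.tostring(svg, encoding='unicode')
cmd = [
    'inkscape', '--pipe',
    '--export-background=white',
    '--export-width={}'.format(width),
    '--export-height={}'.format(height),
    '--export-filename={}/.frames/{:04d}.png'.format(task.workdir, frameNr),
]
subprocess.run(cmd, input=svg_text, universal_newlines=True, check=True)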

Idea: add cli parameter to increase verbosity level

e.g. ./make.py subscibe9 --debug -v or even ./make.py subscibe9 --debug -vvv to print output from inkscape to stdout. Is there a default pattern for python to achieve this or do I have to use global variables and multiple options to ArgumentParser?
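
A common argparse pattern for this (just a sketch, not the project's actual code) is a counting flag, which avoids global variables and multiple separate options:

import argparse

parser = argparse.ArgumentParser(description='C3VOC Intro-Outro-Generator')
# every additional -v raises the verbosity level: -v -> 1, -vv -> 2, -vvv -> 3
parser.add_argument('-v', '--verbose', action='count', default=0,
                    help='increase verbosity, can be given multiple times')
args = parser.parse_args(['-vvv'])
print(args.verbose)  # 3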

add further (upstream) documentation for inkscape's transform attribute setting

IMHO it would be a good idea to provide further information to users in regards to inkscape's transform attribute settings (and especially its quirks) for SVG objects.
This setting is relevant for the XML attribute modifications type of transformation for objects.

Rationale:
When an SVG file has a group that is supposed to be used for a transformation and this group already has its transform attribute set, this can lead to very unexpected results: e.g. a matrix() transform can be stuck on a group because one of its child objects requires it. When using a translate() function on such a group, its matrix() function will be reset and afterwards the translate() function will be applied. As the matrix() function was in reality only applied to one of the child objects, this will resize/move the group to very unexpected dimensions/locations and then apply the additional transform.

I think it could be worthwhile mentioning this, as I'm fairly sure that users might spend quite some time on figuring this out on their own (I just did). A note could be made reasonably short and e.g. have a link to some stackoverflow issue (such as this one).

Incompatible with macOS

On Mac OS X the Inkscape binary is not called directly but through a wrapper script; more on this topic can be found at http://wiki.inkscape.org/wiki/index.php/MacOS_X#Inkscape_command_line
This wrapper produces additional output which our script treats as an error. Evaluating only the exit code is not sufficient "because inkscape sometimes prints errors and still exits with 0" (quote MazderMind).

The least stressful option is a Linux VM. If you really want to run it under OS X, you can modify the Inkscape wrapper script so that it produces no output. A ready-made version is available at https://gist.github.com/saerdnaer/18258a2667add2c5b4a6; otherwise you can edit /Applications/Inkscape.app/Contents/Resources/bin/inkscape manually and put a # in front of every line that starts with an echo. Also note the else in line 124.

With Inkscape installed and /Applications/Inkscape.app/Contents/Resources/bin/inkscape modified:

brew install libxml2 ffmpeg
pip3 install lxml
pip3 install cssutils
git clone https://github.com/voc/intro-outro-generator.git
cd intro-outro-generator
export PATH=/Applications/Inkscape.app/Contents/Resources/bin/:$PATH
python3 make.py eh15/ --debug
