
Slick, declarative command line video editing & API

License: MIT License


editly's Introduction


Editly is a tool and framework for declarative NLE (non-linear video editing) using Node.js and ffmpeg. Editly allows you to easily and programmatically create a video from a set of clips, images, audio and titles, with smooth transitions and music overlaid.

Editly has a simple CLI for quickly assembling a video from a set of clips or images, or you can use its more flexible JavaScript API.

Inspired by ffmpeg-concat, editly is much faster and doesn't require much storage because it uses streaming editing. Editly aims to be very extensible and feature rich with a pluggable interface for adding new dynamic content.

demo

This GIF / YouTube was created with this command: "editly commonFeatures.json5". See more examples here.

Features

  • Edit videos with code! 🤓
  • Declarative API with fun defaults
  • Create colorful videos with random colors generated from aesthetically pleasing palettes and random effects
  • Supports any input size, e.g. 4K video and DSLR photos
  • Can output to any dimensions and aspect ratio, e.g. Instagram post (1:1), Instagram story (9:16), YouTube (16:9), or any other dimensions you like.
  • Content is automatically scaled and letterboxed, even when the input aspect ratio differs, and the frame rate is converted as needed.
  • Automatically speed up or slow down videos so that the cutFrom/cutTo segment fits each clip's duration
  • Overlay text and subtitles on videos, images or backgrounds
  • Accepts custom HTML5 Canvas / Fabric.js JavaScript code for custom screens or dynamic overlays
  • Render custom GL shaders (for example from shadertoy)
  • Can output GIF
  • Overlay transparent images or even videos with alpha channel
  • Show different sub-clips for parts of a clip's duration (B-roll)
  • Picture-in-picture
  • Vignette
  • Preserve/mix multiple audio sources
  • Automatic audio crossfading
  • Automatic audio ducking and normalization

Use cases

  • Create a slideshow from a set of pictures with text overlay
  • Create a fast-paced trailer or promo video
  • Create a tutorial video with help text
  • Create news stories
  • Create an animated GIF
  • Resize video to any size or framerate and with automatic letterboxing/cropping (e.g. if you need to upload a video somewhere but the site complains Video must be 1337x1000 30fps)
  • Create a podcast with multiple mixed tracks

See examples

Requirements

Installing

npm i -g editly

Usage: Command line video editor

Run editly --help for usage

Create a simple randomized video edit from videos, images and text with an audio track:

editly \
  title:'My video' \
  clip1.mov \
  clip2.mov \
  title:'My slideshow' \
  img1.jpg \
  img2.jpg \
  title:'THE END' \
  --fast \
  --audio-file-path /path/to/music.mp3

Or create an MP4 (or GIF) from a JSON or JSON5 edit spec (JSON5 is just a more user-friendly JSON format):

editly my-spec.json5 --fast --keep-source-audio --out output.gif

For examples of how to make a JSON edit spec, see below or examples.

Without --fast, it will default to using the width, height and frame rate from the first input video. All other clips will be converted to these dimensions. You can of course override any or all of these parameters.
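For example, an illustrative command (file names are placeholders; the flags are the ones listed under Parameters below) that forces a specific output size and frame rate:

editly clip1.mov clip2.mov \
  --width 1280 --height 720 --fps 30 \
  --out output.mp4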

  • TIP: Use this tool in conjunction with LosslessCut
  • TIP: If you need catchy music for your video, have a look at this YouTube or the YouTube audio library. Then use youtube-dl to download the video, and then point --audio-file-path at the video file. Be sure to respect their license!

JavaScript library

import editly from 'editly';

// See editSpec documentation
await editly(editSpec)
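As a minimal sketch of a complete call (file paths are placeholders; see the edit spec documentation below for all options):

import editly from 'editly';

// A minimal edit spec: a title clip, a video clip, and a music track
await editly({
  outPath: './output.mp4',
  defaults: { transition: { name: 'fade' } },
  clips: [
    { duration: 3, layers: [{ type: 'title-background', text: 'Hello editly' }] },
    { layers: [{ type: 'video', path: './clip1.mov' }] },
  ],
  audioFilePath: './music.mp3',
});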

Edit spec

Edit specs are JavaScript / JSON objects describing the whole edit operation with the following structure:

{
  outPath,
  width,
  height,
  fps,
  allowRemoteRequests: false,
  defaults: {
    duration: 4,
    transition: {
      duration: 0.5,
      name: 'random',
      audioOutCurve: 'tri',
      audioInCurve: 'tri',
    },
    layer: {
      fontPath,
      // ...more layer defaults
    },
    layerType: {
      'fill-color': {
        color: '#ff6666',
      }
      // ...more per-layer-type defaults
    },
  },
  clips: [
    {
      transition,
      duration,
      layers: [
        {
          type,
          // ...more layer-specific options
        }
        // ...more layers
      ],
    }
    // ...more clips
  ],
  audioFilePath,
  loopAudio: false,
  keepSourceAudio: false,
  clipsAudioVolume: 1,
  outputVolume: 1,
  audioTracks: [
    {
      path,
      mixVolume: 1,
      cutFrom: 0,
      cutTo,
      start: 0,
    },
    // ...more audio tracks
  ],
  audioNorm: {
    enable: false,
    gaussSize: 5,
    maxGain: 30,
  },

  // Testing options:
  enableFfmpegLog: false,
  verbose: false,
  fast: false,
}

Parameters

| Parameter | CLI equivalent | Description | Default |
|---|---|---|---|
| outPath | --out | Output path (mp4, mkv), can also be a .gif | |
| width | --width | Width which all media will be converted to | 640 |
| height | --height | Height which all media will be converted to | auto (based on width and the aspect ratio of the first video) |
| fps | --fps | FPS which all videos will be converted to | First video FPS or 25 |
| customOutputArgs | | Specify custom output codec/format arguments for ffmpeg (see example) | auto (h264) |
| allowRemoteRequests | --allow-remote-requests | Allow remote URLs as paths | false |
| fast | --fast, -f | Fast mode (low resolution and FPS, useful for getting a quick preview ⏩) | false |
| defaults.layer.fontPath | --font-path | Set default font to a .ttf | System font |
| defaults.layer.* | | Set any layer parameter that all layers will inherit | |
| defaults.duration | --clip-duration | Set default clip duration for clips that don't have their own duration | 4 sec |
| defaults.transition | | An object { name, duration } describing the default transition. Set to null to disable transitions | |
| defaults.transition.duration | --transition-duration | Default transition duration | 0.5 sec |
| defaults.transition.name | --transition-name | Default transition type. See Transition types | random |
| defaults.transition.audioOutCurve | | Default fade-out curve for audio cross-fades | tri |
| defaults.transition.audioInCurve | | Default fade-in curve for audio cross-fades | tri |
| clips[] | | List of clip objects that will be played in sequence. Each clip can have one or more layers | |
| clips[].duration | | Clip duration. See defaults.duration. If unset, the clip duration will be that of the first video layer | defaults.duration |
| clips[].transition | | Specify the transition at the end of this clip. See defaults.transition | defaults.transition |
| clips[].layers[] | | List of layers within the current clip that will be overlaid in their natural order (final layer on top) | |
| clips[].layers[].type | | Layer type, see below | |
| clips[].layers[].start | | What time into the clip this layer should start (sec) | |
| clips[].layers[].stop | | What time into the clip this layer should stop (sec) | |
| audioTracks[] | | List of arbitrary audio tracks. See audio tracks | [] |
| audioFilePath | --audio-file-path | Set an audio track for the whole video. See also audio tracks | |
| loopAudio | --loop-audio | Loop the audio track if it is shorter than the video | false |
| keepSourceAudio | --keep-source-audio | Keep source audio from clips | false |
| clipsAudioVolume | | Volume of audio from clips relative to audioTracks. See audio tracks | 1 |
| outputVolume | --output-volume | Adjust output volume (final stage), e.g. 0.5 or 10dB. See example | 1 |
| audioNorm.enable | | Enable audio normalization? See audio normalization | false |
| audioNorm.gaussSize | | Audio normalization gauss size. See audio normalization | 5 |
| audioNorm.maxGain | | Audio normalization max gain. See audio normalization | 30 |

Transition types

transition.name can be any of gl-transitions, or any of the following: directional-left, directional-right, directional-up, directional-down, random or dummy.
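For example, a per-clip transition can be set like this (a sketch; the clip contents are placeholders):

clips: [
  { duration: 3, transition: { name: 'directional-left' }, layers: [{ type: 'rainbow-colors' }] },
  { duration: 3, transition: { name: 'fade', duration: 1 }, layers: [{ type: 'fill-color', color: '#ff6666' }] },
],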

Layer types

See examples and commonFeatures.json5

Layer type 'video'

For video layers, if the parent clip.duration is specified, the video will be slowed down or sped up to match clip.duration. If cutFrom/cutTo is set, the resulting segment (cutTo-cutFrom) will be slowed down or sped up to fit clip.duration. If the layer has audio, it will be kept (and mixed with other audio layers, if present). See the sketch after the parameter table below.

| Parameter | Description | Default |
|---|---|---|
| path | Path to video file | |
| resizeMode | See Resize modes | |
| cutFrom | Time value to cut from (sec) | 0 |
| cutTo | Time value to cut to (sec) | end of video |
| width | Width relative to screen width (0 to 1) | 1 |
| height | Height relative to screen height (0 to 1) | 1 |
| left | X-position relative to screen width (0 to 1) | 0 |
| top | Y-position relative to screen height (0 to 1) | 0 |
| originX | X anchor (left or right) | left |
| originY | Y anchor (top or bottom) | top |
| mixVolume | Relative volume when mixing this video's audio track with others | 1 |
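As a sketch of how these parameters combine (paths and numbers are illustrative, not from the project's examples), here is a clip with a full-screen background video and a picture-in-picture inset that shows only a 4-second segment of its source, sped up or slowed down to fill the 3-second clip:

{
  duration: 3,
  layers: [
    { type: 'video', path: './background.mp4', resizeMode: 'cover' },
    // Picture-in-picture: quarter size, anchored to the bottom-right corner
    {
      type: 'video', path: './inset.mp4', cutFrom: 10, cutTo: 14,
      width: 0.25, height: 0.25, left: 1, top: 1, originX: 'right', originY: 'bottom',
    },
  ],
}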

Layer type 'audio'

Audio layers will be mixed together. If cutFrom/cutTo is set, the resulting segment (cutTo-cutFrom) will be slowed/sped-up to fit clip.duration. The slow down/speed-up operation is limited to values between 0.5x and 100x.

| Parameter | Description | Default |
|---|---|---|
| path | Path to audio file | |
| cutFrom | Time value to cut from (sec) | 0 |
| cutTo | Time value to cut to (sec) | clip.duration |
| mixVolume | Relative volume when mixing this audio track with others | 1 |

Layer type 'detached-audio'

This is a special case of audioTracks that makes it easier to start audio relative to a clip's start time, without having to calculate global start times.

detached-audio has the exact same properties as audioTracks, except start time is relative to the clip's start.

Example of detached audio tracks
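A sketch of how this might look (paths are placeholders; the properties are those of audioTracks): the sound effect below starts 1 second after its clip starts and is mixed at half volume.

{
  duration: 5,
  layers: [
    { type: 'video', path: './clip1.mov' },
    // Starts 1 second into this clip, mixed at half volume
    { type: 'detached-audio', path: './ding.mp3', start: 1, mixVolume: 0.5 },
  ],
}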

Layer type 'image'

Full screen image

| Parameter | Description | Default |
|---|---|---|
| path | Path to image file | |
| resizeMode | See Resize modes | |

See also Ken Burns parameters.

Layer type 'image-overlay'

Image overlay with a custom position and size on the screen. NOTE: If you want to use animated GIFs, use video instead.

| Parameter | Description | Default |
|---|---|---|
| path | Path to image file | |
| position | See Position parameter | |
| width | Width (from 0 to 1), where 1 is screen width | |
| height | Height (from 0 to 1), where 1 is screen height | |

See also Ken Burns parameters.

Layer type 'title'

  • fontPath - See defaults.layer.fontPath
  • text - Title text to show, keep it short
  • textColor - default #ffffff
  • position - See Position parameter

See also Ken Burns parameters

Layer type 'subtitle'

  • fontPath - See defaults.layer.fontPath
  • text - Subtitle text to show
  • textColor - default #ffffff

Layer type 'title-background'

Title with background

  • text - See type title
  • textColor - See type title
  • background - { type, ... } - See type radial-gradient, linear-gradient or fill-color
  • fontPath - See type title

Layer type 'news-title'

  • fontPath - See defaults.layer.fontPath
  • text
  • textColor - default #ffffff
  • backgroundColor - default #d02a42
  • position - See Position parameter

Layer type 'slide-in-text'

  • fontPath - See defaults.layer.fontPath
  • text
  • fontSize
  • charSpacing
  • color
  • position - See Position parameter

Layer type 'fill-color', 'pause'

  • color - Color to fill background, default: randomize

Layer type 'radial-gradient'

  • colors - Array of two colors, default: randomize

Layer type 'linear-gradient'

  • colors - Array of two colors, default: randomize

Layer type 'rainbow-colors'

๐ŸŒˆ๐ŸŒˆ๐ŸŒˆ

Layer type 'canvas'

See customCanvas.js

  • func - Custom JavaScript function

Layer type 'fabric'

See customFabric.js

  • func - Custom JavaScript function

Layer type 'gl'

Loads a GLSL shader. See gl.json5 and rainbow-colors.frag

  • fragmentPath
  • vertexPath (optional)
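A sketch of a clip using a custom shader (the fragment shader path is illustrative, based on the file referenced above):

{
  duration: 4,
  layers: [{ type: 'gl', fragmentPath: './rainbow-colors.frag' }],
}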

Arbitrary audio tracks

The audioTracks property can optionally contain a list of objects specifying audio tracks that can be started at arbitrary times in the final video. These tracks will be mixed together, with mixVolume specifying how loud each track is relative to the other tracks. Because audio from clips is mixed separately from audioTracks, clipsAudioVolume specifies the volume of the combined clip audio relative to the volume of each of the audio tracks in audioTracks.

| Parameter | Description | Default |
|---|---|---|
| audioTracks[].path | File path for this track | |
| audioTracks[].mixVolume | Relative volume for this track | 1 |
| audioTracks[].cutFrom | Time value to cut the source file from (sec) | 0 |
| audioTracks[].cutTo | Time value to cut the source file to (sec) | |
| audioTracks[].start | How many seconds into the video to start this audio track | 0 |
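A sketch of how this might look in a spec (paths and volumes are illustrative):

audioTracks: [
  // Background music for the whole video, mixed at low volume
  { path: './music.mp3', mixVolume: 0.3 },
  // Voice-over: seconds 1-20 of the source file, starting 2 seconds into the video
  { path: './voiceover.m4a', mixVolume: 1, cutFrom: 1, cutTo: 20, start: 2 },
],
// Volume of the clips' own audio relative to the tracks above
clipsAudioVolume: 0.5,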

The difference between audioTracks and Layer type 'audio' is that audioTracks will continue to play across multiple clips and can start and stop whenever needed.

See audioTracks example

See also Layer type 'detached-audio'.

Audio normalization

You can enable audio normalization of the final output audio. This is useful if you want to achieve audio ducking (e.g. automatically lowering the volume of all other tracks when a voice-over speaks).

audioNorm parameters are documented here.

Example of audio ducking
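One way this is commonly set up (a sketch with illustrative paths and values, not necessarily the linked example): give the voice-over track a high mixVolume and enable audioNorm, so normalization pulls the overall level back down while the voice-over plays, which effectively ducks the other tracks.

audioNorm: { enable: true, gaussSize: 10, maxGain: 30 },
audioTracks: [
  { path: './music.mp3', mixVolume: 1 },
  // Much louder than the music; normalization evens out the final level,
  // so the music is effectively ducked while the voice-over plays
  { path: './voiceover.m4a', mixVolume: 50, start: 2 },
],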

Resize modes

resizeMode - How to fit the image or video to the screen. Can be one of:

  • contain - The whole video will be contained within the frame and letterboxed
  • contain-blur - Like contain, but the letterbox is a blurred copy of the video
  • cover - The video will be cropped to cover the whole screen (aspect ratio preserved)
  • stretch - The video will be stretched to cover the whole screen (aspect ratio ignored)

Default: contain-blur.


Position parameter

Certain layers support the position parameter.

position can be either:

  • top, bottom, center, top-left, top-right, center-left, center-right, bottom-left, bottom-right
  • An object { x, y, originX = 'left', originY = 'top' }, where { x: 0, y: 0 } is the upper left corner of the screen and { x: 1, y: 1 } is the lower right corner; x is relative to video width, y to video height. originX and originY are optional and specify the origin (anchor position) of the object.

See position.json5
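For illustration, a named position and an object position on a title layer (values are illustrative):

// Named position string
{ type: 'title', text: 'Hello', position: 'top-right' }

// Object form: anchor the layer's top-left corner 10% from the left and 80% down
{ type: 'title', text: 'Hello', position: { x: 0.1, y: 0.8, originX: 'left', originY: 'top' } }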

Ken Burns parameters

| Parameter | Description | Default |
|---|---|---|
| zoomDirection | Zoom direction for the Ken Burns effect: in, out, left, right, or null to disable | |
| zoomAmount | Zoom amount for the Ken Burns effect | 0.1 |
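For example, an image layer that slowly zooms in (a sketch; the path is a placeholder):

{
  duration: 3,
  layers: [{ type: 'image', path: './photo.jpg', zoomDirection: 'in', zoomAmount: 0.2 }],
}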

Docker

This should help you use editly as a containerized CLI, without worrying about getting all the right versions of dependencies on your system.

$ git clone https://github.com/mifi/editly.git
$ cd editly/examples
$ git clone https://github.com/mifi/editly-assets.git assets
$ cd ..
$ docker-compose up
$ docker-compose run editly bash -c "cd examples && editly audio1.json5 --out /outputs/audio1.mp4"
$ docker cp editly:/outputs/audio1.mp4 .

Troubleshooting

  • If you get Error: The specified module could not be found., try: npm un -g editly && npm i -g --build-from-source editly (see #15)
  • If you get an error about gl returning null, see Requirements.
  • If you get an error /bin/sh: pkg-config: command not found, try using the newest Node.js LTS version

Donate 🙏

This project is maintained by me alone. The project will always remain free and open source, but if it's useful for you, consider supporting me. :) It will give me extra motivation to improve it.

Paypal

Thanks

This package would not exist without the support and help from all the contributors and sponsors!

Special thanks to:

See also

Videos made by you

Submit a PR if you want to share your videos or project created with editly here.


Made with ❤️ in 🇳🇴

More apps by mifi.no

Follow me on GitHub, YouTube, IG, Twitter for more awesome content!

editly's People

Contributors

4383, chapmanjacobd, chocolateboy, dalisoft, dependabot[bot], gabrieltorreiro, gregzanch, ivanca, jonasgeiler, kalo0m, lucemia, mattkibbler, mifi, patcon, petetnt, uwemaurer, wisekaa03


editly's Issues

Normalizing Audio

From my point of view it would be quite helpful if editly included the functionality to automatically normalize all video/audio files before stitching them together. That way we would eliminate differences in sound levels between the individual files and create a more seamless video clip.

For reference: https://github.com/slhck/ffmpeg-normalize

exit code on ffmpeg fail

When ffmpeg exits with a nonzero exit code, editly should also exit nonzero.

currently:

{ Error: Command failed with exit code 1: ffprobe -select_streams v:0 -show_entries stream -of json 
ffprobe version 4.1.5 Copyright (c) 2007-2020 the FFmpeg developers
  built with gcc 9 (GCC)
  configuration: blah blah blah

undefined: No such file or directory
{

}
    at makeError (/home/xk/amp/node_modules/execa/lib/error.js:58:11)
    at handlePromise (/home/xk/amp/node_modules/execa/index.js:114:26)
    at processTicksAndRejections (internal/process/task_queues.js:86:5)
  shortMessage:
   'Command failed with exit code 1: ffprobe -select_streams v:0 -show_entries stream -of json ',
  command: 'ffprobe -select_streams v:0 -show_entries stream -of json ',
  exitCode: 1,
  signal: undefined,
  signalDescription: undefined,
  stdout: '{\n\n}',

but node exits 0

to reproduce this you can incorrectly say file instead of path:

editly({
  fast: true,
  outPath: "./customFabric.mp4",
  clips: [
    {
      layers: [{ type: "video", file: "~/amp/test.webm" }],
    }]
})

resume a broken render or check that all files exist before encoding

I think it would be useful to check that all files exist. maybe someone types a path wrong or something. if it's not too much trouble it would be good to check that all the referenced files (video, shaders, audio) exist

normally this wouldn't be a big deal but if someone starts a long render process then it is a little bit annoying to realize that one of the assets is missing.

alternatively, some code which will allow resuming from a partial render would satisfy this issue and would be amazing!

Duration changes with FPS

Using the JSON below, I tried many different renders to see how the frame rate changes the video duration. I will try to find the bug in the code, but I may need some help. Here are some tests:

Desired duration: 90 seconds without transition, 89 seconds with transition (2x 0.5s duration)

| width (px) | height (px) | fps | output duration (sec) | output duration (sec, with transition null) |
|---|---|---|---|---|
| 400 | 400 | 10 | 88.700000 | 89.700000 |
| 400 | 400 | 20 | 88.850000 | 89.850000 |
| 400 | 400 | 30 | 88.900000 | X |
| 400 | 400 | 40 | 88.925000 | X |
| 400 | 400 | 50 | 88.940000 | X |
| 400 | 400 | 60 | 88.950000 | 89.950000 |
| 400 | 400 | 120 | 88.975000 | X |
| 400 | 400 | 240 | 88.988000 | 89.988000 |
| 800 | 800 | 10 | 88.700000 | X |
| 1920 | 1080 | 10 | 88.700000 | X |

(X means "no test")

JSON:

{ 
  "width": 1920, "height": 1080, "fps": 10,
  "outPath": "./subtitle.mp4",
  "defaults": {
    "layer": { "fontPath": "./Montserrat-Bold.ttf" },
    "transition": { "name": "fade" },
  },
  "clips": [
    { "duration": 30, "layers": [
       {
          "type": "subtitle",
          "text": "ST1 "
        },
         {
          "type": "title",
          "text": "T1 "
        }
    ] },
    { "duration": 30, "layers": [
       {
          "type": "subtitle",
          "text": "ST2 "
        },
         {
          "type": "title",
          "text": "T2 "
        }
    ] },
    { "duration": 30, "layers": [
       {
          "type": "subtitle",
          "text": "ST3 "
        },
         {
          "type": "title",
          "text": "T3 "
        }
    ] },
  ],
}

Include ffmpeg

Might add the ffmpeg-static dependency, then no need for system ffmpeg. (with an option to use a custom ffmpeg path)
But it does not work on raspberry pi, need to see if it crashes npm install or what happens

Problems installing on ubuntu bionic

Installed on ubuntu bionic with the built-in nodejs.
First, there seem to be some libraries missing from the dependencies or install instructions: libxi-dev, libxext-dev, libx11-dev

When finally everything installed I get this error:
editly --help
/usr/local/lib/node_modules/editly/node_modules/fs-extra/lib/mkdirs/make-dir.js:86
} catch {
^

SyntaxError: Unexpected token {
at createScript (vm.js:80:10)
at Object.runInThisContext (vm.js:139:10)
at Module._compile (module.js:616:28)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Module.require (module.js:596:17)
at require (internal/module.js:11:18)
at Object. (/usr/local/lib/node_modules/editly/node_modules/fs-extra/lib/mkdirs/index.js:3:44)

Add ffmpeg version check

I've tried converting my audio file to different formats (mp3 lame, ogg) and I keep getting complaints about AAC:

720x1280 30fps
createFrameSource linear-gradient clip 0 layer 0
createFrameSource title clip 0 layer 1
createFrameSource fill-color clip 1 layer 0
createFrameSource title clip 1 layer 1
  0% [aac @ 0x1dec620] The encoder 'aac' is experimental but experimental codecs are not enabled, add '-strict -2' if you want to use it.
Output ffmpeg caught error Error: write EPIPE
    at afterWriteDispatched (internal/stream_base_commons.js:146:25)
    at writeGeneric (internal/stream_base_commons.js:137:3)
    at Socket._writeGeneric (net.js:698:11)
    at Socket._write (net.js:710:8)
    at doWrite (_stream_writable.js:417:12)
    at writeOrBuffer (_stream_writable.js:401:5)
    at Socket.Writable.write (_stream_writable.js:301:11)
    at /usr/local/lib/node_modules/editly/index.js:406:51
    at new Promise (<anonymous>)
    at module.exports (/usr/local/lib/node_modules/editly/index.js:406:15) {
  errno: 'EPIPE',
  code: 'EPIPE',
  syscall: 'write'
}
Loop failed Error: write EPIPE
    at afterWriteDispatched (internal/stream_base_commons.js:146:25)
    at writeGeneric (internal/stream_base_commons.js:137:3)
    at Socket._writeGeneric (net.js:698:11)
    at Socket._write (net.js:710:8)
    at doWrite (_stream_writable.js:417:12)
    at writeOrBuffer (_stream_writable.js:401:5)
    at Socket.Writable.write (_stream_writable.js:301:11)
    at /usr/local/lib/node_modules/editly/index.js:406:51
    at new Promise (<anonymous>)
    at module.exports (/usr/local/lib/node_modules/editly/index.js:406:15) {
  errno: 'EPIPE',
  code: 'EPIPE',
  syscall: 'write'
}

set duration from audio file length

I wonder what you think about this:

clips: [{ audio: './audioclip.ogg', layers: [{ type: "fabric", func }] }]

if duration isn't set then it will set the duration to the length of the audio file?

maybe the default could be to use the length of the video first but a person could set duration to be the length of the audio:

defaults: {
    duration = audio,
},
clips: [{ audio: './audioclip.ogg', layers: [{ type: "fabric", func }] }]

or

clips: [{ duration: length('./audioclip.ogg'), audio: './audioclip.ogg', layers: [{ type: "fabric", func }] }]

Make it a stateless/pure/functional component

Look into making it completely functional/stateless:

getFrame(frameNum, parameters) -> calculates transitions and everything to return a calculated frame

This allows for making a GUI with seeking to any random frame

Must have workaround for non-seekable streams like ffmpeg pipe

Also cache low resolution/low framerate version of each input video cut so we can render preview faster

Anti-Aliasing Support

Some of the gl-transitions cause a strong aliasing effect. It seems the reason is lacking support for anti-aliasing in headless-gl (see stackgl/headless-gl#30).
I also noticed some aliasing effects on the text output.

I don't think there's anything that can be done in editly directly, I just wanted to mention this in case others stumble on the same issue.

Speed and suitability for 1-sec everyday-style videos

Hey, would love to get this working with https://github.com/umaar/video-everyday

Here are a few issues I had from doing this manually with ffmpeg, I'm wondering if Editly gets around these? Browsing the readme, sounds like it might do!

  1. Slow concatenation

Doing both video resizing and concatenation is pretty slow, e.g. scaling:

ffmpeg  -i video.mp4 -filter:v "scale=iw*min(1920/iw\,1080/ih):ih*min(1920/iw\,1080/ih), pad=1920:1080:(1920-iw*min(1920/iw\,1080/ih))/2:(1080-ih*min(1920/iw\,1080/ih))/2" -c:a copy video-out.mp4

Before a concat, I would try to scale the videos appropriately. This seemed to be one way of scaling different-sized videos into one common resolution. This command is a little daunting! Think I copied that from a forum. Also this is pretty slow!

  2. Slow subtitles

In my project, I tried to overlay subtitles - it works, but again it's super slow. It was actually faster to export an SRT file from my node script, and then import everything into a commercial video editing tool, and then export again!

  3. Slow trimming

This is the trickiest issue. When ingesting videos, I would grab a small segment around the half way mark. Using lossless cut or ffmpeg fast mode was fast - but because it didn't align with a keyframe, there'd be a black screen at the start and funny audio issues. In the end I had to do it the "slow" way in ffmpeg (I think it reencodes), the problem: extracting a 2 second clip from a 20min 4k video takes a long time! It's strangely faster for me to open the video in a commercial video editing tool, trim the video, and export!

I think what I'm trying to get across in this long comment is that in ffmpeg, there seems to be two ways of doing things:

  • fast way: but is inaccurate and can produce strange artefacts in the output video
  • slow way: very accurate, does exactly what you ask, but can take a long time to execute

I was hoping for a fast and accurate way! I'm wondering, does editly get around these challenges?

Keep Audio-Track of Videos

Hi there,

I really like your project editly. Everything works like a charm. But how do I keep the audio track of the original videos? It would be great if it were kept in the output.

Thank you for any help or hint to get this working.

cheers!

No audio in video clips

Not sure if this is a bug, or it was specified on README.md, but I couldn't find how to keep clip's audio in the final video.

Windows cli command execution failed

E:\nodeJs\nodejs_test\ffmpeg-filter>editly
internal/modules/cjs/loader.js:717
return process.dlopen(module, path.toNamespacedPath(filename));
^

Error: The specified module could not be found.
\?\C:\Users\Administrator\AppData\Roaming\npm\node_modules\editly\node_modules\canvas\build\Release\canvas.node
at Object.Module._extensions..node (internal/modules/cjs/loader.js:717:18)
at Module.load (internal/modules/cjs/loader.js:598:32)
at tryModuleLoad (internal/modules/cjs/loader.js:537:12)
at Function.Module._load (internal/modules/cjs/loader.js:529:3)
at Module.require (internal/modules/cjs/loader.js:636:17)
at require (internal/modules/cjs/helpers.js:20:18)
at Object. (C:\Users\Administrator\AppData\Roaming\npm\node_modules\editly\node_modules\canvas\lib\bindings.js:3:18)
at Module._compile (internal/modules/cjs/loader.js:688:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:699:10)
at Module.load (internal/modules/cjs/loader.js:598:32)

Error: write EPIPE

Trying to run a pretty simple example file:

{
  width: 720, height: 1280, fps: 30,
  defaults: {
    transition: { name: 'random' },
  },
  clips: [
    { duration: 3, transition: { name: 'directional-left' }, layers: [{ type: 'title-background', text: 'EDITLY\nVideo editing framework', background: { type: 'linear-gradient', colors: ['#02aab0', '#00cdac'] } }] },
    { duration: 4, transition: { name: 'dreamyzoom' }, layers: [{ type: 'title-background', text: 'Multi-line text with animated linear or radial gradients', background: { type: 'radial-gradient' } }] },
    { duration: 3, transition: { name: 'directional-right' }, layers: [{ type: 'rainbow-colors' }, { type: 'title', text: 'Colorful backgrounds' }] },
  ],
}

Using command: editly simple.json5 --fast --out test.gif

Returns:

320x320 15fps
createFrameSource linear-gradient clip 0 layer 0
createFrameSource title clip 0 layer 1
createFrameSource radial-gradient clip 1 layer 0
createFrameSource title clip 1 layer 1
  0%   7%  14%  22%  29% Done with transition, switching to next clip
createFrameSource gl clip 2 layer 0
Error: write EPIPE
    at afterWriteDispatched (internal/stream_base_commons.js:154:25)
    at writeGeneric (internal/stream_base_commons.js:145:3)
    at Socket._writeGeneric (net.js:784:11)
    at Socket._write (net.js:796:8)
    at doWrite (_stream_writable.js:403:12)
    at writeOrBuffer (_stream_writable.js:387:5)
    at Socket.Writable.write (_stream_writable.js:318:11)
    at /usr/local/lib/node_modules/editly/index.js:396:51
    at new Promise (<anonymous>)
    at module.exports (/usr/local/lib/node_modules/editly/index.js:396:15)

Video improvements

  • Allow custom background on video letterbox, maybe blurred version of each video frame like for 'image'?
  • ken burns zoom for videos too (or for all layers?)

Errors installing

Getting some errors when installing (node v14.0.0) with npm i -g editly

> [email protected] install /usr/local/lib/node_modules/editly/node_modules/canvas
> node-pre-gyp install --fallback-to-build

node-pre-gyp WARN Using request for node-pre-gyp https download
node-pre-gyp WARN Tried to download(404): https://github.com/node-gfx/node-canvas-prebuilt/releases/download/v2.6.1/canvas-v2.6.1-node-v83-darwin-unknown-x64.tar.gz
node-pre-gyp WARN Pre-built binaries not found for [email protected] and [email protected] (node-v83 ABI, unknown) (falling back to source compile with node-gyp)
/bin/sh: pkg-config: command not found
gyp: Call to 'pkg-config pixman-1 --libs' returned exit status 127 while in binding.gyp. while trying to load binding.gyp

It errors out and doesn't get installed.

Why audio files doesn't support .mp3?

Hey.
I'm trying to create video from images and audio but if I use .mp3 files it says 0% [mp3 @ 0000024dfbdef1c0] Failed to read frame size: Could not seek to 1026. ./audio.mp3: Invalid argument

Any help would be really appreciated.

CustomFabricSource does not accept parameters

The func value set on a type: fabric customFabricFrameSource does not receive any parameters. For example, you cannot pass a text to https://github.com/mifi/editly/blob/master/examples/customFabric.js.

The following patch can fix it.

diff --git a/sources/fabricFrameSource.js b/sources/fabricFrameSource.js
index 74b8299..c88238c 100644
--- a/sources/fabricFrameSource.js
+++ b/sources/fabricFrameSource.js
@@ -308,7 +308,7 @@ async function createCustomCanvasFrameSource({ width, height, params }) {
 }
 
 async function customFabricFrameSource({ canvas, width, height, params }) {
-  return params.func(({ width, height, fabric, canvas }));
+  return params.func(({ width, height, fabric, canvas, params }));
 }
 
 function registerFont(...args) {

Passing ffmpeg arguments to CLI

Hi,

Is there a possibility to insert custom ffmpeg arguments through the CLI? For example with: editly my-editly.json5 --custom "-b:v 64k -bufsize 64k" --fast --out output.gif

Option to inherit video resolution

Good work on this project.

Using the CLI, I am concatenating two video files like so

editly first_file.mp4 second_file.mp4 --out output.mp4

The input files are both 1920x1080, but the resulting video file is 640x360

Specifying the width and height in the command works, however, providing an option to automatically detect the lowest common resolution from the input files would be a good enhancement

Render takes 2+ hours

I tried to make a 1920x1080, 60 fps render and it took too long. The output was 70 seconds long and was made mainly by merging video files and adding subtitles. What can I do to speed up the rendering?

JSON File:

{
  "verbose": false,
  "fast": false,
  "width": 1920,
  "height": 1080,
  "fps": 60,
  "outPath": "./tmp/final.mp4",
  "defaults": {
    "layer": {
      "fontPath": "./Montserrat-Bold.ttf"
    },
    "transition": {
      "name": "fade"
    }
  },
  "clips": [
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/a.mp4"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c0.mp4"
        },
        {
          "type": "subtitle",
          "text": "CLIP #1\nKellwiin sa ... bla bla.. prosto co sa stalo \nHehe kludne aj na 2 riadky Poooooog"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c0_t.mp4"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c1.mp4"
        },
        {
          "type": "subtitle",
          "text": "CLIP #2\nKellwiin sa ... bla bla.. prosto co sa stalo \nHehe kludne aj na 2 riadky Poooooog"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c1_t.mp4"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c2.mp4"
        },
        {
          "type": "subtitle",
          "text": "CLIP #3\nKellwiin sa ... bla bla.. prosto co sa stalo \nHehe kludne aj na 2 riadky Poooooog"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/c2_t.mp4"
        }
      ]
    },
    {
      "layers": [
        {
          "type": "video",
          "path": "./mp4/z.mp4"
        }
      ]
    }
  ]
}

Transition-name CLI parameter does not work

Specifying the transition to be used via the CLI does not work for me

Command:

editly --transition-name "fade" first_clip.mp4 second_clip.mp4 --out output.mp4

Output:

640x360 59.94005994005994fps
createFrameSource video clip 0 layer 0
createFrameSource video clip 1 layer 0
Done with transition, switching to next clip
Done with transition, switching to next clip
No more transitionFromClip, done
Done. Output file can be found at:
output.mp4

There isn't a fade transition but instead a jump cut.

Error: gl returned null (missing dependencies)

While attempting to run the "kenBurns" example in a Debian docker image (using a node:14-stretch base image) I get the following error:

640x640 25fps
Error: gl returned null, this probably means that some dependencies are not installed. See README.
    at module.exports (/usr/local/lib/node_modules/editly/glTransitions.js:12:11)
    at module.exports (/usr/local/lib/node_modules/editly/index.js:265:36)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async /usr/local/lib/node_modules/editly/cli.js:124:3

For reference, here's the Dockerfile I'm using:

FROM node:14-stretch

RUN apt-get -qq update \
  && apt-get -qq install \
    build-essential \
    libcairo2-dev \
    libpango1.0-dev \
    libjpeg-dev \
    libgif-dev \
    librsvg2-dev \
    libxi-dev \
    libglu1-mesa-dev \
    libglew-dev \
    pkg-config \
    python2.7 \
    python-pip \
    ffmpeg

RUN mkdir -p /opt/node \
  && cd /opt/node \
  && npm install --global --unsafe-perm editly

Transitions are upside down

Hello, I just noticed a bug that the heart transition is upside-down. Can this be fixed by editly or does the bug belong to gl-transitions?

Thanks for creating an excellent video tool @mifi

Got an image file list, an audio file and an srt file; how to write the json5?

{
  // width: 2166, height: 1650, fps: 30,
  width: 720, height: 1280, fps: 30,
  outPath: './commonFeatures.mp4',
  // outPath: './commonFeatures.gif',
  audioFilePath: '.HowtoMakeYour_ucto.aiff',
  clips: [
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './EWMYRHY72JAE5DT3QURHTS7K2Y.jpg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './How to Make Your Own Hand Sanitizer Gel.jpg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './merlin_170906667_661ce5e5-85e1-4215-8efd-8d2bca2e758b-superJumbo.jpg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1583947215259-38e31be8751f.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1583947581380-3d27ed02f76f.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1583947582387-6f2336412460.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1583947582411-e3ce8469d1ee.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1583947582886-f40ec95dd752.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './photo-1584483720412-ce931f4aefa8.jpeg', zoomDirection: 'out' }] },
    { duration: 92, transition: { name: 'directionalWarp' }, layers: [{ type: 'image', path: './sanitizer-1.jpg', zoomDirection: 'out' }] },

  ],
}

better handling of "Frame source returned empty result"

I wanted to do a quick test to make sure alpha-channel video is composited correctly but I'm encountering some strange behavior. It might just be that the test file I am using is corrupt but it plays fine in mpv.

https://raw.githubusercontent.com/kinetiknz/nestegg/master/test/media/dancer1.webm

I think it is an ffmpeg issue but it's very minor. However, I think there should be better handling of this specific situation:

with

      layers: [{ type: "video", path: "./dancer1.webm" }],
~/amp [2] node test.js ; and mpv customFabric.mp4
320x180 15fps
createFrameSource video clip 0 layer 0
createFrameSource fabric clip 1 layer 0
Frame source returned empty result
Frame source returned empty result
Frame source returned empty result
Frame source returned empty result
Frame source returned empty result
(no end to Frame source returned empty result)

I'm not sure what the best option is but it seems like it should exit after some time rather than going on forever

Video layer position

Hi! I love using editly. I'm interested in contributing a patch to it to enable mosaicing of images, similar to:

https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos

If this were to be staged out I think the order would be something like:

  • Add a layer size option for video sources
  • Add a layer position option for video sources
  • (much harder) allow multiple video sources in a single clip, either as multiple layers or a "multi-video" source

But before I began, I was wondering if this was something you'd thought about, and if you'd be open to a PR along these lines if I'm able to make it work.

Thanks!

CLI improvements

  • Allow setting clip subtitle and title from the CLI
  • Allow specifying path to ffmpeg and ffprobe

The duration of the merged video is not accurate enough.

{
    "outPath":"E:/nodeJs/nodejs_test/ffmpeg-filter/output/ts.mp4",
    "width":1280,
    "height":720,
    "fps":30,
    "defaults": {
        // "duration":5,
        "transition": "null"
    },
    "clips": [
        {
            duration:6,
            // transition: {
            //     "duration":2,
            //     "name":"InvertedPageCurl",
            // },
            "layers":[
                {
                    "type":"video" ,
                    "path":"E:/nodeJs/nodejs_test/ffmpeg-filter/output/sucai/out.mp4",
                }
            ],
        },
        {
            duration:6,
            // transition: {
            //     "duration":0.5,
            //     "name":"WaterDrop",
            // },
            "layers":[
                {"type":"video" , "path":"E:/nodeJs/nodejs_test/ffmpeg-filter/output/sucai/out2.mp4"}
            ],
        },
        {
            duration:6,
            // transition: {
            //     "duration":0.5,
            //     "name":"PolkaDotsCurtain",
            // },
            "layers":[
                {"type":"video" , "path":"E:/nodeJs/nodejs_test/ffmpeg-filter/output/sucai/out.mp4"}
            ],
        },
        {
            duration:6,
            // transition: {
            //     "duration":0.5,
            //     "name":"GlitchMemories",
            // },
            "layers":[
                {"type":"video" , "path":"E:/nodeJs/nodejs_test/ffmpeg-filter/output/sucai/out2.mp4"}
            ],
        },
    ],
    "fast":false,
}

No matter how much I tweak the configuration, I can't get an output video with exactly the same duration as configured in the JSON file.

For example, with the JSON configuration above, the resulting video is only 22 seconds long, but I want a 24-second video.

Loop failed TypeError: Cannot read property 'ARRAY_BUFFER' of null

I got this error now:

$ editly test.json
960x1440 30fps
createFrameSource image clip 0 layer 0
createFrameSource image clip 1 layer 0
0% 4% 8% 12% 16% 20% 25% 29% Loop failed TypeError: Cannot read property 'ARRAY_BUFFER' of null
at runTransitionOnFrame (/usr/lib/node_modules/editly/glTransitions.js:19:10)
at module.exports (/usr/lib/node_modules/editly/index.js:382:28)
at async /usr/lib/node_modules/editly/cli.js:120:3

test.json:
{
  width: 960, height: 1440, fps: 30,
  outPath: './test.mp4',
  defaults: {
    transition: { name: 'fade' },
  },
  clips: [
    { duration: 3, layers: [{ type: 'image', path: './kornblume.jpg', zoomDirection: 'out' }] },
    { duration: 3, layers: [{ type: 'image', path: './chili.jpg', zoomDirection: 'in' }] },
    { duration: 3, layers: [{ type: 'image', path: './zucchini.jpg', zoomDirection: 'out' }] },
  ],
}

location of example assets?

I may have missed something, but I'm not able to find the assets required by the examples, so I'm not able to run any of them. Are they available as a separate download somewhere?

More integrations

  • Full power of the HTML to create visuals and animations using HTML animation frameworks (maybe puppeteer)
  • three.js
  • ...

Error: Segmentation fault (core dumped)

I've been working on trying to get editly working in my docker image. We are using alpine:3.11 linux with node 12.16.3, and the container is hosted on AWS Elastic Beanstalk. I've followed the troubleshooting docs for headless-gl on linux and it seems like everything is installed correctly. I'm using "xvfb-run -s \"-ac -screen 0 1920x1080x24\" node backend/build/server.js". Here are the logs from editly:

[ { layers: [ [Object] ] } ]
{
  outPath: 'concat-7d58e826-fe5c-474f-8534-83a22b94cb0b.mp4',
  width: 1920,
  height: 1080,
  fps: 30,
  defaults: {
    transition: {
      duration: 0.5,
      name: 'directionalWipe',
    },
  },
  clips: [
    {
      layers: [
        {
          type: 'video',
          path: 'withAnim-911c5cfa-045c-4a9f-8e7d-0f75cfa89703.mp4',
        },
      ],
    },
  ],
  enableFfmpegLog: true,
  verbose: true,
  fast: true,
}
[
  {
    transition: {
      name: 'directionalWipe',
      duration: 0.5,
    },
    duration: 6,
    layers: [
      {
        type: 'video',
        path: 'withAnim-911c5cfa-045c-4a9f-8e7d-0f75cfa89703.mp4',
        width: 1920,
        height: 1080,
        framerateStr: '25/1',
        framePtsFactor: 1,
      },
    ],
  },
]
320x180 15fps
Segmentation fault (core dumped)
error Command failed with exit code 139.

add silence

maybe with anullsrc

eg.

clips: [{ duration: 2, layers: [{ type: "blank", backgroundColor: '#000' }] }]

or maybe this is already solved by using title-background clip type:

{ duration: 2, layers: [{ type: 'title', text: ' ' }] },

that works fine. but maybe an explicit option would be better

Possible memory leak?

I'm running editly in a docker container using the JS API.
It's running inside an express handler, and memory usage goes very high when rendering. It stays at about 80% of its highest peak after the video has rendered, and stays that way after execution has finished, never coming down unless the container is restarted.

When I comment out the editly call, there is no memory leak:

await editly(editlySpec);

Here is my editlySpec:

let editlySpec = {
    outPath: '/usr/src/app/assets/video/movie.mp4',
    width: 1708,
    height: 960,
    defaults: {
      transition: { name: 'fade', duration: 0.525 },
      duration: 1.05
    },
    clips:[
       { layers: [{ type: 'image', path: '/usr/src/app/assets/video/000.png', zoomDirection: null, zoomAmount: 0 }] },
       { layers: [{ type: 'image', path: '/usr/src/app/assets/video/001.png', zoomDirection: null, zoomAmount: 0 }] },
       ....
       { layers: [{ type: 'image', path: '/usr/src/app/assets/video/019.png', zoomDirection: null, zoomAmount: 0 }] }
    ],
    enableFfmpegLog: false,
    verbose: false
  };

The PNGs are 1708 × 960 and ~2 MB each.

Any idea why this might happen?

Edit:
Migrated to using the CLI for now until this is fixed; the issue doesn't happen to me when using it.
