nypublicradio / audiogram
Turn audio into a shareable video.
License: MIT License
Layering a JPG background image into a video seems to hurt performance a lot (PNGs too, but less so). My guess is that it's from node-canvas re-decoding unnecessarily; we can probably fix this by drawing the background image onto a separate Node canvas initially and re-layering.
First off, thanks so much for making this available! I'm using Audiogram via the Docker install, and it's working great. My question is about modifying themes. I modified the themes.json file in the GitHub folder on my desktop, but it looks like in order for those changes to happen I need to commit them and make a pull request—but I don't want those changes to show up on the nypublicradio/audiogram repo! Is that what would happen? Is this the right way to modify themes using Docker, or is there something I'm missing?
This might be a Pango/canvas font nightmare, but it would be cool if we could support emoji in captions.
Fairly often an audiogram will fail to finish rendering because of an "unknown error". The Docker console outputs this:
Error: Can't set headers after they are sent.
at ServerResponse.OutgoingMessage.setHeader (_http_outgoing.js:335:11)
at ServerResponse.header (/home/audiogram/audiogram/node_modules/express/lib/response.js:719:10)
at ServerResponse.send (/home/audiogram/audiogram/node_modules/express/lib/response.js:164:12)
at module.exports (/home/audiogram/audiogram/server/error.js:17:9)
at Layer.handle_error (/home/audiogram/audiogram/node_modules/express/lib/router/layer.js:71:5)
at trim_prefix (/home/audiogram/audiogram/node_modules/express/lib/router/index.js:310:13)
at /home/audiogram/audiogram/node_modules/express/lib/router/index.js:280:7
at Function.process_params (/home/audiogram/audiogram/node_modules/express/lib/router/index.js:330:12)
at next (/home/audiogram/audiogram/node_modules/express/lib/router/index.js:271:10)
at Layer.handle_error (/home/audiogram/audiogram/node_modules/express/lib/router/layer.js:67:12)
Vertical aligning to the middle between a captionTop and a captionBottom can be off by a bit and doesn't seem totally consistent.
32684 verbose lifecycle [email protected]~install: PATH: /usr/local/lib/node_modules/npm/bin/node-gyp-bin:/Users/wenglish/Sites/www/audiogram/node_modules/canvas/node_modules/.bin:/Users/wenglish/Sites/www/audiogram/node_modules/.bin:/usr/local/Cellar/node/6.3.1/bin:/Users/wenglish/.rbenv/shims:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/git/bin:/Users/brad/Dropbox/Working/vvv-provision-flipper:/Users/wenglish/.rbenv/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/git/bin:/opt/sm/bin:/opt/sm/pkg/active/bin:/opt/sm/pkg/active/sbin:/usr/local/heroku/bin:/Users/wenglish/dotfiles/misc:/Users/wenglish/dotfiles/bin:/Users/wenglish/.composer/vendor/bin:/Users/wenglish/go/bin
32685 verbose lifecycle [email protected]~install: CWD: /Users/wenglish/Sites/www/audiogram/node_modules/canvas
32686 silly lifecycle [email protected]~install: Args: [ '-c', 'node-gyp rebuild' ]
32687 silly lifecycle [email protected]~install: Returned: code: 1 signal: null
32688 info lifecycle [email protected]~install: Failed to exec install script
32689 verbose unlock done using /Users/wenglish/.npm/_locks/staging-ab6e0168211d3c6b.lock for /Users/wenglish/Sites/www/audiogram/node_modules/.staging
32690 silly rollbackFailedOptional Starting
32691 silly rollbackFailedOptional Finishing
32692 silly runTopLevelLifecycles Finishing
32693 silly install printInstalled
32694 verbose stack Error: [email protected] install: `node-gyp rebuild`
32694 verbose stack Exit status 1
32694 verbose stack at EventEmitter.<anonymous> (/usr/local/lib/node_modules/npm/lib/utils/lifecycle.js:255:16)
32694 verbose stack at emitTwo (events.js:106:13)
32694 verbose stack at EventEmitter.emit (events.js:191:7)
32694 verbose stack at ChildProcess.<anonymous> (/usr/local/lib/node_modules/npm/lib/utils/spawn.js:40:14)
32694 verbose stack at emitTwo (events.js:106:13)
32694 verbose stack at ChildProcess.emit (events.js:191:7)
32694 verbose stack at maybeClose (internal/child_process.js:852:16)
32694 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:215:5)
32695 verbose pkgid [email protected]
32696 verbose cwd /Users/wenglish/Sites/www/audiogram
32697 error Darwin 14.5.0
32698 error argv "/usr/local/Cellar/node/6.3.1/bin/node" "/usr/local/bin/npm" "install"
32699 error node v6.3.1
32700 error npm v3.10.7
32701 error code ELIFECYCLE
32702 error [email protected] install: `node-gyp rebuild`
32702 error Exit status 1
32703 error Failed at the [email protected] install script 'node-gyp rebuild'.
I noticed in the package.json that it's pointing to @chearon/node-canvas instead of the repo version. Changing package.json over to the latest version of canvas will install the app itself, but produces another error (which I don't know is related or not):
info: Listening on 8888
/Users/wenglish/Sites/www/audiogram/lib/register-fonts.js:8
Canvas.registerFont(font.file, _.pick(font, "family", "weight", "style"));
^
TypeError: Canvas.registerFont is not a function
at /Users/wenglish/Sites/www/audiogram/lib/register-fonts.js:8:12
at Array.forEach (native)
at Object.<anonymous> (/Users/wenglish/Sites/www/audiogram/lib/register-fonts.js:7:9)
at Module._compile (module.js:541:32)
at Object.Module._extensions..js (module.js:550:10)
at Module.load (module.js:458:32)
at tryModuleLoad (module.js:417:12)
at Function.Module._load (module.js:409:3)
at Module.require (module.js:468:17)
at require (internal/module.js:20:19)
^C^
Lastly, I'm on OS X 10.10.5 Yosemite, node 6.3.1, and npm 3.10.7. I tried this on an El Capitan OS X install, and it still did not work. Any help would be appreciated!
Stuff to do for next release:
[ ] Finalize a multiple-caption solution
[ ] Drag-and-drop uploads
[ ] Pre-validate settings
[ ] Built-in Twitter login middleware option
[ ] Refactoring
It's a great piece of software! But why is there a limit to 30 seconds of audio?
Rather than making people edit raw JSON (yuck), there could be a /themes endpoint that lets you edit/add your own themes. This would take a fair bit of work, but it would make theming no-code-required for a lot of users and be a much easier way to test out spacing and positioning. It could even toggle the overlays of different social platforms (#26).
One additional change this would require is for the editor to live-load the themes.json file rather than bundling it with Browserify; otherwise, changes wouldn't show up without a rebundle.
The validate-settings branch gives some more helpful error messages on startup if you have malformed JSON, but the editor doesn't really handle it. The "Loading..." message should be replaced with a note about the broken themes if that occurs.
It would be nice if you could put certain words in bold or an alternate color in captions. This could be done with Markdown-like syntax in the editor. Rendering and measuring the text fluidly would take some work.
Right now, if you use a custom font (or fonts), it won't show up in the preview unless it's also installed on a user's computer as a system font. We could solve this by having settings/index.js generate a .fonts.css file for the editor. If there are no fonts defined, it would be blank. If there are fonts defined, it would add a @font-face declaration for each name + weight + style that points to the appropriate path.
Current status updates tell you what stage the rendering is at and % progress, but it would be nice to get a full waiting time estimate, especially if you're in a queue with jobs in front of you. This would entail some formula for estimated completion time, which would require knowing the duration before a job has started, so we'd probably have to send a duration estimate from the browser. Then we could multiply the duration and framerate by some conservative per-frame estimate.
This would be useful for quickly grabbing files, especially if somebody lost the link or accidentally closed their browser while it was rendering. It's a bit tricky since we don't want to depend on the Redis jobs info. One option is to scan the actual videos folder (or S3 folder) and show the ones with file modification dates within the past X days. We wouldn't have any metadata about them, but maybe that's OK.
Because choosing your clip graphically in the waveform is tedious.
This procedure will ignore local changes (really confusing):
git clone https://github.com/nypublicradio/audiogram.git
cd audiogram
docker build -t audiogram .
This is because the Dockerfile specifies RUN git clone https://github.com/nypublicradio/audiogram.git.
A quick fix I made for @evoterra: https://gist.github.com/Laurian/32a1250e270ad5d31a75e0a729179c79
Hi, is it possible, or is there a plan, to use Audiogram via the command line or via URL parameters? For example, to get the resulting file in a predefined folder.
Rendering a video on demand might be a good Lambda use case, need to investigate this further.
Potential upsides: No Redis, no queue, much more modular, much easier installation for the server/editor alone
Potential downsides: AWS lock-in, AWS documentation nightmares, debugging nightmares, long delays for long videos, hard limit of 300s rendering time
The earliest Audiogram prototype used the WebAudio API to get the waveform data, and we still use it for the summary version in the editor. But we ran into a lot of inconsistencies with decoding and duration detection. If we can get that working properly, we would reduce the server-side complexity and be able to drop the libgroove and node-waveform dependencies.
The app will work with other video dimensions, but the editor and preview still assume the dimensions are 1280x720, so the preview looks weird even though the resulting video is OK. Should make sure the editor is size-agnostic.
Dockerfile? AMI? Puppet manifest?
Rather than the current very manual installation process, we could potentially bundle a lot of this into a nice Electron desktop app. Some obstacles:
Hello, I am trying to run this on a DigitalOcean box and was able to build and start the server after following the instructions. Upon running, I can upload a file and change the theme and caption, but when I click generate, it seems to process normally for a bit (gets past audio uploading), then fails.
I get the following error in the browser:
https://www.dropbox.com/s/t3bpz4zfl9wmork/Screenshot%202016-08-01%2022.48.37.png?dl=0
And the following in the server console:
Error: spawn /root/audiogram/node_modules/waveform/build/Release/waveform ENOENT
at exports._errnoException (util.js:1007:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:182:32)
at onErrorNT (internal/child_process.js:348:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
Error
at /root/audiogram/node_modules/waveform/index.js:33:20
at exithandler (child_process.js:214:5)
at ChildProcess.errorhandler (child_process.js:226:5)
at emitOne (events.js:96:13)
at ChildProcess.emit (events.js:188:7)
at Process.ChildProcess._handle.onexit (internal/child_process.js:202:12)
at onErrorNT (internal/child_process.js:348:16)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
Note I tried this with Ubuntu 14.04 and 16.04, with multiple audio files, with and without the node-gyp step.
Any guidance would be appreciated.
Right now if you wanted to generate Facebook/Twitter (1280x720) and Instagram (640x640) audiograms, you'd have to define two separate themes with two separate background images and generate the audiogram twice. It would be nice if you could generate an audiogram once and get back a whole family of videos, one for each site.
Not sure how this would work given that logo placement and that sort of thing would change at different sizes. It would require spec'ing out and constraining a lot of the visual options to make it work across sizes. May or may not be desirable.
Allow people to upload their own image as the background for an individual audiogram. This would be pretty easy to implement, though it would require some means of scaling/translating the uploaded image.
You currently have to download your video file and upload it to Twitter or Facebook separately. It would be nice to have built-in share buttons that send you to the share dialog with the video already set up. Not sure whether this is feasible or what it would entail.
Some early prototypes had other non-waveform visualizations. Should at least include sample code for them so that people can use it as desired. A step further could be to add a "pattern" option to the settings with prebuilt ones, like dots, wave, bars, etc.
An interesting feature would be feeding in a transcript and having timed subtitles to go along with the audio. Would that ever be possible?
Hello,
My apologies if I am asking something stupid. I am not that familiar with programming in general; the same goes for npm, brew, ffmpeg, Node.js, etc.
Also, please forgive any bad English, as it is not my native language. :)
At the moment I am working on an iMac running El Capitan and am trying to get the audiogram server and client to work.
So far I have managed to get the terminal to go into listening mode, and then I was able to reach the client through my browser. It seemed to work properly up until I pressed the Generate button.
At that point the terminal spit out an error and the browser showed a continuous loading screen.
I have been trying to find a solution to get the error to disappear and the app working, but I'm afraid I can't find the information I am looking for.
Maybe I am making a beginner's mistake, or maybe I don't have enough knowledge to proceed with the other tips I have found online.
Therefore I am hoping that someone here is able and willing to help me out.
I tried to follow the steps as carefully as I could.
The error I got when pressing the Generate button in the browser was:
{"status":"error","error":{"killed":false,"code":127,"signal":null,"cmd":"ffmpeg -i /PATH/TO/DIRECTORY/audiogram/tmp/7551fab0-5988-11e6-9be1-5b8c780d4c8d/audio -ss 0 -t 29.585714328125 -acodec libmp3lame -b:a 128k //PATH/TO/DIRECTORY/audiogram/tmp/7551fab0-5988-11e6-9be1-5b8c780d4c8d/audio-trimmed.mp3"}}
So far I have tried updating all the dependencies/libs, uninstalling and re-installing them, uninstalling ffmpeg only (because it might work with libgroove alone), and running it both normally and as root.
If anyone could help me out that would be greatly appreciated!
Greetings,
Ruben
$ npm install
[email protected] install /Users/markusvoelter/Documents/audiogram/audiogram/node_modules/waveform
node-gyp rebuild
CC(target) Release/obj.target/waveform/waveform.o
../waveform.c:4:10: fatal error: 'groove/groove.h' file not found
^
1 error generated.
make: *** [Release/obj.target/waveform/waveform.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: make
failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Darwin 15.0.0
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/markusvoelter/Documents/audiogram/audiogram/node_modules/waveform
gyp ERR! node -v v4.4.7
gyp ERR! node-gyp -v v3.3.1
gyp ERR! not ok
[email protected] install /Users/markusvoelter/Documents/audiogram/audiogram/node_modules/canvas
node-gyp rebuild
SOLINK_MODULE(target) Release/canvas-postbuild.node
CXX(target) Release/obj.target/canvas/src/Canvas.o
In file included from ../src/Canvas.cc:23:
../src/JPEGStream.h:10:10: fatal error: 'jpeglib.h' file not found
^
1 error generated.
make: *** [Release/obj.target/canvas/src/Canvas.o] Error 1
gyp ERR! build error
gyp ERR! stack Error: make
failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Darwin 15.0.0
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /Users/markusvoelter/Documents/audiogram/audiogram/node_modules/canvas
gyp ERR! node -v v4.4.7
gyp ERR! node-gyp -v v3.3.1
gyp ERR! not ok
npm ERR! Darwin 15.0.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install"
npm ERR! node v4.4.7
npm ERR! npm v2.15.8
npm ERR! code ELIFECYCLE
npm ERR! [email protected] install: node-gyp rebuild
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] install script 'node-gyp rebuild'.
npm ERR! This is most likely a problem with the waveform package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs waveform
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!
npm ERR! npm owner ls waveform
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /Users/markusvoelter/Documents/audiogram/audiogram/npm-debug.log
These conditions will all result in an error; it would be nice to validate them on startup and provide descriptive error messages:
- workingDirectory
- storagePath or s3Bucket
- maxUploadSize
- fonts array
- fonts files
- redisHost
Could also provide warnings about:
(Missing AWS credentials are already handled by a headBucket() test on startup.)
Seems like one way to make this easier to use is having a Docker container setup instead; I'll see if I can get that working.
Hi!
I'm just trying to produce my first audiogram.
I followed all your well-done instructions to use it on my own. I installed it on Xubuntu 14.04, but when I run npm start I get:
0 info it worked if it ends with ok
1 verbose cli [ '/usr/local/bin/node', '/usr/local/bin/npm', 'start' ]
2 info using npm@3.10.3
3 info using node@v6.6.0
4 verbose config Skipping project config: /home/cla/.npmrc. (matches userconfig)
5 verbose stack Error: ENOENT: no such file or directory, open '/home/cla/package.json'
5 verbose stack at Error (native)
6 verbose cwd /home/cla
7 error Linux 4.4.0-36-generic
8 error argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"
9 error node v6.6.0
10 error npm v3.10.3
11 error path /home/cla/package.json
12 error code ENOENT
13 error errno -2
14 error syscall open
15 error enoent ENOENT: no such file or directory, open '/home/cla/package.json'
16 error enoent ENOENT: no such file or directory, open '/home/cla/package.json'
16 error enoent This is most likely not a problem with npm itself
16 error enoent and is related to npm not being able to find a file.
17 verbose exit [ -2, true ]
and the system stays listening on the 8888 port without any result. Could you please give me some help?
I have this running properly on localhost:8888, I can use the tool, but when I hit "Generate", I get this error in the browser (in html under the loading gif):
{"status":"error","numFrames":285,"error":{"code":"EACCES","errno":"EACCES","syscall":"spawn"}}
and I get this error in my console:
/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/bin/worker:40
throw err;
^
Error: spawn EACCES
at exports._errnoException (util.js:1008:11)
at ChildProcess.spawn (internal/child_process.js:302:11)
at exports.spawn (child_process.js:372:9)
at exports.execFile (child_process.js:139:15)
at module.exports (/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/node_modules/waveform/index.js:29:3)
at getWaveform (/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/audiogram/waveform.js:17:3)
at Audiogram.getWaveform (/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/audiogram/index.js:64:3)
at start (/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/node_modules/d3-queue/build/d3-queue.js:67:13)
at poke (/Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/node_modules/d3-queue/build/d3-queue.js:55:26)
at /Users/michaelmaffattone/Documents/Dev/audiogram-generator/audiogram/node_modules/d3-queue/build/d3-queue.js:83:25
Any ideas?
The quality of the audio gets distinctly tinny and somewhat echo-y after it's put through Audiogram; this is with a 1:09-long mp3. Is this a known issue?
As it says :-)
I recommend increasing the maxBuffer of the exec calls:
http://stackoverflow.com/a/23429654
I'm not sure if this is the right place to ask for help with installing and launching .. please correct me if I'm asking in the wrong place.
This is the message I see in Terminal when I try to launch Audiogram:
npm ERR! Failed at the [email protected] postinstall script 'mkdir -p editor/js && browserify client/index.js > editor/js/bundle.js'.
npm ERR! This is most likely a problem with the audiogram package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! mkdir -p editor/js && browserify client/index.js > editor/js/bundle.js
I just really want to get started using Audiogram! Any help you can provide would be much appreciated. Thank you :)
Some ideas for a waveform that stands out more:
Because I set up Audiogram on an Ubuntu 16.04 system, I couldn't use the upstart blog post, since 16.04 changed to systemd. I figured out how to create a systemd service and the config for it. To save other people some work I will attach it here; up to you whether you put it in the INSTALL or as an example file somewhere.
# Install this in /lib/systemd/system/audiogram.service
[Unit]
Description=Startup script for audiogram
After=network.target
[Service]
User=<<User to run as>>
WorkingDirectory=<<Where you installed audiogram>>
ExecStart=/usr/bin/npm start -- 8888
[Install]
WantedBy=multi-user.target
After putting the file in place you can enable it by running:
systemctl daemon-reload
systemctl enable audiogram
systemctl start audiogram
We might be able to improve performance by using FFmpeg's image2pipe and streaming each frame to the open process, instead of writing each one to a file first and loading those. Unclear how much of a performance boost this would be.
I ran into a problem while trying to get audiogram to work on my Windows machine. I'm not a programmer, but it's easy enough to follow the steps laid out here.
However, I ran into a problem when trying to create a custom theme for audiogram. I edited the file and put in the correct sized background files, but when I started up audiogram the themes weren't there. I couldn't select them and only the defaults were present.
It feels like the themes aren't being copied into the container, and through some testing that seems to be the case, but I have no idea how to fix that.
Rather than having to leave your browser open, people could enter an email address to get an email from the server with a link once the video is ready. This might involve dealing with lots of email handling and spam filter issues. Only really matters if rendering a REALLY long video.
A good-enough alternative might be to immediately give someone the future URL of the video so they can save it and check back later to see if it's done.
I think I'm almost there --- when I hit generate, this happens:
Audiogram
{"readyState":0,"status":0,"statusText":"error"}
Any suggestions?
Kent
- If the worker option is truthy, require Redis, localhost by default
- Get the lineHeight of a caption from the font definition
- $.extend(), the latter could be replaced with underscore and the former could be a vanilla XHR
Have a mostly-working branch that allows for entering and positioning multiple captions, but the manual entry/interface is a real drag, especially for a long video. Worth exploring some improvements.
Forced aligners?
Using a forced aligner like Gentle to take a bulk transcript and automatically time it to the audio would help - then you could type in the whole thing (or paste from a transcript) and it could automatically break it into chunks.
Pros: Much faster if you have a full transcript already (paste the whole thing rather than pasting line-by-line and tweaking the timing).
Cons: Not much faster if you don't have a transcript. A lot more code complexity (all the OSS aligners seem to be Python). Would probably still need to tweak the captions into sensible breaks (e.g. avoid orphan words).
Auto transcribe
Use some sort of speech-to-text to take a first pass at transcribing the audio. In-browser options include PocketSphinx and the Web Speech API in certain browsers. Server-side options include normal Sphinx or the Watson API.
Pros: Great when it works.
Cons: Doesn't always work, especially for non-English languages or clips with music, background noise, etc. Still doesn't work out timing. If it's server-side, would require a second round-trip before the form submission. Could take a long time for long pieces of audio.
Parse timestamped transcripts?
Could allow people to upload an SRT or some other timecoded transcript format in the editor. The parsing wouldn't be that hard, but it's unclear how often audio orgs use these.
Right now authentication is roll-your-own; it would be nice to build in at least one option, and Twitter is probably the easiest. Would entail:
- Adding a few Express routes to handle the OAuth flow
- Adding a list of allowed users to check a login against
Include diagrams showing where the various progress bars, play buttons, and other obscuring overlays show up on different platforms so you can better avoid them when designing a theme.
Rather than sequential queue with callbacks, it might be a little more legible for the various rendering steps to be a chain of promises.
A few reports of audio getting gradually out of sync with the waveform in a long video.
It doesn't seem like the number of frames generated is wrong, so either:
- The detected waveform data is getting offset by a little bit (a problem with waveform or the math to split up the samples)
- FFmpeg's frame spacing is getting offset somehow
Things to investigate:
- Audio file rounding
- Setting the framerate on inputs vs. output
- Disabling -shortest
- Waveforming a long alternating-tone file and confirming that the frames line up
- Specifying -aframes
Allow users to drag an audio file onto a droppable target instead of a standard file input. Maybe more trouble than it's worth.