video-dev / video-transcoding-api
Agnostic API to transcode media assets across different cloud services.
License: Apache License 2.0
I've discussed this with @fsouza: storing the provider directly in the providers list might reduce the overall complexity of the code. We should remove the factory and create a Validate() function to be called from init().
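A minimal sketch of what the registry-based approach could look like. The Provider interface, fakeProvider type, and Register function are illustrative names, not the project's actual code; the point is that each provider package's init() validates itself and stores the provider directly in a flat list, removing the factory:

```go
package main

import "fmt"

// Provider is a trimmed-down stand-in for the API's provider interface;
// the real interface has more methods.
type Provider interface {
	Validate() error
}

// fakeProvider is a hypothetical provider used only for illustration.
type fakeProvider struct{ configured bool }

func (p fakeProvider) Validate() error {
	if !p.configured {
		return fmt.Errorf("provider is not configured")
	}
	return nil
}

// providers is the flat list that would replace the factory.
var providers = map[string]Provider{}

// Register validates the provider and stores it directly in the list;
// it would be called from each provider package's init().
func Register(name string, p Provider) error {
	if err := p.Validate(); err != nil {
		return err
	}
	providers[name] = p
	return nil
}

func main() {
	fmt.Println(Register("fake", fakeProvider{configured: true})) // <nil>
	fmt.Println(Register("broken", fakeProvider{}))               // provider is not configured
}
```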
encodingcom and zencoder report different values for the path node in the JSON response of the jobs endpoint: Encoding.com returns the S3 path, while Zencoder returns the HTTP path.
For Example (bucketname, jobId and filename are placeholders):
Encoding: s3://access_key:secret_access@bucketname/jobId/filename
Zencoder: http://bucketname.s3.amazonaws.com/bucketname/jobId/filename
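One way to make the two providers consistent would be to normalize Zencoder's virtual-hosted-style S3 HTTP URL into an s3:// URL. This is only a sketch under the assumption that the host always has the `BUCKET.s3.amazonaws.com` shape; the real code would also need to handle regional endpoints and credentials:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeZencoderPath rewrites an S3 HTTP URL, as returned by
// Zencoder, into an s3:// URL so both providers report the same shape.
func normalizeZencoderPath(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	// Assumes virtual-hosted-style addressing: bucket is the host prefix.
	bucket := strings.TrimSuffix(u.Host, ".s3.amazonaws.com")
	return "s3://" + bucket + u.Path, nil
}

func main() {
	p, _ := normalizeZencoderPath("http://mybucket.s3.amazonaws.com/jobID/file.mp4")
	fmt.Println(p) // s3://mybucket/jobID/file.mp4
}
```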
The Zencoder integration code should be smart enough to check whether any HLS level matches an H264/MP4 output (same bitrate, resolution, etc.). If so, the implementation should reuse it with copy_audio and copy_video set to true. This avoids the transcoding step on the Zencoder side, which makes the job much faster and cheaper.
Here's a full JSON example:
"outputs": [
{
"label": "mp4_low",
"format": "mp4",
"size": "430x240",
"prepare_for_segmenting": "hls",
"url": "s3://test/output_low.mp4"
},
{
"label": "mp4_medium",
"format": "mp4",
"size": "854x480",
"prepare_for_segmenting": "hls",
"url": "s3://test/output_medium.mp4"
},
{
"label": "mp4_high",
"format": "mp4",
"size": "1280x720",
"prepare_for_segmenting": "hls",
"url": "s3://test/output_high.mp4"
},
{
"source": "mp4_low",
"label": "hls_low",
"url": "s3://test/hls_low/playlist.m3u8",
"format": "ts",
"copy_video": true,
"copy_audio": true,
"type": "segmented"
},
{
"source": "mp4_medium",
"label": "hls_medium",
"url": "s3://test/hls_medium/playlist.m3u8",
"format": "ts",
"copy_video": true,
"copy_audio": true,
"type": "segmented"
},
{
"source": "mp4_high",
"label": "hls_high",
"url": "s3://test/hls_high/playlist.m3u8",
"format": "ts",
"copy_video": true,
"copy_audio": true,
"type": "segmented"
},
{
"type": "playlist",
"url": "s3://test/master.m3u8",
"streams": [
{
"path": "hls_low/playlist.m3u8",
"source": "hls_low"
},
{
"path": "hls_medium/playlist.m3u8",
"source": "hls_medium"
},
{
"path": "hls_high/playlist.m3u8",
"source": "hls_high"
}
]
}
]
Thanks @gaberussell for the heads up on this.
I tested with a profile set to Main but got a Baseline output.
In the video-transcoding-api, when one of the video dimensions (width or height) is not specified (i.e. set to 0 or omitted in the preset definition), we assume that the user wants to keep the aspect ratio, and we automatically set the other dimension. For example, if the source media has dimensions 1920x1080 and the preset specifies 0x720, the output media is going to be 1280x720.
We rely on the provider to support this feature, but that's not how Elastic Transcoder works: whenever one of the dimensions is not specified, it assumes that the user wants a 1920x1080 video.
After talking with people from Amazon, they recommended the following steps to make Elastic Transcoder keep the aspect ratio specifying only the height (it also works for when we want to specify only the width, just swap max height and max width definitions):
An important note: this only works if the source media dimensions are greater than the output dimensions.
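The dimension calculation described above can be sketched as follows; the function name is illustrative and the sketch ignores rounding to even dimensions, which a real encoder would need:

```go
package main

import "fmt"

// keepAspectRatio fills in a missing (zero) output dimension from the
// source dimensions, matching the behavior described above.
func keepAspectRatio(srcW, srcH, outW, outH int) (int, int) {
	switch {
	case outW == 0 && outH != 0:
		outW = srcW * outH / srcH
	case outH == 0 && outW != 0:
		outH = srcH * outW / srcW
	}
	return outW, outH
}

func main() {
	w, h := keepAspectRatio(1920, 1080, 0, 720)
	fmt.Println(w, h) // 1280 720
}
```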
Figure out what public hostname we'll be able to use for video files published to Akamai.
This is actually a question/discussion.
We have a MediaInfo field on the JobStatus struct, but a list of outputs on the JobOutput struct. Knowing that we can have multiple VideoCodecs, Widths and Heights for a given job, I'm wondering if it makes sense to maintain this MediaInfo struct.
Then we probably drop the profile-based transcoding, and delete a bunch of code.
We should support mapping array of structs so we can correctly save Job.Outputs.
We could at least return job IDs. This will help us debug what's going on.
Currently, when an Encoding.com job is in the status "Waiting for encoder", the transcoding-api reports it as started with progress=0. We should report the job status as queued.
PR #108 doesn't cover this yet. I'll work on a separate pull request for this.
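The fix boils down to mapping provider status strings to the API's job states. A sketch of that mapping; apart from "Waiting for encoder" -> queued, which is the point of this issue, the other status strings and state names below are illustrative assumptions:

```go
package main

import "fmt"

// statusFromEncodingCom maps an Encoding.com status string to the
// transcoding API's job state.
func statusFromEncodingCom(s string) string {
	switch s {
	case "Waiting for encoder":
		return "queued" // previously misreported as started with progress=0
	case "Processing", "Saving":
		return "started"
	case "Finished":
		return "finished"
	case "Error":
		return "failed"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(statusFromEncodingCom("Waiting for encoder")) // queued
}
```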
When running a Zencoder job for a WebM file using a preset that has no height or no width specified, I get the following error:
{
"error": "Error with provider \"zencoder\": 422 Unprocessable Entity"
}
The profile and profileLevel fields are related to video, so we should move them to the video object.
The rate control is always VBR, no matter whether we pass CBR or VBR in the preset.
We should create a playlist with all hls files when using adaptive streaming.
For consistency across GitHub:NYTimes repos in active development, please replace LICENSE with the NYT LICENSE.md found here:
https://github.com/NYTimes/license/blob/master/LICENSE.md
We should be able to Create, List and Delete presets using the API. The endpoints would probably look something like:
Creates a new preset. The body would contain the name of the preset and the encoding parameters. Example:
{
"name": "mp4_720p",
"params": {
"output": ["mp4"],
"size": {"height": 720},
"audioCodec": "dolby_aac",
"audioBitRate": "128k",
"audioChannelsNumber": "2",
"audioSampleRate": 48000,
"bitRate": "2500k",
"frameRate": "30",
"keepAspectRatio": true,
"videoCodec": "libx264",
"keyFrame": "90",
"audioVolume": 100,
"twoPassEncoding": true
}
}
Status codes:
- 200 if everything goes fine (maybe 201?)
- 409 if the preset already exists
- 500 if something goes wrong

List available presets. It will return a list of presets using a structure similar to the one defined above.
Status codes:
- 200 if everything goes fine
- 500 if something goes wrong

Delete the given preset.
Status codes:
- 200 if everything goes fine
- 404 if the preset doesn't exist
- 500 if anything goes wrong

We don't need a LocalPreset abstraction.
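The status-code contract proposed above could be sketched like this. The in-memory store and function names are placeholders, not the API's actual implementation; the real handlers would back onto the preset storage and return JSON bodies:

```go
package main

import "fmt"

// presets stands in for the real preset store.
var presets = map[string]map[string]interface{}{}

// createPreset returns 200 on success and 409 if the preset already exists.
func createPreset(name string, params map[string]interface{}) int {
	if _, exists := presets[name]; exists {
		return 409
	}
	presets[name] = params
	return 200 // or 201, as discussed above
}

// deletePreset returns 200 on success and 404 if the preset doesn't exist.
func deletePreset(name string) int {
	if _, exists := presets[name]; !exists {
		return 404
	}
	delete(presets, name)
	return 200
}

func main() {
	fmt.Println(createPreset("mp4_720p", map[string]interface{}{"videoCodec": "libx264"})) // 200
	fmt.Println(createPreset("mp4_720p", nil))                                             // 409
	fmt.Println(deletePreset("mp4_720p"))                                                  // 200
	fmt.Println(deletePreset("mp4_720p"))                                                  // 404
}
```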
At first glance, at least in the way we're calling its API, Elemental Conductor doesn't seem to allow existing presets to be updated. When using the transcoding API to create/update existing presets in Elemental, we get the error Name has already been taken.
We should investigate whether we're calling Elemental's API the wrong way. If not, we should mimic what Encoding.com does, which is to overwrite the existing preset.
The rate control is always VBR, no matter whether we pass CBR or VBR in the preset.
Each test is taking one second. It looks like the SaveJob function is indeed that slow; maybe it's the WATCH + MULTI usage (I don't think Redis itself is slow, but it might be the library we're using to communicate with Redis).
I will have a look at them to see if we can improve it. I'm opening a new issue so I don't forget to handle it in the future.
Here's the error returned:
{"Results":{"elastictranscoder":{"PresetID":"","Error":"creating preset: ValidationException: 2
validation errors detected: Value 'libvpx' at 'video.codec' failed to satisfy constraint: Member
must satisfy regular expression pattern: (^H\\.264$)|(^vp8$)|(^vp9$)|(^mpeg2$)|(^gif$); Value
'libvorbis' at 'audio.codec' failed to satisfy constraint: Member must satisfy regular expression
pattern: (^AAC$)|(^vorbis$)|(^mp3$)|(^mp2$)|(^pcm$)|(^flac$)\n\tstatus code: 400, request id:
95be2037-85c7-11e6-ad62-251e8895ad74"}...
Right now, we're using the values that Encoding.com supports. We should look into mapping video and audio codecs, just like we did with H.264.
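A lookup table, similar to the existing H.264 mapping, could translate Encoding.com-style codec names into the ones Elastic Transcoder accepts. The vp8 and vorbis target values come straight from the regex in the validation error above; the other entries are plausible assumptions, not confirmed mappings:

```go
package main

import "fmt"

// elasticTranscoderCodecs maps preset codec names (Encoding.com style)
// to the names Elastic Transcoder's API accepts. Sketch with a few
// example entries only.
var elasticTranscoderCodecs = map[string]string{
	"libx264":   "H.264",
	"libvpx":    "vp8",
	"libvorbis": "vorbis",
	"dolby_aac": "AAC",
}

func codecForElasticTranscoder(name string) (string, error) {
	c, ok := elasticTranscoderCodecs[name]
	if !ok {
		return "", fmt.Errorf("unsupported codec: %s", name)
	}
	return c, nil
}

func main() {
	c, _ := codecForElasticTranscoder("libvpx")
	fmt.Println(c) // vp8
}
```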
After this #129 (comment) I realized we have two StreamingParams structs with different shapes (one is missing PlaylistFilename). When triggering new transcoding jobs we have a db.StreamingParams inside Job and a provider.StreamingParams inside TranscodingProfile.
We are using the one inside TranscodeProfile for the real jobs, so I'm going to change Zencoder's implementation and tests to point to it as well. We need to figure out a way to normalize this.
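A sketch of the divergence described above. Which struct carries PlaylistFilename, and the other field names, are assumptions made for illustration; the real shapes live in the db and provider packages:

```go
package main

import "fmt"

// dbStreamingParams stands in for db.StreamingParams (inside Job).
type dbStreamingParams struct {
	SegmentDuration uint
	Protocol        string
}

// providerStreamingParams stands in for provider.StreamingParams
// (inside TranscodingProfile); assumed here to be the one carrying
// the extra PlaylistFilename field.
type providerStreamingParams struct {
	SegmentDuration  uint
	Protocol         string
	PlaylistFilename string
}

func main() {
	p := providerStreamingParams{SegmentDuration: 5, Protocol: "hls", PlaylistFilename: "master.m3u8"}
	d := dbStreamingParams{SegmentDuration: 5, Protocol: "hls"}
	// The two structs can't be used interchangeably without a
	// conversion step, which is what needs normalizing.
	fmt.Println(p.PlaylistFilename, d.Protocol) // master.m3u8 hls
}
```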
Currently we force users to use key-based credentials, but when running on AWS, it's possible to control access to Elastic Transcoder resources using IAM roles and instance profiles.
Moving from #148.
Since we have the concept of LocalPresets now, we can support all encoding services even if they don't have APIs to handle presets. Do you guys think we should still expose PresetMaps?
Hey guys,
I caught your article on NYTimes today - great work!
Do you have any users requesting that logos (or any sort of overlay) be applied during transcode? Or applying a sting to videos during transcode? Are there any plans to include features like that in the future, or are those features out of scope?
Thanks,
Nick
If we don't specify streaming parameters we should be able to transcode using default parameters. Right now we're not able to transcode videos with no segment duration on Elastic Transcoder.
When fetching the status of a job transcoded using Zencoder, we receive something like this:
{
"output": {
"files": [
{
"container": "mpeg4",
"height": 608,
"path": "https://zencoder-temp-storage-us-east-1.s3.amazonaws.com/o/20161101/2a55e4f59cce2999ec0c3824f8e92e1b/aeb7c8e34f9cbead7884ab0b4c552f36.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI456JQ76GBU7FECA%2F20161101%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20161101T021532Z&X-Amz-Expires=86397&X-Amz-SignedHeaders=host&X-Amz-Signature=1ed9b1888d5028a48f693090bcaaf9684a823381cf365bf1e22015e75b333bf1",
"videoCodec": "h264",
"width": 1080
}
]
},
"progress": 0,
"providerJobId": "315429677",
"providerName": "zencoder",
"sourceInfo": {
"duration": 102569000,
"height": 1080,
"videoCodec": "h264",
"width": 1920
},
"status": "finished"
}
The Zencoder implementation is not taking into account the ZENCODER_DESTINATION environment variable when triggering the job.
PR #108 doesn't cover this yet. I'll work on a separate pull request for this.
refs #131
Figure out how to setup transcoding output bucket permissions so that it can be written to by the transcoding-api nodes and mounted by the distribution-api nodes, while not making the bucket public.