videoflint / cabbage

A video composition framework build on top of AVFoundation. It's simple to use and easy to extend.

License: MIT License

avfoundation videoediting video video-processing audio

cabbage's Introduction

Chinese documentation: 中文使用文档

A high-level video composition framework built on top of AVFoundation. It's simple to use and easy to extend. Use it to make life easier if you are implementing a video composition feature.

This project is built around a Timeline concept. Any resource can be put into a Timeline. A resource can be an image, video, audio, GIF, and so on.

Features

  • Build the result content object in only a few steps:
  1. Create resources
  2. Set configurations
  3. Put them into a Timeline
  4. Use the Timeline to generate an AVPlayerItem/AVAssetImageGenerator/AVAssetExportSession
  • Resource: supports video, audio, and image. Resource is extendable; you can create your own customized resource type, e.g. a GIF image resource.
  • Video configuration support: transform, opacity, and so on. The configuration is extendable.
  • Audio configuration support: change volume or process raw audio data in real time. The configuration is extendable.
  • Transition: a clip can transition to/from its previous and next clips.
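
For example, the transition feature can be wired up like this (a minimal sketch based on the demo code in this document; `clipAURL` and `clipBURL` are placeholder asset URLs):

```swift
import AVFoundation
import Cabbage

// Two clips that should cross-dissolve into each other.
let itemA = TrackItem(resource: AVAssetTrackResource(asset: AVAsset(url: clipAURL)))
let itemB = TrackItem(resource: AVAssetTrackResource(asset: AVAsset(url: clipBURL)))

// Attach transitions to the first clip; they apply between itemA and itemB.
let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: 600)
itemA.videoTransition = CrossDissolveTransition(duration: transitionDuration)
itemA.audioTransition = FadeInOutAudioTransition(duration: transitionDuration)

let timeline = Timeline()
timeline.videoChannel = [itemA, itemB]
timeline.audioChannel = [itemA, itemB]
```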

Usage

Below is the simplest example: create a resource from an AVAsset, set the video frame's scale mode to aspect fill, insert the trackItem into a timeline, and finally use CompositionGenerator to build an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.

// 1. Create a resource
let asset: AVAsset = ...     
let resource = AVAssetTrackResource(asset: asset)

// 2. Create a TrackItem instance; a TrackItem holds a resource's video and audio configuration
let trackItem = TrackItem(resource: resource)
// Set the video scale mode on canvas
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill

// 3. Add TrackItem to timeline
let timeline = Timeline()
timeline.videoChannel = [trackItem]
timeline.audioChannel = [trackItem]

// 4. Use CompositionGenerator to create AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem
let compositionGenerator = CompositionGenerator(timeline: timeline)
// Set the video canvas's size
compositionGenerator.renderSize = CGSize(width: 1920, height: 1080)
let exportSession = compositionGenerator.buildExportSession(presetName: AVAssetExportPresetMediumQuality)
let playerItem = compositionGenerator.buildPlayerItem()
let imageGenerator = compositionGenerator.buildImageGenerator()

Basic Concept

Timeline

Used to organize resources; the developer is responsible for placing resources at the right time ranges.

CompositionGenerator

Use CompositionGenerator to create an AVAssetExportSession/AVAssetImageGenerator/AVPlayerItem.

CompositionGenerator translates a Timeline instance into the AVFoundation API.

Resource

A Resource provides image and/or audio data, along with timing information about that data.

Currently supported:

  • Image type:
    • ImageResource: provides a CIImage as the video frame
    • PHAssetImageResource: provides a PHAsset and loads a CIImage as the video frame
    • AVAssetReaderImageResource: provides an AVAsset; reads sample buffers as video frames using AVAssetReader
    • AVAssetReverseImageResource: provides an AVAsset; reads sample buffers as video frames using AVAssetReader, but in reverse order
  • Video & audio type:
    • AVAssetTrackResource: provides an AVAsset; uses its AVAssetTracks as video frames and audio frames.
    • PHAssetTrackResource: provides a PHAsset and loads an AVAsset from it.
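
As a sketch of the image type, a still image can be shown for a fixed duration (this mirrors the ImageResource usage in the issues below; `imageURL` is a placeholder for an image file URL):

```swift
import AVFoundation
import CoreImage
import Cabbage

// Show a still image as a 3-second video clip.
let image = CIImage(contentsOf: imageURL)!  // imageURL is a placeholder
let resource = ImageResource(image: image,
                             duration: CMTime(seconds: 3, preferredTimescale: 600))
let imageTrackItem = TrackItem(resource: resource)
```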

TrackItem

A TrackItem contains a Resource, a VideoConfiguration, and an AudioConfiguration.

Currently supported:

  • Video Configuration
    • baseContentMode, the video frame's scale mode based on the canvas size
    • transform
    • opacity
    • configurations, where custom filters can be added
  • Audio Configuration
    • volume
    • nodes, which apply custom audio processing operations, e.g. VolumeAudioConfiguration
  • videoTransition, audioTransition
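
A short sketch of these configuration options (property paths follow the README example above; the `opacity` property name is an assumption based on the feature list):

```swift
import Cabbage

let trackItem = TrackItem(resource: resource)  // `resource` created earlier

// Video: scale mode relative to the canvas, plus opacity.
trackItem.configuration.videoConfiguration.baseContentMode = .aspectFill
trackItem.configuration.videoConfiguration.opacity = 0.8  // assumed property name

// Audio: halve the clip's volume.
trackItem.configuration.audioConfiguration.volume = 0.5
```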

Advanced usage

Custom Resource

You can provide a custom resource type by subclassing Resource and implementing func tracks(for type: AVMediaType) -> [AVAssetTrack].

By subclassing ImageResource, you can use a CIImage as the video frame.
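
A sketch of such a subclass; the override point shown here (`image(at:renderSize:)`) is an assumption, so check ImageResource's actual API in your Cabbage version:

```swift
import AVFoundation
import CoreImage
import Cabbage

// A hypothetical resource that renders a solid red frame at every time.
class SolidColorResource: ImageResource {
    override func image(at time: CMTime, renderSize: CGSize) -> CIImage? {
        return CIImage(color: CIColor(red: 1, green: 0, blue: 0))
            .cropped(to: CGRect(origin: .zero, size: renderSize))
    }
}
```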

Custom Image Filter

An image filter needs to implement the VideoConfigurationProtocol protocol; it can then be added to TrackItem.configuration.videoConfiguration.configurations.

KeyframeVideoConfiguration is a concrete class.
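
A filter sketch following that pattern. The requirement shown (`applyEffect(to:info:)`) is an assumed signature; consult VideoConfigurationProtocol for the real one:

```swift
import CoreImage
import Cabbage

// A hypothetical sepia filter added to a track item's configurations.
class SepiaConfiguration: NSObject, VideoConfigurationProtocol {
    func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        let filter = CIFilter(name: "CISepiaTone")!
        filter.setValue(sourceImage, forKey: kCIInputImageKey)
        filter.setValue(0.8, forKey: kCIInputIntensityKey)
        return filter.outputImage ?? sourceImage
    }
}

// trackItem.configuration.videoConfiguration.configurations.append(SepiaConfiguration())
```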

Custom Audio Mixer

An audio mixer needs to implement the AudioConfigurationProtocol protocol; it can then be added to TrackItem.configuration.audioConfiguration.nodes.

VolumeAudioConfiguration is a concrete class.
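
An audio node sketch under the same pattern. The requirement shown (`process(timeRange:bufferListInOut:)`) is an assumed signature modeled on MTAudioProcessingTap callbacks; consult AudioConfigurationProtocol for the real one:

```swift
import AVFoundation
import Cabbage

// A hypothetical node that halves the volume by scaling raw Float samples.
class HalfVolumeNode: AudioConfigurationProtocol {
    func process(timeRange: CMTimeRange, bufferListInOut: UnsafeMutablePointer<AudioBufferList>) {
        let buffers = UnsafeMutableAudioBufferListPointer(bufferListInOut)
        for buffer in buffers {
            guard let samples = buffer.mData?.assumingMemoryBound(to: Float.self) else { continue }
            let count = Int(buffer.mDataByteSize) / MemoryLayout<Float>.size
            for i in 0..<count { samples[i] *= 0.5 }
        }
    }
}

// trackItem.configuration.audioConfiguration.nodes.append(HalfVolumeNode())
```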

Why I created this project

AVFoundation already provides powerful composition APIs for video and audio, but these APIs are far from easy to use.

1. AVComposition

We need to know how and when to connect different tracks. Say we save the time range info for each track; eventually we realize this time range info is very easy to break. Consider the scenarios below:

  • Change previous track's time range info
  • Change speed
  • Add new track
  • Add/remove transition

These operations affect the timeline, and every track's time range info needs to be updated.

Worse, AVComposition only supports video tracks and audio tracks. If we want to combine photos and videos, it's very difficult to implement.

2. AVVideoComposition

Use AVVideoCompositionInstruction to construct the timeline, and AVVideoCompositionLayerInstruction to configure each track's transform. If we want to operate on raw video frame data, we need to implement the AVVideoCompositing protocol.

After writing this code, I realized there was a lot of code unrelated to business logic, and it should be encapsulated.

3. Difficult to extend features

AVFoundation only supports a few basic composition features. As far as I know, it can only change a video frame's transform and the audio volume. If a developer wants to implement other features, e.g. applying a filter to a video frame, they need to implement AVVideoComposition's AVVideoCompositing protocol, and the workload suddenly becomes very large.

Life is hard; why should I write hard code too? So I created Cabbage: an easy-to-understand API with flexible extensibility.

Installation

CocoaPods

platform :ios, '9.0'
use_frameworks!

target 'MyApp' do
  # your other pod
  # ...
  pod 'VFCabbage'
end

Manually

It is not recommended to install the framework manually, but if you have to, you can:

  • Simply drag the Cabbage/Sources folder into your project.
  • Or add Cabbage as a submodule:
$ git submodule add https://github.com/VideoFlint/Cabbage.git

Requirements

  • iOS 9.0+
  • Swift 4.x

Projects using Cabbage

  • VideoCat: A demo project demonstrating how to use Cabbage.

LICENSE

Under MIT

Special Thanks

cabbage's People

Contributors

abyswifter, briannadoubt, dzhurov-8bars, gazadge, iallenc, morningstarj, touyu, vitoziv


cabbage's Issues

Black video when adding 2 overlays with different start times

I wrote a simple demo, as below:

  let backgroundTrackItem: TrackItem = {
        let url = Bundle.main.url(forResource: "bamboo", withExtension: "mp4")!
        let resource = AVAssetTrackResource(asset: AVAsset(url: url))
        let trackItem = TrackItem(resource: resource)
        trackItem.videoConfiguration.contentMode = .aspectFit
        return trackItem
  }()


  let overlay1: TrackItem = {

        let url = Bundle.main.url(forResource: "filmstrip_background1", withExtension: "png")!
        let image = CIImage(contentsOf: url)!
        let resource = ImageResource(image: image, duration: CMTime.init(seconds: 7, preferredTimescale: 600))
        let trackItem = TrackItem(resource: resource)
        trackItem.startTime = CMTime.init(seconds: 0, preferredTimescale: 600)
        trackItem.videoConfiguration.contentMode = .aspectFit
        let overlayHeight = renderSize.width/4
        
        let frame = CGRect.init(x: 0, y: 0, width: renderSize.width, height: overlayHeight)
        trackItem.videoConfiguration.contentMode = .custom
        trackItem.videoConfiguration.frame = frame;
        return trackItem
    }()


 let overlay2: TrackItem = {

       let url = Bundle.main.url(forResource: "black_overlay", withExtension: "png")!
        let image = CIImage(contentsOf: url)!
        let resource = ImageResource(image: image, duration: CMTime.init(seconds: 7, preferredTimescale: 600))
        let trackItem = TrackItem(resource: resource)
        trackItem.startTime = CMTime.init(seconds: 2, preferredTimescale: 600)

        trackItem.videoConfiguration.contentMode = .custom
        trackItem.videoConfiguration.frame = CGRect(x: 0, y: 0, width: renderSize.width/4, height: renderSize.height/4)
        return trackItem

    }()

  let timeline = Timeline()
  timeline.videoChannel = [backgroundTrackItem]
  timeline.overlays = [overlay1, overlay2]

 do {
           try Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
    } catch {
        assert(false, error.localizedDescription)
    }
    timeline.renderSize = renderSize;

  let compositionGenerator = CompositionGenerator(timeline: timeline)
  let playerItem = compositionGenerator.buildPlayerItem()
  return playerItem

This code always returns a black video. If I change the start time of overlay2 to 0, it works well.
So how can I customize the start time of each overlay on the video timeline?

P/s: Another question: how can I add a text overlay on the video?

How to create many audio track items with different start times?

I want to build an application that adds multiple music tracks to a video.
I have one video track item and several audio track items.
I need to mix several audio tracks (with different start times) with the video.
Ref: I have referenced this page --> https://github.com/vitoziv/VideoCat
Example:
video: ___________[=======================] video track
audios : [--------===========] audio track 1
____________[-----========] audio track 2
__________________[=======================--------] audio track 3
____[-------------=======================---------] audio track 4
Note:
====== : available
-------- : unavailable

     video: ______________________[=======================] video track
     audio track 1:___[~~~~~~~~===========] offsetTime < 0
     audio track 2:_______________~~~~~~~~[===========] offsetTime > 0
     ==> ~~~~~ : offsetTime

    
     video__:[========================60s===================] total
     result: 60s --> 20s with lowtime 20s, uptime 40s
     trimed_:[[-----20s-----][======20s=====][______20s_____]
     trimed_:[[+++++++++++++++++++++++++++++]
     ==> [----] lowtime
     ==> [+++] up time
    
   convenience init --> AudioData :
    self.offsetTime = offsetTime
    self.lowTime = lowTime > 0.0 ? lowTime : 0.0
    let upValue = max(upTime, lowTime)
    self.upTime = upValue
    let durationValue = upValue - lowTime
    self.duration = durationValue
    let startTime = lowTime + offsetTime
    self.startTime = startTime
    let distance = min(durationValue, videoDuration)
    let available = distance - startTime
    self.available = available

My code:
Video track item:

    let asset = AVAsset(url: url)
    let resource = AVAssetTrackResource(asset: asset)
    let lowTime = CMTime(seconds: video.lowTime, preferredTimescale: 600) 
    //default lowtime = 0.0
    let durationTime = CMTime(seconds: video.duration, preferredTimescale: 600)
    //default video.duration = total time of video
    resource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: durationTime)
    let videoTrackItem = TrackItem(resource: resource)
    videoTrackItem.startTime = lowTime
    videoTrackItem.videoConfiguration.contentMode = .aspectFill
    videoTrackItem.audioConfiguration.volume = video.volume

    context.viewModel.addVideoTrackItem(videoTrackItem)
    context.videoView.player.replaceCurrentItem(context.viewModel.playerItem)
    context.timelineView.reload(with: context.viewModel.videoTrackItems)

An audio track item:

    let asset = AVAsset(url: url)
    let resource = AVAssetTrackResource(asset: asset)
    
    print("lowtime \(audio.lowTime)") // default low time = 0.0
    print("upTime \(audio.upTime)")
    print("startTime \(audio.startTime)")
    print("available \(audio.available)")
    print("videoDuration \(self.durationTimeOfVideo)")

    let startTime = CMTime(seconds: audio.startTime, preferredTimescale: 600)
    let availableTime = CMTime(seconds: audio.available, preferredTimescale: 600)

    resource.selectedTimeRange = CMTimeRange(start: CMTime.zero, duration: availableTime)
    let trackItem = TrackItem(resource: resource)
    trackItem.startTime = startTime
    trackItem.audioConfiguration.volume = audio.volume

TimelineViewModel

 class TimelineManager {
       static let current = TimelineManager()
      var timeline = Timeline()
 }
 
 class TimelineViewModel {

      // MARK: - Vars
     private(set) var audioTrackItems = [TrackItem]()
     private(set) var videoTrackItems = [TrackItem]()
     private(set) var renderSize: CGSize = CGSize.zero
     private(set) var lut: String = "original_lut"
     private(set) var playerItem = AVPlayerItem(asset: AVComposition())

    func buildTimeline() -> Timeline {
           let timeline = TimelineManager.current.timeline
           reloadTimeline(timeline)
           return timeline
    }

    // MARK: - Add/Replace

    func addVideoTrackItem(_ trackItem: TrackItem) {
           videoTrackItems.append(trackItem)
           reloadPlayerItems()
    }

    func insertTrackItem(_ trackItem: TrackItem, at index: Int) {
         guard audioTrackItems.count >= index else { return }
         audioTrackItems.insert(trackItem, at: index)
         reloadPlayerItems()
    }

   func updateTrackItem(_ trackItem: TrackItem, at index: Int) {
       guard audioTrackItems.count > index else { return }
       audioTrackItems[index] = trackItem
       reloadPlayerItems()
  }

   func removeTrackItem(_ trackItem: TrackItem) {
        guard let index = audioTrackItems.index(of: trackItem) else { return }
        audioTrackItems.remove(at: index)
        reloadPlayerItems()
   }

   func removeTrackItem(at index: Int) {
         guard audioTrackItems.count > index else { return }
         audioTrackItems.remove(at: index)
         reloadPlayerItems()
   }

   func removeAllAudioTrackItems() {
        audioTrackItems.removeAll()
        reloadPlayerItems()
  }

  func removeAll() {
       audioTrackItems.removeAll()
       videoTrackItems.removeAll()
       reloadPlayerItems()
  }

   func reloadPlayerItems() {
          let timeline = TimelineManager.current.timeline
          timeline.renderSize = renderSize
         reloadTimeline(timeline)
         do {
               try Timeline.reloadVideoStartTime(providers: videoTrackItems)
         } catch {
              assert(false, error.localizedDescription)
         }
         build(with: timeline)
     }

   fileprivate func build(with timeline: Timeline) {
         let compositionGenerator = CompositionGenerator(timeline: timeline)
        let playerItem = compositionGenerator.buildPlayerItem()
        self.playerItem = playerItem
   }

   fileprivate func reloadTimeline(_ timeline: Timeline) {
        timeline.videoChannel = videoTrackItems
        timeline.audios = videoTrackItems + audioTrackItems
    }
 }

 extension TrackItem {
       func reloadTimelineDuration() {
            self.duration = self.resource.selectedTimeRange.duration
       }
}

==> Errors:

  • When offsetTime < 0:
    video: ___________________[=======================] video track
    audio track:
    [~~~~~~~~~===========] offsetTime < 0
    --> the audio track plays with an incorrect start time

Images cannot be played in VIPlayer

Following the VideoCat demo, I selected an image from the photo library:

let resource = PHAssetImageResource.init(asset: asset, duration: CMTime.init(value: 3000, timescale: 600))
            guard let task = resource.prepare(progressHandler: progressHandler, completion: { (status, error) in
                if status == .avaliable {
                    resource.selectedTimeRange = CMTimeRange.init(start: CMTime.zero, end: resource.duration)
                    let trackItem: TrackItem = TrackItem(resource: resource)
                    let transition = CrossDissolveTransition()
                    transition.duration = CMTime(value: 900, timescale: 600)
                    trackItem.videoTransition = transition
                    let audioTransition = FadeInOutAudioTransition(duration: CMTime(value: 66150, timescale: 44100))
                    trackItem.audioTransition = audioTransition
                    if resource.isKind(of: ImageResource.self) {
                        trackItem.videoConfiguration.contentMode = .custom
                    } else {
                        trackItem.videoConfiguration.contentMode = .aspectFill
                    }
                    complete(trackItem)
                } else {
                    Log.error("image track status is \(status), check prepare function, error: \(error?.localizedDescription ?? "")")
                    complete(nil)
                }
            })

I then checked: the image is selected correctly, and inside resource.prepare the image exists and the status is available.

public func reloadPlayerItem(_ items: [TrackItem]) -> AVPlayerItem {
        let timeLine = TimeLineManager.current.timeline
        let width = UIScreen.main.bounds.width * UIScreen.main.scale
        let height = width
        timeLine.videoChannel = items
        timeLine.audioChannel = items
        do {
            try Timeline.reloadVideoStartTime(providers: timeLine.videoChannel)
        } catch {
            assert(false, error.localizedDescription)
        }
        timeLine.renderSize = CGSize.init(width: width, height: height)
        let compositonGenerator = CompositionGenerator.init(timeline: timeLine)
        return compositonGenerator.buildPlayerItem()
    }

This is the buildItem method. Videos and Live Photos both work fine; only images fail to play, and the duration shown in the player is also wrong.

master branch
Xcode 10.2.2
Swift 5.0

Adding audios renders black playerItem if tracks have transitions

I'm creating a video editing app and I have functionality to add transitions and audios.

When adding transitions, without added audios, it renders fine, with transitions and all.

Also when adding audios, without added transitions, it renders fine, audio plays fine.

However, if I add a transition and audio on the timeline, only the audio plays and the video is only black. Here's my code.

private func buildTracks() {
	var videoChannel: [TrackItem] = []
	var audioChannel: [TrackItem] = []
	
	for asset in assets {
		let resource = trackResource(for: asset)

		let trackItem = TrackItem(resource: resource)
		trackItem.videoConfiguration.contentMode = .aspectFit
		
		switch asset.transition {
		case 1:
			let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
			let transition = CrossDissolveTransition(duration: transitionDuration)
			trackItem.videoTransition = transition
			print("CROSS DISSOLVE")
		case 2:
			let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
			let transition = FadeTransition(duration: transitionDuration)
			trackItem.videoTransition = transition
			print("FADE BLACK")
		case 3:
			let transitionDuration = CMTime(seconds: 0.5, preferredTimescale: preferredTimeScale)
			let transition = FadeTransition(duration: transitionDuration)
			trackItem.videoTransition = transition
			print("FADE WHITE")
		default:
			trackItem.videoTransition = nil
			print("NONE")
		}
		
		if let asset = asset as? VVideoAsset {
			trackItem.audioConfiguration.volume = asset.volume
		}
		
		videoChannel.append(trackItem)
		audioChannel.append(trackItem)
		
		let filterConfigurations = videoEdit.filters.map { FilterConfiguration(filter: $0, totalDuration: totalDuration) }
		trackItem.videoConfiguration.configurations = filterConfigurations
	}
	
	timeline.videoChannel = videoChannel
	timeline.audioChannel = audioChannel
}

private func buildAudios() -> [AudioProvider] {
	var audios: [AudioProvider] = []
	
	videoEdit.audios.forEach { (audio) in
		guard let audioURL = audio.audioAsset.mp3Path else {
			return
		}
		
		let audioAsset = AVAsset(url: audioURL)
		let resource = AVAssetTrackResource(asset: audioAsset)
		let duration = audio.duration * totalDuration
		
		resource.selectedTimeRange = CMTimeRange.init(start: CMTime.zero, end: CMTimeMakeWithSeconds(duration, preferredTimescale: preferredTimeScale))
		let audioTrackItem = TrackItem(resource: resource)
		audioTrackItem.audioConfiguration.volume = audio.volume
		audioTrackItem.startTime = CMTimeMakeWithSeconds(audio.componentStart * totalDuration, preferredTimescale: preferredTimeScale)
		
		audios.append(audioTrackItem)
	}
	
	return audios
}

private func buildAddedComponents() {
	timeline.audios = buildAudios()
	timeline.overlays = buildOverlays()
}

About TrackItem copying

When I fill the audio track the following way, I run into these problems:
1. Without a deep copy of the resource instance, every audio resource's selectedTimeRange always ends up matching the last loop iteration's setting.
2. With a deep copy of the resource instance, the audio resource's scaledDuration is always the full audio length.
Code below:

private func caculateMusicTrack(resource: AVAssetTrackResource, duration: CMTime) -> [TrackItem] {
        Log.out(">> Total VIDEO DURATION: \(duration.seconds)")
        Log.out(">> Music File Duration: \(resource.duration.seconds)")
        let numOfLoops = (duration.seconds - currentMusicStartOffset) / resource.duration.seconds
        let numOfLoopsRoundedUp = numOfLoops.rounded(.up)
        var sumPartsTotals = CMTime.zero
        var endS = CMTime.zero
        var result: [TrackItem] = []
        //Audio Trim
        for i in 0..<Int(numOfLoopsRoundedUp) {
            let mResource = resource.copy() as! AVAssetTrackResource
            Log.out(mResource)
            guard let musicAsset = mResource.asset else {
                continue
            }
            //Audio Trim
            let start = CMTimeMake(value: Int64(0.0 * 600), timescale: 600)
            if i == Int(numOfLoopsRoundedUp) - 1 { //is the last chunk of audio
                let lastChunkTimeFrac = numOfLoops.truncatingRemainder(dividingBy: 1) // ex 1.5 will give 0.5
                let lastChunkTimeSecs = musicAsset.duration.seconds * lastChunkTimeFrac //music from 0 to this value
                endS = CMTimeMake(value: Int64((lastChunkTimeSecs-0.05) * 600), timescale: 600)
            } else {
                endS = CMTimeMake(value: Int64(musicAsset.duration.seconds * 600), timescale: 600)
            }
            let timeOffset = CMTime.init(seconds: currentMusicStartOffset, preferredTimescale: 600)
            if i == 0 {
                let startTime = currentMusicStartOffset < 0 ? start - timeOffset : start
                mResource.selectedTimeRange = CMTimeRange.init(start: startTime, end: endS)
            } else {
                mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
            }
            mResource.selectedTimeRange = CMTimeRange(start:start , end: endS)
            Log.out("selectedStart:\(mResource.selectedTimeRange.start.seconds) - totalPart:\(mResource.selectedTimeRange.end.seconds)")
            let partMyTrackItem = TrackItem(resource: mResource)
            let zeroOffsetTime = CMTimeMultiply(musicAsset.duration, multiplier: Int32(i))
            if i == 0 {
                 partMyTrackItem.startTime = zeroOffsetTime + (currentMusicStartOffset < 0 ? CMTime.zero : timeOffset)
            } else {
                partMyTrackItem.startTime = zeroOffsetTime + timeOffset
            }
            partMyTrackItem.startTime = zeroOffsetTime
            Log.out("start:\(partMyTrackItem.startTime.seconds) - totalPart:\(mResource.scaledDuration.seconds)")
            sumPartsTotals = CMTimeAdd(sumPartsTotals, mResource.scaledDuration)
            result.append(partMyTrackItem)
        }
        return result
    }

How can I change the black background when creating a track item with an image resource?

I made a simple demo to add a text overlay on a video, following your suggestion:

(About text overlays, I suggest you add the text's image to Timeline.passingThroughVideoCompositionProvider)

It works, but I have an issue: the text overlay always has a black background.

I understand you use a black video to render the image as a video frame, but in this case how can I remove this black background?

P/s: How can I customize the video resource to create a track item from a different resource type, such as a GIF file?

Can't run on Xcode 9.4.1, iPhone X, iOS 11.4

dyld: Library not loaded: @rpath/libswiftAVFoundation.dylib
Referenced from: /var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Cabbage
Reason: no suitable image found. Did find:
/private/var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Frameworks/libswiftAVFoundation.dylib: code signing blocked mmap() of '/private/var/containers/Bundle/Application/769EC65D-11F5-4A32-A297-D95EF256AC93/Cabbage.app/Frameworks/libswiftAVFoundation.dylib'

ImageCompositionGroupProvider's initializer is not public

As the title says: after installing via pod, I can't create an ImageCompositionGroupProvider instance because there is no public initializer.
Also, could the videoComposition generated by CompositionGenerator's buildVideoComposition() method support a custom frameDuration? Currently all videos are uniformly processed at 30 FPS.

Change volume while previewing

Is it possible to control the volume of audio tracks, or of a video's audio, without rebuilding the timeline?

Optimize the channel structure

Discussion:
The current structure only supports one main audio channel and one main video channel; there is no way to add other channels.

Overlays can hold other video data, but because AVComposition adds them under different track IDs, there is currently a limit of at most 16.
The same applies to audios.

Goals:
Support multiple channels;
Optimize the overlays implementation so the number of overlays is not limited;
No better improvement suggestion for audios yet.

About transition animations

Hello, I'd really like to understand how your custom transition animations are implemented. How exactly do they work, and if I want to define my own transition style, how would I do that?

Start Video with Transition

Hi,

We'd like to add a transition to start the video but we're not sure how to accomplish this. For example, we'd like to fade in from black to start the video. Thank you for this great library!

Ambiguous use of operators

I'm using Xcode 10 beta 4.

I get 23 errors and the project doesn't build: ambiguous use of operator '+', '-', '==', etc.

Is there a way to change only the URL resource without having to rebuild the composition?

Hi Vitoziv!

I am currently using your library in some video editing projects, and it is really awesome. Thank you for your efforts in creating this library and making video editing easier.

While using your library I have encountered some issues that I would like to share, and I look forward to your feedback.

I often have to change track items and rebuild the player item, e.g. change the URL of a track item.
But I'm having performance issues, because building the player item takes a long time and uses a lot of device resources.

So is there a way for me to only apply the changes instead of rebuilding the whole player item?

Ex: In the code below, when I want to change the URL of bambooTrackItem, I currently rebuild the compositionGenerator and the player item.

Looking forward to hearing from you. Thank you!

let bambooTrackItem: TrackItem = {
            let url = Bundle.main.url(forResource: "bamboo", withExtension: "mp4")!
            let resource = AVAssetTrackResource(asset: AVAsset(url: url))
            let trackItem = TrackItem(resource: resource)
            trackItem.videoConfiguration.contentMode = .aspectFit
            return trackItem
        }()
        
        let overlayTrackItem: TrackItem = {
            let url = Bundle.main.url(forResource: "overlay", withExtension: "jpg")!
            let image = CIImage(contentsOf: url)!
            let resource = ImageResource(image: image, duration: CMTime.init(seconds: 5, preferredTimescale: 600))
            let trackItem = TrackItem(resource: resource)
            trackItem.videoConfiguration.contentMode = .aspectFit
            return trackItem
        }()
        
        let seaTrackItem: TrackItem = {
            let url = Bundle.main.url(forResource: "sea", withExtension: "mp4")!
            let resource = AVAssetTrackResource(asset: AVAsset(url: url))
            let trackItem = TrackItem(resource: resource)
            trackItem.videoConfiguration.contentMode = .aspectFit
            return trackItem
        }()
        
        let transitionDuration = CMTime(seconds: 2, preferredTimescale: 600)
        bambooTrackItem.videoTransition = PushTransition(duration: transitionDuration)
        bambooTrackItem.audioTransition = FadeInOutAudioTransition(duration: transitionDuration)
        
        overlayTrackItem.videoTransition = BoundingUpTransition(duration: transitionDuration)
        
        let timeline = Timeline()
        timeline.videoChannel = [bambooTrackItem, overlayTrackItem, seaTrackItem]
        timeline.audioChannel = [bambooTrackItem, seaTrackItem]
        
        do {
            try Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
        } catch {
            assert(false, error.localizedDescription)
        }
        timeline.renderSize = CGSize(width: 1920, height: 1080)
        
        let compositionGenerator = CompositionGenerator(timeline: timeline)
        let playerItem = compositionGenerator.buildPlayerItem()
        return playerItem

I'm using AVMutableVideoComposition's animationTool to add layers. Why can't I add stickers and subtitles? Does anyone have a solution?

public func buildVideoComposition() -> AVVideoComposition? {

    if let videoComposition = self.videoComposition, !needRebuildVideoComposition {
        return videoComposition
    }
    buildComposition()
    var layerInstructions: [VideoCompositionLayerInstruction] = []
    mainVideoTrackInfo.forEach { info in
        info.info.forEach({ (provider) in
            let layerInstruction = VideoCompositionLayerInstruction.init(trackID: info.track.trackID, videoCompositionProvider: provider)
            layerInstruction.prefferdTransform = info.track.preferredTransforms[provider.timeRange.vf_identifier]
            layerInstruction.timeRange = provider.timeRange
            layerInstruction.transition = provider.videoTransition
            layerInstructions.append(layerInstruction)
        })
    }
    overlayTrackInfo.forEach { (info) in
        let track = info.track
        let provider = info.info
        let layerInstruction = VideoCompositionLayerInstruction.init(trackID: track.trackID, videoCompositionProvider: provider)
        layerInstruction.prefferdTransform = track.preferredTransforms[provider.timeRange.vf_identifier]
        layerInstruction.timeRange = provider.timeRange
        layerInstructions.append(layerInstruction)
    }

    layerInstructions.sort { (left, right) -> Bool in
        return left.timeRange.start < right.timeRange.start
    }
    
    // Create multiple instructions; each instruction contains the layerInstructions whose time ranges intersect the instruction's time range.
    // When rendering a frame, the instruction can quickly find its layerInstructions.
    let layerInstructionsSlices = calculateSlices(for: layerInstructions)
    let mainTrackIDs = mainVideoTrackInfo.map({ $0.track.trackID })
    let instructions: [VideoCompositionInstruction] = layerInstructionsSlices.map({ (slice) in
        let trackIDs = slice.1.map({ $0.trackID })
        let instruction = VideoCompositionInstruction(theSourceTrackIDs: trackIDs as [NSValue], forTimeRange: slice.0)
        instruction.backgroundColor = timeline.backgroundColor
        instruction.layerInstructions = slice.1
        instruction.passingThroughVideoCompositionProvider = timeline.passingThroughVideoCompositionProvider
        instruction.mainTrackIDs = mainTrackIDs.filter({ trackIDs.contains($0) })
        return instruction
    })
    
    let videoComposition = AVMutableVideoComposition()

    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.renderSize = self.timeline.renderSize
    videoComposition.instructions = instructions
    videoComposition.customVideoCompositorClass = VideoCompositor.self
    // The following is the code I added for subtitles and stickers
    let layerTool = SubtitlePlayerLayerTool()
    layerTool.renderSize = self.timeline.renderSize
    layerTool.subtitlesAndStickersModel = compositionSubTitleAndStickerData.0
    videoComposition.animationTool = layerTool.makeAnimationTool()
    self.videoComposition = videoComposition
    return videoComposition
}
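Since the composition above installs a custom compositor (`customVideoCompositorClass = VideoCompositor.self`), an alternative to `AVVideoCompositionCoreAnimationTool` is the timeline's `passingThroughVideoCompositionProvider`, which the instructions above already forward to every frame. Below is a hedged sketch, assuming the provider protocol exposes an `applyEffect`-style hook that receives each rendered frame; check the exact protocol signature in your Cabbage version before relying on it:

```swift
import AVFoundation
import CoreImage

// Hypothetical subtitle provider: composites a pre-rendered subtitle image
// over every frame. The protocol name and method signature are assumptions.
class SubtitleProvider: VideoCompositionProvider {
    var subtitleImage: CIImage?        // pre-rendered text, e.g. from a CATextLayer snapshot
    var timeRange = CMTimeRange.zero   // when the subtitle is visible

    func applyEffect(to sourceImage: CIImage, at time: CMTime, renderSize: CGSize) -> CIImage {
        guard let subtitle = subtitleImage, timeRange.containsTime(time) else {
            return sourceImage
        }
        // Draw the subtitle on top of the video frame.
        return subtitle.composited(over: sourceImage)
    }
}

// timeline.passingThroughVideoCompositionProvider = SubtitleProvider()
```

Because the custom compositor draws this itself, the same code path covers both AVPlayerItem preview and AVAssetExportSession output. Note also that Apple documents AVVideoCompositionCoreAnimationTool for export/offline rendering; for real-time playback the documented route is AVSynchronizedLayer.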

Technical solution discussion

Hello!

We saw the following in your Chinese README:
"With CALayer support, you can bring any CoreAnimation animation that CALayer supports into the video frame. For example, with Lottie, a designer exports an animation configuration from AE, the client generates a CALayer from that configuration and adds it to AVVideoCompositionCoreAnimationTool, which makes it easy to implement animated-sticker effects in video."

Could you provide one or two examples of video composition using Lottie-exported animation configurations?

Jack
WeChat: 15915895880

Hope it can support Carthage.

Hi, I see it supports installation via CocoaPods, but it doesn't seem to support Carthage.
Carthage is a good tool that helps us manage frameworks.
If Cabbage could support Carthage, I think it would be cool.

How to loop a video if the audio is longer?

UPD: I first tried it without a player; after checking with an AVPlayer instance, I can see that it actually loops the video, but only in the player, not in AVAssetExportSession's output.

So the question now is: why does the result differ when I export the composition?

Generated video has a color shift (sRGB / RGB)

This can be fixed by changing the final render call in VideoCompositor.

VideoCompositor.ciContext.render(image, to: outputPixels)

Change it to:

let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) ?? CGColorSpaceCreateDeviceRGB()
VideoCompositor.ciContext.render(image, to: outputPixels, bounds: image.extent, colorSpace: colorSpace)

selectedTimeRange issue

After setting selectedTimeRange on an AVAssetTrackResource, the generated trackItem, and the playerItem built from it, still covers the asset's full duration rather than the selected time range. Code sample below:

let resource = AVAssetTrackResource(asset: model.asset) // model.asset duration is 7 seconds
resource.setSpeed(model.speed)
if let startTime = model.startTime, let endTime = model.endTime {
    let startTime = CMTime(seconds: startTime, preferredTimescale: model.asset.duration.timescale)
    let endTime = CMTime(seconds: endTime, preferredTimescale: model.asset.duration.timescale)
    let timeRange = CMTimeRange(start: startTime, end: endTime)
    print(CMTimeGetSeconds(timeRange.start),
          CMTimeGetSeconds(timeRange.end),
          CMTimeGetSeconds(timeRange.duration)) // 0, 4, 4
    resource.selectedTimeRange = timeRange
}
let trackItem = TrackItem(resource: resource) // trackItem.duration printed here is still 7 seconds
trackItem.videoConfiguration.transform = model.transform
trackItem.videoConfiguration.contentMode = .aspectFit
timeline.videoChannel.append(trackItem)
timeline.audioChannel.append(trackItem)

try! Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
try! Timeline.reloadAudioStartTime(providers: timeline.audioChannel)

let playerItem = CompositionGenerator(timeline: timeline).buildPlayerItem() // playerItem.duration is still 7 seconds
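For anyone hitting the same thing: one hedged, untested workaround is to also pin the track item's timeline range explicitly to the trimmed duration (the same timelineTimeRange configuration used in the images example later in this thread), instead of relying on it being derived from the resource:

```swift
import AVFoundation

// Sketch only: after trimming the resource, explicitly set the item's
// timeline range to the selected duration. Whether this is the intended
// API usage is an assumption; verify against Cabbage's source.
let resource = AVAssetTrackResource(asset: model.asset) // 7-second asset
let timeRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 600),
                            end: CMTime(seconds: 4, preferredTimescale: 600))
resource.selectedTimeRange = timeRange

let trackItem = TrackItem(resource: resource)
trackItem.configuration.timelineTimeRange = CMTimeRange(start: .zero,
                                                        duration: timeRange.duration)
// The intent is that trackItem now occupies 4 seconds of the timeline
// instead of the asset's full 7 seconds.
```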

About the overlays track in Timeline

I couldn't find a demo for the overlays track. I studied your source and would like to check whether my understanding is correct. Suppose I want to use the overlays track to place a 50x50 overlay at x: 50, y: 50 on the current track. Following the ImageOverlayItem implementation, this can be done by passing the corresponding transform through the trackItem.configuration.videoConfiguration.transform configuration. That works, but it is not very convenient to use.

My understanding is that overlays: [VideoProvider] should not just be the VideoProvider protocol; a protocol that additionally encapsulates a frame would be a better fit.

Creating an AVPlayerItem from images: the second half of the video goes black

After debugging, I found that in VideoCompositionInstruction's method
open func apply(request: AVAsynchronousVideoCompositionRequest) -> CIImage?
the request's sourceTrackIDs becomes nil after 16 seconds, and calling sourceFrame(byTrackID trackID: CMPersistentTrackID) also returns nil.

I'd appreciate help finding the cause.

Below is my code for creating an AVPlayerItem from images:

func makePlayerItemFromImages(_ images: [UIImage]) {
        let ciImages = images.compactMap { $0.cgImage }.map { CIImage.init(cgImage: $0) }
        
        let resources = ciImages.map { ImageResource(image: $0) }

        for item in resources {
            let timeRange = CMTimeRange(start: kCMTimeZero, duration: CMTimeMake(100, 100))
            item.duration = timeRange.duration
            item.selectedTimeRange = timeRange
        }
        
        let items = resources.map { TrackItem(resource: $0) }
        
        var timeLineDuration = kCMTimeZero
        items.forEach {
            $0.configuration.videoConfiguration.baseContentMode = .aspectFit
            
            let timeRange = CMTimeRange(start: timeLineDuration, duration: $0.resource.duration)
            $0.configuration.timelineTimeRange = timeRange
            timeLineDuration = CMTimeAdd(timeLineDuration, timeRange.duration)

        }
        
        let timeline = Timeline()
        timeline.videoChannel = items
        
        let compositionGenerator = CompositionGenerator(timeline: timeline)
        compositionGenerator.renderSize = CGSize(width: 480, height: 480)
        let playerItem = compositionGenerator.buildPlayerItem()
        
        let controller = AVPlayerViewController.init()
        controller.player = AVPlayer.init(playerItem: playerItem)
        controller.view.backgroundColor = UIColor.white
        present(controller, animated: true, completion: nil)
    }

How to set Music for entire Timeline or individual Video tracks

Hi, I'm back :)

So previously I successfully implemented your suggestions to merge videos with their corresponding audio tracks. Now I'm wondering whether two things are possible.

1- Is it possible to define a separate audio track for each video, with a specific range of that audio?
example:

let tLine = Timeline()
var vChannel = [TrackItem]()
var aChannel = [TrackItem]()
//VIDEO Tracks
... trackVideoItem1, trackVideoItem2, trackVideoItem3...
//AUDIO Tracks
        let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
        let musicAsset = AVAsset(url: musicUrl)
        let resourceA = AVAssetTrackResource(asset: musicAsset)
        let trackAudioItem1 = TrackItem(resource: resourceA)
... same for trackAudioItem2, trackAudioItem3...
But how do I specify the start, end, and duration of those tracks?

tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
tLine.audioChannel = [trackAudioItem1, trackAudioItem2, trackAudioItem3]
try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)

Currently the above creates an unreadable video.

2- The other question is: is it possible to define one music track for the entire video composition?
example:

let tLine = Timeline()
var vChannel = [TrackItem]()
var aChannel = [TrackItem]()
//VIDEO Tracks
... trackVideoItem1, trackVideoItem2, trackVideoItem3...
//AUDIO Track for everything
        let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
        let musicAsset = AVAsset(url: musicUrl)
        let resourceA = AVAssetTrackResource(asset: musicAsset)
        let trackMusicItem = TrackItem(resource: resourceA)
But how do I specify the start and end of the audio (i.e., trim the audio)?

tLine.videoChannel = [trackVideoItem1,trackVideoItem2,trackVideoItem3]
tLine.audioChannel = [trackMusicItem]
try! Timeline.reloadVideoStartTime(providers: tLine.videoChannel)

Is it possible to set one audio track for everything? And what happens if the audio track is shorter than the entire video composition, or the composition is shorter than the audio track: would the audio track repeat?
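On trimming: below is a hedged sketch that uses the resource's selectedTimeRange (the same property discussed in the selectedTimeRange issue earlier in this thread) to keep only part of the music, under the assumption that the audio channel respects the resource's selection:

```swift
import AVFoundation

// Sketch: keep only seconds 10-25 of the music track. Assumes
// AVAssetTrackResource.selectedTimeRange trims audio the same way it is
// intended to trim video.
let musicUrl = Bundle.main.url(forResource: "HumansWater", withExtension: "MP3")!
let musicAsset = AVAsset(url: musicUrl)
let resourceA = AVAssetTrackResource(asset: musicAsset)
resourceA.selectedTimeRange = CMTimeRange(start: CMTime(seconds: 10, preferredTimescale: 600),
                                          duration: CMTime(seconds: 15, preferredTimescale: 600))
let trackMusicItem = TrackItem(resource: resourceA)

// One music item for the whole composition:
// tLine.audioChannel = [trackMusicItem]
// Note: nothing here loops the audio; if it ends before the video does,
// expect silence afterwards unless you append repeated audio items yourself.
```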

Thanks in advance :) :)

Cannot play audio in time range after adding

Hi vitoziv, I cannot play audio in a specific time range after adding it. This is my source code. I cannot play audio1.mp3 after setting startTime and duration; if I don't set them, it plays fine. Thank you.

    let video1TrackItem: TrackItem = {
        let url = Bundle.main.url(forResource: "video1", withExtension: "mp4")!
        let resource = AVAssetTrackResource(asset: AVAsset(url: url))
        let trackItem = TrackItem(resource: resource)
        trackItem.videoConfiguration.contentMode = .aspectFit
        return trackItem
    }()

   let mp3TrackItem: TrackItem = {
        let url = Bundle.main.url(forResource: "audio1", withExtension: "mp3")!
        let resource = AVAssetTrackResource(asset: AVAsset(url: url))
        let trackItem = TrackItem(resource: resource)
        trackItem.startTime = CMTime(value: 1, timescale: 1)
        trackItem.duration = CMTime(value: 10, timescale: 1)
        return trackItem
    }()

   let timeline = Timeline()
    timeline.videoChannel = [video1TrackItem]
    timeline.audioChannel = [video1TrackItem]
    timeline.audios = [mp3TrackItem]

Adjusting intensity of VideoConfigurationProtocol with Slider

Basic example -

public class MyFilter: VideoConfigurationProtocol {
    
    var filter: TestFilter!
    public var setIntensity: CGFloat = 0.3
    
    public init() { }
    
    public func applyEffect(to sourceImage: CIImage, info: VideoConfigurationEffectInfo) -> CIImage {
        
        var finalImage = sourceImage
        
        filter = TestFilter()
        filter.inputImage = finalImage
        filter.intensity = setIntensity
        finalImage = filter.outputImage!
        
        return finalImage
    }
    
}

Then, when appending this class to trackItem.videoConfiguration.configurations, is there a preferred way to adjust the setIntensity variable with a UISlider?
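One possible approach, sketched under the assumption that the custom compositor re-evaluates applyEffect(to:info:) for every rendered frame: keep a reference to the MyFilter instance you appended, mutate setIntensity from the slider, and, while paused, force a refresh by seeking to the current time. Names like intensityFilter and FilterAdjustViewController are illustrative, not part of Cabbage:

```swift
import UIKit
import AVFoundation

class FilterAdjustViewController: UIViewController {
    var player: AVPlayer!              // playing the Cabbage-built player item
    let intensityFilter = MyFilter()   // the same instance appended to
                                       // trackItem.videoConfiguration.configurations

    @objc func sliderChanged(_ slider: UISlider) {
        intensityFilter.setIntensity = CGFloat(slider.value)
        // During playback the next rendered frame should pick up the new value.
        // While paused, re-request the current frame so the change is visible:
        if player.rate == 0 {
            player.seek(to: player.currentTime(),
                        toleranceBefore: .zero,
                        toleranceAfter: .zero)
        }
    }
}
```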

Add a new clip without rebuilding the composition

I am playing a few video clips together using this marvelous library. From time to time I also need to append a new video clip and play the whole video right after adding it. As the number of videos increases, regenerating the player item takes more and more time.

Is there any way to remove this delay?
Can we change the timeline dynamically and see the changes in the preview (player item) without rebuilding the composition?

Merge Videos with different aspectRatios

I have a list of video AVAssets.
I want to merge them into 1 video with their corresponding audios.

But the videos are sometimes portrait, sometimes square, sometimes landscape (they may have arbitrarily different widths and heights). I want to merge the videos while keeping each one aspect-fit to the frame size of the first video.

Is this possible with Cabbage ?
I'm having a hard time understanding your "timeline" concept.
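For what it's worth, here is a sketch assembled from the pieces shown elsewhere in this thread: set every item's content mode to .aspectFit and pin the generator's renderSize to the first clip's orientation-corrected natural size. The transform handling is simplified and untested:

```swift
import AVFoundation

// Sketch: merge heterogeneous clips, letterboxed to the first clip's size.
let assets: [AVAsset] = [] // your clips
let trackItems: [TrackItem] = assets.map { asset in
    let item = TrackItem(resource: AVAssetTrackResource(asset: asset))
    item.videoConfiguration.contentMode = .aspectFit // letterbox inside the canvas
    return item
}

let timeline = Timeline()
timeline.videoChannel = trackItems
timeline.audioChannel = trackItems // keep each clip's own audio
try? Timeline.reloadVideoStartTime(providers: timeline.videoChannel)
try? Timeline.reloadAudioStartTime(providers: timeline.audioChannel)

let generator = CompositionGenerator(timeline: timeline)
// Canvas size taken from the first clip; applying preferredTransform makes a
// rotated portrait clip report portrait dimensions.
if let firstTrack = assets.first?.tracks(withMediaType: .video).first {
    let size = firstTrack.naturalSize.applying(firstTrack.preferredTransform)
    generator.renderSize = CGSize(width: abs(size.width), height: abs(size.height))
}
let playerItem = generator.buildPlayerItem()
```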

Custom Filter

Hi Vito, I'm wondering what the process looks like for applying a filter now that filterProcessor was removed in version 0.2. The VideoCat demo used it when applying lookup-table filters, but I'm not sure how to go about this now that it's gone. Do you have an example I could try? Thank you for your work and time!

Discussion: implementing subtitles and animations on video

Hello! First, many thanks to vitoziv for the generous sharing. I've gone roughly through Cabbage's Chinese README and source code, but some parts of Cabbage's design and usage are still unclear to me and I'd like to discuss them. My project needs to add subtitles and animated sticker overlays to videos, and I'm not sure whether this should be implemented through the timeline's overlays or with CALayer. If CALayer is used, preview and export need two separate sets of business logic to maintain, which feels cumbersome. vitoziv, how is Cabbage intended to support this kind of requirement, and what would you recommend?
