AVFoundation Detailed Analysis (1): Video Merging and Audio Mixing

Review

GPUImage Detailed Analysis (8): Video Merging and Mixing described how to merge videos and mix audio with GPUImage. This article implements the same functionality with the AVFoundation framework.

Concepts

  • AVPlayer: the video playback class; it does not display video itself, so you need to create an AVPlayerLayer and add it to a view
  • AVAssetTrack: a resource track, either an audio track or a video track
  • AVAsset: media information
  • AVURLAsset: media information created from a URL path
  • AVPlayerItem: the media resource management object; it manages the basic information and state of a video
  • AVMutableVideoCompositionInstruction: a video operation instruction
  • AVMutableVideoCompositionLayerInstruction: a video track operation instruction; it must be added to an AVMutableVideoCompositionInstruction
  • AVMutableAudioMixInputParameters: audio operation parameters
  • AVMutableComposition: contains the media information of multiple tracks; tracks can be added and removed
  • AVMutableVideoComposition: the set of video operation instructions
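To make the playback classes above concrete, here is a minimal sketch of wiring them together; the file path and the host view (`self.view`) are assumptions for illustration, not part of the demo:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Minimal sketch: play a local file with AVPlayer + AVPlayerLayer.
// "movie.m4v" and self.view are hypothetical.
NSURL *url = [NSURL fileURLWithPath:@"movie.m4v"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

// AVPlayer does not render anything by itself; attach an AVPlayerLayer to a view.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];
```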

Effect

The video effect is shown below; to hear the audio effect, run the demo.


Core idea

Load several AVURLAssets, using GCD to guarantee a single callback after all asynchronous loads complete, then call the Editor class to configure the track information, the video operation instructions, and the audio mix parameters.
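The "wait for all asynchronous loads" step can be sketched with a GCD dispatch group; the `urls` array and the `buildCompositionWithAssets:` call are hypothetical names standing in for the demo's editor entry point:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Sketch: load several AVURLAssets asynchronously and fire one callback
// once every asset has finished loading its tracks and duration.
dispatch_group_t group = dispatch_group_create();
NSMutableArray *assets = [NSMutableArray array];
for (NSURL *url in urls) {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    [assets addObject:asset];
    dispatch_group_enter(group);
    [asset loadValuesAsynchronouslyForKeys:@[@"tracks", @"duration"] completionHandler:^{
        dispatch_group_leave(group); // one leave per asset
    }];
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // All assets are loaded: hand them to the editor to configure
    // tracks, video instructions, and audio mix parameters.
    [self buildCompositionWithAssets:assets];
});
```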


Specific details

The overall flow is as follows:


A. Configure track information

  • 1. Compute the transition duration, making sure it is no greater than half the length of the shortest clip. (Thinking point 1: how does the demo compute "half", and why must the transition be at most half?)
  • 2. Add two video tracks and two audio tracks.
  • 3. Insert each clip's video and audio track information into the track at index i % 2. (Thinking point 2: when several clips are inserted into the same track, how do we guarantee they do not overlap?)
  • 4. Compute the time ranges for direct (pass-through) playback and for the transitions.

```objectivec
// Make sure the transition duration does not exceed
// half the length of the shortest clip in the merged video.
CMTime transitionDuration = self.transitionDuration;
for (i = 0; i < clipsCount; i++) {
    NSValue *clipTimeRange = [self.clipTimeRanges objectAtIndex:i];
    if (clipTimeRange) {
        CMTime halfClipDuration = [clipTimeRange CMTimeRangeValue].duration;
        halfClipDuration.timescale *= 2; // halve a rational by doubling its denominator
        transitionDuration = CMTimeMinimum(transitionDuration, halfClipDuration);
    }
}

AVMutableCompositionTrack *compositionVideoTracks[2];
AVMutableCompositionTrack *compositionAudioTracks[2];
compositionVideoTracks[0] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid]; // add video track 0
compositionVideoTracks[1] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid]; // add video track 1
compositionAudioTracks[0] = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid]; // add audio track 0
compositionAudioTracks[1] = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid]; // add audio track 1
```

B. Configure video operation instructions

  • 1. Create the video operation instruction set.
  • 2. For the track each clip occupies (index % 2), create a passThroughInstruction whose time range is passThroughTimeRanges[i], create a passThroughLayer track instruction for direct playback on that track, and assign passThroughLayer to passThroughInstruction's layer instructions.
  • 3. For each transition, create a transitionInstruction whose time range is transitionTimeRanges[i], create the fromLayer and toLayer track instructions for the two tracks, define the transition, and set the opacity ramps on fromLayer and toLayer over that time range.
  • 4. Add passThroughInstruction and transitionInstruction to the instruction set.

```objectivec
AVMutableVideoCompositionInstruction *passThroughInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction]; // new instruction for direct playback
passThroughInstruction.timeRange = passThroughTimeRanges[i];
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[alternatingIndex]]; // video track instruction
passThroughInstruction.layerInstructions = [NSArray arrayWithObject:passThroughLayer];
[instructions addObject:passThroughInstruction]; // add to the instruction set

if (i + 1 < clipsCount) { // not the last clip
    AVMutableVideoCompositionInstruction *transitionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction]; // new instruction for the transition
    transitionInstruction.timeRange = transitionTimeRanges[i];
    AVMutableVideoCompositionLayerInstruction *fromLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[alternatingIndex]]; // outgoing track
    AVMutableVideoCompositionLayerInstruction *toLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[1 - alternatingIndex]]; // incoming track
    [fromLayer setOpacityRampFromStartOpacity:1.0 toEndOpacity:0.0 timeRange:transitionTimeRanges[i]]; // from-track fades from 1 to 0
    [toLayer setOpacityRampFromStartOpacity:0.0 toEndOpacity:1.0 timeRange:transitionTimeRanges[i]]; // to-track fades from 0 to 1
    transitionInstruction.layerInstructions = [NSArray arrayWithObjects:toLayer, fromLayer, nil];
    [instructions addObject:transitionInstruction];
}
```

C. Configure audio mix parameters

  • 1. Create the audio mix parameter set.
  • 2. For the track the clip occupies, create the parameters trackMix1 and ramp the volume from 1 to 0 over the transition.
  • 3. For the other track, create the parameters trackMix2, ramp the volume from 0 to 1 over the transition, and keep the volume at 1 during direct playback.
  • 4. Add trackMix1 and trackMix2 to the audio mix parameter set.

```objectivec
AVMutableAudioMixInputParameters *trackMix1 = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:compositionAudioTracks[alternatingIndex]]; // parameters for track 0
[trackMix1 setVolumeRampFromStartVolume:1.0 toEndVolume:0.0 timeRange:transitionTimeRanges[i]]; // track 0: volume ramps from 1 to 0 during the transition
[trackMixArray addObject:trackMix1];
AVMutableAudioMixInputParameters *trackMix2 = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:compositionAudioTracks[1 - alternatingIndex]]; // parameters for track 1
[trackMix2 setVolumeRampFromStartVolume:0.0 toEndVolume:1.0 timeRange:transitionTimeRanges[i]]; // track 1: volume ramps from 0 to 1 during the transition
[trackMixArray addObject:trackMix2];
```

Summary

AVPlayer is monitored through KVO on its rate and status properties, and a notification is used together with AVPlayer to detect playback completion. AVPlayerItem is not complicated; the focus of the analysis is how the SimpleEditor class configures the track information and the audio/video instructions.
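The monitoring described above can be sketched as follows; `self.player` and the selector name are assumptions for illustration:

```objectivec
#import <AVFoundation/AVFoundation.h>

// Sketch: observe AVPlayer state via KVO, and playback completion via notification.
[self.player addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:nil];
[self.player addObserver:self forKeyPath:@"rate" options:NSKeyValueObservingOptionNew context:nil];

// AVPlayerItemDidPlayToEndTimeNotification fires when the current item finishes.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:self.player.currentItem];
```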
The code can be found by clicking here.

Reflection

On thinking point 1 (timescale * 2 and CMTimeMinimum): a clip in the middle of the merged video goes through two transitions, one at each end, so the transition length cannot exceed half the length of the shortest clip. On thinking point 2: as long as the insertion start points and durations produce non-overlapping intervals within a track, the audio will not overlap.