- To play sound files, you can use AVAudioPlayer (see the sketch after this list).
- To record audio, you can use AVAudioRecorder.
- The primary class that the AV Foundation framework uses to represent media is AVAsset.
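For example, a minimal sketch of playing a sound with AVAudioPlayer, assuming a bundled audio file named "sample.m4a" (the file name is illustrative):

NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"m4a"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
if (player) {
    [player prepareToPlay];
    [player play];
}
else {
    // Handle the error (for example, a missing or unreadable file).
}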
Using Assets
An asset can come from a file or from media in the user's iPod library or Photo library. Simply creating an asset object, though, does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. Once you have a movie asset, you can extract still images from it, transcode it to another format, or trim the contents.
Creating an Asset Object
- If you only intend to play the asset, either pass nil instead of a dictionary, or pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of NO (contained in an NSValue object).
- If you want to add the asset to a composition (AVMutableComposition), you typically need precise random access. Pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of YES (contained in an NSValue object—recall that NSNumber inherits from NSValue):

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];
Accessing the User’s Assets
To access the assets managed by the iPod library or by the Photos application, you need to get a URL of the asset you want.
- To access the iPod library, you create an MPMediaQuery instance to find the item you want, then get its URL using MPMediaItemPropertyAssetURL.
- To access the assets managed by the Photos application, you use ALAssetsLibrary.
The following example shows how you can get an asset to represent the first video in the Saved Photos Album.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {

    // Within the group enumeration block, filter to enumerate just videos.
    [group setAssetsFilter:[ALAssetsFilter allVideos]];

    // For this example, we're only interested in the first item.
    [group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
                            options:0
                         usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {

                             // The end of the enumeration is signaled by asset == nil.
                             if (alAsset) {
                                 ALAssetRepresentation *representation = [alAsset defaultRepresentation];
                                 NSURL *url = [representation url];
                                 AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
                                 // Do something interesting with the AV asset.
                             }
                         }];
}
                  failureBlock: ^(NSError *error) {
                      // Typically you should handle an error more gracefully than this.
                      NSLog(@"No groups");
                  }];
- ALAssetsLibrary — the user's assets library.
- ALAssetsGroup — a group of assets of the same type.
- AVAsset — a single asset.
- AVURLAsset — a concrete subclass of AVAsset for media at a URL.
Reading and Writing Assets
You use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, you use an AVAssetWriter object.
You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you have more control over the conversion than you do with AVAssetExportSession: for example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.
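As an illustration, a minimal sketch of reading decoded audio from an asset with AVAssetReader (it assumes asset is an already-loaded AVAsset that contains an audio track):

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *audioTrack = ([audioTracks count] > 0) ? audioTracks[0] : nil;

// Ask for linear PCM so the samples are decoded for us.
NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:settings];

if ([reader canAddOutput:trackOutput]) {
    [reader addOutput:trackOutput];
}
[reader startReading];

// Pull sample buffers until copyNextSampleBuffer returns NULL.
CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [trackOutput copyNextSampleBuffer])) {
    // Inspect or process the audio samples here (for example, to build a waveform).
    CFRelease(sampleBuffer);
}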
Playback
To control the playback of assets, you use an AVPlayer object. During playback, you can use an AVPlayerItem object to manage the presentation state of an asset as a whole, and an AVPlayerItemTrack to manage the presentation state of an individual track. To display video, you use an AVPlayerLayer object.
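A minimal sketch of wiring the display pieces together (it assumes player is a configured AVPlayer and view is the UIView that should show the video; both names are illustrative):

AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = view.bounds;
[view.layer addSublayer:playerLayer];
[player play];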

Handling Different Types of Asset
The way you configure an asset for playback may depend on the sort of asset you want to play. Broadly speaking, there are two main types: file-based assets, to which you have random access (such as from a local file, the camera roll, or the Media Library), and stream-based assets (HTTP Live Stream format).
There are several steps to loading and playing a file-based asset:
- Create an asset using AVURLAsset and load its tracks using loadValuesAsynchronouslyForKeys:completionHandler: (see the sketch after this list).
- When the asset has loaded its tracks, create an instance of AVPlayerItem using the asset.
- Associate the item with an instance of AVPlayer.
- Wait until the item's status indicates that it's ready to play (typically you use key-value observing to receive a notification when the status changes).
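For example, a sketch of the first two steps (the asset URL is a placeholder for you to fill in):

NSURL *url = <#A URL that identifies a local movie file#>;
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

[asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded) {
        // Create the player item and player on the main queue once the tracks are available.
        dispatch_async(dispatch_get_main_queue(), ^{
            AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
            AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
            // Observe playerItem.status before starting playback.
        });
    }
    else {
        // The tracks failed to load; inspect error.
    }
}];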
To create and prepare an HTTP Live Stream for playback, initialize an instance of AVPlayerItem using the URL. (You cannot directly create an AVAsset instance to represent the media in an HTTP Live Stream.)
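A minimal sketch of that approach (the stream URL is a placeholder):

NSURL *streamURL = <#A URL for an HTTP Live Stream, typically ending in .m3u8#>;
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:streamURL];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
// Observe playerItem.status and call -play on the player when it is ready.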
If you don’t know what kind of URL you have. Follow these steps:
1. Try to initialize an AVURLAsset using the URL, then load its tracks key. If the tracks load successfully, create a player item for the asset.
2. If step 1 fails, create an AVPlayerItem directly from the URL. Observe the player's status property to determine whether it becomes playable (see the observation sketch after this list).
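A sketch of the key-value observing step (it assumes self keeps the AVPlayer in a property named player; the property name and context variable are illustrative):

static void *PlayerItemStatusContext = &PlayerItemStatusContext;

// Register before associating the item with the player, for example:
// [playerItem addObserver:self forKeyPath:@"status" options:0 context:PlayerItemStatusContext];

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context == PlayerItemStatusContext) {
        AVPlayerItem *item = (AVPlayerItem *)object;
        if (item.status == AVPlayerItemStatusReadyToPlay) {
            [self.player play];
        }
        else if (item.status == AVPlayerItemStatusFailed) {
            // Inspect item.error and report or recover as appropriate.
        }
    }
    else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}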
AVAsset is the core class in the AV Foundation framework. It provides a format-independent abstraction of time-based audiovisual data, such as a movie file or a video stream. In many cases, you work with one of its subclasses: you use the composition subclasses when you create new assets (see "Editing"), and you use AVURLAsset to create a new asset instance from media at a given URL (including assets from the MPMedia framework or the Asset Library framework—see "Using Assets").
Time in AV Foundation is represented by the CMTime structure from the Core Media framework: a rational value consisting of an integer count and a timescale (units per second).

CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds (200 * 1/2)
CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds (400 * 1/4)

// time1 and time2 both represent 100 seconds, but using different timescales.
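Continuing the example above, Core Media provides functions to convert and compare such values:

Float64 seconds = CMTimeGetSeconds(time1);         // 100.0
int32_t comparison = CMTimeCompare(time1, time2);  // 0: the times are equal despite the different timescales
CMTime sum = CMTimeAdd(time1, time2);              // 200 seconds, expressed in a common timescale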
Media Capture
To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. Minimally you need:
- An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
- An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
- An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
- An instance of AVCaptureSession to coordinate the data flow from the input to the output
To show the user what a camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer).
You can configure multiple inputs and outputs, coordinated by a single session.
When you add an input or an output to a session, the session “greedily” forms connections between all the compatible capture inputs’ ports and capture outputs. A connection between a capture input and a capture output is represented by an AVCaptureConnection object.
You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.
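For instance, a sketch of reading audio levels from a connection (it assumes connection is an AVCaptureConnection that carries audio):

for (AVCaptureAudioChannel *channel in connection.audioChannels) {
    float average = channel.averagePowerLevel; // in dB, relative to full scale
    float peak = channel.peakHoldLevel;
    // Feed average and peak into your level meter UI.
}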
Configuring a Session
You use a preset on the session to specify the image quality and resolution you want:

| Symbol | Resolution | Comments |
|---|---|---|
| AVCaptureSessionPresetHigh | High | Highest recording quality. This varies per device. |
| AVCaptureSessionPresetMedium | Medium | Suitable for WiFi sharing. The actual values may change. |
| AVCaptureSessionPresetLow | Low | Suitable for 3G sharing. The actual values may change. |
| AVCaptureSessionPreset640x480 | 640x480 | VGA. |
| AVCaptureSessionPreset1280x720 | 1280x720 | 720p HD. |
| AVCaptureSessionPresetPhoto | Photo | Full photo resolution. This is not supported for video output. |
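A minimal sketch of applying a preset, checking first that the session supports it:

AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
}
else {
    // Fall back to a more widely supported preset.
    session.sessionPreset = AVCaptureSessionPresetMedium;
}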
Device Capture Settings
| Feature | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front) |
|---|---|---|---|---|
| Focus mode | NO | YES | YES | NO |
| Focus point of interest | NO | YES | YES | NO |
| Exposure mode | YES | YES | YES | YES |
| Exposure point of interest | NO | YES | YES | YES |
| White balance mode | YES | YES | YES | YES |
| Flash mode | NO | NO | YES | NO |
| Torch mode | NO | NO | YES | NO |
For example, the following code finds the video capture devices that have a torch and support a given preset:

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];

for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] &&
        [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}
To change any of these capture device settings, you must first acquire a lock on the device with lockForConfiguration:. For example, to lock the focus mode:

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}
Switching Between Devices
You can reconfigure a session while it is running; changes made between beginConfiguration and commitConfiguration are applied as a single atomic update. For example, to switch from the front-facing to the back-facing camera:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];

[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];

[session commitConfiguration];
Use Capture Inputs to Add a Capture Device to a Session
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}
Use Capture Outputs to Get Output from a Session
To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput; you use:
- AVCaptureMovieFileOutput to output to a movie file
- AVCaptureVideoDataOutput if you want to process frames from the video being captured
- AVCaptureAudioDataOutput if you want to process the audio data being captured
- AVCaptureStillImageOutput if you want to capture still images with accompanying metadata
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}
Saving to a Movie File
AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
| Preset | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front) |
|---|---|---|---|---|
| High | No video, Apple Lossless audio | 640x480, 3.5 Mbps | 1280x720, 10.5 Mbps | 640x480, 3.5 Mbps |
| Medium | No video, Apple Lossless audio | 480x360, 700 kbps | 480x360, 700 kbps | 480x360, 700 kbps |
| Low | No video, Apple Lossless audio | 192x144, 128 kbps | 192x144, 128 kbps | 192x144, 128 kbps |
| 640x480 | No video, Apple Lossless audio | 640x480, 3.5 Mbps | 640x480, 3.5 Mbps | 640x480, 3.5 Mbps |
| 1280x720 | No video, Apple Lossless audio | No video, 64 kbps AAC | No video, 64 kbps AAC | No video, 64 kbps AAC |
| Photo | Not supported for video output | Not supported for video output | Not supported for video output | Not supported for video output |
Starting a Recording
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
Ensuring the File Was Written Successfully
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {

    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}
Adding Metadata to a File
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;

CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
              location.coordinate.latitude, location.coordinate.longitude];

[newMetadataArray addObject:item];

aMovieFileOutput.metadata = newMetadataArray;
Capturing Still Images
AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];
Capturing an Image
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];
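Inside that completion handler you could, for example, turn the JPEG sample buffer into a UIImage (a sketch; it relies on the JPEG output settings shown earlier):

NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *stillImage = [UIImage imageWithData:jpegData];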
Video Preview
AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
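In practice you would also size the preview layer and, optionally, choose how it scales the video; for example:

captureVideoPreviewLayer.frame = viewLayer.bounds;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;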
Putting it all Together: Capturing Video Frames as UIImage Objects
- Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
- Find the AVCaptureDevice object for the input type you want
- Create an AVCaptureDeviceInput object for the device
- Create an AVCaptureVideoDataOutput object to produce video frames
- Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
- Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object
// Create the session and choose a medium-quality preset.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

// Find a suitable capture device and wrap it in a device input.
AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];

// Create a video data output that delivers uncompressed BGRA frames, at most 15 per second.
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);
The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output's delegate, you must also provide a queue on which callbacks should be invoked.
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}
Starting and Stopping Recording
After configuring the capture session, you send it a startRunning message to start the recording.
[session startRunning];
To stop recording, you send the session a stopRunning message.
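[session stopRunning];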
Converting a CMSampleBuffer to a UIImage
UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }

    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage =
        CGImageCreate(width, height, 8, 32, bytesPerRow,
                      colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
This article has shown how to use the AVFoundation framework to play sound files, record audio, access and read or write assets, control playback, configure different kinds of assets, work with device capture settings, manage capture sessions, write movie files, capture still images, preview video, and tie the whole pipeline together, giving developers a powerful toolset for media processing.