AV Foundation: AVAudioPlayer, AVAudioRecorder

This article describes in detail how to use the AVFoundation framework for sound file playback, audio recording, asset access, reading and writing, playback control, configuration of different asset types, device capture settings, session management, file output, still image capture, and video preview, and how to put the whole pipeline together, giving developers a powerful toolset for multimedia processing.
  • To play sound files, you can use AVAudioPlayer (see the sketch after this list).

  • To record audio, you can use AVAudioRecorder.

  • The primary class that the AV Foundation framework uses to represent media is AVAsset.
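
As a minimal sketch of the first point, playing a sound file with AVAudioPlayer (the URL is a placeholder; AVAudioRecorder follows the same pattern with initWithURL:settings:error:):

NSError *error = nil;
NSURL *soundURL = <#A file URL that identifies a sound file#>;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
if (player) {
    [player prepareToPlay];
    [player play]; // Playback is asynchronous.
}
else {
    // Inspect error.
}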

Using Assets

Assets can come from a file or from media in the user’s iPod library or Photo library. Simply creating an asset object, though, does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. Once you have a movie asset, you can extract still images from it, transcode it to another format, or trim the contents (a still-image sketch follows).
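
For example, extracting a still image partway through a movie asset might look like this. A minimal sketch, assuming anAsset is an AVAsset you have already created (creating assets is covered next):

AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:anAsset];
CMTime midpoint = CMTimeMakeWithSeconds(30.0, 600); // 30 seconds in, at a timescale of 600.
NSError *error = nil;
CMTime actualTime;
CGImageRef image = [imageGenerator copyCGImageAtTime:midpoint actualTime:&actualTime error:&error];
if (image != NULL) {
    // Use the image, then release it (copyCGImageAtTime follows the Create/Copy rule).
    CGImageRelease(image);
}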

Creating an Asset Object

    • If you only intend to play the asset, either pass nil instead of a dictionary, or pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of NO (contained in an NSValue object).

    • If you want to add the asset to a composition (AVMutableComposition), you typically need precise random access. Pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of YES (contained in an NSValue object—recall that NSNumber inherits from NSValue):

      NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
      NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES };
      AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options];
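
By contrast, if you only intend to play the asset, nil options are sufficient:

      NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
      AVURLAsset *anAssetForPlayback = [[AVURLAsset alloc] initWithURL:url options:nil];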

Accessing the User’s Assets

To access the assets managed by the iPod library or by the Photos application, you need to get the URL of the asset you want.


The following example shows how you can get an asset to represent the first video in the Saved Photos Album.

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
 
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
 
// Within the group enumeration block, filter to enumerate just videos.
[group setAssetsFilter:[ALAssetsFilter allVideos]];
 
// For this example, we're only interested in the first item.
[group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
                        options:0
                     usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
 
                         // The end of the enumeration is signaled by asset == nil.
                         if (alAsset) {
                             ALAssetRepresentation *representation = [alAsset defaultRepresentation];
                             NSURL *url = [representation url];
                             AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
                             // Do something interesting with the AV asset.
                         }
                     }];
                 }
                 failureBlock: ^(NSError *error) {
                     // Typically you should handle an error more gracefully than this.
                     NSLog(@"No groups");
                 }];
 



ALAssetsLibrary — the assets library itself.

ALAssetsGroup — a collection of assets of the same type.

AVAsset — a single asset.

AVURLAsset — a concrete subclass of AVAsset.

Reading and Writing Assets

You use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, you use an AVAssetWriter object.

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you have more control over the conversion than you do with AVAssetExportSession, for example if you want to choose which of the tracks you want represented in the output file, specify your own output format, or modify the asset during the conversion process.
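
A minimal reader sketch, assuming asset is an AVAsset whose tracks have already loaded (nil output settings vend the samples in their stored format):

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
if ([reader canAddOutput:trackOutput]) {
    [reader addOutput:trackOutput];
}
[reader startReading];
// Pull CMSampleBuffers with [trackOutput copyNextSampleBuffer] until it returns NULL.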


Playback

To control the playback of assets, you use an AVPlayer object. During playback, you can use an AVPlayerItem object to manage the presentation state of an asset as a whole, and an AVPlayerItemTrack object to manage the presentation state of an individual track. To display video, you use an AVPlayerLayer object.

[Figure: avplayerLayer.jpg]
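
A minimal wiring of these classes, assuming aView is the view that should display the video:

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:<#A URL that identifies an audiovisual asset#>];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = aView.bounds;
[aView.layer addSublayer:playerLayer];
[player play];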

Handling Different Types of Asset

The way you configure an asset for playback may depend on the sort of asset you want to play. Broadly speaking, there are two main types: file-based assets, to which you have random access (such as from a local file, the camera roll, or the Media Library), and stream-based assets (HTTP Live Streaming format).

There are several steps to loading and playing a file-based asset (a sketch follows the list):

  • Create an asset using AVURLAsset and load its tracks using loadValuesAsynchronouslyForKeys:completionHandler:.

  • When the asset has loaded its tracks, create an instance of AVPlayerItem using the asset.

  • Associate the item with an instance of AVPlayer.

  • Wait until the item’s status indicates that it’s ready to play (typically you use key-value observing to receive a notification when the status changes).
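
A sketch of those steps (the key-value observer wiring is schematic; self.player is assumed to be a property of the surrounding controller):

NSString *tracksKey = @"tracks";
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:<#A file-based URL#> options:nil];
[asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSError *error = nil;
        if ([asset statusOfValueForKey:tracksKey error:&error] == AVKeyValueStatusLoaded) {
            AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
            // Observe the item's status; play when it reaches AVPlayerItemStatusReadyToPlay.
            [playerItem addObserver:self forKeyPath:@"status" options:0 context:NULL];
            self.player = [AVPlayer playerWithPlayerItem:playerItem];
        }
    });
}];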

To create and prepare an HTTP live stream for playback, initialize an instance of AVPlayerItem using the URL. (You cannot directly create an AVAsset instance to represent the media in an HTTP Live Stream.)
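
For example (the stream URL is a placeholder):

NSURL *streamURL = <#A URL for an HTTP Live Stream, such as an .m3u8 playlist#>;
AVPlayerItem *streamItem = [AVPlayerItem playerItemWithURL:streamURL];
AVPlayer *streamPlayer = [AVPlayer playerWithPlayerItem:streamItem];
// Observe streamItem.status and call play when it becomes ready.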

If you don’t know what kind of URL you have, follow these steps:

  1. Try to initialize an AVURLAsset using the URL, then load its tracks key.

    If the tracks load successfully, then you create a player item for the asset.

  2. If step 1 fails, create an AVPlayerItem directly from the URL.

    Observe the player’s status property to determine whether it becomes playable.


AVAsset is the core class in the AV Foundation framework. It provides a format-independent abstraction of time-based audiovisual data, such as a movie file or a video stream. In many cases, you work with one of its subclasses: you use the composition subclasses when you create new assets (see “Editing”), and you use AVURLAsset to create a new asset instance from media at a given URL (including assets from the MPMedia framework or the Asset Library framework; see “Using Assets”).

[Figure: avassetHierarchy.jpg]


CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds    (200 * 1/2 = 100 s)
CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds (400 * 1/4 = 100 s)
 
// time1 and time2 both represent 100 seconds, but using different timescales.

Media Capture

To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them. Minimally you need:

  • An instance of AVCaptureDevice to represent the input device, such as a camera or microphone

  • An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device

  • An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image

  • An instance of AVCaptureSession to coordinate the data flow from the input to the output

To show the user what a camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer).

You can configure multiple inputs and outputs, coordinated by a single session:

[Figure: captureOverview.png]

When you add an input or an output to a session, the session “greedily” forms connections between all the compatible capture inputs’ ports and capture outputs. A connection between a capture input and a capture output is represented by an AVCaptureConnection object.

[Figure: captureDetail.png]

You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.
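
A minimal sketch of both uses, assuming connection is an AVCaptureConnection obtained from one of the session's outputs:

AVCaptureConnection *connection = <#A capture connection#>;

// Disable the flow of data across this connection.
connection.enabled = NO;

// Monitor the audio levels of each channel carried by the connection.
for (AVCaptureAudioChannel *channel in connection.audioChannels) {
    float average = channel.averagePowerLevel;
    float peak = channel.peakHoldLevel;
    // Update a level meter UI with these values.
}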

Configuring a Session

| Symbol                         | Resolution | Comments                                                |
|--------------------------------|------------|---------------------------------------------------------|
| AVCaptureSessionPresetHigh     | High       | Highest recording quality; varies per device.           |
| AVCaptureSessionPresetMedium   | Medium     | Suitable for WiFi sharing; actual values may change.    |
| AVCaptureSessionPresetLow      | Low        | Suitable for 3G sharing; actual values may change.      |
| AVCaptureSessionPreset640x480  | 640x480    | VGA.                                                    |
| AVCaptureSessionPreset1280x720 | 1280x720   | 720p HD.                                                |
| AVCaptureSessionPresetPhoto    | Photo      | Full photo resolution; not supported for video output.  |
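
You apply a preset through the session's sessionPreset property; a minimal sketch that checks support before applying it (canSetSessionPreset: is the standard check):

AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Fall back to a more widely supported preset.
    session.sessionPreset = AVCaptureSessionPresetMedium;
}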


Device Capture Settings

| Feature                    | iPhone 3G | iPhone 3GS | iPhone 4 (Back) | iPhone 4 (Front) |
|----------------------------|-----------|------------|-----------------|------------------|
| Focus mode                 | NO        | YES        | YES             | NO               |
| Focus point of interest    | NO        | YES        | YES             | NO               |
| Exposure mode              | YES       | YES        | YES             | YES              |
| Exposure point of interest | NO        | YES        | YES             | YES              |
| White balance mode         | YES       | YES        | YES             | YES              |
| Flash mode                 | NO        | NO         | YES             | NO               |
| Torch mode                 | NO        | NO         | YES             | NO               |

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
 
for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] &&
         [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}

Switching Between Devices

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
 
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
 
[session commitConfiguration];

Use Capture Inputs to Add a Capture Device to a Session

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}

Use Capture Outputs to Get Output from a Session

To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput, such as AVCaptureMovieFileOutput for writing to a movie file:


AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}

Saving to a Movie File

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;

| Preset   | iPhone 3G                      | iPhone 3GS                     | iPhone 4 (Back)                | iPhone 4 (Front)               |
|----------|--------------------------------|--------------------------------|--------------------------------|--------------------------------|
| High     | No video, Apple Lossless       | 640x480, 3.5 mbps              | 1280x720, 10.5 mbps            | 640x480, 3.5 mbps              |
| Medium   | No video, Apple Lossless       | 480x360, 700 kbps              | 480x360, 700 kbps              | 480x360, 700 kbps              |
| Low      | No video, Apple Lossless       | 192x144, 128 kbps              | 192x144, 128 kbps              | 192x144, 128 kbps              |
| 640x480  | No video, Apple Lossless       | 640x480, 3.5 mbps              | 640x480, 3.5 mbps              | 640x480, 3.5 mbps              |
| 1280x720 | No video, Apple Lossless       | No video, 64 kbps AAC          | No video, 64 kbps AAC          | No video, 64 kbps AAC          |
| Photo    | Not supported for video output | Not supported for video output | Not supported for video output | Not supported for video output |

Starting a Recording
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

Ensuring the File Was Written Successfully
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {
 
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}

Adding Metadata to a File
AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}
 
AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;
 
CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
    location.coordinate.latitude, location.coordinate.longitude];
 
[newMetadataArray addObject:item];
 
aMovieFileOutput.metadata = newMetadataArray;

Capturing Still Images


AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};
[stillImageOutput setOutputSettings:outputSettings];
Capturing an Image
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
        // Continue as appropriate.
    }];

Video Preview

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
 
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];
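
The preview layer typically also needs a frame and a video gravity; a minimal addition, assuming the preview should fill viewLayer:

captureVideoPreviewLayer.frame = viewLayer.bounds;
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;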

Putting it all Together: Capturing Video Frames as UIImage Objects

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
 
NSError *error = nil;
AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings =
                @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);

The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

- (void)captureOutput:(AVCaptureOutput *)captureOutput
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection {
 
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}

Starting and Stopping Recording

After configuring the capture session, you send it a startRunning message to start the recording.

[session startRunning];

To stop recording, you send the session a stopRunning message.
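
[session stopRunning];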


Converting a CMSampleBuffer to a UIImage

UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer,0);
 
    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
 
    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }
 
    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
 
    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage =
        CGImageCreate(width, height, 8, 32, bytesPerRow,
                        colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                        dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
 
    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
 
    return image;
}




