1. Introduction to Core Image
2. Objects in the Core Image framework
2.1 CIImage
A CIImage can be created from several sources:

    1. CIImage *image = [CIImage imageWithContentsOfURL:myURL];
    2. CIImage *image = [CIImage imageWithData:myData];
    3. CIImage *image = [CIImage imageWithCGImage:myCgimage];
    4. CIImage *image = [CIImage imageWithCVPixelBuffer:CVBuffer];
2.2 CIFilter
You can obtain the names of the available filters with [CIFilter filterNamesInCategory:].
Calling a filter's attributes method returns detailed information about that filter. Let's look at a concrete example of what it returns.
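As a quick sketch of these two query APIs (the category constant and the logging are my additions, not from the original text), you might list the built-in filters and dump one filter's attribute dictionary like this:

```objc
#import <CoreImage/CoreImage.h>

// List every filter in the built-in category.
NSArray *names = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
for (NSString *name in names) {
    NSLog(@"%@", name);
}

// Dump the attribute dictionary of one filter (produces
// output like the example below).
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
NSLog(@"%@", [sepia attributes]);
```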
    2012-09-18 16:17:09.155 SZFYKJHomeWorkVersion1[2836:f803] {
        CIAttributeFilterCategories = (   // categories the filter belongs to; a filter usually belongs to several
            CICategoryColorEffect,        // broad category, grouped by the filter's effect and purpose
            CICategoryVideo,              // category names can be used to search for filters
            CICategoryInterlaced,
            CICategoryNonSquarePixels,
            CICategoryStillImage,
            CICategoryBuiltIn
        );
        CIAttributeFilterDisplayName = "Sepia Tone";
        CIAttributeFilterName = CISepiaTone;  // the filter's name, used to instantiate it
                                              // (see the concrete example below)
        inputImage = {                        // an input parameter the filter requires;
            CIAttributeClass = CIImage;       // its type is CIImage
            CIAttributeType = CIAttributeTypeImage;
        };
        inputIntensity = {                    // input intensity, the parameter's name
            CIAttributeClass = NSNumber;      // type
            CIAttributeDefault = 1;           // default value
            CIAttributeIdentity = 0;
            CIAttributeMax = 1;               // maximum value
            CIAttributeMin = 0;               // minimum value
            CIAttributeSliderMax = 1;
            CIAttributeSliderMin = 0;
            CIAttributeType = CIAttributeTypeScalar;
        };
    }
The code to use CISepiaTone in a program:

    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:inputImage forKey:@"inputImage"];
    [filter setValue:[NSNumber numberWithFloat:0.8] forKey:@"inputIntensity"];
2.3 CIContext
    CIContext *context = [CIContext contextWithOptions:
        [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                    forKey:kCIContextUseSoftwareRenderer]]; // CPU rendering
Using the rendered image:
1. In a UIImageView:
    // Create the CIContext to render into
    CIContext *context = [CIContext context];

    // Get outputImage from the last filter in the chain
    CIImage *ciimage = [filter outputImage];

    // Render the CIImage into a CGImageRef
    CGImageRef cgimg = [context createCGImage:ciimage fromRect:[ciimage extent]];

    // Create a UIImage from the CGImageRef
    UIImage *uiimage = [UIImage imageWithCGImage:cgimg
                                           scale:1.0f
                                     orientation:ui_orientation([ciimage properties])];
    CGImageRelease(cgimg);

    // Use the UIImage in a UIImageView
    imageView.image = uiimage;
2. Saving the image to the photo library:
    // Create a CGImage from the CIImage
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgimage = [cpu_context createCGImage:outputImage
                                           fromRect:[outputImage extent]];

    // Add the CGImage to the photo library
    ALAssetsLibrary *library = [ALAssetsLibrary new];
    [library writeImageToSavedPhotosAlbum:cgimage
                                 metadata:[outputImage properties]
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        CGImageRelease(cgimage);
    }];
2.4 CIDetector and CIFeature
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                  context:self.imageContext
                                                  options:options];
    NSArray *faces = [faceDetector featuresInImage:coreImage options:nil];
    for (CIFaceFeature *face in faces) {
        coreImage = [CIFilter filterWithName:@"CISourceOverCompositing"
                               keysAndValues:kCIInputImageKey, [self makeBoxForFace:face],
                                             kCIInputBackgroundImageKey, coreImage, nil].outputImage;
    }
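The makeBoxForFace: helper above comes from the sample project and is not defined here. As a purely hypothetical sketch (both the method body and the colors are my assumptions), it could return a translucent overlay cropped to the detected face's bounds:

```objc
// Hypothetical sketch of makeBoxForFace: (not the sample's actual code):
// return a translucent red CIImage covering the face's bounding box.
- (CIImage *)makeBoxForFace:(CIFaceFeature *)face {
    CIImage *overlay = [CIImage imageWithColor:
        [CIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:0.4]];
    return [overlay imageByCroppingToRect:face.bounds];
}
```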
3 Notes and pitfalls
1 Core Image is very efficient on iOS, but filtering and rendering can still impact the main thread. Run Core Image filtering and rendering on a background thread, and return to the main thread to update the UI once those operations finish.
    dispatch_async(
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
        ^(void) {
            // CGImageRef cgImage = [self autoAdjustImage];
            NSArray *filters;

            // Create Core Image
            CGImageRef cgImg = self.imageView.image.CGImage;
            CIImage *coreImage = [CIImage imageWithCGImage:cgImg];

            // Iterate through all of our filters and apply
            // them to the CIImage
            for (CIFilter *filter in filters) {
                [filter setValue:coreImage forKey:kCIInputImageKey];
                coreImage = filter.outputImage;
            }

            // Create a new CGImageRef by rendering through CIContext.
            // This won't slow down the main thread since we're in a
            // background dispatch queue.
            CGImageRef newImg = [self.imageContext createCGImage:coreImage
                                                        fromRect:[coreImage extent]];

            dispatch_async(dispatch_get_main_queue(), ^(void) {
                // Update our image view on the main thread.
                // You can also perform any other UI updates needed
                // here, such as hiding activity spinners.
                self.imageView.image = [UIImage imageWithCGImage:newImg];
                CGImageRelease(newImg); // release the CGImageRef created above
                [self.adjustSpinner stopAnimating];
                [sender setEnabled:YES];
            });
        });
The code above uses GCD to run the filtering and rendering asynchronously, precisely to avoid blocking the main thread; once the rendered image is available, it returns to the main thread to update the UI. (The complete program is linked at the end of this article.)
2 Do not apply a filter repeatedly, not even the same filter twice. A filter's output image carries the whole filter chain, and at render time that chain is applied to the original image data, so stacked duplicates cause problems. For example, suppose a CIImage has a sepia filter with intensity 0.5 applied, and a slider then changes the intensity to 0.6. The 0.6 filter should be applied to a fresh CIImage; otherwise the old CIImage will contain both a 0.5 and a 0.6 sepia filter in its chain, when we only want the 0.6 one. Keep this firmly in mind when writing your program.
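A minimal sketch of this rule (the ivar _originalImage, holding the untouched source CIImage, and the action method name are my assumptions) rebuilds the chain from the original image on every slider change instead of filtering the already-filtered output:

```objc
// _originalImage is an assumed ivar holding the unfiltered source CIImage.
// On every slider change, start the chain from the original image again.
- (IBAction)intensityChanged:(UISlider *)slider {
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:_originalImage forKey:kCIInputImageKey]; // fresh input every time
    [sepia setValue:[NSNumber numberWithFloat:slider.value]
             forKey:@"inputIntensity"];
    CIImage *result = sepia.outputImage; // chain holds exactly one sepia filter
    // ... render `result` on a background queue as shown above ...
}
```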
3 If the app applies many filters and the values change quickly, for example driven by a slider, then before using a new slider value you must first check whether that filter is still busy: if it is still producing the previous rendered image, skip this slider update. This is important; getting it wrong often leads to crashes and memory leaks.
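One simple way to implement that check (a sketch only; the _rendering BOOL ivar and the action method name are my assumptions) is a busy flag that drops slider updates while a render is in flight:

```objc
// _rendering is an assumed BOOL ivar, initially NO.
- (IBAction)sliderMoved:(UISlider *)slider {
    if (_rendering) {
        return; // the previous render is still running; drop this update
    }
    _rendering = YES;
    float value = slider.value; // capture the value for the background work
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        // ... configure the filter with `value` and render with CIContext ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // update the UI, then allow the next slider update
            _rendering = NO;
        });
    });
}
```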
4 Summary
The Core Image processing flow:
1: Create a new CIImage.
2: Create a new CIFilter and set its input values via key-value coding; some inputs have defaults, others do not and must be set by the programmer.
3: Get the output image from the CIFilter. If there is a filter chain, pass the output image as the input of the next filter and go back to step 2; at the end of the chain, use a CIContext to render the CIImage. The context can be CPU-based or GPU-based: a CPU-based context produces a CGImageRef, while a GPU-based one draws the result on screen with OpenGL ES. The default is CPU-based.
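The three steps above can be sketched end to end (a minimal sketch; inputImage is assumed to be an existing CIImage, and the choice of a second filter is mine):

```objc
// Step 1: start from a CIImage (assumed to already exist).
CIImage *current = inputImage;

// Step 2: configure a filter via key-value coding.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:current forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];

// Step 3: chain the output into a second filter, then render
// at the end of the chain.
CIFilter *hue = [CIFilter filterWithName:@"CIHueAdjust"];
[hue setValue:sepia.outputImage forKey:kCIInputImageKey];
[hue setValue:[NSNumber numberWithFloat:(float)M_PI] forKey:@"inputAngle"];
CIImage *final = hue.outputImage;

// A CPU-based context produces a CGImageRef.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef rendered = [context createCGImage:final fromRect:[final extent]];
```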
When using Core Image, remember that a CIImage object does not actually touch the image data until the moment you render it with a CIContext. Also remember that it is best to do the image processing in the background and then update the UI on the main thread.
If you get the chance, read the Core Image and AVFoundation chapters of iOS 5 Core Frameworks. Here is a link to the book's examples, which I found very helpful for understanding: http://ioscoreframeworks.com/download/.
You can also refer to the WWDC videos: Session 510 - Getting Started with Core Image, and Session 511 -