3.1 Using Images
To make working with images easier, iOS defines several image classes. UIImage is the image class defined in the UIKit framework; it is a high-level wrapper around image data, and its objects can be created in a number of ways. The Core Graphics framework (also known as Quartz 2D) defines CGImage, which represents a bitmap image; because CGImage is an opaque type, it is normally used through the CGImageRef pointer type.
Besides UIImage and CGImage, the Core Image framework has its own image class, CIImage. CIImage is well suited to image-effect processing, such as applying filters. UIImage, CGImage, and CIImage can be converted into one another; during these conversions you need to watch the memory-management details, especially when converting between CGImage and UIImage, which crosses from a C type to an Objective-C object. Using ARC here can actually make the memory-release issues more subtle; they are discussed step by step later in the book.
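The conversions between the three classes look roughly like the following minimal sketch (the resource name photo.png and the variable names are only examples, not part of any particular project):
// UIImage -> CGImage: the CGImage property returns a CGImageRef owned by the UIImage
UIImage *uiImage = [UIImage imageNamed:@"photo.png"];
CGImageRef cgImage = uiImage.CGImage;
// CGImage -> CIImage
CIImage *ciImage = [CIImage imageWithCGImage:cgImage];
// CIImage -> CGImage: a "create" function follows Core Foundation ownership rules,
// so the returned CGImageRef must be released explicitly even under ARC
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef rendered = [context createCGImage:ciImage fromRect:ciImage.extent];
// CGImage -> UIImage
UIImage *resultImage = [UIImage imageWithCGImage:rendered];
CGImageRelease(rendered);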
3.1.1 Creating Images
If a file named icon.png is stored in the application bundle (as a resource file), the image can be loaded with any of the following pieces of code:
UIImage *image = [UIImage imageNamed:@"icon.png"];
NSString *path = [[NSBundle mainBundle] pathForResource:@"icon" ofType:@"png"];
UIImage *image = [UIImage imageWithContentsOfFile:path];
or
UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
NSString *path = [[NSBundle mainBundle] pathForResource:@"icon" ofType:@"png"];
NSData *data = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [UIImage imageWithData:data];
or
UIImage *image = [[UIImage alloc] initWithData:data];
If the icon.png file is stored in the Documents directory of the application sandbox, the image can be loaded with any of the following pieces of code:
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [[paths lastObject] stringByAppendingPathComponent:@"icon.png"];
UIImage *image = [UIImage imageWithContentsOfFile:path];
or
UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [[paths lastObject] stringByAppendingPathComponent:@"icon.png"];
NSData *data = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [UIImage imageWithData:data];
or
UIImage *image = [[UIImage alloc] initWithData:data];
In the code above, the statements that obtain the Documents directory of the application sandbox and build the file path are:
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [[paths lastObject] stringByAppendingPathComponent:@"icon.png"];
If the icon.png file is hosted on a server, for example at http://xxx/icon.png, the UIImage object can be created in the following ways:
NSURL *url = [NSURL URLWithString:@"http://xxx/icon.png"];
NSData *data = [[NSData alloc] initWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
or
UIImage *image = [[UIImage alloc] initWithData:data];
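Note that initWithContentsOfURL: downloads the data synchronously and blocks the calling thread, so in a real application the download would normally be moved off the main thread. A minimal sketch using GCD (the URL and the imageView variable are only placeholders for illustration):
NSURL *url = [NSURL URLWithString:@"http://xxx/icon.png"];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Download on a background queue so the UI stays responsive
    NSData *data = [NSData dataWithContentsOfURL:url];
    UIImage *image = [UIImage imageWithData:data];
    dispatch_async(dispatch_get_main_queue(), ^{
        // UIKit views must only be touched on the main thread
        imageView.image = image;
    });
});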
3.1.2 Picking from the Device Photo Library or Capturing from the Camera
UIKit provides an image picker, UIImagePickerController. It can not only capture pictures with the camera, but also pick them from the photo albums or the camera roll. The photo albums and the camera roll are not the same thing: the albums include the camera roll and let you browse all pictures, while the camera roll contains only pictures taken with the camera or captured as screenshots.
The key property of UIImagePickerController is sourceType, which takes one of three constants defined in the UIImagePickerControllerSourceType enumeration:
- UIImagePickerControllerSourceTypePhotoLibrary: the picture comes from the photo albums
- UIImagePickerControllerSourceTypeCamera: the picture comes from the camera
- UIImagePickerControllerSourceTypeSavedPhotosAlbum: the picture comes from the camera roll
The delegate of a UIImagePickerController must adopt the UIImagePickerControllerDelegate protocol, which defines the following two methods:
- imagePickerController:didFinishPickingMediaWithInfo: called when picking has finished
- imagePickerControllerDidCancel: called when picking is cancelled
The following example shows the details.
ViewController.h
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
<UIImagePickerControllerDelegate,UINavigationControllerDelegate>
@property (strong, nonatomic) UIImagePickerController *imagePicker;
@property (retain, nonatomic) IBOutlet UIImageView *imageView;
- (IBAction)pickPhotoLibrary:(id)sender;
- (IBAction)pickPhotoCamera:(id)sender;
@end
The header declares that the class adopts the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols; UINavigationControllerDelegate is also required because the picker's delegate property expects its delegate to conform to it. UINavigationControllerDelegate defines two methods:
-navigationController:willShowViewController:animated:
-navigationController:didShowViewController:animated:
These two methods are called back just before and just after the picking interface appears. The property UIImagePickerController *imagePicker holds the image picker controller.
The main code of ViewController.m is as follows:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)dealloc {
[_imageView release];
[super dealloc];
}
- (IBAction)pickPhotoLibrary:(id)sender {
if (_imagePicker == nil) {
_imagePicker = [[UIImagePickerController alloc] init];
}
_imagePicker.delegate = self;
_imagePicker.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
[self presentViewController:_imagePicker animated:YES completion:nil];
}
- (IBAction)pickPhotoCamera:(id)sender {
//The UIImagePickerController class method isSourceTypeAvailable: checks whether the device supports the camera source (UIImagePickerControllerSourceTypeCamera). It returns YES when running on an iOS device with a camera and NO when running in the simulator.
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
if (_imagePicker == nil) {
_imagePicker = [[UIImagePickerController alloc] init];
}
_imagePicker.delegate = self;
// Set the image source of the UIImagePickerController to the camera
_imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
// Present the system-provided image picker interface
[self presentViewController:_imagePicker animated:YES completion:nil];
} else {
NSLog(@"照相机不可用。");
}
}
- (void) imagePickerControllerDidCancel: (UIImagePickerController *) picker {
_imagePicker.delegate = nil;
[self dismissViewControllerAnimated:YES completion:nil];
}
- (void) imagePickerController: (UIImagePickerController *) picker
didFinishPickingMediaWithInfo: (NSDictionary *) info {
/**
Get the image data from the info dictionary. If a picture was picked, info contains the original (and possibly the edited) image; if a video was captured, it contains the path where the video is stored.
The UIImagePickerControllerOriginalImage key returns the original image. Other commonly used keys are:
UIImagePickerControllerMediaType        the media type chosen by the user
UIImagePickerControllerOriginalImage    the original image
UIImagePickerControllerEditedImage      the edited image
UIImagePickerControllerCropRect         the cropping rectangle
UIImagePickerControllerMediaURL         the path where the video is stored
*/
UIImage *originalImage = (UIImage *) [info objectForKey:
UIImagePickerControllerOriginalImage];
self.imageView.image = originalImage;
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
_imagePicker.delegate = nil;
[self dismissViewControllerAnimated:YES completion:nil];
}
3.2 The Core Image Framework
Core Image is a very important image-processing framework. It is used to process and analyze images in near real time, and it operates on image data coming from frameworks such as Core Graphics, Core Video, and Image I/O, rendering with either the GPU or the CPU. Core Image hides many low-level details such as OpenGL ES and GCD (Grand Central Dispatch).
The most important classes in the Core Image framework are:
- CIImage: the image class of the Core Image framework
- CIContext: the context object; all image processing takes place in a CIContext, which renders CIImage objects through Quartz 2D or OpenGL
- CIFilter: the filter class; it contains a dictionary structure that defines the attributes of each kind of filter
- CIDetector: the feature-detection class; together with CIFaceFeature it can locate faces and the positions of the eyes and mouth
The most commonly used class in the Core Image framework is CIImage. It provides several initializers and factory methods (class methods called directly on the class name):
- +imageWithCGImage: creates an image object from a CGImageRef
- +imageWithContentsOfURL: creates an image object from a file URL
- +imageWithData: creates an image object from an NSData object in memory
- -initWithCGImage: creates an image object from a CGImageRef in memory
- -initWithContentsOfURL: creates an image object from a file URL
- -initWithData: creates an image object from an NSData object in memory
On an iOS device, a CIImage can come from four different sources:
- loaded from the application bundle (a resource file)
- loaded from the application sandbox directory
- downloaded from a server
- picked from the device photo library or captured with the camera
These are used in much the same way as with UIImage, so the details are not repeated here; a short sketch follows.
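As a minimal sketch (assuming a resource named icon.png in the application bundle), a CIImage could be created like this:
// From a file URL in the application bundle
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"icon" withExtension:@"png"];
CIImage *ciImage1 = [CIImage imageWithContentsOfURL:fileURL];
// From a UIImage that was picked, downloaded, or loaded earlier
UIImage *uiImage = [UIImage imageNamed:@"icon.png"];
CIImage *ciImage2 = [CIImage imageWithCGImage:uiImage.CGImage];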
3.3 Filters
Filters were originally attachments for camera lenses used to adjust color or add effects, such as UV filters, polarizers, star filters, and various color filters. In image-editing software, "filter" is also the general term for tools that create special effects; Photoshop, for example, groups its filters into twelve categories such as Stylize, Brush Strokes, Blur, Distort, Sharpen, Video, Sketch, Texture, Pixelate, Render, Artistic, and Other.
In iOS, the filter APIs are defined by the Core Image framework and are one of its most important features.
3.3.1 Using Filters
iOS provides more than 90 filters, and OS X 10.8 provides more than 120. Because there are so many of them, each with many parameters and attributes, they can be a little cumbersome to use.
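To explore what is available at runtime, CIFilter can list the names of the built-in filters and describe the attributes of each one, as in this minimal sketch:
// List the names of all built-in filters
NSArray *filterNames = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
NSLog(@"%lu filters available", (unsigned long)[filterNames count]);
// Inspect the attributes (input parameters, value ranges, defaults) of a single filter
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
NSLog(@"%@", [sepia attributes]);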
Using a filter generally involves three steps:
- create the CIFilter object
- set the filter parameters
- obtain the output
Sample code:
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *cImage = [CIImage imageWithCGImage:[imageView.image CGImage]];
// Create the filter object with filterWithName:. Alternatively, filterWithName:keysAndValues:
// creates the filter and sets its parameters at the same time, for example:
// CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert" keysAndValues:@"inputImage", cImage, nil];
CIFilter *invert = [CIFilter filterWithName:@"CIColorInvert"];
// Reset the filter to its default parameter values. A filter has many parameters; rather than setting each one, setDefaults supplies sensible defaults
[invert setDefaults];
// Set the input image, the one parameter that must always be provided
[invert setValue:cImage forKey:@"inputImage"];
// Obtain the output CIImage object
// You can also call the filter's outputImage method: CIImage *result = [invert outputImage];
CIImage *result = [invert valueForKey:@"outputImage"];
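The output CIImage is only a recipe for the processed image; to display it, it still has to be rendered through the CIContext created above and wrapped in a UIImage. A minimal sketch (reusing the same imageView as above):
// Render the filtered CIImage into a CGImage and wrap it in a UIImage
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);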
3.3.2 Example: Sepia Tone and Gaussian Blur Filters
The following example shows filters in action. The segmented control at the bottom of the screen switches between two filters (sepia tone and Gaussian blur). With the "Sepia Tone" segment selected, dragging the slider changes the intensity of the tone; with the "Gaussian Blur" segment selected, dragging the slider changes the blur radius.
The ViewController.h file:
#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
{
int flag; // 0 = CISepiaTone, 1 = CIGaussianBlur
}
@property (retain, nonatomic) IBOutlet UIImageView *imageView;
@property (retain, nonatomic) IBOutlet UISlider *slider;
@property (retain, nonatomic) UIImage *image;
@property (retain, nonatomic) IBOutlet UILabel *label;
- (IBAction)changeValue:(id)sender;
- (IBAction)segmentSelected:(id)sender;
@end
The ViewController.m file:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
_image = [UIImage imageNamed:@"SkyDrive340.png"];
_imageView.image = _image;
flag = 0;
_label.text = @"";
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)dealloc {
[_imageView release];
[_slider release];
[_label release];
[super dealloc];
}
- (IBAction)changeValue:(id)sender {
if (flag == 0) {
[self filterSepiaTone];
} else if (flag == 1) {
[self filterGaussianBlur];
}
}
- (IBAction)segmentSelected:(id)sender {
UISegmentedControl * seg = (UISegmentedControl*)sender;
if (seg.selectedSegmentIndex == 0) { // sepia tone
flag = 0;
} else { // Gaussian blur
flag = 1;
}
}
- (void)filterSepiaTone {
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *cImage = [CIImage imageWithCGImage:[_image CGImage]];
CIImage *result;
CIFilter *sepiaTone = [CIFilter filterWithName: @"CISepiaTone"];
[sepiaTone setValue: cImage forKey: @"inputImage"];
double value = [_slider value];
NSString *text = [[NSString alloc] initWithFormat:@"Sepia Tone Intensity: %.2f", value];
_label.text = text;
[text release];
[sepiaTone setValue: [NSNumber numberWithFloat: value]
forKey: @"inputIntensity"];
result = [sepiaTone valueForKey:@"outputImage"];
CGImageRef imageRef = [context createCGImage:result fromRect:CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height)];
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
_imageView.image = image;
CFRelease(imageRef);
[image release];
flag = 0;
}
- (void)filterGaussianBlur {
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *cImage = [CIImage imageWithCGImage:[_image CGImage]];
CIImage *result;
CIFilter *gaussianBlur = [CIFilter filterWithName: @"CIGaussianBlur"];
[gaussianBlur setValue: cImage forKey: @"inputImage"];
double value = [_slider value];
value *=10;
//NSLog(@"高斯模糊 Radius : %.2f",value);
NSString *text =[[NSString alloc] initWithFormat:@"高斯模糊 Radius : %.2f",value];
_label.text = text;
[text release];
[gaussianBlur setValue: [NSNumber numberWithFloat: value]
forKey: @"inputRadius"];
result = [gaussianBlur valueForKey:@"outputImage"];
CGImageRef imageRef = [context createCGImage:result fromRect:CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height)];
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
_imageView.image = image;
CFRelease(imageRef);
[image release];
flag = 1;
}
@end
3.4 Facial Recognition
Facial recognition is a computer technique that identifies people by analyzing and comparing the visual features of their faces; it is used mainly for identity verification. With fast face detection, faces can be located in surveillance video in real time and compared against a face database, enabling rapid identification.
3.4.1 Developing Facial Recognition
Facial recognition generally involves three steps:
1. Build a faceprint database. Face images are collected with a camera, converted into faceprint codes, and stored in the database.
2. Capture the current face image. The face is photographed with a camera and the resulting image is converted into a faceprint code.
3. Compare the current faceprint code against the faceprint codes in the database.
Since iOS 5 there has been an API for face detection: the CIDetector class, a feature detector in the Core Image framework that is mainly used to detect faces; through it you can also obtain the positions of the eyes and mouth. CIDetector does not extract faceprint codes, however; that requires far more complex algorithms. In other words, CIDetector can find the faces in a picture, but it cannot tell you whose faces they are. Doing so would require a faceprint database and comparing the faceprint extracted from the current face against it, which is beyond the scope of this book.
Before iOS 5, applications in this area could use OpenCV or Face.com. OpenCV (http://opencv.org/) is an open-source image processing and recognition library written in C/C++ that provides many general-purpose algorithms for image processing and computer vision. Face.com offered an online face recognition service: developers registered for a key on the Face.com website and could then use its REST web service API to submit face pictures and receive the recognition results.
3.4.2 Example: Ape or Little Girl?
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
@interface ViewController : UIViewController
@property (retain, nonatomic) IBOutlet UIImageView *inputImageView;
@property (retain, nonatomic) IBOutlet UIImageView *outputImageView;
@property (retain, nonatomic) IBOutlet UIButton *button;
- (IBAction)detect:(id)sender;
@end
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
UIImage *image = [UIImage imageNamed:@"faces1.png"];
_inputImageView.image = image;
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
}
- (void)dealloc {
[_inputImageView release];
[_outputImageView release];
[_button release];
[super dealloc];
}
- (IBAction)detect:(id)sender {
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *imageInput = [_inputImageView image];
CIImage *image = [CIImage imageWithCGImage:imageInput.CGImage];
//Set the detection options
//The options are passed in an NSDictionary; the CIDetectorAccuracy key sets the detection accuracy, and CIDetectorAccuracyHigh means "high" accuracy
NSDictionary *param = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
forKey:CIDetectorAccuracy];
//Create a CIDetector and specify the detector type
//Currently CIDetectorTypeFace (face detection) is the only available type
CIDetector* faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
context:context options:param];
//Run the detection
//The results are returned in an NSArray whose elements are CIFaceFeature objects
NSArray *detectResult = [faceDetector featuresInImage:image];
//Create a UIView on which the face, eyes, and mouth positions will be marked
UIView *resultView = [[UIView alloc] initWithFrame:_inputImageView.frame];
[self.view addSubview:resultView];
//Each CIFaceFeature taken from the NSArray represents one detected face
for(CIFaceFeature* faceFeature in detectResult) {
//Face
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [UIColor orangeColor].CGColor;
[resultView addSubview:faceView];
[faceView release];
//Left eye
//faceFeature.hasLeftEyePosition indicates whether the left eye was detected; there are similar properties for the right eye and the mouth
if (faceFeature.hasLeftEyePosition) {
UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 5, 5)];
//Position of the left eye
[leftEyeView setCenter:faceFeature.leftEyePosition];
leftEyeView.layer.borderWidth = 1;
leftEyeView.layer.borderColor = [UIColor redColor].CGColor;
[resultView addSubview:leftEyeView];
[leftEyeView release];
}
//Right eye
if (faceFeature.hasRightEyePosition) {
UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 5, 5)];
[rightEyeView setCenter:faceFeature.rightEyePosition];
rightEyeView.layer.borderWidth = 1;
rightEyeView.layer.borderColor = [UIColor redColor].CGColor;
[resultView addSubview:rightEyeView];
[rightEyeView release];
}
//Mouth
if (faceFeature.hasMouthPosition) {
UIView* mouthView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 10, 5)];
[mouthView setCenter:faceFeature.mouthPosition];
mouthView.layer.borderWidth = 1;
mouthView.layer.borderColor = [UIColor redColor].CGColor;
[resultView addSubview:mouthView];
[mouthView release];
}
}
//Mirror the overlay vertically: Core Image places the origin in the lower-left corner while UIKit places it in the upper-left, so the coordinates must be flipped
[resultView setTransform:CGAffineTransformMakeScale(1, -1)];
[resultView release];
//Display the first detected face in the lower half of the screen
if ([detectResult count] > 0)
{
//Crop the CIImage to the bounds of the detected face
CIImage *faceImage = [image imageByCroppingToRect:[[detectResult objectAtIndex:0] bounds]];
CGImageRef faceCGImage = [context createCGImage:faceImage fromRect:faceImage.extent];
UIImage *face = [UIImage imageWithCGImage:faceCGImage];
CGImageRelease(faceCGImage); // createCGImage: returns an owned reference that must be released
self.outputImageView.image = face;
[self.button setTitle:[NSString stringWithFormat:@"Faces detected: %i",
(int)[detectResult count]] forState:UIControlStateNormal];
}
}
@end