Implementing the iOS Frosted-Glass Effect and Gaussian Image Blur

I. Implementing the frosted-glass effect

1. Since iOS 7.0, many of Apple's system interfaces, such as Notification Center, use a frosted-glass (blur) effect to polish the UI. Even on iOS 7.0 and earlier there is a system class that can produce this effect: UIToolbar.

/* iOS 7.0
 Bar styles (enum):
 UIBarStyleDefault          = 0,
 UIBarStyleBlack            = 1,
 UIBarStyleBlackOpaque      = 1, // Deprecated. Use UIBarStyleBlack
 UIBarStyleBlackTranslucent = 2, // Deprecated. Use UIBarStyleBlack and set the translucent property to YES
*/

UIImageView *bgImgView = [[UIImageView alloc] initWithFrame:self.view.bounds];
bgImgView.image = [UIImage imageNamed:@"testPicture"];
[self.view addSubview:bgImgView];

UIToolbar *toolbar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, bgImgView.frame.size.width*0.5, bgImgView.frame.size.height)];
toolbar.barStyle = UIBarStyleBlackTranslucent;
[bgImgView addSubview:toolbar];


2. iOS 8.0 added UIVisualEffectView, which produces the same frosted-glass effect as the UIToolbar approach above. It is very efficient and takes only a few lines of code. UIVisualEffect is an abstract class that cannot be used directly; you work with its concrete subclass UIBlurEffect (and UIVibrancyEffect) together with UIVisualEffectView. UIBlurEffect has a single class method that quickly creates a blur effect, taking an enum parameter that sets the blur style; UIVisualEffectView adds two properties and two initializers for attaching the created effect to a view. The flow is: create a UIBlurEffect with the desired style, pass it to UIVisualEffectView's initializer, then set the effect view's frame (or add constraints) and add it as a subview of the view you want blurred. The result looks the same as above. Usage:

iOS 8.0

 Blur styles (enum):

 UIBlurEffectStyleExtraLight,

 UIBlurEffectStyleLight,

 UIBlurEffectStyleDark

 

 UIBlurEffect *effect = [UIBlurEffect effectWithStyle:UIBlurEffectStyleDark];

 UIVisualEffectView *effectView = [[UIVisualEffectView alloc] initWithEffect:effect];

 effectView.frame = CGRectMake(0, 0, bgImgView.frame.size.width*0.5, bgImgView.frame.size.height);

 [bgImgView addSubview:effectView];


II. Gaussian image blur

1. CoreImage:

The Core Image API has been available since iOS 5.0 and lives in CoreImage.framework. On both iOS and OS X, Core Image provides a large number of filters: over 120 on OS X and over 90 on iOS.

+ (UIImage *)coreBlurImage:(UIImage *)image withBlurNumber:(CGFloat)blur
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    // Set up the Gaussian blur filter
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(blur) forKey:@"inputRadius"];
    // Render the blurred image
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef outImage = [context createCGImage:result fromRect:[result extent]];
    UIImage *blurImage = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return blurImage;
}

 

2. vImage (recommended)

vImage is part of Accelerate.framework, so you need to import the Accelerate header. Accelerate is a library for digital signal processing and for the vector and matrix math used in image processing. Since an image can be viewed as a matrix of pixel data, the efficient math APIs in Accelerate make all kinds of image processing convenient. The blur algorithm below uses the vImageBoxConvolve_ARGB8888 function.

 

+ (UIImage *)boxblurImage:(UIImage *)image withBlurNumber:(CGFloat)blur
{
    if (blur < 0.f || blur > 1.f) {
        blur = 0.5f;
    }
    int boxSize = (int)(blur * 40);
    boxSize = boxSize - (boxSize % 2) + 1; // the kernel size must be odd
    CGImageRef img = image.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;
    // Get the pixel data from the CGImage
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
    // Describe the input buffer from the CGImage's properties
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);
    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL)
        NSLog(@"No pixelbuffer");
    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data, outBuffer.width, outBuffer.height, 8, outBuffer.rowBytes, colorSpace, kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
    // Clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);
    return returnImage;
}

 

Differences:

Effect: the Core Image version leaves a white fringe around the edges of the blurred image, while the vImage version has no such problem.

Performance: image blurring is computationally expensive; vImage performs best, which is why most image-blur code uses it.
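As a side note on the white-fringe issue: a common workaround (not part of the original post) is to clamp the image's edge pixels before blurring, then crop the output back to the original extent. A sketch, assuming iOS 10+ for imageByClampingToExtent (on earlier systems the CIAffineClamp filter does the same thing):

```objc
// Sketch: avoid CIGaussianBlur's white fringe by clamping edges first.
CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
CIImage *clamped = [inputImage imageByClampingToExtent]; // extend edge pixels infinitely
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:clamped forKey:kCIInputImageKey];
[filter setValue:@(blur) forKey:@"inputRadius"];
// Crop the (now infinite) output back to the original image's extent.
CIImage *result = [[filter valueForKey:kCIOutputImageKey] imageByCroppingToRect:inputImage.extent];
```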

Calling the method:

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, screenWidth, screenHeight)];
imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.image = [UIImage boxblurImage:image withBlurNumber:0.5];
imageView.clipsToBounds = YES;
[self.view addSubview:imageView];

3. GPUImage

GPUImage is an open-source library that uses the device's GPU to process images in real time and apply all kinds of filter effects.

It can apply filters to the camera feed live; many apps support this kind of real-time filtering.
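The original post does not include a GPUImage code sample; as a sketch, a Gaussian blur with GPUImage might look like the following (class and method names come from the GPUImage library; inputImage is an assumed UIImage variable):

```objc
// Sketch: Gaussian blur with GPUImage (assumes the GPUImage library is integrated, e.g. via CocoaPods).
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
blurFilter.blurRadiusInPixels = 10.0; // blur strength in pixels
[source addTarget:blurFilter];
[blurFilter useNextFrameForImageCapture];
[source processImage];
UIImage *blurred = [blurFilter imageFromCurrentFramebufferWithOrientation:inputImage.imageOrientation];
```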


// Internal core method that implements the frosted-glass effect.
// Parameters: blur radius, tint color, saturation delta factor, mask image.
- (UIImage *)imageBluredWithRadius:(CGFloat)blurRadius
                         tintColor:(UIColor *)tintColor
             saturationDeltaFactor:(CGFloat)saturationDeltaFactor
                         maskImage:(UIImage *)maskImage
{
    CGRect imageRect = { CGPointZero, self.size };
    UIImage *effectImage = self;
    BOOL hasBlur = blurRadius > __FLT_EPSILON__;
    BOOL hasSaturationChange = fabs(saturationDeltaFactor - 1.) > __FLT_EPSILON__;
    if (hasBlur || hasSaturationChange) {
        UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
        CGContextRef effectInContext = UIGraphicsGetCurrentContext();
        CGContextScaleCTM(effectInContext, 1.0, -1.0);
        CGContextTranslateCTM(effectInContext, 0, -self.size.height);
        CGContextDrawImage(effectInContext, imageRect, self.CGImage);
        vImage_Buffer effectInBuffer;
        effectInBuffer.data = CGBitmapContextGetData(effectInContext);
        effectInBuffer.width = CGBitmapContextGetWidth(effectInContext);
        effectInBuffer.height = CGBitmapContextGetHeight(effectInContext);
        effectInBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectInContext);

        UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
        CGContextRef effectOutContext = UIGraphicsGetCurrentContext();
        vImage_Buffer effectOutBuffer;
        effectOutBuffer.data = CGBitmapContextGetData(effectOutContext);
        effectOutBuffer.width = CGBitmapContextGetWidth(effectOutContext);
        effectOutBuffer.height = CGBitmapContextGetHeight(effectOutContext);
        effectOutBuffer.rowBytes = CGBitmapContextGetBytesPerRow(effectOutContext);

        if (hasBlur) {
            CGFloat inputRadius = blurRadius * [[UIScreen mainScreen] scale];
            NSUInteger radius = floor(inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5);
            if (radius % 2 != 1) {
                radius += 1; // force radius to be odd so that the three box-blur methodology works.
            }
            vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, (short)radius, (short)radius, 0, kvImageEdgeExtend);
            vImageBoxConvolve_ARGB8888(&effectOutBuffer, &effectInBuffer, NULL, 0, 0, (short)radius, (short)radius, 0, kvImageEdgeExtend);
            vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, NULL, 0, 0, (short)radius, (short)radius, 0, kvImageEdgeExtend);
        }
        BOOL effectImageBuffersAreSwapped = NO;
        if (hasSaturationChange) {
            CGFloat s = saturationDeltaFactor;
            CGFloat floatingPointSaturationMatrix[] = {
                0.0722 + 0.9278 * s, 0.0722 - 0.0722 * s, 0.0722 - 0.0722 * s, 0,
                0.7152 - 0.7152 * s, 0.7152 + 0.2848 * s, 0.7152 - 0.7152 * s, 0,
                0.2126 - 0.2126 * s, 0.2126 - 0.2126 * s, 0.2126 + 0.7873 * s, 0,
                0,                   0,                   0,                   1,
            };
            const int32_t divisor = 256;
            NSUInteger matrixSize = sizeof(floatingPointSaturationMatrix) / sizeof(floatingPointSaturationMatrix[0]);
            int16_t saturationMatrix[matrixSize];
            for (NSUInteger i = 0; i < matrixSize; ++i) {
                saturationMatrix[i] = (int16_t)roundf(floatingPointSaturationMatrix[i] * divisor);
            }
            if (hasBlur) {
                vImageMatrixMultiply_ARGB8888(&effectOutBuffer, &effectInBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
                effectImageBuffersAreSwapped = YES;
            } else {
                vImageMatrixMultiply_ARGB8888(&effectInBuffer, &effectOutBuffer, saturationMatrix, divisor, NULL, NULL, kvImageNoFlags);
            }
        }
        if (!effectImageBuffersAreSwapped)
            effectImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        if (effectImageBuffersAreSwapped)
            effectImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    // Open an output context for the final image.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, [[UIScreen mainScreen] scale]);
    CGContextRef outputContext = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(outputContext, 1.0, -1.0);
    CGContextTranslateCTM(outputContext, 0, -self.size.height);
    // Draw the base image.
    CGContextDrawImage(outputContext, imageRect, self.CGImage);
    // Draw the blurred image.
    if (hasBlur) {
        CGContextSaveGState(outputContext);
        if (maskImage) {
            CGContextClipToMask(outputContext, imageRect, maskImage.CGImage);
        }
        CGContextDrawImage(outputContext, imageRect, effectImage.CGImage);
        CGContextRestoreGState(outputContext);
    }
    // Apply the tint color.
    if (tintColor) {
        CGContextSaveGState(outputContext);
        CGContextSetFillColorWithColor(outputContext, tintColor.CGColor);
        CGContextFillRect(outputContext, imageRect);
        CGContextRestoreGState(outputContext);
    }
    // Output the finished image and close the context.
    UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
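A hypothetical call site for the category method above (the radius, tint color, and saturation values are illustrative, not from the original post; the method is assumed to live in a UIImage category):

```objc
// Sketch: calling the blur category method on a loaded image.
UIImage *source = [UIImage imageNamed:@"testPicture"];
UIImage *blurred = [source imageBluredWithRadius:10
                                       tintColor:[UIColor colorWithWhite:1.0 alpha:0.3]
                           saturationDeltaFactor:1.8
                                       maskImage:nil];
```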






