Tutorial: Detecting When A User Blows Into The Mic

This article describes a technique for detecting when a user blows into the built-in microphone of an iOS device: using the AVAudioRecorder class together with a low pass filter that reduces high-frequency noise, the characteristic low-frequency sound can be recognized reliably.

http://mobileorchard.com/tutorial-detecting-when-a-user-blows-into-the-mic/


If, a couple of years back, you’d told me that people would expect to be able to shake their phone or blow into the mic to make something happen I would have laughed. And here we are.

Detecting a shake gesture is straightforward, all the more so in 3.0 with the introduction of motion events.

Detecting when a user blows into the microphone is a bit more difficult. In this tutorial we’ll create a simple single-view app that writes a log message to the console when a user blows into the mic.

Source/Github

The code for this tutorial is available on GitHub. You can either clone the repository or download this zip.

Overview

The job of detecting when a user blows into the microphone is separable into two parts: (1) taking input from the microphone and (2) listening for a blowing sound.

We’ll use the new-in-3.0 AVAudioRecorder class to grab the mic input. Choosing AVAudioRecorder lets us use Objective-C without — as other options require — dropping down to C.

The noise/sound of someone blowing into the mic is made up of low-frequency sounds. We’ll use a low pass filter to reduce the high frequency sounds coming in on the mic; when the level of the filtered signal spikes we’ll know someone’s blowing into the mic.
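
Concretely, the filter we’ll implement is a one-pole recursive low pass filter, i.e. an exponential moving average over the level readings. Here’s the recurrence in isolation (currentLevel is a stand-in for whatever level reading you feed it; lowPassResults and ALPHA are the names the later code uses):

// One-pole low pass filter: with ALPHA small (we'll use 0.05), the output
// tracks slow changes in level and suppresses brief high-frequency spikes.
lowPassResults = ALPHA * currentLevel + (1.0 - ALPHA) * lowPassResults;

The smaller ALPHA is, the more heavily past samples are weighted and the smoother the output.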

Creating The Project

Launch Xcode and create a new View-Based iPhone application called MicBlow:

  1. Create a new project using File > New Project… from Xcode’s menu
  2. Select View-based Application from the iPhone OS > Application section, click Choose…
  3. Name the project MicBlow and click Save

Adding The AVFoundation Framework

In order to use the SDK’s AVAudioRecorder class, we’ll need to add the AVFoundation framework to the project:

  1. Expand the Targets branch in the Groups & Files panel of the project
  2. Control-click or right-click the MicBlow item
  3. Choose Add > Existing Frameworks…
  4. Click the + button at the bottom left beneath Linked Libraries
  5. Choose AVFoundation.framework and click Add
  6. AVFoundation.framework will now be listed under Linked Libraries. Close the window

Next, we’ll import the AVFoundation headers in our view controller’s interface file and set up an AVAudioRecorder instance variable:

  1. Expand the MicBlow project branch in the Groups & Files panel of the project
  2. Expand the Classes folder
  3. Edit MicBlowViewController.h by selecting it
  4. Update the file. Changes are bold:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController {
	AVAudioRecorder *recorder;
}

@end

To save a step later, we also imported the CoreAudioTypes headers; we’ll need some of its constants when we set up the AVAudioRecorder.

Taking Input From The Mic

We’ll set everything up and start listening to the mic in viewDidLoad:

  1. Uncomment the boilerplate viewDidLoad method
  2. Update it as follows. Changes are bold:

- (void)viewDidLoad {
	[super viewDidLoad];

  	NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];

  	NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
  	  	[NSNumber numberWithFloat: 44100.0],                 AVSampleRateKey,
  	  	[NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
  	  	[NSNumber numberWithInt: 1],                         AVNumberOfChannelsKey,
   	  	[NSNumber numberWithInt: AVAudioQualityMax],         AVEncoderAudioQualityKey,
  	  nil];

  	NSError *error = nil;

  	recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

  	if (recorder) {
  		[recorder prepareToRecord];
  		recorder.meteringEnabled = YES;
  		[recorder record];
  	} else
  		NSLog(@"%@", [error description]);

}

The primary function of AVAudioRecorder is, as the name implies, to record audio. As a secondary function it provides audio-level information. So, here we discard the audio input by dumping it to the /dev/null bit bucket (while I can’t find any documentation to confirm it, the consensus seems to be that /dev/null behaves just as it does on any Unix) and explicitly turn on audio metering.
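
If recording to /dev/null makes you nervous, a sketch of an alternative, using only standard Foundation calls, is to point the recorder at a throwaway file in the temp directory instead (the micblow.caf name is arbitrary):

// Alternative to /dev/null: record to a scratch file we never read back.
NSURL *url = [NSURL fileURLWithPath:
	[NSTemporaryDirectory() stringByAppendingPathComponent:@"micblow.caf"]];

Everything else stays the same; the file simply grows until the OS clears the temp directory.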

Note: if you’re adapting the code for your own use, be sure to send the prepareToRecord (or record) message before setting the meteringEnabled property, or the audio level metering won’t work.

Remember to release the recorder in dealloc. Changes are bold:

- (void)dealloc {
  	[recorder release];
  	[super dealloc];
}

Sampling The Audio Level

We’ll use a timer to check the audio levels approximately 30 times a second. Add an NSTimer instance variable and a declaration for its callback method in MicBlowViewController.h. Changes are bold:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController {
	AVAudioRecorder *recorder;
	NSTimer *levelTimer;
}

- (void)levelTimerCallback:(NSTimer *)timer;

@end

Update the .m file’s viewDidLoad to enable the timer. Changes are bold:

- (void)viewDidLoad {
	[super viewDidLoad];

  	NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];

  	NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
  	  	[NSNumber numberWithFloat: 44100.0],                 AVSampleRateKey,
  	  	[NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
  	  	[NSNumber numberWithInt: 1],                         AVNumberOfChannelsKey,
   	  	[NSNumber numberWithInt: AVAudioQualityMax],         AVEncoderAudioQualityKey,
  	  nil];

  	NSError *error = nil;

  	recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

  	if (recorder) {
  		[recorder prepareToRecord];
  		recorder.meteringEnabled = YES;
  		[recorder record];
		levelTimer = [[NSTimer scheduledTimerWithTimeInterval: 0.03 target: self selector: @selector(levelTimerCallback:) userInfo: nil repeats: YES] retain];
  	} else
  		NSLog(@"%@", [error description]);

}

For now, we’ll just sample the audio input level directly, with no filtering. Add the implementation of levelTimerCallback: to the .m file:

- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];
	NSLog(@"Average input: %f Peak input: %f", [recorder averagePowerForChannel:0], [recorder peakPowerForChannel:0]);
}

Sending the updateMeters message refreshes the average and peak power meters. The meters use a logarithmic scale, with -160 dB being complete silence and zero being maximum input.
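
To build some intuition for that scale, here’s the decibel-to-linear conversion we’ll lean on in the next section, pulled out into a standalone helper (the dbToLinear name is mine, not the SDK’s):

#include <math.h>

// Convert AVAudioRecorder's decibel reading (-160 .. 0) into a linear
// 0..1 amplitude: amplitude = 10^(dB / 20), i.e. pow(10, 0.05 * dB).
static double dbToLinear(double decibels) {
	return pow(10.0, 0.05 * decibels);
}

// dbToLinear(0.0)    => 1.0    (maximum input)
// dbToLinear(-20.0)  => 0.1
// dbToLinear(-160.0) => 1e-8   (effectively silence)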

Don’t forget to invalidate and release the timer in dealloc. Changes are bold:

- (void)dealloc {
	[levelTimer invalidate];
	[levelTimer release];
	[recorder release];
  	[super dealloc];
}

Listening For A Blowing Sound

As mentioned in the overview, we’ll be using a low pass filter to diminish high-frequency sounds’ contribution to the level. The algorithm keeps a running result that incorporates past sample input; we’ll need an instance variable to hold it. Update the .h file. Changes are bold:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>

@interface MicBlowViewController : UIViewController {
	AVAudioRecorder *recorder;
	NSTimer *levelTimer;
	double lowPassResults;
}

- (void)levelTimerCallback:(NSTimer *)timer;

@end

Implement the algorithm by replacing the levelTimerCallback: method with:

- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];

	const double ALPHA = 0.05;
	double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
	lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults;	

	NSLog(@"Average input: %f Peak input: %f Low pass results: %f", [recorder averagePowerForChannel:0], [recorder peakPowerForChannel:0], lowPassResults);
}

Each time the timer’s callback method fires, the lowPassResults level variable is recalculated. As a convenience, it’s converted to a 0-1 scale, where zero is complete silence and one is full volume.

We’ll recognize someone as having blown into the mic when the low pass filtered level crosses a threshold. Choosing the threshold is somewhat of an art. Set it too low and it’s easily triggered; set it too high and the person has to breathe into the mic at gale force and at length. For my app’s needs, 0.95 works. We’ll replace the log line with a simple conditional:

- (void)levelTimerCallback:(NSTimer *)timer {
	[recorder updateMeters];

	const double ALPHA = 0.05;
	double peakPowerForChannel = pow(10, (0.05 * [recorder peakPowerForChannel:0]));
	lowPassResults = ALPHA * peakPowerForChannel + (1.0 - ALPHA) * lowPassResults;

	if (lowPassResults > 0.95)
		NSLog(@"Mic blow detected");
}
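
In a real app you’ll want to do something more useful than log. One sketch (the notification name is made up, not part of this tutorial’s code) is to broadcast an NSNotification and reset the filter so one sustained breath doesn’t fire over and over:

	if (lowPassResults > 0.95) {
		// "MicBlowDetected" is a hypothetical name; observers elsewhere react to it.
		[[NSNotificationCenter defaultCenter] postNotificationName:@"MicBlowDetected"
		                                                    object:self];
		lowPassResults = 0.0; // reset so a sustained blow triggers only once
	}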

Voila! You can detect when someone blows into the mic.

