Welcome to OpenEars iPhone voice recognition API!


OpenEars is a shared-source iOS framework for iPhone voice recognition and TTS. It lets you implement round-trip English-language speech recognition and text-to-speech on the iPhone and iPad, using the open source CMU Pocketsphinx, CMU Flite, and CMUCLMTK libraries. Highly-accurate large-vocabulary recognition (that is, trying to recognize any word the user speaks out of many thousands of known words) is not yet a reality for local in-app processing on the iPhone given the hardware limitations of the platform; even Siri does its large-vocabulary recognition on the server side. However, Pocketsphinx (the open source voice recognition engine that OpenEars uses) is capable of local recognition on the iPhone of vocabularies with hundreds of words, depending on the environment and other factors, and performs very well with command-and-control language models. The best part is that it uses no network connectivity — all processing occurs locally on the device.


The current version of the OpenEars iPhone speech recognition API is 1.1.

OpenEars can:

  • Listen continuously for speech on a background thread, while suspending or resuming speech processing on demand, all while using less than 8% CPU on average on a first-generation iPhone (decoding speech, text-to-speech, updating the UI and other intermittent functions use more CPU),
  • Use any of 9 voices for speech, including male and female voices with a range of speed/quality levels, and switch between them on the fly,
  • Change the pitch, speed and variance of any text-to-speech voice,
  • Know whether headphones are plugged in and continue voice recognition during text-to-speech only when they are plugged in,
  • Support bluetooth audio devices (experimental),
  • Dispatch information to any part of your app about the results of speech recognition and speech, or changes in the state of the audio session (such as an incoming phone call or headphones being plugged in),
  • Deliver level metering for both speech input and speech output so you can design visual feedback for both states,
  • Support JSGF grammars,
  • Dynamically generate new ARPA language models in-app based on input from an NSArray of NSStrings (see the sketch after this list),
  • Switch between ARPA language models or JSGF grammars on the fly,
  • Get n-best lists with scoring,
  • Test existing recordings,
  • Be easily interacted with via standard and simple Objective-C methods,
  • Perform all audio functions for text-to-speech and speech recognition in memory, instead of writing audio files to disk and then reading them,
  • Drive speech recognition with a low-latency Audio Unit driver for highest responsiveness,
  • Be installed in a Cocoa-standard fashion using an easy-peasy already-compiled framework.
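As a sketch of how the dynamic language model features fit together, generating a small command-and-control vocabulary at runtime and starting recognition with it might look like the following. The class names are the ones this documentation discusses, but the exact method signatures, the generated file names, and the pocketsphinxController property used here are assumptions to verify against the headers in your download:

    #import <OpenEars/LanguageModelGenerator.h>
    #import <OpenEars/PocketsphinxController.h>

    // A sketch, not verbatim shipping code: build an ARPA language model from
    // an NSArray of NSStrings and start listening with it.
    - (void)startListeningForCommands {
        LanguageModelGenerator *generator = [[LanguageModelGenerator alloc] init];

        NSArray *words = [NSArray arrayWithObjects:@"FORWARD", @"BACKWARD", @"LEFT", @"RIGHT", @"STOP", nil];

        // Some versions report success via an NSError whose code is noErr rather
        // than returning nil, so check the convention in your headers.
        NSError *error = [generator generateLanguageModelFromArray:words withFilesNamed:@"MyCommands"];

        if (error == nil) {
            // Assumption: the generator writes MyCommands.languagemodel and
            // MyCommands.dic into the app's Caches directory.
            NSString *cachesPath = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];

            [self.pocketsphinxController startListeningWithLanguageModelAtPath:[cachesPath stringByAppendingPathComponent:@"MyCommands.languagemodel"]
                                                              dictionaryAtPath:[cachesPath stringByAppendingPathComponent:@"MyCommands.dic"]
                                                           languageModelIsJSGF:NO]; // pass YES when handing over a JSGF grammar instead
        } else {
            NSLog(@"Language model generation failed: %@", error);
        }
    }

Switching to a different ARPA model or JSGF grammar on the fly would follow the same pattern: generate or locate the new files, then hand their paths to the recognizer.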

In addition to its various new features and faster recognition/text-to-speech responsiveness, OpenEars now has improved recognition accuracy.

Before using OpenEars, please note that its low-latency Audio Unit driver is not compatible with the Simulator, so it falls back to an Audio Queue driver in the Simulator, provided as a convenience so you can debug your recognition logic. This means that recognition is better on the device, and I’d appreciate it if bug reports were limited to issues which affect the device.

To use OpenEars:

1. Download the distribution and unpack it.

2. Create your own app, and add the iOS frameworks AudioToolbox and AVFoundation to it.

3. Inside your downloaded distribution, within the folder called “OpenEars”, there is a folder called “frameworks”. Drag the “frameworks” folder into your app project in Xcode.
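If the “frameworks” folder was added correctly, the OpenEars headers should now import cleanly from your classes. As a quick check (the header names below follow the 1.x distribution, so confirm them against the folder you just dragged in):

    #import <OpenEars/PocketsphinxController.h>  // speech recognition
    #import <OpenEars/FliteController.h>         // text-to-speech
    #import <OpenEars/LanguageModelGenerator.h>  // dynamic ARPA language models
    #import <OpenEars/OpenEarsEventsObserver.h>  // recognition and audio session events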

OK, now that you’ve finished laying the groundwork, you have to…wait, that’s everything. You’re ready to start using OpenEars.
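For instance, the round trip of hearing a phrase and speaking it back might be wired up like this. MyViewController is a hypothetical class, and the delegate method and voice name reflect my reading of the 1.x sample code, so verify them against OpenEarsEventsObserver.h and the voices bundled with your download:

    #import <UIKit/UIKit.h>
    #import <OpenEars/OpenEarsEventsObserver.h>
    #import <OpenEars/FliteController.h>

    // Sketch: echo each recognition hypothesis back through text-to-speech.
    @interface MyViewController : UIViewController <OpenEarsEventsObserverDelegate>
    @property (nonatomic, strong) OpenEarsEventsObserver *openEarsEventsObserver; // assumes an ARC project
    @property (nonatomic, strong) FliteController *fliteController;
    @end

    @implementation MyViewController

    - (void)viewDidLoad {
        [super viewDidLoad];
        self.openEarsEventsObserver = [[OpenEarsEventsObserver alloc] init];
        [self.openEarsEventsObserver setDelegate:self]; // route recognition events to this controller
        self.fliteController = [[FliteController alloc] init];
    }

    // Delivered by OpenEarsEventsObserver when Pocketsphinx decodes an utterance.
    - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis recognitionScore:(NSString *)recognitionScore utteranceID:(NSString *)utteranceID {
        // @"cmu_us_slt" is assumed to be one of the nine bundled Flite voice names.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@", hypothesis] withVoice:@"cmu_us_slt"];
    }

    @end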

Before shipping your app, you will want to remove unused voices from it so that the app size won’t be too big, as explained here.

If the steps on this page didn’t work for you, you can get free support at the forums, read the FAQ, or open a private email support incident at the Politepix shop. Otherwise, carry on to the next part: using OpenEars in your app.

OpenEars uses the open source speech recognition engine Pocketsphinx from Carnegie Mellon University.