Selected TTS Papers: Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention


Disclaimer: this "Selected TTS Papers" series shares papers. The posts are not direct translations; they are my summary of each paper's content plus my personal views. If you repost, please credit the source.

Welcome to follow my WeChat official account: 低调奋进

Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention

This ICASSP 2018 paper is one of the early works from the rise of end-to-end speech synthesis. It replaces RNN layers with convolutional networks to speed up training. Paper link:

https://arxiv.org/pdf/1710.08969.pdf

(I read this paper mainly to survey attention mechanisms; it proposes Guided Attention. The paper also uses a lot of symbolic notation, which looks more intimidating than it is.)

1 Background

End-to-end speech synthesis took off in 2017, but most end-to-end systems rely on RNN layers, which limits training speed. This paper builds an end-to-end system out of convolutional networks instead. Experiments show that a good model can be trained in only about 15 hours.

2 System Architecture

The architecture is shown in Figure 1. It is similar to Tacotron's, except that both the encoder and decoder are implemented with convolutions: TextEnc uses non-causal convolutions, while AudioEnc and AudioDec use causal convolutions. The per-layer details are given in Figure 2 (written entirely in symbolic notation; interested readers can work through it, it is not hard).

[Figure 1: overall system architecture]

[Figure 2: layer-by-layer network details]
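The causal/non-causal distinction above is the key architectural point: the text encoder may look at future characters, but the audio-side networks must not see future frames, because at synthesis time the mel spectrogram is generated autoregressively. A minimal NumPy sketch of the padding trick (not the paper's actual implementation, which uses dilated 1-D convolutions with highway connections; `conv1d` here is a hypothetical helper for illustration):

```python
import numpy as np

def conv1d(x, w, dilation=1, causal=False):
    """1-D convolution over sequence x with kernel w.

    causal=False: symmetric ("same") padding, as in TextEnc,
    so each output position sees both past and future context.
    causal=True: left-only padding, as in AudioEnc/AudioDec,
    so output t depends only on inputs at positions <= t.
    """
    k = len(w)
    pad = (k - 1) * dilation
    if causal:
        xp = np.pad(x, (pad, 0))                    # pad the past only
    else:
        xp = np.pad(x, (pad // 2, pad - pad // 2))  # pad both sides
    y = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            # with causal padding, xp[t + i*dilation] maps back to
            # an original index <= t, so no future leakage
            y[t] += w[i] * xp[t + i * dilation]
    return y
```

For example, the causal kernel `[1., 0.]` produces a one-step delay (each output is the previous input), which would be impossible to guarantee with symmetric padding.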

The part I find most interesting is the proposed guided attention. Because audio is temporally ordered, the attention alignment should lie near the diagonal n ≈ at, where a ≈ N/T (the diagram is borrowed from Hung-yi Lee's slides). Guided attention builds this prior into the loss: whenever the alignment deviates from the diagonal, it is penalized. The penalty function is shown below.

[Equations and figures: guided attention penalty matrix W and attention loss]
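Concretely, the paper's penalty matrix is W_nt = 1 − exp(−(n/N − t/T)² / (2g²)) with g = 0.2, and the guided attention loss adds the mean of A_nt · W_nt to the training objective, so attention mass far from the diagonal is penalized. A minimal NumPy sketch, under those definitions:

```python
import numpy as np

def guided_attention_weight(N, T, g=0.2):
    """Penalty matrix W[n, t] = 1 - exp(-(n/N - t/T)^2 / (2 g^2)).

    Near zero along the (roughly diagonal) path n/N ~ t/T and close
    to 1 far from it; g controls the width of the allowed band
    (the paper uses g = 0.2).
    """
    n = np.arange(N).reshape(-1, 1) / N
    t = np.arange(T).reshape(1, -1) / T
    return 1.0 - np.exp(-((n - t) ** 2) / (2 * g ** 2))

def guided_attention_loss(A, g=0.2):
    """L_att = mean over (n, t) of A[n, t] * W[n, t], added to the
    spectrogram reconstruction losses during training."""
    N, T = A.shape
    return float(np.mean(A * guided_attention_weight(N, T, g)))
```

A perfectly diagonal alignment (e.g. an identity attention matrix) incurs zero penalty, while an anti-diagonal one is heavily penalized, which is exactly the pressure that makes the alignment converge quickly.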

3 Experiments

Table 1 lists the hyperparameter settings. Table 2 compares against Tacotron (the originally proposed system); the results show that the proposed DCTTS reaches good quality after only 15 hours of training. Figures 3 and 4 illustrate the effect of guided attention.

[Table 1: hyperparameter settings; Table 2: comparison with Tacotron; Figures 3 and 4: attention alignments with guided attention]

4 Summary

This paper replaces RNNs with convolutional networks to speed up training, and proposes guided attention, which makes the system's attention converge much faster.

