Title: TERA: SELF-SUPERVISED LEARNING OF TRANSFORMER ENCODER REPRESENTATION FOR SPEECH
-
What’s the main claim? Key idea?
This paper introduces TERA, a self-supervised speech pre-training method. The authors pre-train Transformer Encoders on a large amount of unlabeled speech using a multi-target auxiliary task, and the learned representations achieve strong performance on many downstream tasks, improving over surface features (a rough sketch of the pre-training idea follows).
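As a rough illustration of the key idea, the sketch below pre-trains a toy Transformer encoder to reconstruct clean log-Mel frames from inputs altered along time, channel, and magnitude, which is the flavor of TERA's multi-target auxiliary task. The model sizes, corruption ratios, loss, and single-step loop are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

# Toy Transformer encoder that reconstructs clean log-Mel frames from altered input.
# Hyperparameters are illustrative, not taken from the paper.
n_mels, d_model = 80, 256
encoder = nn.Sequential(
    nn.Linear(n_mels, d_model),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
        num_layers=3,
    ),
)
head = nn.Linear(d_model, n_mels)  # reconstruction head, discarded after pre-training
optim = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)

def alter(x, time_ratio=0.15, chan_ratio=0.1):
    """Corrupt frames along time and channel, plus a small magnitude perturbation."""
    B, T, F = x.shape
    time_mask = torch.rand(B, T, 1) < time_ratio   # zero out random time steps
    chan_mask = torch.rand(B, 1, F) < chan_ratio   # zero out random Mel channels
    x = x.masked_fill(time_mask, 0.0).masked_fill(chan_mask, 0.0)
    return x + 0.1 * torch.randn_like(x)           # magnitude alteration

# One pre-training step on a random "unlabeled speech" batch (stand-in for real data).
clean = torch.randn(8, 200, n_mels)                # (batch, frames, Mel bins)
pred = head(encoder(alter(clean)))
loss = nn.functional.l1_loss(pred, clean)          # reconstruct the clean frames
optim.zero_grad()
loss.backward()
optim.step()
```

After pre-training, the reconstruction head is dropped and the encoder's hidden states are used as speech representations for downstream tasks.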
-
Is there code available? Data?
code: https://github.com/andi611/Self-Supervised-Speech-Pretraining-and-Representation-Learning
data: LibriSpeech and TIMIT
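The linked repo is part of the S3PRL toolkit. A minimal extraction snippet might look like the one below; the module path `s3prl.hub`, the entry name `tera`, and the `hidden_states` output key are assumptions based on the toolkit's documented hub interface and may differ across versions.

```python
import torch
import s3prl.hub as hub  # the linked repo ships as the `s3prl` package (pip install s3prl)

# Load a pre-trained TERA upstream and extract representations from raw waveforms.
model = getattr(hub, "tera")()
wavs = [torch.randn(16000 * 10) for _ in range(4)]  # four fake 10-second, 16 kHz utterances
with torch.no_grad():
    reps = model(wavs)["hidden_states"]             # list of layer-wise feature tensors
```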
-
Is the idea neat? Is it counter-intuitive?
I think it’s a neat idea. Self-supervised learning has emerged as an attractive approach to leverage knowledge from a large amount of unlabeled data.
