Emotion Classification from Audio

This post presents a method for emotion classification directly on raw audio using a one-dimensional convolutional neural network (CNN). A fixed number of sample points is taken from each audio clip as input, passed through three 1D CNN layers, and fed to a classifier that outputs the emotion category. The post walks through the full code: data loading, preprocessing, model construction, and training.


Recently my research direction has been settled, roughly multimodal emotion classification, and I happened to take on an urgent engineering project. I got some code from an expert, so I'm writing this long-overdue post as a tribute!

The code below classifies emotion directly from raw audio, without extracting any audio features. The rough idea: select 200,000 sample points from the raw audio to represent each clip, run them through three 1D CNN layers, and pass the result to a classifier to get the prediction. (The final results were not great, but many factors play a role, including dataset quality, and the test set we were given only had 100 clips.)
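The code below loads preprocessed .npy files and never shows how the 200,000 points are taken from each clip. As a rough illustration of that step, here is a minimal sketch; librosa and the truncate-or-zero-pad policy are my assumptions, not part of the original pipeline:

import numpy as np
import librosa

N_POINTS = 200000  # fixed number of raw samples per clip, as described above

def clip_to_fixed_length(wav_path, n_points=N_POINTS):
    audio, sr = librosa.load(wav_path, sr=None)  # raw waveform at native sampling rate
    if len(audio) >= n_points:
        audio = audio[:n_points]                 # truncate long clips
    else:
        audio = np.pad(audio, (0, n_points - len(audio)))  # zero-pad short clips
    return audio.reshape(n_points, 1)            # (200000, 1), the Conv1D input layout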

Without further ado, here's the code:

1. Imports and GPU configuration

import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.layers import Dense, Input, BatchNormalization, Conv1D, Flatten, Dropout
from keras.optimizers import Adam
from keras.models import Model, load_model
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
import keras.backend as K
### Make the device numbering here match what nvidia-smi reports
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
### Only expose the GPU with index 2 to this process
os.environ["CUDA_VISIBLE_DEVICES"]="2"
### Allocate GPU memory on demand
config = tf.ConfigProto()  
config.gpu_options.allow_growth=True   
session = tf.Session(config=config)
K.tensorflow_backend.set_session(session)
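The snippet above targets TensorFlow 1.x (tf.ConfigProto and tf.Session no longer exist in 2.x). If you are on TensorFlow 2.x with its built-in Keras, the equivalent on-demand memory setting is roughly:

import tensorflow as tf
# grow GPU memory allocation as needed instead of reserving it all up front
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)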

2. Data loading function

def load_data(train_data='A', test_data='D'):
    """
    mode: 'ABC'
    """
    data_dir = './backup/'
    print(data_dir+train_data[0]+'_X.npy')
#     X = np.load(os.path.join(data_dir, train_data[0], train_data[0]+'_X.npy'))
#     Y = np.load(os.path.join(data_dir, train_data[0], train_data[0]+'_Y.npy'))
    X = np.load(data_dir+train_data[0]+'_X.npy')
    Y = np.load(data_dir+train_data[0]+'_Y.npy')
   
    for name in train_data[1:]:
        cur_X = np.load(data_dir+name+'_X.npy')
        cur_Y = np.load(data_dir+name+'_Y.npy')
        X = np.concatenate([X,cur_X], axis=0)
        Y = np.concatenate([Y,cur_Y], axis=0)
    # test
    X2 = np.load(data_dir+test_data[0]+'_X.npy')
    Y2 = np.load(data_dir+test_data[0]+'_Y.npy')
    for name in test_data[1:]:
        cur_X = np.load(data_dir+ name+'_X.npy')
        cur_Y = np.load(data_dir+ name+'_Y.npy')
        X2 = np.concatenate([X2,cur_X], axis=0)
        Y2 = np.concatenate([Y2,cur_Y], axis=0)
    # labels: each Y entry is an array; keep its first element as the integer class
    y = []
    y2 = []
    for i in range(len(Y)):
        y.append(int(Y[i][0]))
    for i in range(len(Y2)):
        y2.append(int(Y2[i][0]))
    # swap the last two axes to the channels-last layout (N, 200000, 1) for Conv1D
    train_X, train_Y = X.transpose([0,2,1]), np.array(y)
    test_X, test_Y = X2.transpose([0,2,1]), np.array(y2)
    return train_X, test_X, train_Y, test_Y

3. Load the data

train_data,test_data = 'ABC', 'D'
train_X, test_X, train_Y, test_Y = load_data(train_data=train_data, test_data=test_data)
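A quick sanity check on what load_data returns (the (N, 200000, 1) shape after the transpose is what the Conv1D model below expects):

print(train_X.shape, test_X.shape)   # e.g. (N_train, 200000, 1), (N_test, 200000, 1)
print(train_Y.shape, test_Y.shape)   # one integer label per clip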

4. Normalize the data (a Tsinghua PhD student specializing in audio classification told me that in the audio domain normalization is a crucial step and often yields a large improvement)

# z-score with statistics from the training set only (no test-set leakage)
train_mean = np.mean(train_X)
train_std = np.std(train_X)
train_X = (train_X-train_mean) / train_std
test_X = (test_X-train_mean) / train_std

5. Configure output names for the model and figures

model_name = '3Conv1d-'+train_data+'-'+test_data+'.h5'
png_name = '3Conv1d-'+train_data+'-'+test_data+'.png'
data_shape = train_X[0].shape

6. Model design

input_data = Input(shape=data_shape)  # (200000, 1) raw waveform
# each conv downsamples by its stride: 200000 -> 2000 -> 40 -> 4 time steps
conv1d_1 = Conv1D(128, kernel_size=100, strides=100, padding='same')(input_data)
conv1d_2 = Conv1D(128, kernel_size=50, strides=50, padding='same')(conv1d_1)
conv1d_3 = Conv1D(128, kernel_size=10, strides=10, padding='same')(conv1d_2)
flat = Flatten()(conv1d_3)            # 4 steps x 128 filters = 512 features
### classifier head
dense1 = Dense(128, activation='relu')(flat)
BN = BatchNormalization()(dense1)
drop1 = Dropout(0.5)(BN)
dense2 = Dense(64, activation='relu')(drop1)
BN2 = BatchNormalization()(dense2)
drop3 = Dropout(0.5)(BN2)
output = Dense(1, activation='sigmoid')(drop3)  # single sigmoid unit: binary output
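Note that the head ends in a single sigmoid unit trained with binary_crossentropy below, so this setup distinguishes two emotion classes. If your label set has more than two emotions, the usual swap is a softmax head; a minimal sketch (num_classes is a placeholder, not a value from the original experiment):

num_classes = 4  # hypothetical number of emotion categories
output = Dense(num_classes, activation='softmax')(drop3)
# ...and compile with a categorical loss for integer labels:
# model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])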

7. Instantiate the model

model = Model(inputs=input_data, outputs=output)
model.summary()

8. Train and evaluate the model

Reduce = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3,
                           mode='auto', cooldown=0, min_lr=0.000001, verbose = 1)
opt = Adam(lr=0.0001)
model.compile(optimizer=opt,
              loss='binary_crossentropy',
              metrics=['accuracy'])
callbacks = [
    EarlyStopping(monitor='val_loss', patience=10, verbose=0),
    # 'val_acc' is the metric name in this (older) Keras; newer versions use 'val_accuracy'
    ModelCheckpoint(model_name, monitor='val_acc', mode='max', save_best_only=True),
    Reduce
]
print('\nTrain...')
history = model.fit(x = train_X, y = train_Y,
                    batch_size=64,
                    epochs=1000,
                    shuffle=True,
                    validation_data=(test_X, test_Y),
                    callbacks=callbacks)

print("\nTesting...")
model = load_model(model_name)
score, accuracy = model.evaluate(test_X, test_Y,
                                 batch_size=64,
                                 verbose=1)
print("Test loss:  ", score)
print("Test accuracy:  ", accuracy)

9. Plotting

# Plot training & validation accuracy
os.makedirs('./png_save', exist_ok=True)  # make sure the output directory exists
plt.cla()
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.savefig('./png_save/'+'acc-'+png_name)

# Plot training & validation loss
plt.cla()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.savefig('./png_save/'+'loss-'+png_name)
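One portability caveat: the 'acc'/'val_acc' history keys above match the older Keras used in this post; newer Keras (TF 2.x) records the same metric under 'accuracy'/'val_accuracy'. A version-agnostic lookup:

acc_key = 'acc' if 'acc' in history.history else 'accuracy'
plt.plot(history.history[acc_key])
plt.plot(history.history['val_' + acc_key])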

Code download link: https://download.youkuaiyun.com/download/mr_wuliboy/11289833
