Automatic Sleep Stage Scoring with a Sequence-to-Sequence Deep Learning Method

This post describes a deep learning approach to automatic sleep stage scoring from single-channel EEG, focusing on the convolutional part of the model: a CNN architecture with two branches that use different first-layer filter sizes, together with dropout layers.

A deep learning method for automatic sleep stage scoring using single-channel EEG. The function below builds the feature-extraction part of the network: two parallel 1-D CNN branches, one starting with a small filter (kernel size 50) and one with a large filter (kernel size 400), each followed by pooling, dropout, and further convolutions; their flattened outputs are concatenated into a single feature vector.


import tensorflow as tf


def build_firstPart_model(input_var, keep_prob_=0.5):
        # List to store the output of each CNN branch
        output_conns = []

        ######### CNNs with small filter size at the first layer #########

        # Convolution
        network = tf.layers.conv1d(inputs=input_var, filters=64, kernel_size=50, strides=6,
                                 padding='same', activation=tf.nn.relu)

        network = tf.layers.max_pooling1d(inputs=network, pool_size=8, strides=8, padding='same')

        # Dropout
        network = tf.nn.dropout(network, keep_prob_)


        # Convolution
        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=8, strides=1,
                                 padding='same', activation=tf.nn.relu)

        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=8, strides=1,
                                 padding='same', activation=tf.nn.relu)
        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=8, strides=1,
                                 padding='same', activation=tf.nn.relu)


        # Max pooling
        network = tf.layers.max_pooling1d(inputs=network, pool_size=4, strides=4, padding='same')


        # Flatten
        network = flatten(name="flat1", input_var=network)


        output_conns.append(network)

        ######### CNNs with large filter size at the first layer #########



        # Convolution
        network = tf.layers.conv1d(inputs=input_var, filters=64, kernel_size=400, strides=50,
                                   padding='same', activation=tf.nn.relu)

        network = tf.layers.max_pooling1d(inputs=network, pool_size=4, strides=4, padding='same')

        # Dropout
        network = tf.nn.dropout(network, keep_prob_)

        # Convolution
        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=6, strides=1,
                                   padding='same', activation=tf.nn.relu)

        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=6, strides=1,
                                   padding='same', activation=tf.nn.relu)
        network = tf.layers.conv1d(inputs=network, filters=128, kernel_size=6, strides=1,
                                   padding='same', activation=tf.nn.relu)

        # Max pooling
        network = tf.layers.max_pooling1d(inputs=network, pool_size=2, strides=2, padding='same')

        # Flatten
        network = flatten(name="flat2", input_var=network)


        output_conns.append(network)

        # Concat
        network = tf.concat(output_conns,1, name="concat1")

        # Dropout
        network = tf.nn.dropout(network, keep_prob_)

        return network
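
For reference, here is a minimal usage sketch (TensorFlow 1.x style, matching the code above) showing how this feature extractor might be wired into a graph. It assumes 30-second epochs of single-channel EEG sampled at 100 Hz, i.e. 3000 samples per epoch; the placeholder names, sampling rate, and the dummy shape check are illustrative assumptions, not part of the original code.

import numpy as np
import tensorflow as tf

# Assumed input format: 30-second epochs of single-channel EEG at 100 Hz,
# i.e. 3000 samples per epoch (an assumption for this sketch).
input_var = tf.placeholder(tf.float32, shape=[None, 3000, 1], name="input_epochs")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")

# Build the two-branch CNN feature extractor defined above.
features = build_firstPart_model(input_var, keep_prob_=keep_prob)

# 'features' is the concatenated output of both branches; a classifier
# (e.g. a softmax layer over the sleep stages) can be stacked on top of it.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    dummy_batch = np.zeros((2, 3000, 1), dtype=np.float32)  # two dummy epochs
    feats = sess.run(features, feed_dict={input_var: dummy_batch, keep_prob: 0.5})
    print(feats.shape)  # e.g. (2, 3072) with the assumed 3000-sample input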
