Keras in practice: implementing multi-class segmentation losses

All examples in this post use 3D data with one-hot labels, i.e. y_true of shape (batch_size, x, y, z, class_num). It walks through several segmentation loss functions: Dice loss, generalized Dice loss, Tversky coefficient loss, and IoU loss.

Reference: https://blog.youkuaiyun.com/m0_37477175/article/details/83004746
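To make the expected label layout concrete, here is a small sketch (batch size, volume shape, and class count are made up for illustration) that converts an integer label volume into the one-hot form used throughout this post:

import numpy as np
from keras.utils import to_categorical

# Hypothetical example: a batch of two 8x8x8 volumes labelled with 3 classes.
labels = np.random.randint(0, 3, size=(2, 8, 8, 8))

# to_categorical appends the class axis, giving (batch_size, x, y, z, class_num).
y_true = to_categorical(labels, num_classes=3)
print(y_true.shape)  # (2, 8, 8, 8, 3)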

1. Dice loss

from keras import backend as K

def dice_coef_fun(smooth=1):
    def dice_coef(y_true, y_pred):
        # Per-sample, per-class Dice: sum over the spatial axes (x, y, z).
        intersection = K.sum(y_true * y_pred, axis=(1, 2, 3))
        union = K.sum(y_true, axis=(1, 2, 3)) + K.sum(y_pred, axis=(1, 2, 3))
        sample_dices = (2. * intersection + smooth) / (union + smooth)  # shape [batch_size, class_num]
        # Average over the batch to get one Dice value per class.
        dices = K.mean(sample_dices, axis=0)
        return K.mean(dices)  # mean Dice over all classes
    return dice_coef

def dice_coef_loss_fun(smooth=0):
    def dice_coef_loss(y_true, y_pred):
        return 1 - dice_coef_fun(smooth=smooth)(y_true=y_true, y_pred=y_pred)
    return dice_coef_loss
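As a usage sketch, the loss plugs into model.compile like any other Keras loss. The toy network below is purely illustrative (not from the original post), and the smooth values are arbitrary:

from keras.layers import Input, Conv3D
from keras.models import Model

# Toy 3D network just to show how the loss is wired in; shapes are illustrative.
inputs = Input(shape=(8, 8, 8, 1))
x = Conv3D(8, 3, padding='same', activation='relu')(inputs)
outputs = Conv3D(3, 1, activation='softmax')(x)  # class_num = 3, one-hot output
model = Model(inputs, outputs)

model.compile(optimizer='adam',
              loss=dice_coef_loss_fun(smooth=1e-5),
              metrics=[dice_coef_fun(smooth=1)])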


2. Generalized Dice loss

def generalized_dice_coef_fun(smooth=0):
    def generalized_dice(y_true, y_pred):
        # Compute weights: "the contribution of each label is corrected by the inverse of its volume"
        w = K.sum(y_true, axis=(0, 1, 2, 3))
        w = 1 / (w ** 2 + 0.00001)
        # w holds one weight per class: the larger a class's volume, the smaller its weight.

        # Compute the generalized Dice coefficient:
        numerator = y_true * y_pred
        numerator = w * K.sum(numerator, axis=(0, 1, 2, 3))
        numerator = K.sum(numerator)

        denominator = y_true + y_pred
        denominator = w * K.sum(denominator, axis=(0, 1, 2, 3))
        denominator = K.sum(denominator)

        # smooth guards against division by zero when a batch contains no foreground.
        gen_dice_coef = (2 * numerator + smooth) / (denominator + smooth)

        return gen_dice_coef
    return generalized_dice

def generalized_dice_loss_fun(smooth=0):
    def generalized_dice_loss(y_true, y_pred):
        return 1 - generalized_dice_coef_fun(smooth=smooth)(y_true=y_true, y_pred=y_pred)
    return generalized_dice_loss
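To see what the inverse-volume weighting does, here is a NumPy sketch with made-up voxel counts, mirroring the w = 1 / (w ** 2 + 0.00001) step above:

import numpy as np

# Hypothetical per-class voxel counts: background dominates the two foreground classes.
class_volumes = np.array([100000., 5000., 50.])

w = 1 / (class_volumes ** 2 + 0.00001)
print(w / w.sum())  # the rarest class receives by far the largest relative weight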


3. Tversky coefficient loss

# Ref: Salehi et al. 2017, "Tversky loss function for image segmentation using 3D FCDN"
# -> the score is computed for each class separately and then averaged
# alpha=beta=0.5 : Dice coefficient
# alpha=beta=1   : Tanimoto coefficient (also known as Jaccard)
# alpha+beta=1   : produces a set of F*-scores
# implemented by E. Moebel, 06/04/18
def tversky_coef_fun(alpha, beta):
    def tversky_coef(y_true, y_pred):
        p0 = y_pred      # probability that voxels belong to class i
        p1 = 1 - y_pred  # probability that voxels do not belong to class i
        g0 = y_true
        g1 = 1 - y_true

        # Per-sample, per-class Tversky index over the spatial axes.
        num = K.sum(p0 * g0, axis=(1, 2, 3))
        den = num + alpha * K.sum(p0 * g1, axis=(1, 2, 3)) + beta * K.sum(p1 * g0, axis=(1, 2, 3))
        T = num / (den + K.epsilon())  # shape [batch_size, class_num]; epsilon avoids division by zero

        # Average over the batch to get one value per class.
        dices = K.mean(T, axis=0)  # shape [class_num]

        return K.mean(dices)
    return tversky_coef

def tversky_coef_loss_fun(alpha, beta):
    def tversky_coef_loss(y_true, y_pred):
        return 1 - tversky_coef_fun(alpha=alpha, beta=beta)(y_true=y_true, y_pred=y_pred)
    return tversky_coef_loss
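A note on choosing alpha and beta: with alpha = beta = 0.5 the loss reduces to the Dice loss above, while beta > alpha penalizes false negatives more heavily, which Salehi et al. report helps with small foreground structures. The values below are illustrative, reusing the toy model from the Dice example:

# Equivalent to the Dice loss above.
dice_like_loss = tversky_coef_loss_fun(alpha=0.5, beta=0.5)

# Weights false negatives (missed foreground voxels) more heavily than false positives.
recall_oriented_loss = tversky_coef_loss_fun(alpha=0.3, beta=0.7)

model.compile(optimizer='adam', loss=recall_oriented_loss)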


4. IoU loss

def IoU_fun(eps=1e-6):
    def IoU(y_true, y_pred):
        # if np.max(y_true) == 0.0:
        #     return IoU(1-y_true, 1-y_pred)  # empty image: compute IoU of the background instead
        intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
        union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3]) - intersection
        # Per-class IoU, smoothed by eps, averaged over the batch and then over classes.
        ious = K.mean((intersection + eps) / (union + eps), axis=0)
        return K.mean(ious)
    return IoU

def IoU_loss_fun(eps=1e-6):
    def IoU_loss(y_true, y_pred):
        return 1 - IoU_fun(eps=eps)(y_true=y_true, y_pred=y_pred)
    return IoU_loss
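As a quick sanity check (random one-hot tensors, not a benchmark), the metric should return about 1.0 when prediction and ground truth are identical:

import numpy as np
from keras import backend as K

# Random one-hot labels of shape (batch_size, x, y, z, class_num) = (2, 8, 8, 8, 3).
y = np.eye(3)[np.random.randint(0, 3, size=(2, 8, 8, 8))].astype('float32')

print(K.eval(IoU_fun()(K.constant(y), K.constant(y))))  # ~1.0 for a perfect match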
