【Python/PyTorch - Network Models】-- TV Loss



00 Preface

In medical image reconstruction, a TV (total variation) regularization term is often added to the cost function. Acting as a denoising term, it can be of great help to the reconstruction. For tasks that demand fine texture detail, however, adding a TV term may smooth away some of that detail.
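As a generic sketch of what such a cost function looks like (the forward operator $A$, measurement vector $b$, and weight $\lambda$ here are illustrative, not from the original post):

$$\min_{x} \; \lVert Ax - b \rVert_2^2 + \lambda \, \mathrm{TV}(x)$$

The first term enforces fidelity to the measured data, while $\lambda$ controls how strongly the TV term smooths the reconstruction.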

For a continuous function $f(u, v)$ defined on a region $\Omega$, the TV-$\beta$ penalty is:

$$R_{V^\beta}(f) = \int_{\Omega} \left( \left( \frac{\partial f}{\partial u}(u, v) \right)^2 + \left( \frac{\partial f}{\partial v}(u, v) \right)^2 \right)^{\beta/2} \mathrm{d}u \, \mathrm{d}v$$

An image is discrete, so for each pixel we take the squared difference to its horizontal neighbor plus the squared difference to its vertical neighbor, and raise the sum to the power $\beta/2$:

$$R_{V^\beta}(x) = \sum_{i,j} \left( \left( x_{i,j+1} - x_{i,j} \right)^2 + \left( x_{i+1,j} - x_{i,j} \right)^2 \right)^{\beta/2}$$
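The class in the next section implements the $\beta = 2$ special case. For a general exponent, a minimal sketch could look like the following (the helper name `tv_beta` and its interface are assumptions for illustration, not from the original post):

```python
import torch

def tv_beta(x: torch.Tensor, beta: float = 2.0) -> torch.Tensor:
    """TV-beta penalty for a batch of images shaped (N, C, H, W)."""
    # Differences to the bottom / right neighbor, cropped so both
    # terms live on the same (H - 1, W - 1) grid of pixels.
    dh = x[:, :, 1:, :-1] - x[:, :, :-1, :-1]  # vertical differences
    dw = x[:, :, :-1, 1:] - x[:, :, :-1, :-1]  # horizontal differences
    return (dh.pow(2) + dw.pow(2)).pow(beta / 2).sum()
```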

01 TV Loss Code (PyTorch Version)

```python
import torch


class TVLoss(torch.nn.Module):
    """TV loss (the beta = 2 case) for inputs shaped (N, C, H, W)."""

    def __init__(self, weight=1):
        super().__init__()
        self.weight = weight

    def forward(self, x):
        batch_size = x.size(0)
        h_x = x.size(2)
        w_x = x.size(3)
        # Number of vertical / horizontal difference terms, used for averaging.
        count_h = self._tensor_size(x[:, :, 1:, :])
        count_w = self._tensor_size(x[:, :, :, 1:])
        # Sum of squared differences between vertically / horizontally
        # adjacent pixels.
        h_tv = torch.pow(x[:, :, 1:, :] - x[:, :, :h_x - 1, :], 2).sum()
        w_tv = torch.pow(x[:, :, :, 1:] - x[:, :, :, :w_x - 1], 2).sum()
        return self.weight * 2 * (h_tv / count_h + w_tv / count_w) / batch_size

    def _tensor_size(self, t):
        return t.size(1) * t.size(2) * t.size(3)


if __name__ == "__main__":
    # 1x2x3x3 test image: values step by 3 vertically and by 1 horizontally.
    x = torch.tensor(
        [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
         [[1, 2, 3], [4, 5, 6], [7, 8, 9]]],
        dtype=torch.float32,
    ).view(1, 2, 3, 3).requires_grad_()
    tv = TVLoss()
    result = tv(x)
    print(result)  # tensor(20., grad_fn=<DivBackward0>)
```
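In training, this term is typically added on top of a data-fidelity loss. Below is a hedged sketch of a single optimization step (the one-layer model, the random data, and the 1e-5 weight are placeholder assumptions, not from the original post):

```python
import torch

# Placeholder denoising setup: a single conv layer stands in for a real model.
model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = torch.nn.MSELoss()
tv = TVLoss(weight=1e-5)  # small weight so TV acts only as a regularizer

clean = torch.rand(4, 1, 32, 32)               # dummy targets
noisy = clean + 0.1 * torch.randn_like(clean)  # dummy noisy inputs

optimizer.zero_grad()
recon = model(noisy)
loss = mse(recon, clean) + tv(recon)  # data fidelity + TV penalty
loss.backward()
optimizer.step()
```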

02 Paper Download

Understanding Deep Image Representations by Inverting Them (Aravindh Mahendran and Andrea Vedaldi, CVPR 2015)
