Notes on a State Loss Assertion

This article takes a close look at the cause of the onSaveInstanceState exception encountered when developing with the Fragment APIs in the Android v4 support library and at strategies for resolving it, with particular attention to the changes introduced in Android Honeycomb and to how the Fragment mechanism is affected by them.


Many developers using the Fragment APIs in the v4 support library have probably run into this exception. At first glance the call stack gives you nothing to work with. Below is a detailed analysis of why this assertion fires and how to fix it.

java.lang.IllegalStateException: Can not perform this action after onSaveInstanceState 
at android.support.v4.app.FragmentManagerImpl.checkStateLoss(FragmentManager.java:1341) 
at android.support.v4.app.FragmentManagerImpl.enqueueAction(FragmentManager.java:1352) 
at android.support.v4.app.BackStackRecord.commitInternal(BackStackRecord.java:595) 
at android.support.v4.app.BackStackRecord.commit(BackStackRecord.java:574)

Cause

Starting with Android Honeycomb (3.0), an Activity can only be killed after it has been stopped. Before Honeycomb, an Activity could be killed as soon as it had been paused. Before killing an Activity, the system calls onSaveInstanceState() to save its current state, effectively taking a snapshot of the Activity so that when it is later re-created it can be restored to its pre-kill state, preserving the user experience. In other words, before Honeycomb onSaveInstanceState() is called before onPause(); from Honeycomb on, it is called before onStop(). As the following table shows:
| Version                    | onSaveInstanceState() is called |
| Before Honeycomb (< 3.0)   | before onPause()                |
| Honeycomb (3.0) and later  | before onStop()                 |
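As a quick illustration of where that snapshot is taken, a typical onSaveInstanceState() override looks like the sketch below (the "counter" key and the field it saves are made up for illustration):

```java
import android.os.Bundle;
import android.support.v4.app.FragmentActivity;

public class MainActivity extends FragmentActivity {
    private int mCounter; // some piece of state worth preserving (illustrative)

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            // Restore state after the Activity was killed and re-created
            mCounter = savedInstanceState.getInt("counter");
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        // Lets the framework snapshot view hierarchy and fragment state
        super.onSaveInstanceState(outState);
        outState.putInt("counter", mCounter); // our own extra state
    }
}
```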

One of the biggest features Android introduced in Honeycomb is the Fragment mechanism. Managing Fragments requires the Activity's FragmentManager to change the state of the Activity's Fragments by committing transactions. If the program performs a commit after the system has already called onSaveInstanceState(), that change would be lost on restore, so an exception is thrown.
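A common way to hit this assertion is committing from an asynchronous callback that fires after the Activity has already saved its state. A sketch of the anti-pattern (ResultDialogFragment and the login flow here are purely illustrative):

```java
// Illustrative anti-pattern: this callback may arrive after
// onSaveInstanceState(), at which point any commit() throws.
public void onLoginFinished(boolean success) {
    ResultDialogFragment dialog = ResultDialogFragment.newInstance(success);
    // DialogFragment.show() performs a FragmentTransaction commit internally,
    // so this line crashes if the Activity state has already been saved.
    dialog.show(getSupportFragmentManager(), "result");
}
```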

Because the v4 support library has to remain compatible with pre-3.0 versions, always throwing an exception for a commit after onPause() would be overly strict for versions from 3.0 on, so the library treats the versions differently. The concrete behavior is shown below:
(Table in the original: how v4's commit behaves at each lifecycle stage, per platform version.)
On versions before 3.0, when a v4 commit happens between onPause() and onStop(), no exception is thrown; the library simply treats it as state loss.
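The check named in the stack trace is simple; inside the support library's FragmentManagerImpl it boils down to roughly the following (a simplified sketch of the v4 sources, not a verbatim copy):

```java
// Simplified sketch of FragmentManagerImpl.checkStateLoss()
private void checkStateLoss() {
    if (mStateSaved) { // set to true once onSaveInstanceState() has run
        throw new IllegalStateException(
                "Can not perform this action after onSaveInstanceState");
    }
}
```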

Example

Suppose an Activity's onCreate() method contains the following code:
getSupportFragmentManager().beginTransaction().add(R.id.fragment_container, new TestFragment()).commit();

Add a log statement to TestFragment's onCreate() and rotate the screen repeatedly. You will find that each rotation produces one more TestFragment log line than the previous rotation did.

This is because the Fragment back stack is saved by the system when the Activity is destroyed by the rotation; every rotation adds one more fragment to the stack, and each of those fragments is re-created along with the re-created Activity.
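The standard fix is to add the fragment only on first creation, since the FragmentManager restores it automatically afterwards. A minimal sketch (the layout and container id are assumed from the example above):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main); // assumed layout with R.id.fragment_container
    if (savedInstanceState == null) { // first launch only, not after rotation
        getSupportFragmentManager().beginTransaction()
                .add(R.id.fragment_container, new TestFragment())
                .commit();
    }
    // On re-creation the FragmentManager restores the saved TestFragment itself.
}
```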

Tip: setRetainInstance() can prevent the Fragment from being re-created.
For details see: "How to retain Fragment state when an Activity is re-created".
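A minimal sketch of the retained-instance approach; with this flag set, the Fragment instance survives the Activity's re-creation and its onCreate()/onDestroy() are not called again:

```java
public class TestFragment extends Fragment {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Keep this Fragment instance alive across configuration changes.
        // Only safe for fragments that do not hold references to the old
        // Activity's views or context, or they will leak the Activity.
        setRetainInstance(true);
    }
}
```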

Caveats

Pay attention to which stage of the Activity lifecycle you are in when committing a transaction.
Avoid committing transactions from asynchronous callbacks.
Use commitAllowingStateLoss().
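When losing the transaction on restore is acceptable (for example a transient progress indicator), commitAllowingStateLoss() skips the state-saved check. Applied to the earlier example:

```java
// Same transaction as in the example, but tolerating silent state loss:
getSupportFragmentManager().beginTransaction()
        .add(R.id.fragment_container, new TestFragment())
        .commitAllowingStateLoss();
// Trade-off: if the Activity is later restored from its saved snapshot,
// this transaction simply will not be part of the restored state.
```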