EOFError: Compressed file ended before the end-of-stream marker was reached

This post explains the cause of the "Compressed file ended before the end-of-stream marker was reached" error encountered when doing text classification with Keras, and how to fix it. The error is caused by an incomplete file download; the fixes are to clear the datasets folder or to re-download the file manually.


```
raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached
```

This error appeared while using the Keras dataset utilities for text classification.
The cause is an incomplete file: an earlier download was interrupted partway through (a dropped network connection, for example), leaving a truncated archive in the cache.
There are two ways to fix it (on Windows):

1. Clear the folder

Find the downloaded files under the datasets folder, delete them all, and rerun the program; the files will be downloaded again. The folder is usually at

C:\Users\admin\.keras\datasets

If it is not there, search the whole machine for ".keras" to locate it.
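Option 1 can also be scripted. The sketch below assumes the default cache location (`KERAS_HOME`, if set, moves it elsewhere); the `clear_dir` helper is my own name, not a Keras API:

```python
import os
import shutil

# Default Keras cache; on a standard Windows install this resolves to
# C:\Users\<user>\.keras\datasets (assumption: KERAS_HOME is not set).
KERAS_DATASETS = os.path.join(os.path.expanduser("~"), ".keras", "datasets")

def clear_dir(datasets_dir):
    """Delete every file and subfolder under datasets_dir; return the count."""
    if not os.path.isdir(datasets_dir):
        return 0  # nothing cached yet
    removed = 0
    for name in os.listdir(datasets_dir):
        path = os.path.join(datasets_dir, name)
        if os.path.isfile(path):
            os.remove(path)      # partially downloaded archive
        else:
            shutil.rmtree(path)  # extracted dataset folder
        removed += 1
    return removed
```

Calling `clear_dir(KERAS_DATASETS)` before rerunning forces Keras to download a fresh copy of each dataset.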

2. Re-download manually

The second option is to download the required file yourself and save it into the matching folder under datasets; rerun the program and it will proceed without the error.
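Before trusting a manually downloaded archive, you can check that its gzip stream is complete, i.e. that it will not trigger this exact EOFError later. A minimal sketch (the `gzip_is_complete` helper is my own name, not a library API):

```python
import gzip

def gzip_is_complete(path, chunk=1 << 20):
    """Read the gzip stream to its end; True only if the end-of-stream
    marker (and trailing CRC) is present and valid."""
    try:
        with gzip.open(path, "rb") as f:
            while f.read(chunk):  # stream through without keeping data
                pass
        return True
    except (EOFError, OSError):
        # EOFError: truncated download; OSError: not gzip / CRC mismatch.
        return False
```

A `False` result means the download should be deleted and retried.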

From the comments: the same error surfacing through MONAI/nibabel when loading `.nii.gz` files.

```
Traceback (most recent call last):
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\transform.py", line 141, in apply_transform
    return _apply_transform(transform, data, unpack_items, lazy, overrides, log_stats)
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\transform.py", line 98, in _apply_transform
    return transform(data, lazy=lazy) if isinstance(transform, LazyTrait) else transform(data)
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\io\array.py", line 291, in __call__
    img_array, meta_data = reader.get_data(img)
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\data\image_reader.py", line 952, in get_data
    data = self._get_array_data(i)
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\data\image_reader.py", line 1026, in _get_array_data
    return np.asanyarray(img.dataobj, order="C")
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\nibabel\arrayproxy.py", line 454, in __array__
    arr = self._get_scaled(dtype=dtype, slicer=())
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\nibabel\arrayproxy.py", line 421, in _get_scaled
    scaled = apply_read_scaling(self._get_unscaled(slicer=slicer), scl_slope, scl_inter)
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\nibabel\arrayproxy.py", line 391, in _get_unscaled
    return array_from_file(
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\nibabel\volumeutils.py", line 467, in array_from_file
    n_read = infile.readinto(data_bytes)
  File "D:\Anaconda\envs\cpupytorch\lib\gzip.py", line 300, in read
    return self._buffer.read(size)
  File "D:\Anaconda\envs\cpupytorch\lib\_compression.py", line 68, in readinto
    data = self.read(len(byte_view))
  File "D:\Anaconda\envs\cpupytorch\lib\gzip.py", line 506, in read
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\PythonProject\train_3d_cnn.py", line 61, in <module>
    for images, labels in train_loader:
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\torch\utils\data\dataloader.py", line 708, in __next__
    data = self._next_data()
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\torch\utils\data\dataloader.py", line 764, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\PythonProject\train_3d_cnn.py", line 38, in __getitem__
    img = self.transform(self.data_files[idx])
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\compose.py", line 335, in __call__
    result = execute_compose(
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\compose.py", line 111, in execute_compose
    data = apply_transform(
  File "D:\Anaconda\envs\cpupytorch\lib\site-packages\monai\transforms\transform.py", line 171, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.io.array.LoadImage object at 0x00000204AE39BDC0>
```
```python
import torch
import torch.nn as nn
import torch.optim as optim
from monai.networks.nets import DenseNet121
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ToTensor
from torch.utils.data import DataLoader, Dataset
import os

# Run on CPU
device = torch.device("cpu")

train_root = r"E:\dataset\train"
test_root = r"E:\dataset\test"

# Walk the per-patient folders
class NiftiDataset(Dataset):
    def __init__(self, data_dir):
        self.data_files = []
        for patient_folder in os.listdir(data_dir):
            patient_path = os.path.join(data_dir, patient_folder)
            if os.path.isdir(patient_path):
                for file in os.listdir(patient_path):
                    if file.endswith(".nii.gz"):
                        self.data_files.append(os.path.join(patient_path, file))
        # LoadImage configured to read NIfTI correctly
        self.transform = Compose([
            LoadImage(image_only=True, reader="NibabelReader"),  # explicit NIfTI reader
            EnsureChannelFirst(),  # make the 3D data channel-first
            ToTensor()
        ])

    def __len__(self):
        return len(self.data_files)

    def __getitem__(self, idx):
        img = self.transform(self.data_files[idx])
        label = 1 if "positive" in self.data_files[idx] else 0
        return img, torch.tensor(label, dtype=torch.long)

# Guard the entry point (needed on Windows)
if __name__ == "__main__":
    # Load the data
    train_dataset = NiftiDataset(train_root)
    test_dataset = NiftiDataset(test_root)

    # num_workers=0 on Windows to avoid multiprocessing issues
    train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True, num_workers=0, pin_memory=True)
    test_loader = DataLoader(test_dataset, batch_size=2, num_workers=0, pin_memory=True)

    # Define the 3D CNN model (on CPU)
    model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Train the 3D CNN
    for epoch in range(10):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)  # keep data on CPU too
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
        print(f"Epoch {epoch + 1} Loss: {loss.item():.4f}")

    # Save the model
    torch.save(model.state_dict(), "3d_cnn_model_cpu.pth")
```

This code raises: EOFError: Compressed file ended before the end-of-stream marker was reached ("The above exception was the direct cause of the following exception").
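The reader's traceback is the same truncated-gzip problem, surfacing through nibabel when MONAI's LoadImage reads a `.nii.gz` file: one of the files under `E:\dataset` is incomplete, and deleting or re-copying it fixes the run. A quick way to find which file is broken is to try decompressing each one up front; a sketch (the `find_corrupt_nii_gz` helper is hypothetical, not a MONAI API):

```python
import gzip
import os

def find_corrupt_nii_gz(root):
    """Walk root and return every .nii.gz file whose gzip stream is
    truncated or otherwise unreadable."""
    bad = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".nii.gz"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with gzip.open(path, "rb") as f:
                    while f.read(1 << 20):  # stream through to the end marker
                        pass
            except (EOFError, OSError):
                bad.append(path)  # truncated or not a valid gzip file
    return bad
```

Running `find_corrupt_nii_gz(r"E:\dataset\train")` before training reports the offending files instead of failing mid-epoch inside the DataLoader.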