Ongoing notes on pitfalls encountered while learning fastai

  1. Installation

Conda is recommended. The steps are as follows:

conda create -n fastai_env python=3.10
conda activate fastai_env
conda install -c pytorch -c fastai fastai
conda install jupyterlab

Then make sure the virtual environment is activated:

conda activate fastai_env

Install the IPython kernel:

conda install ipykernel

Add the virtual environment to Jupyter Notebook's list of kernels:

python -m ipykernel install --user --name fastai_env --display-name "Python (fastai_env)"
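
Optionally, you can confirm the kernel was registered by listing the installed kernelspecs; fastai_env should appear in the output:

jupyter kernelspec list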

Start Jupyter Notebook:

jupyter notebook

In Jupyter Notebook:

Click the New button in the upper-right corner.

In the dropdown menu you should see the kernel you just added (e.g. Python (fastai_env)).

Select that kernel to run code inside the virtual environment.

In a terminal with the fastai_env environment activated, install fastbook:

pip install -U fastbook

At this point, the basic environment setup is complete.
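
As a quick sanity check (a minimal sketch; exact version numbers will differ), open a notebook on the Python (fastai_env) kernel and run:

import torch
import fastai

print(fastai.__version__)         # fastai should import without errors
print(torch.cuda.is_available())  # True if PyTorch can see a CUDA GPU; False means CPU-only training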

  2. Error displaying widget
    When the notebook reported "Error displaying widget", checking the ipywidgets installation and reinstalling it from conda-forge resolved it:
pip show ipywidgets
conda update ipywidgets jupyter
conda install -c conda-forge ipywidgets

After running the commands above, the widget rendered normally and images could be uploaded.
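
A minimal way to verify the fix (assuming a notebook running on the fastai_env kernel) is to check the ipywidgets version and render a FileUpload widget; when everything is set up correctly, an upload button appears instead of the error message:

import ipywidgets
print(ipywidgets.__version__)   # should match the version installed from conda-forge

uploader = ipywidgets.FileUpload()
uploader                        # renders an "Upload" button when widgets are working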

  3. AttributeError: 'FileUpload' object has no attribute 'data'
    The cause is the ipywidgets version: ipywidgets 8 removed FileUpload's .data attribute and changed .value to a tuple of uploaded files. Comment out the original line and replace it as follows:
#img = PILImage.create(uploader.data[0])
img = PILImage.create(uploader.value[0].content.tobytes())

After this change, cat and dog images can be uploaded and classified normally.
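
For context, this is roughly how the corrected line fits into the chapter-1 prediction flow. This is only a sketch: it assumes learn is the cat classifier trained earlier in the chapter, and that widgets (ipywidgets) is available via from fastbook import * or from fastai.vision.widgets import *.

uploader = widgets.FileUpload()   # render the widget and upload an image before running the next cell
uploader

# ipywidgets 8 stores uploads as a tuple of file objects, hence .value[0].content
img = PILImage.create(uploader.value[0].content.tobytes())
is_cat, _, probs = learn.predict(img)
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")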

  4. TypeError: 'NoneType' object is not iterable
    Running dls = bears.dataloaders(path) raised TypeError: 'NoneType' object is not iterable.
    Below is the complete working code a helpful user posted on GitHub; I tested it myself and it runs successfully.
#hide
! [ -e /content ] && pip install -Uqq fastbook
import fastbook
fastbook.setup_book()

#hide
from fastbook import *
from fastai.vision.widgets import *
# Add below import (based on Is It A Bird? notebook)
from fastdownload import download_url

# Replaced search_images_bing with DuckDuckGo search (search_images_ddg, provided by fastbook)

# Use function definition from "Is it a bird?" notebook
def search_images(term, max_images=30):
    print(f"Searching for '{term}'")
    return L(search_images_ddg(term, max_images=max_images))

results = search_images_ddg('grizzly bear')
# search_images_ddg already returns plain URL strings, so the original
# results.attrgot('contentUrl') (only needed for Bing results) is not required here
ims = results
len(ims)

#hide
ims = ['http://3.bp.blogspot.com/-S1scRCkI3vY/UHzV2kucsPI/AAAAAAAAA-k/YQ5UzHEm9Ss/s1600/Grizzly%2BBear%2BWildlife.jpg']

dest = 'images/grizzly.jpg'
download_url(ims[0], dest)

bear_types = 'grizzly','black','teddy'
path = Path('bears')

from time import sleep

for o in bear_types:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    # results = search_images(f'{o} bear')
    download_images(dest, urls=search_images(f'{o} bear'))
    sleep(5)  # Pause between bear_types searches to avoid over-loading server

fns = get_image_files(path)
fns

len(fns)

failed = verify_images(fns)
failed

failed.map(Path.unlink);

bears = DataBlock(
    blocks=(ImageBlock, CategoryBlock),               # inputs are images, targets are categories
    get_items=get_image_files,                        # gather image file paths under `path`
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # hold out 20% of images for validation
    get_y=parent_label,                               # label each image by its parent folder name
    item_tfms=Resize(128))                            # resize every image to 128x128 before batching

dls = bears.dataloaders(path)

dls.valid.show_batch(max_n=4, nrows=1)

bears = bears.new(item_tfms=Resize(128, ResizeMethod.Squish))
dls = bears.dataloaders(path)
dls.valid.show_batch(max_n=4, nrows=1)

bears = bears.new(item_tfms=Resize(128, ResizeMethod.Pad, pad_mode='zeros'))
dls = bears.dataloaders(path)
dls.valid.show_batch(max_n=4, nrows=1)

bears = bears.new(item_tfms=RandomResizedCrop(128, min_scale=0.3))
dls = bears.dataloaders(path)
dls.train.show_batch(max_n=4, nrows=1, unique=True)

bears = bears.new(item_tfms=Resize(128), batch_tfms=aug_transforms(mult=2))
dls = bears.dataloaders(path)
dls.train.show_batch(max_n=8, nrows=2, unique=True)

bears = bears.new(
    item_tfms=RandomResizedCrop(224, min_scale=0.5),
    batch_tfms=aug_transforms())
dls = bears.dataloaders(path)

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()

interp.plot_top_losses(5, nrows=1)

#hide_output
cleaner = ImageClassifierCleaner(learn)
cleaner

#hide
import shutil  # may already be in the namespace via fastai's imports; importing explicitly is harmless

# Delete images marked for deletion, and move re-labelled images into the correct class folder
for idx in cleaner.delete(): cleaner.fns[idx].unlink()
for idx,cat in cleaner.change(): shutil.move(str(cleaner.fns[idx]), path/cat)
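
After deleting and moving files with the cleaner, the existing dls and learn no longer reflect what is on disk. A common follow-up (not part of the original snippet, just a sketch reusing the objects defined above) is to rebuild the dataloaders and fine-tune again:

dls = bears.dataloaders(path)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(4)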