SqueezeDet: Convolutional Neural Network Object Detection in TensorFlow

This article walks through installing and demoing the SqueezeDet real-time object detector, including cloning the repository, setting up a virtual environment, and installing dependencies, followed by concrete instructions for training and evaluating the models.


Installation:

The following instructions are written for Linux-based distros.

  • Clone the SqueezeDet repository:

    git clone https://github.com/BichenWuUCB/squeezeDet.git

    Let's call the top level directory of SqueezeDet $SQDT_ROOT.

  • (Optional) Setup your own virtual environment.

    1. The following assumes python is the Python 2.7 executable. Navigate to your home directory and create the virtual environment there.

       cd ~
       virtualenv env --python=python

    2. Launch the virtual environment.

       source env/bin/activate
  • Use pip to install required Python packages:

    pip install -r requirements.txt

Demo:

  • Download the SqueezeDet model parameters, untar them, and put them under $SQDT_ROOT/data/. From the command line, type:

    cd $SQDT_ROOT/data/
    wget https://www.dropbox.com/s/a6t3er8f03gdl4z/model_checkpoints.tgz
    tar -xzvf model_checkpoints.tgz
    rm model_checkpoints.tgz
  • Now we can run the demo. To detect the sample image $SQDT_ROOT/data/sample.png,

    cd $SQDT_ROOT/
    python ./src/demo.py

    If the installation is correct, the detector should produce an output image with the detected bounding boxes drawn on it (the original README shows the expected result here).

    To detect other image(s), use the flag --input_path=./data/*.png to point to the input image(s). Input images are scaled to a resolution of 1242x375 (the KITTI image resolution), so detection works best when the original resolution is close to that.
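    The rescaling to KITTI resolution described above can be sketched in plain NumPy using nearest-neighbor sampling. This is an illustrative sketch only; the repository's demo code may use a different interpolation method and image library.

    ```python
    import numpy as np

    def resize_nearest(img, out_h=375, out_w=1242):
        """Nearest-neighbor resize of an (H, W, C) image array to KITTI resolution."""
        h, w = img.shape[:2]
        rows = np.arange(out_h) * h // out_h   # source row for each output row
        cols = np.arange(out_w) * w // out_w   # source column for each output column
        return img[rows[:, None], cols]

    # e.g. a 370x1224 KITTI-like image becomes 375x1242
    img = np.zeros((370, 1224, 3), dtype=np.uint8)
    print(resize_nearest(img).shape)
    ```

    Because nearest-neighbor only re-indexes pixels, the sketch stays dependency-free; a real pipeline would typically prefer bilinear interpolation for visual quality.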

  • SqueezeDet is a real-time object detector, which can also be used to detect objects in videos. A video demo will be released later.

Training/Validation:

  • Download KITTI object detection dataset: images and labels. Put them under $SQDT_ROOT/data/KITTI/. Unzip them, then you will get two directories: $SQDT_ROOT/data/KITTI/training/ and $SQDT_ROOT/data/KITTI/testing/.

  • Now we need to split the training data into a training set and a validation set.

    cd $SQDT_ROOT/data/KITTI/
    mkdir ImageSets
    cd ./ImageSets
    ls ../training/image_2/ | grep ".png" | sed s/.png// > trainval.txt

    trainval.txt contains the indices of all images in the training data. In our experiments, we randomly split half of the indices in trainval.txt into train.txt to form a training set, and the rest into val.txt to form a validation set. For your convenience, we provide a script to perform the train-val split automatically. Simply run

    cd $SQDT_ROOT/data/
    python random_split_train_val.py

    then you should get the train.txt and val.txt under $SQDT_ROOT/data/KITTI/ImageSets.

    When the above two steps are finished, the structure of $SQDT_ROOT/data/KITTI/ should at least contain:

    $SQDT_ROOT/data/KITTI/
                      |->training/
                      |     |-> image_2/00****.png
                      |     L-> label_2/00****.txt
                      |->testing/
                      |     L-> image_2/00****.png
                      L->ImageSets/
                            |-> trainval.txt
                            |-> train.txt
                            L-> val.txt
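  The 50/50 split performed by random_split_train_val.py can be sketched as follows. This is an illustrative Python sketch of the idea, not the repository's actual implementation; the fixed seed is an assumption added for reproducibility.

    ```python
    import random

    def split_train_val(trainval_path, train_path, val_path, seed=0):
        """Randomly split the indices in trainval.txt 50/50 into train/val files."""
        with open(trainval_path) as f:
            indices = [line.strip() for line in f if line.strip()]
        random.Random(seed).shuffle(indices)   # deterministic shuffle for a fixed seed
        half = len(indices) // 2
        with open(train_path, "w") as f:
            f.write("\n".join(sorted(indices[:half])) + "\n")
        with open(val_path, "w") as f:
            f.write("\n".join(sorted(indices[half:])) + "\n")
    ```

  Sorting each half keeps the output files in index order, which makes them easier to diff and inspect.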
  • Next, download the CNN model pretrained for ImageNet classification:

    cd $SQDT_ROOT/data/
    # SqueezeNet
    wget https://www.dropbox.com/s/fzvtkc42hu3xw47/SqueezeNet.tgz
    tar -xzvf SqueezeNet.tgz
    # ResNet50 
    wget https://www.dropbox.com/s/p65lktictdq011t/ResNet.tgz
    tar -xzvf ResNet.tgz
    # VGG16
    wget https://www.dropbox.com/s/zxd72nj012lzrlf/VGG16.tgz
    tar -xzvf VGG16.tgz
  • Now we can start training. The training script is $SQDT_ROOT/scripts/train.sh, which contains commands to train four models: SqueezeDet, SqueezeDet+, VGG16+ConvDet, and ResNet50+ConvDet. Uncomment the model you want to train, and then type the following to train using only the CPU:

    cd $SQDT_ROOT/
    ./scripts/train.sh squeezeDet

    To train using the GPU, add the -gpu flag.

    ./scripts/train.sh squeezeDet -gpu

    Training logs are saved to the directory specified by --train_dir.

  • At the same time, you can launch evaluation by

    cd $SQDT_ROOT/
    ./scripts/eval_train.sh
    ./scripts/eval_val.sh

    If you've changed --train_dir in the training script, make sure to also change --checkpoint_dir in the evaluation script to the same value, so the evaluation script knows where to find the checkpoints. The evaluation logs are dumped into the directory specified by --eval_dir. It's recommended to put --train_dir and --eval_dir under the same $LOG_DIR so that tensorboard can load both the training and evaluation logs.

    The two scripts evaluate the model on the training and validation sets simultaneously. The training script keeps dumping checkpoints (model parameters) to the training directory once every 1000 steps (the step interval can be changed). Once a new checkpoint is saved, the evaluation threads load it and evaluate it on the training and validation sets.
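    The polling behavior described above can be sketched as follows. This is a simplified illustration of the idea, not the repository's actual code; in practice TensorFlow's tf.train.latest_checkpoint performs this lookup.

    ```python
    import glob
    import os
    import re

    def latest_checkpoint_step(train_dir):
        """Return the highest global step among model.ckpt-* files, or None."""
        steps = []
        for path in glob.glob(os.path.join(train_dir, "model.ckpt-*")):
            m = re.search(r"model\.ckpt-(\d+)", os.path.basename(path))
            if m:
                steps.append(int(m.group(1)))
        return max(steps) if steps else None

    # An evaluation loop would poll this periodically, e.g.:
    # while True:
    #     step = latest_checkpoint_step(train_dir)
    #     if step != last_evaluated:
    #         evaluate(step); last_evaluated = step
    #     time.sleep(60)
    ```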

  • Finally, to monitor training and evaluation process, you can use tensorboard by

    tensorboard --logdir=$LOG_DIR

    Here, $LOG_DIR is the directory where your training and evaluation threads dump log events. As mentioned before, the training directory is specified by the flag --train_dir and the evaluation directory by --eval_dir, so $LOG_DIR should be the parent directory of both. From tensorboard, you should be able to see a lot of information including loss, average precision, error analysis, example detections, model visualization, etc.
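    One directory arrangement that satisfies this recommendation looks like the following. The subdirectory names are illustrative, not mandated by the scripts:

    ```shell
    # Illustrative layout: keep training and evaluation logs under one parent
    # so a single tensorboard --logdir can see all of them.
    LOG_DIR=./logs/squeezeDet
    mkdir -p "$LOG_DIR/train" "$LOG_DIR/eval_train" "$LOG_DIR/eval_val"
    # then pass --train_dir="$LOG_DIR/train" to the training script and
    # --eval_dir="$LOG_DIR/eval_train" / "$LOG_DIR/eval_val" to the eval scripts
    ls "$LOG_DIR"
    ```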

