Semantic Segmentation in Python: A TensorFlow/Keras Semantic Segmentation Roundup

This project provides a set of semantic segmentation solutions implemented with TensorFlow and Keras, covering models such as FCN and UNet and supporting a variety of loss functions and optimizers.


Amazing-Semantic-Segmentation

Amazing Semantic Segmentation on TensorFlow && Keras (including FCN, UNet, SegNet, PSPNet, PAN, RefineNet, DeepLabV3, DeepLabV3+, DenseASPP, BiSegNet, ...)

Models

The project supports the following semantic segmentation models:

Base Models

The project supports the following backbone models, and you can choose a suitable base model according to your needs.

Losses

The project supports these loss functions:

Cross Entropy

Focal Loss

MIoU Loss

Self Balanced Focal Loss (original)

...
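For orientation, here is a minimal sketch of a categorical focal loss in Keras; it is a generic illustration under assumed gamma/alpha defaults, not the implementation in this repository's losses module.

import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25):
    # Generic categorical focal loss sketch (not the repo's exact code).
    # y_true: one-hot labels, (batch, H, W, num_classes); y_pred: softmax output.
    def loss(y_true, y_pred):
        eps = tf.keras.backend.epsilon()
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        ce = -y_true * tf.math.log(y_pred)            # cross-entropy term
        weight = alpha * tf.pow(1.0 - y_pred, gamma)  # down-weight easy pixels
        return tf.reduce_sum(weight * ce, axis=-1)
    return loss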

Optimizers

The project supports these optimizers:

SGD

Adam

Nadam

AdamW

NadamW

SGDW
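As a rough sketch of how an optimizer is plugged into a Keras model (the tiny placeholder model, learning rate and loss below are assumptions, not the repository's train.py):

import tensorflow as tf

# Placeholder two-layer model just to have something to compile.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, padding='same', activation='relu',
                           input_shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(2, 1, activation='softmax'),
])

# Plain Keras optimizers cover SGD / Adam / Nadam; the decoupled
# weight-decay variants (AdamW, NadamW, SGDW) come from the project's
# own optimizer implementations.
optimizer = tf.keras.optimizers.SGD(0.01, momentum=0.9)
model.compile(optimizer=optimizer, loss='categorical_crossentropy')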

Learning Rate Scheduler

The project supports these learning rate schedule strategies:

step decay

poly decay

cosine decay

warm up
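For example, a poly decay with a linear warm-up can be expressed as a Keras LearningRateScheduler callback; the sketch below uses assumed values for the base learning rate, warm-up length and power, and is not copied from the repository.

import tensorflow as tf

def poly_decay_with_warmup(base_lr=1e-3, total_epochs=100,
                           warmup_epochs=5, power=0.9):
    # Linear warm-up for the first few epochs, then polynomial decay towards 0.
    def schedule(epoch):
        if epoch < warmup_epochs:
            return base_lr * (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return base_lr * (1.0 - progress) ** power
    return tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)

# Usage: model.fit(..., callbacks=[poly_decay_with_warmup()])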

Dataset Setting

The folders of your dataset must satisfy the following structure:

|-- dataset
|   |-- train
|   |   |-- images
|   |   |-- labels
|   |-- valid
|   |   |-- images
|   |   |-- labels
|   |-- test
|   |   |-- images
|   |   |-- labels
|   |-- class_dict.csv
|   |-- evaluated_classes
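The snippet below is a small sanity check for this layout; the root path is taken from the structure above, and nothing here is lifted from the repository's data-loading code.

import os

def check_dataset(root='dataset'):
    # Verify the split folders and the two metadata files exist.
    for split in ('train', 'valid', 'test'):
        for sub in ('images', 'labels'):
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                raise FileNotFoundError('missing folder: ' + path)
    for name in ('class_dict.csv', 'evaluated_classes'):
        if not os.path.exists(os.path.join(root, name)):
            raise FileNotFoundError('missing file: ' + os.path.join(root, name))
    print('dataset layout looks OK')

check_dataset('dataset')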

Installation

NumPy: pip install numpy

Pillow: pip install pillow

OpenCV: pip install opencv-python

TensorFlow: pip install tensorflow-gpu

Note: The recommended version of tensorflow-gpu is 1.14 or 2.0. If your TensorFlow version is lower, you need to modify some API calls or upgrade TensorFlow.
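A quick, generic way to confirm which version is installed and whether a GPU is visible (this check is not part of the project):

import tensorflow as tf

print('TensorFlow version:', tf.__version__)       # should be 1.14 or 2.0
print('GPU available:', tf.test.is_gpu_available())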

Usage

Download

You can download the project through this command:

git clone git@github.com:luyanger1799/Amazing-Semantic-Segmentation.git

Training

The project contains complete code for training, testing and predicting, and you can train a model on your dataset with a simple command like this:

python train.py --model FCN-8s --base_model ResNet50 --dataset "dataset_path" --num_classes "num_classes"

The detailed command line parameters are as follows:

usage: train.py [-h] --model MODEL [--base_model BASE_MODEL] --dataset DATASET
                [--loss {CE,Focal_Loss}] --num_classes NUM_CLASSES
                [--random_crop RANDOM_CROP] [--crop_height CROP_HEIGHT]
                [--crop_width CROP_WIDTH] [--batch_size BATCH_SIZE]
                [--valid_batch_size VALID_BATCH_SIZE]
                [--num_epochs NUM_EPOCHS] [--initial_epoch INITIAL_EPOCH]
                [--h_flip H_FLIP] [--v_flip V_FLIP]
                [--brightness BRIGHTNESS [BRIGHTNESS ...]]
                [--rotation ROTATION]
                [--zoom_range ZOOM_RANGE [ZOOM_RANGE ...]]
                [--channel_shift CHANNEL_SHIFT]
                [--data_aug_rate DATA_AUG_RATE]
                [--checkpoint_freq CHECKPOINT_FREQ]
                [--validation_freq VALIDATION_FREQ]
                [--num_valid_images NUM_VALID_IMAGES]
                [--data_shuffle DATA_SHUFFLE] [--random_seed RANDOM_SEED]
                [--weights WEIGHTS]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         Choose the semantic segmentation methods.
  --base_model BASE_MODEL
                        Choose the backbone model.
  --dataset DATASET     The path of the dataset.
  --loss {CE,Focal_Loss}
                        The loss function for training.
  --num_classes NUM_CLASSES
                        The number of classes to be segmented.
  --random_crop RANDOM_CROP
                        Whether to randomly crop the image.
  --crop_height CROP_HEIGHT
                        The height to crop the image.
  --crop_width CROP_WIDTH
                        The width to crop the image.
  --batch_size BATCH_SIZE
                        The training batch size.
  --valid_batch_size VALID_BATCH_SIZE
                        The validation batch size.
  --num_epochs NUM_EPOCHS
                        The number of epochs to train for.
  --initial_epoch INITIAL_EPOCH
                        The initial epoch of training.
  --h_flip H_FLIP       Whether to randomly flip the image horizontally.
  --v_flip V_FLIP       Whether to randomly flip the image vertically.
  --brightness BRIGHTNESS [BRIGHTNESS ...]
                        Randomly change the brightness (list).
  --rotation ROTATION   The angle to randomly rotate the image.
  --zoom_range ZOOM_RANGE [ZOOM_RANGE ...]
                        The times for zooming the image.
  --channel_shift CHANNEL_SHIFT
                        The channel shift range.
  --data_aug_rate DATA_AUG_RATE
                        The rate of data augmentation.
  --checkpoint_freq CHECKPOINT_FREQ
                        How often to save a checkpoint.
  --validation_freq VALIDATION_FREQ
                        How often to perform validation.
  --num_valid_images NUM_VALID_IMAGES
                        The number of images used for validation.
  --data_shuffle DATA_SHUFFLE
                        Whether to shuffle the data.
  --random_seed RANDOM_SEED
                        The random shuffle seed.
  --weights WEIGHTS     The path of weights to be loaded.

If you only want to use the model in your own training code, you can do it like this:

from builders.model_builder import builder

model, base_model = builder(num_classes, input_size, model='SegNet', base_model=None)

Note: If you don't provide the "base_model" parameter, the default backbone will be used.
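As a minimal sketch of training the returned Keras model yourself (num_classes, input_size, the optimizer settings and the commented-out data arrays are assumptions; train.py in the repository handles all of this for you):

import tensorflow as tf
from builders.model_builder import builder

num_classes = 21            # assumed number of classes
input_size = (256, 256)     # assumed input size
model, base_model = builder(num_classes, input_size, model='SegNet')

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# images: (N, 256, 256, 3), labels: one-hot (N, 256, 256, num_classes)
# model.fit(images, labels, batch_size=4, epochs=50)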

Testing

Similarly, you can evaluate the model on your own dataset:

python test.py --model FCN-8s --base_model ResNet50 --dataset "dataset_path" --num_classes "num_classes" --weights "weights_path"

Note: If the parameter "weights" is None, the weights saved in the default path will be loaded.

Predicting

You can get the prediction for a single RGB image like this:

python predict.py --model FCN-8s --base_model ResNet50 --num_classes "num_classes" --weights "weights_path" --image_path "image_path"
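If you prefer to run inference by hand, a rough sketch with the builder API could look like the following; the preprocessing (a plain resize with no normalization), the weights file name and the class count are assumptions, while predict.py applies the project's own pipeline and colour-codes the output using class_dict.csv.

import cv2
import numpy as np
from builders.model_builder import builder

num_classes = 21                                   # assumed
input_size = (256, 256)
model, _ = builder(num_classes, input_size, model='FCN-8s', base_model='ResNet50')
model.load_weights('weights_path.h5')              # hypothetical weights file

image = cv2.cvtColor(cv2.imread('image_path.png'), cv2.COLOR_BGR2RGB)
image = cv2.resize(image, input_size).astype(np.float32)[np.newaxis, ...]
probs = model.predict(image)                          # (1, H, W, num_classes)
mask = np.argmax(probs[0], axis=-1).astype(np.uint8)  # per-pixel class ids
cv2.imwrite('prediction.png', mask)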

Evaluating

If you already have the predictions for all test images, or you don't want to evaluate all classes, you can do it like this:

python evaluate.py --dataset 'dataset_path' --predictions 'prediction_path'

Note: You must specify the classes to be evaluated in dataset/evaluated_classes.txt.
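For reference, per-class IoU (the usual evaluation metric here) boils down to intersection over union of the class masks; the sketch below is a generic illustration, not the repository's evaluate.py.

import numpy as np

def class_iou(label, pred, class_id):
    # IoU for one class over integer class-id masks.
    inter = np.logical_and(label == class_id, pred == class_id).sum()
    union = np.logical_or(label == class_id, pred == class_id).sum()
    return inter / union if union > 0 else float('nan')

label = np.array([[0, 1], [1, 1]])
pred = np.array([[0, 1], [0, 1]])
print(class_iou(label, pred, 1))   # 2 / 3 ≈ 0.667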

PyPI

Alternatively, you can install the project through PyPI.

pip install semantic-segmentation

You can use model_builders to build different models or directly call the semantic segmentation model classes.

from semantic_segmentation import model_builders

net, base_net = model_builders(num_classes, input_size, model='SegNet', base_model=None)

or

from semantic_segmentation import models

net = models.FCN(num_classes, version='FCN-8s')(input_size=input_size)

Pre-trained

Due to my limited computing resources, there are no pre-trained models yet. They may be added in the future.

Feedback

If you like this work, please give it a star! If you find any errors or have any suggestions, please contact me.

GitHub: luyanger1799

Email: luyanger1799@outlook.com
