Paper note: Segment Anything

SAM is a Vision Transformer-based image segmentation model that builds on MAE pre-training and CLIP. The work has three parts: a promptable segmentation task; a model architecture consisting of an image encoder, a prompt encoder, and a mask decoder; and a data engine whose model-assisted annotation pipeline moves from manual to fully automatic labeling to strengthen generalization.


Segment Anything: Paper & Code & Demo

Three questions before research

  1. What task will enable zero-shot generalization?
  2. What is the corresponding model architecture?
  3. What data can power this task and model?

SAM (segment anything model)
A foundation model for image segmentation

Three components:

[Figure: overview of the three components of the Segment Anything project]

  1. a promptable segmentation task
  2. a segmentation model (SAM)
  3. a data engine for collecting a dataset of over 1 billion masks
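How the three components fit together can be sketched as a single data flow. Everything below is a hypothetical stand-in (shapes and function names are illustrative, not the actual SAM API):

```python
import numpy as np

def image_encoder(image):
    """Stand-in for the MAE pre-trained ViT: maps an image to a dense embedding.
    Hypothetical: 16x spatial downsampling to a 256-channel grid."""
    h, w, _ = image.shape
    return np.zeros((h // 16, w // 16, 256))

def prompt_encoder(points):
    """Stand-in: one 256-d token per prompt point."""
    return np.zeros((len(points), 256))

def mask_decoder(image_embedding, prompt_tokens):
    """Stand-in: one low-resolution mask per prompt token."""
    h, w, _ = image_embedding.shape
    return np.zeros((len(prompt_tokens), h, w))

image = np.zeros((1024, 1024, 3))
masks = mask_decoder(image_encoder(image), prompt_encoder([(500, 500)]))
print(masks.shape)  # (1, 64, 64)
```

The point of this shape-level sketch is the amortization: the expensive image embedding is computed once, and each new prompt only re-runs the lightweight prompt encoder and mask decoder.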

Model:

Image encoder:

Uses an MAE pre-trained Vision Transformer (ViT), run once per image to produce an image embedding.
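As context, the tokenization step of any ViT splits the image into fixed-size patches before the transformer runs. A minimal numpy sketch (the 1024-pixel resolution and 16-pixel patch size are illustrative assumptions):

```python
import numpy as np

def patchify(image, patch=16):
    """Split an HxWxC image into non-overlapping, flattened patches —
    the tokenization step of a Vision Transformer."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    # Reshape into a grid of patches, then flatten each patch to one token.
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)
    return grid.reshape(-1, patch * patch * c)  # (num_tokens, patch_dim)

tokens = patchify(np.zeros((1024, 1024, 3)))
print(tokens.shape)  # (4096, 768)
```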

Prompt encoder:

Two sets of prompts: sparse (points, boxes, text) and dense (masks).
Sparse prompts are represented by positional encodings summed with learned per-type embeddings, with free-form text embedded by CLIP's text encoder; dense mask prompts are embedded using convolutions and summed element-wise with the image embedding.
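The "positional encodings" for point prompts can be illustrated with a random-Fourier-feature sketch; the frequency count and output dimension here are assumptions for illustration, not SAM's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical: 128 random 2-D frequencies -> a 256-d encoding per point.
freqs = rng.normal(size=(2, 128))

def encode_point(x, y, image_size=1024):
    """Random-Fourier positional encoding of an (x, y) point prompt."""
    coords = np.array([x, y]) / image_size       # normalize to [0, 1]
    proj = 2 * np.pi * coords @ freqs            # (128,)
    return np.concatenate([np.sin(proj), np.cos(proj)])  # (256,)

print(encode_point(500, 500).shape)  # (256,)
```

Nearby points map to similar vectors while distant points decorrelate, which is what lets the decoder reason about where a prompt landed in the image.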

Mask decoder:

Uses prompt self-attention and cross-attention in two directions (prompt-to-image embedding and vice versa) to update both sets of embeddings before predicting masks.

Data engine:

To achieve strong generalization to new data distributions, the dataset was built with a three-stage, model-in-the-loop annotation process:

1. assisted-manual:

SAM assists annotators in drawing masks, as in a classic interactive segmentation setup

2. semi-automatic:

SAM automatically generates masks for a confident subset of objects, while annotators label the remaining objects, increasing mask diversity

3. fully automatic:

prompt SAM with a regular grid of foreground points, producing masks for the whole image without human input
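A sketch of such a regular grid of point prompts (the 32-points-per-side grid size is an assumption here, not a value taken from the paper):

```python
import numpy as np

def point_grid(n_per_side, image_size=1024):
    """Regular grid of foreground point prompts covering the image,
    with points placed at cell centers."""
    offset = image_size / (2 * n_per_side)
    coords = np.linspace(offset, image_size - offset, n_per_side)
    xs, ys = np.meshgrid(coords, coords)
    return np.stack([xs.ravel(), ys.ravel()], axis=-1)  # (n*n, 2)

grid = point_grid(32)
print(grid.shape)  # (1024, 2)
```

Each grid point is fed to SAM as a single foreground-point prompt; the resulting per-point masks are then deduplicated and filtered to yield the final automatic annotations.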
