
Fast Segment Anything

[📕Paper] [🤗HuggingFace Demo] [Colab demo] [Replicate demo & API] [Model Zoo] [BibTeX]

[Figure: FastSAM speed comparison]

The Fast Segment Anything Model (FastSAM) is a CNN-based Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to SAM at 50× higher run-time speed.

[Figure: FastSAM design]

🍇 Updates

Installation

Clone the repository locally:

git clone https://github.com/CASIA-IVA-Lab/FastSAM.git

Create the conda env. The code requires python>=3.7, as well as pytorch>=1.7 and torchvision>=0.8. Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

conda create -n FastSAM python=3.9
conda activate FastSAM
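
For reference, a typical CUDA-enabled PyTorch install looks like the following; the cu118 index URL is an assumption, so pick the command matching your CUDA version from the official PyTorch instructions:

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118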

Install the packages:

cd FastSAM
pip install -r requirements.txt

Install CLIP:

pip install git+https://github.com/openai/CLIP.git
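
Optionally, run a quick sanity check that the core dependencies import and that CUDA is visible (a minimal sketch; the versions in the comments are the minimums stated above):

import torch
import torchvision
import clip  # installed from the OpenAI CLIP repository above

print(torch.__version__)        # expect >= 1.7
print(torchvision.__version__)  # expect >= 0.8
print('CUDA available:', torch.cuda.is_available())
print('CLIP models:', clip.available_models())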

Getting Started

First download a model checkpoint.
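
For example (the checkpoint itself comes from the Model Checkpoints section below; the commands that follow assume it is saved as ./weights/FastSAM.pt):

mkdir -p weights
# move the downloaded FastSAM.pt into ./weights/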

Then, you can run the scripts to try the everything mode and three prompt modes.

# Everything mode
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg
# Text prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --text_prompt "the yellow dog"
# Box prompt (xywh)
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --box_prompt "[570,200,230,400]"
# Points prompt
python Inference.py --model_path ./weights/FastSAM.pt --img_path ./images/dogs.jpg --point_prompt "[[520,360],[620,300]]" --point_label "[1,0]"
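
Note that the CLI box prompt above uses [x, y, w, h], while the Python API's box_prompt below expects [x1, y1, x2, y2]. A small illustrative helper (not part of the package) to convert between the two conventions:

def xywh_to_xyxy(box):
    # convert [x, y, w, h] to [x1, y1, x2, y2]
    x, y, w, h = box
    return [x, y, x + w, y + h]

print(xywh_to_xyxy([570, 200, 230, 400]))  # -> [570, 200, 800, 600]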

You can use the following code to generate all masks, make mask selection based on prompts, and visualize the results.

from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')
IMAGE_PATH = './images/dogs.jpg'
DEVICE = 'cpu'
everything_results = model(IMAGE_PATH, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
prompt_process = FastSAMPrompt(IMAGE_PATH, everything_results, device=DEVICE)

# everything prompt: keep all generated masks
ann = prompt_process.everything_prompt()

# box prompt: bbox is [x1, y1, x2, y2] (default [0, 0, 0, 0])
ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])

# text prompt
ann = prompt_process.text_prompt(text='a photo of a dog')

# point prompt: points is [[x1, y1], [x2, y2], ...] (default [[0, 0]])
# pointlabel marks each point as 0 (background) or 1 (foreground); default [0]
ann = prompt_process.point_prompt(points=[[620, 360]], pointlabel=[1])

prompt_process.plot(annotations=ann, output='./output/')
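
As a variation, here is a minimal sketch (assuming only the fastsam API shown above; the second image path is hypothetical) that picks the device at runtime and reuses one loaded model across several images:

import torch
from fastsam import FastSAM, FastSAMPrompt

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = FastSAM('./weights/FastSAM.pt')  # load once, reuse for every image

for path in ['./images/dogs.jpg', './images/cats.jpg']:  # cats.jpg is a placeholder
    results = model(path, device=device, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)
    prompt_process = FastSAMPrompt(path, results, device=device)
    ann = prompt_process.everything_prompt()
    prompt_process.plot(annotations=ann, output='./output/')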

You are also welcome to try our Colab demo: FastSAM_example.ipynb.

Different Inference Options

We provide various options for different purposes; details are in MORE_USAGES.md.

Web demo

Gradio demo

  • We also provide a UI for testing our method, built with Gradio. You can upload a custom image, select a mode, set the parameters, click the segment button, and get a satisfactory segmentation result. Everything mode and points mode are currently supported for interaction; support for the other modes is planned. Running the following command in a terminal will launch the demo:

# Download the pre-trained model to ./weights/FastSAM.pt first
python app_gradio.py

[Screenshots: HuggingFace demo, Everything mode and Points mode]

Replicate demo

  • The Replicate demo supports all modes; you can try the points, box, and text prompts.

[Screenshots: Replicate demo]

Model Checkpoints

Two versions of the model are available in different sizes. Click the links below to download the checkpoint for the corresponding model type.

Results

All results were measured on a single NVIDIA GeForce RTX 3090.

1. Inference time

Running speed under different numbers of point prompts (ms). E(n×n) denotes everything mode with an n×n point grid.

| method | params | 1 | 10 | 100 | E(16x16) | E(32x32*) | E(64x64) |
|---|---|---|---|---|---|---|---|
| SAM-H | 0.6G | 446 | 464 | 627 | 852 | 2099 | 6972 |
| SAM-B | 136M | 110 | 125 | 230 | 432 | 1383 | 5417 |
| FastSAM | 68M | 40 | 40 | 40 | 40 | 40 | 40 |
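
The "50× higher run-time speed" claim in the introduction follows from this table; for example, comparing SAM-H in everything mode with a 32x32 point grid against FastSAM:

# values (ms) taken from the table above
sam_h_ms = 2099   # SAM-H, E(32x32)
fastsam_ms = 40   # FastSAM, constant across prompt counts
print(f'speedup: {sam_h_ms / fastsam_ms:.0f}x')  # ~52x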

2. Memory usage

| Dataset | Method | GPU Memory (MB) |
|---|---|---|
| COCO 2017 | FastSAM | 2608 |
| COCO 2017 | SAM-H | 7060 |
| COCO 2017 | SAM-B | 4670 |

3. Zero-shot Transfer Experiments

Edge Detection

Tested on the BSDS500 dataset.

| method | year | ODS | OIS | AP | R50 |
|---|---|---|---|---|---|
| HED | 2015 | .788 | .808 | .840 | .923 |
| SAM | 2023 | .768 | .786 | .794 | .928 |
| FastSAM | 2023 | .750 | .790 | .793 | .903 |

Object Proposals

COCO

| method | AR10 | AR100 | AR1000 | AUC |
|---|---|---|---|---|
| SAM-H E64 | 15.5 | 45.6 | 67.7 | 32.1 |
| SAM-H E32 | 18.5 | 49.5 | 62.5 | 33.7 |
| SAM-B E32 | 11.4 | 39.6 | 59.1 | 27.3 |
| FastSAM | 15.7 | 47.3 | 63.7 | 32.2 |
LVIS

bbox AR@1000:

| method | all | small | med. | large |
|---|---|---|---|---|
| ViTDet-H | 65.0 | 53.2 | 83.3 | 91.2 |
| zero-shot transfer methods: | | | | |
| SAM-H E64 | 52.1 | 36.6 | 75.1 | 88.2 |
| SAM-H E32 | 50.3 | 33.1 | 76.2 | 89.8 |
| SAM-B E32 | 45.0 | 29.3 | 68.7 | 80.6 |
| FastSAM | 57.1 | 44.3 | 77.1 | 85.3 |

Instance Segmentation On COCO 2017

| method | AP | APS | APM | APL |
|---|---|---|---|---|
| ViTDet-H | .510 | .320 | .543 | .689 |
| SAM | .465 | .308 | .510 | .617 |
| FastSAM | .379 | .239 | .434 | .500 |

4. Performance Visualization

Several segmentation results:

Natural Images

[Figure: segmentation results on natural images]

Text to Mask

[Figure: text-to-mask segmentation results]

5. Downstream Tasks

Results on several downstream tasks demonstrate the effectiveness of FastSAM.

Anomaly Detection

[Figure: anomaly detection results]

Salient Object Detection

[Figure: salient object detection results]

Building Extraction

[Figure: building extraction results]

License

The model is licensed under the Apache 2.0 license.

Acknowledgement

Contributors

Our project wouldn't be possible without the contributions of these amazing people! Thank you all for making this project better.

Citing FastSAM

If you find this project useful for your research, please consider citing the following BibTeX entry.

@misc{zhao2023fast,
      title={Fast Segment Anything},
      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
      year={2023},
      eprint={2306.12156},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

[Figure: Star History Chart]
