
CellSAM: A Foundation Model for Cell Segmentation

Try the demo!

Description

This repository provides inference code for CellSAM. CellSAM is described in more detail in the preprint, and is publicly deployed at cellsam.deepcell.org. CellSAM achieves state-of-the-art performance on segmentation across a variety of cellular targets (bacteria, tissue, yeast, cell culture, etc.) and imaging modalities (brightfield, fluorescence, phase, etc.). Feel free to reach out for support/questions! The full dataset used to train CellSAM is available here.

Getting started

The easiest way to get started with CellSAM is with pip:

pip install git+https://github.com/vanvalenlab/cellSAM.git

CellSAM requires python>=3.10, but otherwise uses pure PyTorch. A sample image is included in this repository. Segmentation can be performed as follows:

import numpy as np
from cellSAM import segment_cellular_image

# Load the sample image bundled with this repository
img = np.load("sample_imgs/yeaz.npy")

# Pass device='cpu' if no CUDA-capable GPU is available
mask, _, _ = segment_cellular_image(img, device='cuda')

For more details, see cellsam_introduction.ipynb.

Napari package

CellSAM includes a basic napari package for annotation functionality. To install the additional napari dependencies, use pip.

pip install git+https://github.com/vanvalenlab/cellSAM.git#egg=cellsam[napari]

To launch the napari app, run cellsam napari.

Citation

Please cite us if you use CellSAM.

@article{israel2023foundation,
  title={A Foundation Model for Cell Segmentation},
  author={Israel, Uriah and Marks, Markus and Dilip, Rohit and Li, Qilin and Schwartz, Morgan and Pradhan, Elora and Pao, Edward and Li, Shenyi and Pearson-Goulart, Alexander and Perona, Pietro and others},
  journal={bioRxiv},
  year={2023},
  publisher={Cold Spring Harbor Laboratory},
  doi={10.1101/2023.11.17.567630},
}

cellsam's People

Contributors: damaggu, rdilip, rossbar


cellsam's Issues

Package name

Prior to the first release, we should ensure that the package name is PEP 8 compliant - i.e. no camel-casing in the package name. In practice this means cellSAM should be renamed to either cellsam or cell_sam (or some other PEP 8-compliant package name).

Note this doesn't affect the repo name, only the name of the Python package itself.

How to convert ground truth cell masks into bounding boxes

Hi,

Thank you for your amazing work! I know that you are working on the training scripts for CellSAM, but I do have one quick question: how do you convert the ground truth cell mask labels to bounding boxes when training CellFinder? Is there any related code in the repo, or will it be included in the published training scripts?

Thank you in advance!
Yufan
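(Not the repo's actual code, but a minimal sketch of the conversion being asked about: each labeled region in a ground-truth mask yields one axis-aligned bounding box. The `[x0, y0, x1, y1]` ordering is an assumption.)

```python
import numpy as np

def masks_to_boxes(label_mask):
    """Convert a label image (0 = background, 1..N = cells) to [x0, y0, x1, y1] boxes."""
    boxes = []
    for label in np.unique(label_mask):
        if label == 0:
            continue  # skip background
        ys, xs = np.nonzero(label_mask == label)
        boxes.append([xs.min(), ys.min(), xs.max(), ys.max()])
    return np.array(boxes)

# Two toy cells in an 8x8 label image
mask = np.zeros((8, 8), dtype=np.int64)
mask[1:3, 1:4] = 1
mask[5:7, 4:6] = 2
print(masks_to_boxes(mask))
# [[1 1 3 2]
#  [4 5 5 6]]
```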

Discussion: mask dtype

The model currently returns the label image representing the cell mask with dtype int64. Since this is a label image, it probably makes sense to use an unsigned type instead (unless it's possible for values in the mask to be negative?).

In a similar vein, it might be worth considering uint32 instead of 64-bit to halve the memory footprint of the mask.
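A quick sketch of the proposed change (the label image here is a stand-in, not CellSAM output): labels are non-negative, so a checked cast to uint32 is safe and halves the footprint.

```python
import numpy as np

mask = np.arange(6, dtype=np.int64).reshape(2, 3)  # stand-in label image

# Guard against values a uint32 cannot represent
assert mask.min() >= 0, "labels must be non-negative to cast safely"
assert mask.max() <= np.iinfo(np.uint32).max

mask32 = mask.astype(np.uint32)  # 4 bytes per label instead of 8
print(mask32.nbytes, mask.nbytes)  # 24 48
```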

Model training script of CellSAM

Hello,

I recently explored the CellSAM model repository and appreciate the work you've shared. It has been incredibly helpful for understanding the model's application.

I am interested in replicating the training process and experimenting with some modifications. Could you please share the training script if it's available?

Thank you,
Dat

`segment_cellular_images` should always return the same output type

cellSAM/cellSAM/model.py

Lines 98 to 100 in 74ebceb

if preds is None:
    print("No cells detected.")
    return None

In the case where there are no cells, segment_cellular_images returns None which is different from the standard output of a tuple of length 3. This immediately breaks any downstream processes that rely on a standard output shape. Instead of returning None, I would return a mask of zeros and similarly "empty" versions of the two other outputs.
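A sketch of that suggestion (the shapes and dtypes of the second and third outputs are assumptions, not the library's actual return types):

```python
import numpy as np

def empty_result(img):
    """Hypothetical fallback: keep the usual 3-tuple shape when no cells are found."""
    mask = np.zeros(img.shape[:2], dtype=np.int64)  # all-background label image
    embedding = np.empty((0,))                       # "empty" embedding placeholder
    boxes = np.empty((0, 4))                         # zero bounding boxes
    return mask, embedding, boxes

mask, _, boxes = empty_result(np.zeros((128, 128, 3)))
print(mask.shape, boxes.shape)  # (128, 128) (0, 4)
```

Downstream code that unpacks `mask, _, _ = ...` then keeps working without a None check.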

napari plugin?

Could you implement a napari plugin? It's widely used and I'm sure many people would be happy

Color flag in label layer creation in widget initiation for napari plugin is deprecated

Get the following error when attempting to use the napari widget:

Traceback (most recent call last):
  File "napari_test.py", line 41, in <module>
    viewer.window.add_dock_widget(CellSAMWidget(viewer)) 
                                  ^^^^^^^^^^^^^^^^^^^^^
  File ".venvs/napari/lib/python3.11/site-packages/cellSAM/napari_plugin/_widget.py", line 132, in __init__
    self._mask_layer = self._viewer.add_labels(
                       ^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: add_labels() got an unexpected keyword argument 'color'

The color flag is deprecated.

Error 'numpy.ndarray' object has no attribute 'unsqueeze' when passing bounding boxes into segment_cellular_image

Bounding boxes are converted to ndarray when resized in sam_inference.py (line 234), then unsqueeze is called later as if they are still tensors (line 251). Tested locally by removing the np.array(...) in line 234, and it seems to work - can create a PR if that is helpful.

if boxes_per_heatmap is None:
    boxes_per_heatmap = self.generate_bounding_boxes(images, device=device)
else:
    boxes_per_heatmap = (
        np.array(boxes_per_heatmap) * 1024 / max(images[0].shape)
    )
    # B, N, 4
if not fast:
    boxes_per_heatmap = boxes_per_heatmap[0]
low_masks = []
low_masks_thresholded = []
scores = []
for input_bbox in boxes_per_heatmap:
    # if fast, passes N, 4
    # else, passes 4
    while len(input_bbox.shape) < 2:
        input_bbox = input_bbox.unsqueeze(0)
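A framework-agnostic sketch of the workaround (the helper name is illustrative): once the rescaling produces an ndarray, tensor-only `.unsqueeze()` fails, but `np.expand_dims` does the same job for arrays.

```python
import numpy as np

def ensure_2d(boxes):
    """Add leading axes until `boxes` is at least 2-D; works for ndarrays
    where tensor-only .unsqueeze() would raise AttributeError."""
    boxes = np.asarray(boxes)
    while boxes.ndim < 2:
        boxes = np.expand_dims(boxes, 0)
    return boxes

# A single box rescaled the same way becomes a plain ndarray of shape (4,)
box = np.array([10.0, 20.0, 50.0, 60.0]) * 1024 / 512
print(ensure_2d(box).shape)  # (1, 4)
```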

Bounding box prediction using other backbones

Hello, first of all I would like to congratulate you on an awesome piece of work.

I am developing a series of Java-based plugins for different Java software platforms (Fiji, ImageJ, Icy) that use lighter variants of SAM to improve manual annotation.

We also want to include automatic segmentation so we thought that providing CellSAM would be the best option, because the implementation of SAM Everything does not really work on cells.

We want to use lighter variants of SAM because we want the plugins to run on any computer and the faster the better.

These are the models that we are using:

From what I have seen, you trained the AnchorDetr model with the SAM-b ViT encoder. SAM-b has a different number of features than the models that we use. Do you know of any way to adapt your AnchorDetr to these models? I am thinking about interpolating the output feature maps.

If not, for how long did you train the model? And on how many GPUs?

Regards,
Carlos
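Not the authors' method, just one possible adapter under assumed shapes: if a lighter encoder emits a different channel count than the detection head expects, a learned per-pixel linear projection (a 1x1 convolution, written here as a plain matrix multiply) can map features to the expected width. The channel counts (160 and 256) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.standard_normal((160, 64, 64))  # hypothetical lighter-encoder features

# 1x1-conv-style channel adapter: a linear map from 160 to 256 channels,
# applied independently at every spatial location
W = rng.standard_normal((256, 160)) * 0.01
adapted = np.einsum('oc,chw->ohw', W, feats)
print(adapted.shape)  # (256, 64, 64)
```

In practice such an adapter would be a trainable layer fine-tuned alongside the detection head, since the two encoders' feature spaces are not aligned.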

Getting started...

Thanks for uploading @rdilip

The first order of business is to define an interface for users. IMO the easiest way to start is with a minimal set of examples in the README.

One component that is obviously necessary to make this usable is the model itself. We can start by ensuring that the current version is available on users.deepcell.org. From perusing the code it looks like these assets are necessary in order to perform inference. Are there others?

Add option to pass bounding boxes to process_and_segment_image

It would be great to have an option to pass in bounding boxes to this function. I believe this can be done by allowing a value other than None here.

I believe there is some resizing (rescaling?) of images larger than 1k that goes on under the hood. I think the bounding boxes will need to be similarly scaled, and/or the expectations will need to be made explicit in some way.
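A sketch of the scaling being described (the function name, box ordering, and 1024-pixel target are assumptions based on the resizing discussed above): boxes must be multiplied by the same factor applied to the image.

```python
import numpy as np

def rescale_boxes(boxes, orig_hw, target=1024):
    """Scale [x0, y0, x1, y1] boxes by the same factor used to resize the image."""
    scale = target / max(orig_hw)
    return np.asarray(boxes, dtype=float) * scale

# A 2048x1536 image shrunk to fit 1024 implies a 0.5x factor on its boxes
print(rescale_boxes([[10, 20, 100, 200]], (2048, 1536)))  # [[  5.  10.  50. 100.]]
```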
