RootPainter: Deep Learning Segmentation of Biological Images with Corrective Annotation

Home Page: https://nph.onlinelibrary.wiley.com/doi/full/10.1111/nph.18387

License: Other

Languages: Python 97.94%, NSIS 0.88%, Shell 1.18%
Topics: biological-images, root, painter, segmentation, deep-learning, interactive-training, interactive-segmentation, root-painter, gui, human-in-the-loop

root_painter's Introduction

RootPainter

RootPainter is a GUI-based software tool for the rapid training of deep neural networks for use in image analysis. RootPainter uses a client-server architecture, allowing users with a typical laptop to utilise a GPU on a more computationally powerful server.

A detailed description is available in the paper published in New Phytologist: RootPainter: Deep Learning Segmentation of Biological Images with Corrective Annotation

RootPainter Interface

To see a list of work using (or citing) the RootPainter paper, please see the Google Scholar page.

A bioRxiv pre-print (an earlier version of the paper) is available at: https://www.biorxiv.org/content/10.1101/2020.04.16.044461v2

Getting started quickly

I suggest starting with the Colab tutorial.

A shorter mini guide with more concise instructions is also available and can be used as a reference. I suggest the paper, the videos and then the Colab tutorial to get an idea of how the software interface is used, and then this mini guide as a reference to help remember each of the key steps to get from raw data to final measurements.

Videos

A 14 minute video showing how to install RootPainter on Windows 11 with Google Drive and Google Colab is available on YouTube. A similar video for macOS is also now available on YouTube. I suggest watching these videos to help with the installation part of the Colab tutorial.

A video demonstrating how to train and use a model is available to download.

There is a YouTube video of a workshop explaining the background behind the software and covering how to use the Colab notebook to train and use a root segmentation model.

Client Downloads

See releases

If you are not confident installing and running Python applications on the command line, then to get started quickly I suggest the Colab tutorial.

Server setup

The following instructions are for a local server. If you do not have a suitable NVIDIA GPU with at least 8GB of GPU memory, then my current recommendation is to run via Google Colab. A publicly available notebook is available: Google Drive with Google Colab.

Other options to run the server component of RootPainter on a remote machine include the sshfs server setup tutorial. You can also use Dropbox instead of sshfs.

For the next steps I assume you have a suitable GPU and CUDA installed.

  1. Install the RootPainter trainer:
pip install root-painter-trainer
  2. Run the trainer (this will first create the sync directory):
start-trainer

You will be prompted to input a location for the sync directory. This is the folder where files are shared between the client and server. I will use ~/root_painter_sync. RootPainter will then create some folders inside ~/root_painter_sync. The server should print the automatically selected batch size, which should be greater than 0. It will then start watching for instructions from the client.
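For intuition, here is a minimal sketch (not the actual trainer code) of what this client-server file sync amounts to: the server repeatedly polls the instructions folder that the client writes to. The directory layout and the assumption that each instruction is a small JSON file are illustrative.

    # Minimal sketch of the file-sync idea (not the actual trainer code).
    import json
    import time
    from pathlib import Path

    sync_dir = Path.home() / 'root_painter_sync'
    instructions_dir = sync_dir / 'instructions'

    while True:
        for fpath in sorted(instructions_dir.iterdir()):
            instruction = json.loads(fpath.read_text())  # e.g. {"name": "start_training", ...}
            print('execute_instruction', instruction.get('name'))
            # ... dispatch to training / segmentation here ...
            fpath.unlink()  # remove the instruction once handled
        time.sleep(1)  # check for new instructions roughly once a second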

You should now be able to see the folders created by RootPainter (datasets, instructions and projects) inside ~/root_painter_sync on your local machine. See the lung tutorial for an example of how to use RootPainter to train a model. I now actually suggest following the Colab tutorial instructions, but using your local setup instead of the Colab server, as these are easier to follow than the lung tutorial.

Questions and Problems

The FAQ may be worth checking before reaching out with any questions you have. If you do have a question, you can either email me or post in the discussions. If you have identified a problem with the software, you can post an issue.

root_painter's People

Contributors

abe404, dependabot[bot], felipegalind0, orting, rohanorton


root_painter's Issues

Segment folder command line option for server/trainer

Being able to call dataset segmentation from the command line, with a target folder, output folder and model specified, would be great. This would help get around the issue of opening a very large folder in the GUI!
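A possible interface (a sketch only; none of these flags exist in RootPainter today) could look like this:

    # Hypothetical CLI sketch for headless folder segmentation (argument names are illustrative).
    import argparse

    parser = argparse.ArgumentParser(description='Segment a folder of images with a trained model.')
    parser.add_argument('--input-dir', required=True, help='folder of images to segment')
    parser.add_argument('--output-dir', required=True, help='folder to write segmentations to')
    parser.add_argument('--model', required=True, help='path to the trained model file')
    args = parser.parse_args()
    # A real implementation would load the model and loop over args.input_dir here.
    print(f'Would segment {args.input_dir} with {args.model} into {args.output_dir}')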

export options not showing on metrics plot

Describe the bug
export options not showing on metrics plot

To Reproduce
Steps to reproduce the behavior:

  1. Go to show metrics
  2. Right click on the metrics plot
  3. Click on 'export'
  4. See that nothing happens.

Expected behavior
Export options should be shown, with options to output to SVG, PNG, etc.

Desktop (please complete the following information):

  • OS: Ubuntu 22.
  • Version [e.g. 0.2.18 - get the version from the About Menu.]

Additional context
Pretty important to fix this ASAP.

make it easier to respecify sync directory

Users often input something incorrect when first setting up RootPainter. I should make an easier way to update this. An option from a settings/options menu that allows the user to select a folder should suffice.

Width/Height mismatch

Describe the bug
RootPainter seems to invert the width and height when loading the image. This causes the __getitem__ function of the torch dataset to crash:

Traceback (most recent call last):
  File "trainer/main.py", line 33, in <module>
    trainer.main_loop()
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/trainer.py", line 105, in main_loop
    self.train_one_epoch()
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/trainer.py", line 234, in train_one_epoch
    for step, (photo_tiles,
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/datasets.py", line 138, in __getitem__
    assert annot_tile.shape == (self.in_w, self.in_w, 2), (
AssertionError: shape is (572, 487, 2) for tile from 2T4_20210502_111453.png


To Reproduce

  1. Create a dataset with this image
  2. Annotate it
  3. Launch training

This is one of the images for which this bug happens

16T4_20210507_183429
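For context, a generic illustration (not RootPainter's code) of how width and height can end up swapped: NumPy image arrays are indexed (height, width, channels), while PIL reports size as (width, height).

    # Generic illustration of how width/height can get swapped between libraries.
    import numpy as np
    from PIL import Image

    im = Image.new('RGB', (640, 480))   # PIL size is (width, height)
    arr = np.asarray(im)
    print(im.size)    # (640, 480)
    print(arr.shape)  # (480, 640, 3) -- NumPy shape is (height, width, channels)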

Region Property Crash

Howdy!

When using the extract region properties function, I find the RootPainter app crashes on my computer (the server is fine). This seems to happen quite often. Any ideas on how to prevent this?

Thanks,

Justin

Installing Root_trainer on an M1 Max Apple Silicon

Hi Abraham,

Is there a way to install the server on a Mac with Apple Silicon (M1 Max) to take advantage of the GPU directly, and not use Rosetta emulation? Any help to get started would be highly appreciated.

ubuntu installer

In the current version of the code an executable is built for Linux but not an installer.

A package for dpkg (a .deb installer) would be ideal.

Brush Size Controls Not Clear

Hi there!
I just installed RootPainter on my PC (Windows) and I am trying to train it. The problem is that when I try to use the red brush to point out the biopores, the brush is too big: it covers the biopore but also a lot of the area surrounding it, so it is not precise.
I tried to zoom in or zoom out, but the brush size just adjusts to the photo and is always bigger than the biopore surface.
Is there a way to change the size?
Thank you in advance :)

colab training progress output

The progress output of the steps during an epoch is no longer shown in colab. The user only sees feedback after the epoch is complete. This has been fixed for RP3D so take a look at how it has been done there.

Switch to pyinstaller

The fman build system (fbs) is restricting the client Python version, which is starting to cause dependency issues.

convert trainer into a pip module to remove dependency on git

The instructions for installing the trainer currently involve using git and then installing the requirements.

The trainer could be uploaded as a pip module so the user no longer has to use git, and the requirements will be installed automatically.

Display time remaining for segmentation/extracting length

When segmenting large datasets it would be useful if RootPainter displayed 'Time remaining' or 'Predicted finish time' as well as the % segmented. This is especially useful when segmentation takes 24-48 hours, to know when to check back!

segmentation RVE option

Currently it takes a second step to convert segmentations to an RVE-compatible format (black and white). For the users that require this extra step, it would be more efficient to have the RVE-compatible format as an option when initially segmenting the folder, so that a second step is not required.

This could be implemented at the same time as #49, as similar areas of the code will be changing.
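For reference, the manual second step is roughly the following sketch, assuming foreground pixels are non-zero in the segmentation image (adjust if the actual segmentation format differs):

    # Sketch of converting a segmentation to a black-and-white (RVE-style) image.
    import numpy as np
    from skimage import io

    seg = io.imread('seg.png')
    foreground = seg[:, :, 0] > 0 if seg.ndim == 3 else seg > 0
    bw = foreground.astype(np.uint8) * 255  # white foreground on black background
    io.imsave('seg_bw.png', bw)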

Image skip to start/to end/to index function

This is just a feature request - I can't seem to find if this is already implemented.

I've been trying to get some screenshots for publications I'm using RootPainter for, and wanted to skip back to the first few images I used, to show the annotation approach/progress. Is there any easier way to do this than pressing back and waiting for each image to load?

If not it would be handy to have a 'skip to index' or 'skip to start' option for this kind of purpose.

Create dataset should try to complete even if it fails for some images

Describe the bug
Creating a dataset doesn't complete if there is an issue with a single image.

To Reproduce
Steps to reproduce the behavior:

  1. Create a training dataset where the source dataset contains a problematic image (for example, a corrupted file).
  2. Observe that the software crashes and doesn't complete the creation of the full training dataset.

Expected behavior
Only the single image with the problem should be skipped; the dataset should otherwise be created normally. The error for that image should be displayed at the end, along with the number of images that failed to process.
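A minimal sketch of the suggested behaviour (illustrative only; the folder name and the per-image processing are placeholders):

    # Sketch of per-image error handling so one bad file doesn't stop dataset creation.
    from pathlib import Path
    from skimage import io

    failed = []
    for im_path in sorted(Path('source_dataset').glob('*')):
        try:
            im = io.imread(im_path)
            # ... resize / tile and save into the new training dataset here ...
        except Exception as e:
            failed.append((im_path.name, str(e)))

    if failed:
        print(f'{len(failed)} images could not be processed:')
        for name, err in failed:
            print(f'  {name}: {err}')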

keyboard shortcuts should be more discoverable

Some of them can be found by opening the menu to see associated shortcuts with each item.

This is not true for all shortcuts. The software should have a keyboard shortcuts window explaining all interaction options possible with keyboard shortcuts, including brush size modification, panning, etc.

images missing in metrics plot

Steps to reproduce:

  1. Create a project.
  2. Annotate some images.
  3. Open the metrics plot (show metrics plot) - this will show disagreement between segmentations and annotations and generate a cache file that will insert 'None' for images that are not yet segmented.
  4. Close RootPainter.
  5. Open RootPainter again and open the previous project.
  6. Annotate some more images (4 for example). These should be images that were not previously segmented.
  7. Open the metrics plot
  8. Notice that the newly annotated images are missing from the metrics plot (the plot loads None from the cache and doesn't recompute the metrics for these images).
  9. Annotate some more images (2-3)
  10. Notice the newly annotated images are now added to the metrics plot, skipping the ones that were annotated previously whilst the metric plot was closed.

This bug is quite unfortunate as it means some users' metrics plots may have been missing data.

The fix is implemented in f74cd4f, which will recompute metrics for images missing from the metrics plot (or cached as None) every time the metrics plot is shown.
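The idea behind the fix, roughly (a sketch; the real implementation in the client differs and uses the existing compute_seg_metrics function):

    # Sketch of the cache-refresh idea: recompute entries cached as None
    # (image not yet segmented when the plot was last open) each time the plot is shown.
    def refresh_metrics(cached_metrics, fnames, compute_metrics_fn):
        for fname in fnames:
            if cached_metrics.get(fname) is None:
                cached_metrics[fname] = compute_metrics_fn(fname)
        return cached_metrics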

I will close this issue once the fix makes it into a release that users can download.

local server gui

For users running the client on a computer with a sufficiently powerful GPU, there should be an option in the client to start a local server. This should not involve command line usage.

segmentation not loading

Hi there! I have a problem with loading the segmentation. I do all the steps in the setup, then I open the RootPainter application and start a new project with the biopores dataset, and the first image appears. But the "loading segmentation" message never goes away, no matter how long I wait. I also checked the server output, but it doesn't say anything; it stays on "checking for instructions".
I don't understand why it does that.
Thank you for the attention
:)

Complex input image filenames will crash the server when training

Complex input image filenames will crash the server when training.

Filenames such as EXP-006_20230131_20_2[96dpi]_{x}_{y}.jpg will crash the server immediately after starting to train network. Same image set renamed to A_{x}_{y}.jpg will run and train fine.

To Reproduce
Created a training dataset with the filename pattern "EXP-006_20230131_20_2[96dpi]_{x}_{y}.jpg".

I have followed the Colab tutorial here without any changes - https://colab.research.google.com/drive/104narYAvTBt-X4QEDrBSOZm_DRaAKHtA?usp=sharing

Got the following error:

execute_instruction start_training
Traceback (most recent call last):
  File "/content/drive/MyDrive/root_painter_src/trainer/main.py", line 40, in <module>
    trainer.main_loop()
  File "/content/drive/MyDrive/root_painter_src/trainer/trainer.py", line 113, in main_loop
    self.train_one_epoch()
  File "/content/drive/MyDrive/root_painter_src/trainer/trainer.py", line 244, in train_one_epoch
    for step, (photo_tiles,
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 543, in reraise
    raise exception
Exception: Caught Exception in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/drive/MyDrive/root_painter_src/trainer/datasets.py", line 106, in __getitem__
    image, annot, fname = load_train_image_and_annot(self.dataset_dir,
  File "/content/drive/MyDrive/root_painter_src/trainer/im_utils.py", line 95, in load_train_image_and_annot
    raise Exception(f'Could not load photo {latest_im_path}, {latest_error}')
Exception: Could not load photo None, list index out of range
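A possible workaround until this is fixed (a hedged sketch; exactly which characters trigger the crash has not been confirmed) is to strip bracket and brace characters from the filenames before creating the dataset:

    # Sketch: strip characters such as [ ] { } from filenames before creating a dataset.
    # Watch out for name collisions after renaming; paths are illustrative.
    import re
    from pathlib import Path

    dataset_dir = Path('my_dataset')
    for p in dataset_dir.iterdir():
        clean = re.sub(r'[\[\]{}]', '', p.name)
        if clean != p.name:
            p.rename(p.with_name(clean))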

ubuntu (20) Qt platform plugin could not be initialized

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.

Aborted (core dumped)

limit batch size to 12

Users on some large clusters (apparently the largest cluster in Europe) have mentioned that it's not always better to use larger batch sizes, and that going larger than 12 doesn't seem to improve performance.

The tests aren't exhaustive, but as I haven't had a chance to test larger batch sizes myself, I think a maximum of 12 makes sense.

Output probabilities

Some people want to use RootPainter to output segmentations that will then be used to train other models. In this case I believe it is better to output floating point probabilities (0 to 1) rather than values thresholded to either 0 or 1.

The output to probabilities option could be shown in a dropdown when the user clicks segment folder.
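For illustration, the difference between the two output types (a sketch; the array here is a stand-in for the network's foreground probability map):

    # Sketch: thresholded mask vs. floating-point probabilities (illustrative only).
    import numpy as np

    probs = np.random.rand(572, 572).astype(np.float32)  # stand-in for foreground probabilities in [0, 1]
    mask = (probs > 0.5).astype(np.uint8)                 # current behaviour: hard 0/1 mask
    # Proposed option: save probs directly (e.g. scaled to 16-bit) so downstream
    # models can train on soft labels instead of the thresholded mask.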

outline view

The 3D version has an outline view, which can be useful as more of the structure can be seen behind the contour. An outline view should be added to the 2D version, as an alternative to having to hide and show the segmentation so frequently.

See figure 2b here: https://arxiv.org/pdf/2106.11942

mask images

It can sometimes be useful to use a segmentation to mask out features of an image, e.g. to remove noise or background as part of a localisation stage in a two-stage segmentation pipeline. See #58 (comment) for an example where this has been used.

Doing this is not currently possible in RootPainter and requires additional scripting. An option could be added to the software to mask images with segmentations. The code has already been written: see https://github.com/Abe404/im_mask/blob/main/main.py
This could be added as an option from the extras menu.
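Roughly what such an option would do (a simplified sketch; see the im_mask repo for the actual script, and note the foreground test here is an assumption about the segmentation format):

    # Simplified sketch of masking an image with a segmentation.
    # Assumes the image and segmentation have the same height and width.
    import numpy as np
    from skimage import io

    im = io.imread('image.jpg')
    seg = io.imread('seg.png')
    foreground = seg[:, :, 0] > 0 if seg.ndim == 3 else seg > 0
    masked = im.copy()
    masked[~foreground] = 0  # black out everything outside the segmented region
    io.imsave('masked.jpg', masked)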

pre-segment should be more obvious

If a user has been waiting for more than 5 seconds for a segmentation, tell them about the option to increase pre-segment.

If they are waiting more than 90 seconds, then provide a link to debug issues with segmentation.

Version number specified in multiple places

Version number is specified in 5 places:

  1. !define VERSION "0.2.18.0"
  2. When I actually create the release+tag it is specified manually

How can I specify the version number only once? (or twice if I have to).
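One common approach (an assumption about how it could be done, not a description of the current build) is to keep the version string in a single file that setup.py, the release script and the NSIS installer all read:

    # Sketch: keep the version in one place and read it everywhere else.
    from pathlib import Path

    VERSION = Path('VERSION.txt').read_text().strip()  # e.g. the only place "0.2.18.0" is written
    print(VERSION)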

create dataset with small images can cause crash

Describe the bug
create dataset crashes the app if the input images are too small.

To Reproduce

  1. prepare some images with a width or height less than 572
  2. open root painter
  3. select 'create training dataset'
  4. select the images that are too small

Expected behavior
There should be a warning stating that RootPainter can't handle the images and why.

Should be possible to specify nested project dataset

Not every dataset is inside the datasets folder directly, i.e.
datasets/some_dataset

Sometimes users structure their files like so:
datasets/topic_of_interest/specific_dataset

RootPainter should handle this. It currently only stores the name of the dataset in the project and not the full path to the dataset.
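One way this could work (an assumption, not current behaviour) is to store the dataset path relative to the sync datasets folder instead of only its name:

    # Sketch: store a relative dataset path in the project file instead of just the name.
    from pathlib import Path

    datasets_dir = Path.home() / 'root_painter_sync' / 'datasets'
    dataset_dir = datasets_dir / 'topic_of_interest' / 'specific_dataset'
    relative = dataset_dir.relative_to(datasets_dir)  # topic_of_interest/specific_dataset
    # store str(relative) in the project file, then resolve it later with datasets_dir / relative
    print(relative)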

Could not find a backend to open image

An image (maybe more) in my dataset triggered this error when it was annotated and I then clicked 'Save and next'. Inference on it worked fine.

bash-4.2$ python main.py
GPU Available True
Batch size 11
Started main loop. Checking for instructions in /home/ucl/elia/rongione/root_painter_sync/instructions
execute_instruction start_training
epoch train duration 166.935
Traceback (most recent call last):
  File "main.py", line 35, in <module>
    trainer.main_loop()
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/trainer.py", line 120, in main_loop
    self.train_one_epoch()
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/trainer.py", line 301, in train_one_epoch
    self.validation()
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/trainer.py", line 332, in validation
    cur_metrics = get_val_metrics(copy.deepcopy(self.model))
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/model_utils.py", line 108, in get_val_metrics
    image = im_utils.load_image(image_path)
  File "/CECI/home/users/r/o/rongione/root_painter/trainer/im_utils.py", line 223, in load_image
    photo = imread(photo_path)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/skimage/io/_io.py", line 53, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/skimage/io/manage_plugins.py", line 207, in call_plugin
    return func(*args, **kwargs)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/skimage/io/_plugins/imageio_plugin.py", line 10, in imread
    return np.asarray(imageio_imread(*args, **kwargs))
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/imageio/__init__.py", line 97, in imread
    return imread_v2(uri, format=format, **kwargs)
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/imageio/v2.py", line 200, in imread
    with imopen(uri, "ri", **imopen_args) as file:
  File "/home/ucl/elia/rongione/.local/lib/python3.8/site-packages/imageio/core/imopen.py", line 303, in imopen
    raise err_type(err_msg)
ValueError: Could not find a backend to open `/home/ucl/elia/rongione/root_painter_sync/datasets/Paille_photo_finale/3T4_20210508_104558.jpg:Zone.Identifier` with iomode `ri`.


3T4_20210508_104558
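The :Zone.Identifier suffix suggests Windows alternate-data-stream metadata files that were copied into the dataset folder (e.g. via WSL or a cloud-drive sync). A possible cleanup before training (a sketch; the path is the one from the error above):

    # Sketch: remove stray Zone.Identifier files from the dataset directory.
    from pathlib import Path

    dataset_dir = Path.home() / 'root_painter_sync' / 'datasets' / 'Paille_photo_finale'
    for p in dataset_dir.glob('*Zone.Identifier*'):
        print('removing', p)
        p.unlink()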

instructions folder empty

When starting the server, the command looks for instructions in drive_rp_sync/instructions, but the instructions folder is empty and the command cannot move past this step to run the rest of the initial setup, while following the Root Painter Setup in Colab.

[Screenshot: Screen Shot 2023-01-12 at 11 54 12 AM]

Desktop (please complete the following information):

  • OS: MacOS Monterey v12.4
  • Browser: Chrome
  • Version: 108.0.5359.124

squares cut out of predicted region

I am currently using RootPainter to extract some plants from some glasshouse images. The model is improving and is at a decent quality, but large rectangular cutouts of the predicted foreground region appear in each new image.

To Reproduce
I'm following the Colab guide and have just replaced the soil pores dataset with my own. After each foreground is predicted, I use the red brush for correcting the foreground and the green brush for the background.

Expected behavior
I assume the predicted foreground should be complete, and not impacted by the cutouts.

Screenshots
[screenshot attached]

Desktop (please complete the following information):

  • OS: Windows 10
  • Browser: Chrome/Colab
  • Version 0.2.1

Show informative error message when dataset not found.

When a project is opened that refers to a dataset that is not found (or has been moved), the application just crashes. An appropriate error message should be shown to the user to help them identify and fix the problem.

Initial interactive training with clear examples is too difficult

The protocol states that the first 6 annotated images should include both foreground and background and not more than 10 times as much background as foreground.

The problem is that if too much background is labelled, then the model will tend to only predict background. I believe the problem is still happening for some datasets with a more extreme class imbalance, even if the first 6 images are labelled in accordance with the protocol.

Investigate the situation where the model starts only predicting background. Is it due to the number of images with only background annotated or the ratio of foreground to background?

Is there a better, more automatic solution, e.g. instance selection to ensure foreground is always included in the batch?

Installation

Is there a README explaining how to get this to work and what dependencies it requires?

Sync is too slow in the colab tutorial

It can take a long time to transfer data between the client and server when using Google Drive, which is what the Colab tutorial does:
https://colab.research.google.com/drive/104narYAvTBt-X4QEDrBSOZm_DRaAKHtA

As this colab tutorial is currently the recommended way to get started with RootPainter, it can lead to problems with the user experience for many new users.

One issue is that messages appear in the colab notebook first and take a while to appear on the client (many seconds), which may be causing confusion. See #60 (reply in thread) for some motivation behind this issue.

Users can also spend a long time waiting for segmentations of new images. Although the pre-segment option mitigates this to some extent, a faster sync speed would still lead to an improved user experience and reduce the need to tune the pre-segment setting.

Expand tutorial to include segmentation of original dataset and extraction of traits

As discussed in #60 (reply in thread), the colab tutorial could be extended with a similar step-by-step guide including checking performance using the metrics plot, segmenting the original images, checking the composites and extracting traits as CSV (or preparing images for RVE).

I actually don't think this needs to be a colab tutorial and could be a simple step by step guide available as HTML or PDF.

It could also be useful to explain that waiting for 60 epochs without progress doesn't matter that much, as many users on free Colab are getting kicked off (#67 (reply in thread)) before this happens.

open project files with RootPainter

Rather than having to first open RootPainter and then open a project file, it should be possible to directly open a project file with RootPainter.

minimum dataset image size warning

When creating a dataset, if the target size is set to 100, the size jumps back up to 900 immediately after clicking submit.

There should instead be a warning saying that a size of at least 600x600 is recommended, and the size the user enters should always be used.
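The suggested check, roughly (an illustrative sketch, not the current create-dataset code):

    # Sketch of the suggested size check when creating a dataset
    # (600x600 is the recommended minimum mentioned above).
    from PIL import Image

    def check_min_size(im_path, min_side=600):
        w, h = Image.open(im_path).size
        if w < min_side or h < min_side:
            print(f'Warning: {im_path} is {w}x{h}; at least {min_side}x{min_side} is recommended.')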
