
danifranco / biapy

104 stars · 4 watchers · 27 forks · 897.82 MB

Open source Python library for building bioimage analysis pipelines

Home Page: https://BiaPyX.github.io

License: MIT License

Languages: Python 3.41% · Jupyter Notebook 96.59%
Topics: computer-vision, deep-learning, biomedical-image-processing, segmentation, convolutional-neural-networks, image-segmentation, medical-imaging, semantic-segmentation, classification, instance-segmentation, object-detection, image-processing, machine-learning, denoising, self-supervised-learning, super-resolution, python, pytorch

biapy's Introduction

BiaPy logo

BiaPy: Bioimage analysis pipelines in Python

🔥NEWS🔥: We have a new preprint! Check it out at bioRxiv: https://www.biorxiv.org/content/10.1101/2024.02.03.576026v1

BiaPy is an open source ready-to-use all-in-one library that provides deep-learning workflows for a large variety of bioimage analysis tasks, including 2D and 3D semantic segmentation, instance segmentation, object detection, image denoising, single image super-resolution, self-supervised learning and image classification.

BiaPy is a versatile platform designed to accommodate both proficient computer scientists and users less experienced in programming. It offers diverse and user-friendly access points to our workflows.

This repository is actively under development by the Biomedical Computer Vision group at the University of the Basque Country and the Donostia International Physics Center.

BiaPy workflows

Description video

Find a comprehensive overview of BiaPy and its functionality in the following videos:

BiaPy history and GUI demo
BiaPy history and GUI demo at RTmfm by Ignacio Arganda-Carreras and Daniel Franco-Barranco.
BiaPy presentation
BiaPy presentation at Virtual Pub of Euro-BioImaging by Ignacio Arganda-Carreras.

User interface

You can also use BiaPy through our graphical user interface (GUI).

BiaPy GUI

Download the BiaPy GUI for your OS

Project's page: [BiaPy GUI]

Applications using BiaPy

López-Cano, Daniel, et al. "Characterizing Structure Formation through Instance Segmentation" (2023).

This study presents a machine-learning framework to predict the formation of dark matter haloes from early universe density perturbations. Utilizing two neural networks, it distinguishes particles comprising haloes and groups them by membership. The framework accurately predicts halo masses and shapes, and compares favorably with N-body simulations. The open-source model could enhance analytical methods of structure formation by analyzing initial condition variations. BiaPy is used in the creation of the watershed approach.

[Documentation (not yet)] [Paper]
Franco-Barranco, Daniel, et al. "Current Progress and Challenges in Large-scale 3D Mitochondria Instance Segmentation." (2023).

This paper reports the results of the MitoEM challenge on 3D instance segmentation of mitochondria in electron microscopy images, held in conjunction with IEEE-ISBI 2021. The paper discusses the top-performing methods, addresses ground truth errors, and proposes a new scoring system to improve segmentation evaluation. Despite progress, challenges remain in segmenting mitochondria with complex shapes, keeping the competition open for further submissions. BiaPy is used in the creation of the MitoEM challenge baseline (U2D-BC).

[Documentation] [Paper] [Toolbox]
Backová, Lenka, et al. "Modeling Wound Healing Using Vector Quantized Variational Autoencoders and Transformers." 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023.

This study focuses on time-lapse sequences of Drosophila embryos healing from laser-incised wounds. The researchers employ a two-stage approach involving a vector quantized variational autoencoder and an autoregressive transformer to model wound healing as a video prediction task. BiaPy is used in the creation of the wound segmentation masks.

[Documentation] [Paper]
Andrés-San Román, Jesús A., et al. "CartoCell, a high-content pipeline for 3D image analysis, unveils cell morphology patterns in epithelia." Cell Reports Methods (2023).

Combining deep learning and 3D imaging is crucial for high-content analysis. The study introduces CartoCell, a method that accurately labels 3D epithelial cysts, enabling quantification of cellular features and mapping of their distribution. It is adaptable to other epithelial tissues. The CartoCell method was created using BiaPy.

[Documentation] [Paper]
Franco-Barranco, Daniel, et al. "Deep learning based domain adaptation for mitochondria segmentation on EM volumes." Computer Methods and Programs in Biomedicine 222 (2022): 106949.

This study addresses mitochondria segmentation across different datasets using three unsupervised domain adaptation approaches, including style transfer, self-supervised learning, and multi-task neural networks. To ensure robust generalization, a new training stopping criterion based on source domain morphological priors is proposed. BiaPy is used for the implementation of the Attention U-Net.

[Documentation] [Paper]
Franco-Barranco, Daniel, et al. "Stable deep neural network architectures for mitochondria segmentation on electron microscopy volumes." Neuroinformatics 20.2 (2022): 437-450.

Recent deep learning models have shown impressive performance in mitochondria segmentation, but often lack code and training details, affecting reproducibility. This study follows best practices, comprehensively comparing state-of-the-art architectures and variations of U-Net models for mitochondria segmentation, revealing their impact and stability. The research consistently achieves state-of-the-art results on various datasets, including EPFL Hippocampus, Lucchi++, and Kasthuri++. BiaPy is used for the implementation of the methods compared in the study.

[Documentation] [Paper]
Wei, Donglai, et al. "Mitoem dataset: Large-scale 3d mitochondria instance segmentation from em images." International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2020.

Existing mitochondria segmentation datasets are small, raising questions about method robustness. The MitoEM dataset introduces larger 3D volumes with diverse mitochondria, challenging existing instance segmentation methods, highlighting the need for improved techniques. BiaPy is used in the creation of the MitoEM challenge baseline (U2D-BC).

[Documentation] [Paper] [Challenge]

Authors

Name Role Affiliations
Daniel Franco-Barranco Creator, Implementation, Software design
  • Dept. of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU)
  • Donostia International Physics Center (DIPC)
Lenka Backová Implementation
  • Biofisika Institute
Aitor Gonzalez-Marfil Implementation
  • Dept. of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU)
  • Donostia International Physics Center (DIPC)
Ignacio Arganda-Carreras Supervision, Implementation
  • Dept. of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU)
  • Donostia International Physics Center (DIPC)
  • IKERBASQUE, Basque Foundation for Science
  • Biofisika Institute
Arrate Muñoz-Barrutia Supervision
  • Dept. de Bioingenieria, Universidad Carlos III de Madrid

External collaborators

Name Role Affiliations
Jesús Ángel Andrés-San Román Implementation
  • Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Dept. de Biología Celular, Facultad de Biología, Universidad de Sevilla
  • Biomedical Network Research Centre on Neurodegenerative Diseases (CIBERNED)
Pedro Javier Gómez Gálvez Supervision, Implementation
  • Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Dept. de Biología Celular, Facultad de Biología, Universidad de Sevilla
  • MRC Laboratory of Molecular Biology
  • Department of Physiology, Development and Neuroscience, University of Cambridge
Luis M. Escudero Supervision
  • Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Dept. de Biología Celular, Facultad de Biología, Universidad de Sevilla
  • Biomedical Network Research Centre on Neurodegenerative Diseases (CIBERNED)
Iván Hidalgo Cenalmor Implementation
  • Optical cell biology group, Instituto Gulbenkian de Ciência, Oeiras, Portugal
Donglai Wei Supervision
  • Boston College
Clément Caporal Implementation
  • Laboratoire d’Optique et Biosciences, CNRS, Inserm, Ecole Polytechnique
Anatole Chessel Supervision
  • Laboratoire d’Optique et Biosciences, CNRS, Inserm, Ecole Polytechnique

Citation

Franco-Barranco, Daniel, et al. "BiaPy: a ready-to-use library for Bioimage Analysis Pipelines." 
2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). IEEE, 2023.

biapy's People

Contributors

aaitorg, clementcaporal, danifranco, dependabot[bot], gnodar01, iarganda, ivanhcenalmor, jansanrom, lenkaback, pedgomgal1, sbinnee


biapy's Issues

Add more statistics for instances

Add more statistics for instances in the remove_by_properties function. Changes needed:

  • Add a variable to calculate stats. Currently, remove_by_properties is only called when some filtering is applied (TEST.POST_PROCESSING.REMOVE_BY_PROPERTIES).
  • Calculate the perimeter (necessary for sphericity).
  • Calculate sphericity (for 3D). Here is the formula.
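For the sphericity statistic, a minimal sketch of the standard formula, assuming per-instance volume and surface area (the 3D "perimeter") have already been measured; the `sphericity` helper name is hypothetical, not BiaPy's API:

```python
import math

def sphericity(volume: float, surface_area: float) -> float:
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A.
    Equals 1.0 for a perfect sphere and decreases for irregular shapes."""
    return (math.pi ** (1.0 / 3.0)) * (6.0 * volume) ** (2.0 / 3.0) / surface_area

# Sanity check on a sphere of radius 3: V = 4/3*pi*r^3, A = 4*pi*r^2.
r = 3.0
print(round(sphericity(4.0 / 3.0 * math.pi * r**3, 4.0 * math.pi * r**2), 6))  # → 1.0
```

In practice the hard part is estimating the surface area of a voxelized instance; a mesh-based estimate (e.g. marching cubes) is usually more accurate than counting boundary voxels.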

Add more data pre-processing

Add the option of applying new pre-processing steps to the data. Functions to add:

  • Resize
  • Gaussian blur
  • Edge extraction (canny())
  • Histogram matching
  • CLAHE
  • Median blur
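A sketch of how a few of these steps could be chained (resize, Gaussian blur, and median blur via SciPy; edge extraction, histogram matching, and CLAHE exist in scikit-image as skimage.feature.canny, skimage.exposure.match_histograms, and skimage.exposure.equalize_adapthist). The `preprocess` helper is hypothetical, not BiaPy's API:

```python
import numpy as np
from scipy import ndimage

def preprocess(img: np.ndarray, scale: float = 1.0,
               gaussian_sigma: float = 0.0, median_size: int = 0) -> np.ndarray:
    """Apply a subset of the proposed pre-processing steps, in order."""
    out = img.astype(np.float32)
    if scale != 1.0:
        out = ndimage.zoom(out, scale)                      # resize
    if gaussian_sigma > 0:
        out = ndimage.gaussian_filter(out, gaussian_sigma)  # Gaussian blur
    if median_size > 1:
        out = ndimage.median_filter(out, size=median_size)  # median blur
    return out

print(preprocess(np.ones((8, 8)), scale=2.0).shape)  # → (16, 16)
```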

Input zarr axes order in config file while loading Zarr/H5

Hello!

Thank you for your work

Pitch

I would like to load any Zarr into BiaPy without having to follow the current fixed axis order (z, y, x, c).
Many Zarr files will follow the OME-Zarr convention (t, c, z, y, x), for example.

The axis order could be a variable in the config file.
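Such a config variable could drive a simple transpose on load. A sketch under stated assumptions: `to_biapy_order` is a hypothetical helper, and any time axis is assumed to hold a single point:

```python
import numpy as np

def to_biapy_order(arr: np.ndarray, axes: str) -> np.ndarray:
    """Reorder an array with arbitrary axes (e.g. OME-Zarr 'tczyx') to the
    (z, y, x, c) order expected here, dropping a singleton 't' axis if present."""
    axes = axes.lower()
    if "t" in axes:
        arr = arr.squeeze(axis=axes.index("t"))  # assume a single time point
        axes = axes.replace("t", "")
    return np.transpose(arr, [axes.index(a) for a in "zyxc"])

vol = np.zeros((1, 3, 10, 64, 64))  # t, c, z, y, x
print(to_biapy_order(vol, "tczyx").shape)  # → (10, 64, 64, 3)
```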

`data.generators.simple_data_generator` does not respect empty mask

Description

Hi. I ran into an error while trying to load a model and run a test inference. I modified DATA.TEST.PATH, DATA.TEST.MASK_PATH, and PATHS.CHECKPOINT_FILE. As you can see, I don't have any masks, so I put an empty string.

The error points to simple_data_generator: it computes max on the mask variable, which is actually None. When I added an if statement to skip that line, the error disappeared.
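The guard could look like the following sketch, which mirrors the failing expression `np.max(mask) > 100 and do_normalization and not instance_problem` but tolerates a missing mask (the `needs_div_on_load` name is hypothetical, not the actual patch):

```python
import numpy as np

def needs_div_on_load(data, do_normalization: bool, instance_problem: bool = False) -> bool:
    """Decide whether values should be divided on load.
    Returns False instead of raising TypeError when data is None."""
    if data is None or not do_normalization or instance_problem:
        return False
    return bool(np.max(data) > 100)

print(needs_div_on_load(None, do_normalization=True))  # → False
```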

Config file

PROBLEM:
    TYPE: DETECTION
    NDIM: 3D

DATA:
    PATCH_SIZE: (256, 256, 20, 3)
    REFLECT_TO_COMPLETE_SHAPE: True
    CHECK_GENERATORS: False
    EXTRACT_RANDOM_PATCH: False
    PROBABILITY_MAP: False
    TRAIN:
        PATH: 'path_to_train_data'
        MASK_PATH: 'path_to_train_data_gt'
        IN_MEMORY: True
        PADDING: (0,0,0)
        OVERLAP: (0,0,0)
    VAL:
        FROM_TRAIN: True
        SPLIT_TRAIN: 0.1
        IN_MEMORY: True
        PADDING: (0,0,0)
        OVERLAP: (0,0,0)
    TEST:
        PATH: './ChroMS/2021-03-24 P14 6520-2 Mcol cortex/test/x'
        MASK_PATH: ''
        IN_MEMORY: True
        LOAD_GT: False
        PADDING: (16,16,0)
        OVERLAP: (0,0,0)

AUGMENTOR:
    ENABLE: True
    AUG_SAMPLES: True
    DRAW_GRID: True
    DA_PROB: 1.0
    CHANNEL_SHUFFLE: False
    MISALIGNMENT: False
    CUTOUT: False
    GRIDMASK: False
    CUTNOISE: False
    CNOISE_SCALE: (0.05, 0.1)
    CUTBLUR: False
    RANDOM_ROT: False
    ROT90: False
    VFLIP: False
    HFLIP: False
    CONTRAST: False
    CONTRAST_FACTOR: (-0.3, 0.3)
    BRIGHTNESS: False
    BRIGHTNESS_FACTOR: (0.1, 0.3)
    GAMMA_CONTRAST: False
    GC_GAMMA: (0.5, 1.5)
    ELASTIC: False
    GRAYSCALE: True
    AFFINE_MODE: 'reflect'

MODEL:
    ARCHITECTURE: unet
    FEATURE_MAPS: [36, 48, 64]
    DROPOUT_VALUES: [0.1, 0.2, 0.3]
    Z_DOWN: 1
    LOAD_CHECKPOINT: False

LOSS:
  TYPE: CE

TRAIN:
    ENABLE: False
    OPTIMIZER: ADAM
    LR: 1.E-4
    BATCH_SIZE: 1
    EPOCHS: 500
    PATIENCE: 20

TEST:
    ENABLE: True
    AUGMENTATION: False

    DET_LOCAL_MAX_COORDS: True
    DET_MIN_TH_TO_BE_PEAK: [0.1]
    DET_VOXEL_SIZE: (0.4,0.4,2)
    DET_TOLERANCE: [10]
    DET_MIN_DISTANCE: [10]

    STATS:
        PER_PATCH: True
        MERGE_PATCHES: True
        FULL_IMG: False

PATHS:
    CHECKPOINT_FILE: './brainbow_3d_detection/model_weights_unet_3d_detection_P14_big_1.h5'

What I did

from config.config import Config
from engine.engine import Engine

job_identifier = '1'
dataroot = './'
jobdir = './'


cfg = Config(jobdir,
             job_identifier=job_identifier,
             dataroot=dataroot)

cfg._C.merge_from_file('./brainbow_3d_detection/brainbow_3d_detection.yaml')
cfg.update_dependencies()
cfg = cfg.get_cfg_defaults()

engine = Engine(cfg, job_identifier)

Error msg

###################
#  SANITY CHECKS  #
###################

#################
#   LOAD DATA   #
#################

2) Loading test images . . .
Loading data from ./ChroMS/2021-03-24 P14 6520-2 Mcol cortex/test/x

100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00,  2.60it/s]

*** Loaded data shape is (1, 51, 512, 512, 3)
########################
#  PREPARE GENERATORS  #
########################

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Input In [8], in <cell line: 1>()
----> 1 engine = Engine(cfg, job_identifier)

File ~/workspace/BiaPy/engine/engine.py:179, in Engine.__init__(self, cfg, job_identifier)
    176         check_generator_consistence(
    177             self.val_generator, cfg.PATHS.GEN_CHECKS+"_val", cfg.PATHS.GEN_MASK_CHECKS+"_val")
    178 if cfg.TEST.ENABLE:
--> 179     self.test_generator = create_test_augmentor(cfg, X_test, Y_test)
    182 print("#################\n"
    183       "#  BUILD MODEL  #\n"
    184       "#################\n")
    185 self.model = build_model(cfg, self.job_identifier)

File ~/workspace/BiaPy/data/generators/__init__.py:192, in create_test_augmentor(cfg, X_test, Y_test)
    190     if cfg.PROBLEM.TYPE == 'SUPER_RESOLUTION':
    191         dic['do_normalization']=False
--> 192     test_generator = simple_data_generator(**dic)
    193 return test_generator

File ~/workspace/BiaPy/data/generators/simple_data_generators.py:82, in simple_data_generator.__init__(self, X, d_path, provide_Y, Y, dm_path, dims, batch_size, seed, shuffle_each_epoch, instance_problem, do_normalization)
     80 self.o_indexes = np.arange(self.len)
     81 self.div_X_on_load = True if (np.max(img) > 100 and do_normalization) else False
---> 82 self.div_Y_on_load = True if (np.max(mask) > 100 and do_normalization and not instance_problem) else False
     83 self.on_epoch_end()

TypeError: '>' not supported between instances of 'NoneType' and 'int'

Unstable training for lucchi and lucchi_pp datasets

Dear authors, thank you very much for your excellent paper and for making the code available. I am trying to reproduce your results on the lucchi and lucchi_pp datasets, but the jaccard_index does not improve during training (it drops quickly to 0.0000e+00 in epoch 0 and stays there). Training works fine on the kasthuri_pp dataset, however. I have tried both the SGD and ADAM optimizers, but the problem persists. The model architecture is the Attention U-Net. What could be the problem in this scenario?

The training log is attached as follows:

#####################

TRAIN THE MODEL

#####################

build callbacks done
Epoch 1/360

1/297 [..............................] - ETA: 24:24 - loss: 0.7848 - jaccard_index: 0.1074
2/297 [..............................] - ETA: 26s - loss: 0.6453 - jaccard_index: 0.0728
3/297 [..............................] - ETA: 26s - loss: 1.2558 - jaccard_index: 0.0562
[... 188 similar progress lines omitted: loss plateaus around 0.27 while jaccard_index decays steadily toward 0 ...]
192/297 [==================>...........] - ETA: 9s - loss: 0.2661 - jaccard_index: 0.0011
193/297 [==================>...........] - ETA: 9s - loss: 0.2657 - jaccard_index: 0.0011
194/297 [==================>...........] - ETA: 9s - loss: 0.2657 - jaccard_index: 0.0011
195/297 [==================>...........] - ETA: 9s - loss: 0.2656 - jaccard_index: 0.0011
196/297 [==================>...........] - ETA: 9s - loss: 0.2653 - jaccard_index: 0.0011
197/297 [==================>...........] - ETA: 9s - loss: 0.2648 - jaccard_index: 0.0011
198/297 [===================>..........] - ETA: 9s - loss: 0.2647 - jaccard_index: 0.0011
199/297 [===================>..........] - ETA: 8s - loss: 0.2642 - jaccard_index: 0.0010
200/297 [===================>..........] - ETA: 8s - loss: 0.2642 - jaccard_index: 0.0010
201/297 [===================>..........] - ETA: 8s - loss: 0.2643 - jaccard_index: 0.0010
202/297 [===================>..........] - ETA: 8s - loss: 0.2638 - jaccard_index: 0.0010
203/297 [===================>..........] - ETA: 8s - loss: 0.2645 - jaccard_index: 0.0010
204/297 [===================>..........] - ETA: 8s - loss: 0.2650 - jaccard_index: 0.0010
205/297 [===================>..........] - ETA: 8s - loss: 0.2644 - jaccard_index: 0.0010
206/297 [===================>..........] - ETA: 8s - loss: 0.2646 - jaccard_index: 0.0010
207/297 [===================>..........] - ETA: 8s - loss: 0.2644 - jaccard_index: 0.0010
208/297 [====================>.........] - ETA: 8s - loss: 0.2649 - jaccard_index: 0.0010
209/297 [====================>.........] - ETA: 8s - loss: 0.2655 - jaccard_index: 9.9654e-04
210/297 [====================>.........] - ETA: 7s - loss: 0.2652 - jaccard_index: 9.9180e-04
211/297 [====================>.........] - ETA: 7s - loss: 0.2652 - jaccard_index: 9.8710e-04
212/297 [====================>.........] - ETA: 7s - loss: 0.2651 - jaccard_index: 9.8244e-04
213/297 [====================>.........] - ETA: 7s - loss: 0.2650 - jaccard_index: 9.7783e-04
214/297 [====================>.........] - ETA: 7s - loss: 0.2651 - jaccard_index: 9.7326e-04
215/297 [====================>.........] - ETA: 7s - loss: 0.2649 - jaccard_index: 9.6873e-04
216/297 [====================>.........] - ETA: 7s - loss: 0.2644 - jaccard_index: 9.6425e-04
217/297 [====================>.........] - ETA: 7s - loss: 0.2646 - jaccard_index: 9.5980e-04
218/297 [=====================>........] - ETA: 7s - loss: 0.2644 - jaccard_index: 9.5540e-04
219/297 [=====================>........] - ETA: 7s - loss: 0.2644 - jaccard_index: 9.5104e-04
220/297 [=====================>........] - ETA: 7s - loss: 0.2639 - jaccard_index: 9.4671e-04
221/297 [=====================>........] - ETA: 6s - loss: 0.2633 - jaccard_index: 9.4243e-04
222/297 [=====================>........] - ETA: 6s - loss: 0.2634 - jaccard_index: 9.3819e-04
223/297 [=====================>........] - ETA: 6s - loss: 0.2630 - jaccard_index: 9.3398e-04
224/297 [=====================>........] - ETA: 6s - loss: 0.2633 - jaccard_index: 9.2981e-04
225/297 [=====================>........] - ETA: 6s - loss: 0.2630 - jaccard_index: 9.2568e-04
226/297 [=====================>........] - ETA: 6s - loss: 0.2625 - jaccard_index: 9.2158e-04
227/297 [=====================>........] - ETA: 6s - loss: 0.2623 - jaccard_index: 9.1752e-04
228/297 [======================>.......] - ETA: 6s - loss: 0.2623 - jaccard_index: 9.1350e-04
229/297 [======================>.......] - ETA: 6s - loss: 0.2618 - jaccard_index: 9.0951e-04
230/297 [======================>.......] - ETA: 6s - loss: 0.2620 - jaccard_index: 9.0555e-04
231/297 [======================>.......] - ETA: 6s - loss: 0.2615 - jaccard_index: 9.0163e-04
232/297 [======================>.......] - ETA: 5s - loss: 0.2613 - jaccard_index: 8.9775e-04
233/297 [======================>.......] - ETA: 5s - loss: 0.2611 - jaccard_index: 8.9389e-04
234/297 [======================>.......] - ETA: 5s - loss: 0.2605 - jaccard_index: 8.9007e-04
235/297 [======================>.......] - ETA: 5s - loss: 0.2600 - jaccard_index: 8.8629e-04
236/297 [======================>.......] - ETA: 5s - loss: 0.2595 - jaccard_index: 8.8253e-04
237/297 [======================>.......] - ETA: 5s - loss: 0.2593 - jaccard_index: 8.7881e-04
238/297 [=======================>......] - ETA: 5s - loss: 0.2592 - jaccard_index: 8.7511e-04
239/297 [=======================>......] - ETA: 5s - loss: 0.2594 - jaccard_index: 8.7145e-04
240/297 [=======================>......] - ETA: 5s - loss: 0.2592 - jaccard_index: 8.6782e-04
241/297 [=======================>......] - ETA: 5s - loss: 0.2589 - jaccard_index: 8.6422e-04
242/297 [=======================>......] - ETA: 5s - loss: 0.2585 - jaccard_index: 8.6065e-04
243/297 [=======================>......] - ETA: 4s - loss: 0.2580 - jaccard_index: 8.5711e-04
244/297 [=======================>......] - ETA: 4s - loss: 0.2581 - jaccard_index: 8.5359e-04
245/297 [=======================>......] - ETA: 4s - loss: 0.2576 - jaccard_index: 8.5011e-04
246/297 [=======================>......] - ETA: 4s - loss: 0.2580 - jaccard_index: 8.4666e-04
247/297 [=======================>......] - ETA: 4s - loss: 0.2582 - jaccard_index: 8.4323e-04
248/297 [========================>.....] - ETA: 4s - loss: 0.2580 - jaccard_index: 8.3983e-04
249/297 [========================>.....] - ETA: 4s - loss: 0.2581 - jaccard_index: 8.3645e-04
250/297 [========================>.....] - ETA: 4s - loss: 0.2576 - jaccard_index: 8.3311e-04
251/297 [========================>.....] - ETA: 4s - loss: 0.2571 - jaccard_index: 8.2979e-04
252/297 [========================>.....] - ETA: 4s - loss: 0.2574 - jaccard_index: 8.2650e-04
253/297 [========================>.....] - ETA: 4s - loss: 0.2569 - jaccard_index: 8.2323e-04
254/297 [========================>.....] - ETA: 3s - loss: 0.2566 - jaccard_index: 8.1999e-04
255/297 [========================>.....] - ETA: 3s - loss: 0.2568 - jaccard_index: 8.1677e-04
256/297 [========================>.....] - ETA: 3s - loss: 0.2563 - jaccard_index: 8.1358e-04
257/297 [========================>.....] - ETA: 3s - loss: 0.2568 - jaccard_index: 8.1042e-04
258/297 [=========================>....] - ETA: 3s - loss: 0.2566 - jaccard_index: 8.0728e-04
259/297 [=========================>....] - ETA: 3s - loss: 0.2566 - jaccard_index: 8.0416e-04
260/297 [=========================>....] - ETA: 3s - loss: 0.2568 - jaccard_index: 8.0107e-04
261/297 [=========================>....] - ETA: 3s - loss: 0.2569 - jaccard_index: 7.9800e-04
262/297 [=========================>....] - ETA: 3s - loss: 0.2575 - jaccard_index: 7.9495e-04
263/297 [=========================>....] - ETA: 3s - loss: 0.2576 - jaccard_index: 7.9193e-04
264/297 [=========================>....] - ETA: 3s - loss: 0.2575 - jaccard_index: 7.8893e-04
265/297 [=========================>....] - ETA: 2s - loss: 0.2572 - jaccard_index: 7.8595e-04
266/297 [=========================>....] - ETA: 2s - loss: 0.2575 - jaccard_index: 7.8300e-04
267/297 [=========================>....] - ETA: 2s - loss: 0.2573 - jaccard_index: 7.8006e-04
268/297 [==========================>...] - ETA: 2s - loss: 0.2571 - jaccard_index: 7.7715e-04
269/297 [==========================>...] - ETA: 2s - loss: 0.2568 - jaccard_index: 7.7426e-04
270/297 [==========================>...] - ETA: 2s - loss: 0.2568 - jaccard_index: 7.7140e-04
271/297 [==========================>...] - ETA: 2s - loss: 0.2563 - jaccard_index: 7.6855e-04
272/297 [==========================>...] - ETA: 2s - loss: 0.2560 - jaccard_index: 7.6572e-04
273/297 [==========================>...] - ETA: 2s - loss: 0.2556 - jaccard_index: 7.6292e-04
274/297 [==========================>...] - ETA: 2s - loss: 0.2560 - jaccard_index: 7.6014e-04
275/297 [==========================>...] - ETA: 2s - loss: 0.2561 - jaccard_index: 7.5737e-04
276/297 [==========================>...] - ETA: 1s - loss: 0.2559 - jaccard_index: 7.5463e-04
277/297 [==========================>...] - ETA: 1s - loss: 0.2558 - jaccard_index: 7.5190e-04
278/297 [===========================>..] - ETA: 1s - loss: 0.2556 - jaccard_index: 7.4920e-04
279/297 [===========================>..] - ETA: 1s - loss: 0.2557 - jaccard_index: 7.4651e-04
280/297 [===========================>..] - ETA: 1s - loss: 0.2558 - jaccard_index: 7.4385e-04
281/297 [===========================>..] - ETA: 1s - loss: 0.2559 - jaccard_index: 7.4271e-04
282/297 [===========================>..] - ETA: 1s - loss: 0.2556 - jaccard_index: 7.4008e-04
283/297 [===========================>..] - ETA: 1s - loss: 0.2552 - jaccard_index: 7.3746e-04
284/297 [===========================>..] - ETA: 1s - loss: 0.2546 - jaccard_index: 7.3487e-04
285/297 [===========================>..] - ETA: 1s - loss: 0.2550 - jaccard_index: 7.3229e-04
286/297 [===========================>..] - ETA: 1s - loss: 0.2546 - jaccard_index: 7.2973e-04
287/297 [===========================>..] - ETA: 0s - loss: 0.2546 - jaccard_index: 7.2718e-04
288/297 [============================>.] - ETA: 0s - loss: 0.2547 - jaccard_index: 7.2466e-04
289/297 [============================>.] - ETA: 0s - loss: 0.2544 - jaccard_index: 7.2215e-04
290/297 [============================>.] - ETA: 0s - loss: 0.2541 - jaccard_index: 7.1966e-04
291/297 [============================>.] - ETA: 0s - loss: 0.2545 - jaccard_index: 7.1719e-04
292/297 [============================>.] - ETA: 0s - loss: 0.2544 - jaccard_index: 7.1473e-04
293/297 [============================>.] - ETA: 0s - loss: 0.2545 - jaccard_index: 7.1229e-04
294/297 [============================>.] - ETA: 0s - loss: 0.2549 - jaccard_index: 7.0987e-04
295/297 [============================>.] - ETA: 0s - loss: 0.2547 - jaccard_index: 7.0746e-04
296/297 [============================>.] - ETA: 0s - loss: 0.2548 - jaccard_index: 7.0507e-04
297/297 [==============================] - ETA: 0s - loss: 0.2547 - jaccard_index: 7.0270e-04

Epoch 00001: val_loss improved from inf to 0.25942, saving model to content\output\reproduce_lucchi_pp_10\checkpoints\model_weights_reproduce_lucchi_pp_10_10.h5

297/297 [==============================] - 34s 98ms/step - loss: 0.2547 - jaccard_index: 7.0270e-04 - val_loss: 0.2594 - val_jaccard_index: 0.0000e+00
Epoch 2/360

1/297 [..............................] - ETA: 39s - loss: 0.3499 - jaccard_index: 0.0000e+00
2/297 [..............................] - ETA: 26s - loss: 0.2899 - jaccard_index: 0.0000e+00
3/297 [..............................] - ETA: 26s - loss: 0.2773 - jaccard_index: 0.0000e+00
4/297 [..............................] - ETA: 26s - loss: 0.2892 - jaccard_index: 0.0000e+00
5/297 [..............................] - ETA: 26s - loss: 0.2869 - jaccard_index: 0.0000e+00
6/297 [..............................] - ETA: 25s - loss: 0.3050 - jaccard_index: 0.0000e+00
7/297 [..............................] - ETA: 25s - loss: 0.2988 - jaccard_index: 0.0000e+00
8/297 [..............................] - ETA: 25s - loss: 0.2908 - jaccard_index: 0.0000e+00
9/297 [..............................] - ETA: 26s - loss: 0.2805 - jaccard_index: 0.0000e+00
10/297 [>.............................] - ETA: 26s - loss: 0.2854 - jaccard_index: 0.0000e+00
11/297 [>.............................] - ETA: 25s - loss: 0.2756 - jaccard_index: 0.0000e+00
12/297 [>.............................] - ETA: 25s - loss: 0.2721 - jaccard_index: 0.0000e+00
13/297 [>.............................] - ETA: 25s - loss: 0.2647 - jaccard_index: 0.0000e+00
14/297 [>.............................] - ETA: 26s - loss: 0.2620 - jaccard_index: 0.0000e+00
15/297 [>.............................] - ETA: 26s - loss: 0.2517 - jaccard_index: 0.0000e+00
16/297 [>.............................] - ETA: 26s - loss: 0.2469 - jaccard_index: 0.0000e+00
17/297 [>.............................] - ETA: 26s - loss: 0.2415 - jaccard_index: 0.0000e+00
18/297 [>.............................] - ETA: 25s - loss: 0.2484 - jaccard_index: 0.0000e+00
19/297 [>.............................] - ETA: 25s - loss: 0.2506 - jaccard_index: 0.0000e+00
20/297 [=>............................] - ETA: 25s - loss: 0.2493 - jaccard_index: 0.0000e+00
21/297 [=>............................] - ETA: 25s - loss: 0.2488 - jaccard_index: 0.0000e+00
22/297 [=>............................] - ETA: 25s - loss: 0.2465 - jaccard_index: 0.0000e+00
23/297 [=>............................] - ETA: 25s - loss: 0.2478 - jaccard_index: 0.0000e+00
24/297 [=>............................] - ETA: 25s - loss: 0.2494 - jaccard_index: 0.0000e+00
25/297 [=>............................] - ETA: 25s - loss: 0.2502 - jaccard_index: 0.0000e+00
26/297 [=>............................] - ETA: 25s - loss: 0.2479 - jaccard_index: 0.0000e+00
27/297 [=>............................] - ETA: 25s - loss: 0.2434 - jaccard_index: 0.0000e+00
28/297 [=>............................] - ETA: 25s - loss: 0.2394 - jaccard_index: 0.0000e+00
29/297 [=>............................] - ETA: 25s - loss: 0.2439 - jaccard_index: 0.0000e+00
30/297 [==>...........................] - ETA: 24s - loss: 0.2408 - jaccard_index: 0.0000e+00
31/297 [==>...........................] - ETA: 24s - loss: 0.2397 - jaccard_index: 0.0000e+00
32/297 [==>...........................] - ETA: 24s - loss: 0.2411 - jaccard_index: 0.0000e+00
33/297 [==>...........................] - ETA: 24s - loss: 0.2387 - jaccard_index: 0.0000e+00
34/297 [==>...........................] - ETA: 24s - loss: 0.2369 - jaccard_index: 0.0000e+00
35/297 [==>...........................] - ETA: 24s - loss: 0.2398 - jaccard_index: 0.0000e+00
36/297 [==>...........................] - ETA: 24s - loss: 0.2397 - jaccard_index: 0.0000e+00
37/297 [==>...........................] - ETA: 24s - loss: 0.2408 - jaccard_index: 0.0000e+00
38/297 [==>...........................] - ETA: 24s - loss: 0.2446 - jaccard_index: 0.0000e+00
39/297 [==>...........................] - ETA: 23s - loss: 0.2446 - jaccard_index: 0.0000e+00
40/297 [===>..........................] - ETA: 23s - loss: 0.2452 - jaccard_index: 0.0000e+00
41/297 [===>..........................] - ETA: 23s - loss: 0.2443 - jaccard_index: 0.0000e+00
42/297 [===>..........................] - ETA: 23s - loss: 0.2417 - jaccard_index: 0.0000e+00
43/297 [===>..........................] - ETA: 23s - loss: 0.2399 - jaccard_index: 0.0000e+00
44/297 [===>..........................] - ETA: 23s - loss: 0.2399 - jaccard_index: 0.0000e+00
45/297 [===>..........................] - ETA: 23s - loss: 0.2394 - jaccard_index: 0.0000e+00
46/297 [===>..........................] - ETA: 23s - loss: 0.2394 - jaccard_index: 0.0000e+00
47/297 [===>..........................] - ETA: 23s - loss: 0.2389 - jaccard_index: 0.0000e+00
48/297 [===>..........................] - ETA: 23s - loss: 0.2398 - jaccard_index: 0.0000e+00
49/297 [===>..........................] - ETA: 22s - loss: 0.2398 - jaccard_index: 0.0000e+00
50/297 [====>.........................] - ETA: 22s - loss: 0.2387 - jaccard_index: 0.0000e+00
51/297 [====>.........................] - ETA: 22s - loss: 0.2390 - jaccard_index: 0.0000e+00
52/297 [====>.........................] - ETA: 22s - loss: 0.2411 - jaccard_index: 0.0000e+00
53/297 [====>.........................] - ETA: 22s - loss: 0.2398 - jaccard_index: 0.0000e+00
54/297 [====>.........................] - ETA: 22s - loss: 0.2410 - jaccard_index: 0.0000e+00
55/297 [====>.........................] - ETA: 22s - loss: 0.2440 - jaccard_index: 0.0000e+00
56/297 [====>.........................] - ETA: 22s - loss: 0.2441 - jaccard_index: 0.0000e+00
57/297 [====>.........................] - ETA: 22s - loss: 0.2454 - jaccard_index: 0.0000e+00
58/297 [====>.........................] - ETA: 22s - loss: 0.2447 - jaccard_index: 0.0000e+00
59/297 [====>.........................] - ETA: 22s - loss: 0.2455 - jaccard_index: 0.0000e+00
60/297 [=====>........................] - ETA: 21s - loss: 0.2459 - jaccard_index: 0.0000e+00
61/297 [=====>........................] - ETA: 21s - loss: 0.2464 - jaccard_index: 0.0000e+00
62/297 [=====>........................] - ETA: 21s - loss: 0.2477 - jaccard_index: 0.0000e+00
63/297 [=====>........................] - ETA: 21s - loss: 0.2481 - jaccard_index: 0.0000e+00
64/297 [=====>........................] - ETA: 21s - loss: 0.2472 - jaccard_index: 0.0000e+00
65/297 [=====>........................] - ETA: 21s - loss: 0.2475 - jaccard_index: 0.0000e+00
66/297 [=====>........................] - ETA: 21s - loss: 0.2475 - jaccard_index: 0.0000e+00
67/297 [=====>........................] - ETA: 21s - loss: 0.2477 - jaccard_index: 0.0000e+00
68/297 [=====>........................] - ETA: 21s - loss: 0.2477 - jaccard_index: 0.0000e+00
69/297 [=====>........................] - ETA: 21s - loss: 0.2480 - jaccard_index: 0.0000e+00
70/297 [======>.......................] - ETA: 20s - loss: 0.2488 - jaccard_index: 0.0000e+00
71/297 [======>.......................] - ETA: 20s - loss: 0.2493 - jaccard_index: 0.0000e+00
72/297 [======>.......................] - ETA: 20s - loss: 0.2485 - jaccard_index: 0.0000e+00
73/297 [======>.......................] - ETA: 20s - loss: 0.2481 - jaccard_index: 0.0000e+00
74/297 [======>.......................] - ETA: 20s - loss: 0.2474 - jaccard_index: 0.0000e+00
75/297 [======>.......................] - ETA: 20s - loss: 0.2465 - jaccard_index: 0.0000e+00
76/297 [======>.......................] - ETA: 20s - loss: 0.2459 - jaccard_index: 0.0000e+00
77/297 [======>.......................] - ETA: 20s - loss: 0.2444 - jaccard_index: 0.0000e+00
78/297 [======>.......................] - ETA: 20s - loss: 0.2421 - jaccard_index: 0.0000e+00
79/297 [======>.......................] - ETA: 20s - loss: 0.2420 - jaccard_index: 0.0000e+00
80/297 [=======>......................] - ETA: 19s - loss: 0.2403 - jaccard_index: 0.0000e+00
81/297 [=======>......................] - ETA: 19s - loss: 0.2408 - jaccard_index: 0.0000e+00
82/297 [=======>......................] - ETA: 19s - loss: 0.2409 - jaccard_index: 0.0000e+00
83/297 [=======>......................] - ETA: 19s - loss: 0.2401 - jaccard_index: 0.0000e+00
84/297 [=======>......................] - ETA: 19s - loss: 0.2411 - jaccard_index: 0.0000e+00
85/297 [=======>......................] - ETA: 19s - loss: 0.2421 - jaccard_index: 0.0000e+00
86/297 [=======>......................] - ETA: 19s - loss: 0.2427 - jaccard_index: 0.0000e+00
87/297 [=======>......................] - ETA: 19s - loss: 0.2449 - jaccard_index: 0.0000e+00
88/297 [=======>......................] - ETA: 19s - loss: 0.2453 - jaccard_index: 0.0000e+00
89/297 [=======>......................] - ETA: 19s - loss: 0.2456 - jaccard_index: 0.0000e+00
90/297 [========>.....................] - ETA: 19s - loss: 0.2453 - jaccard_index: 0.0000e+00
91/297 [========>.....................] - ETA: 18s - loss: 0.2455 - jaccard_index: 0.0000e+00
92/297 [========>.....................] - ETA: 18s - loss: 0.2452 - jaccard_index: 0.0000e+00
93/297 [========>.....................] - ETA: 18s - loss: 0.2452 - jaccard_index: 0.0000e+00
94/297 [========>.....................] - ETA: 18s - loss: 0.2447 - jaccard_index: 0.0000e+00
95/297 [========>.....................] - ETA: 18s - loss: 0.2453 - jaccard_index: 0.0000e+00
96/297 [========>.....................] - ETA: 18s - loss: 0.2448 - jaccard_index: 0.0000e+00
97/297 [========>.....................] - ETA: 18s - loss: 0.2442 - jaccard_index: 0.0000e+00
98/297 [========>.....................] - ETA: 18s - loss: 0.2435 - jaccard_index: 0.0000e+00
99/297 [=========>....................] - ETA: 18s - loss: 0.2436 - jaccard_index: 0.0000e+00
100/297 [=========>....................] - ETA: 18s - loss: 0.2430 - jaccard_index: 0.0000e+00
101/297 [=========>....................] - ETA: 18s - loss: 0.2427 - jaccard_index: 0.0000e+00
102/297 [=========>....................] - ETA: 17s - loss: 0.2437 - jaccard_index: 0.0000e+00
103/297 [=========>....................] - ETA: 17s - loss: 0.2458 - jaccard_index: 0.0000e+00
104/297 [=========>....................] - ETA: 17s - loss: 0.2458 - jaccard_index: 0.0000e+00
105/297 [=========>....................] - ETA: 17s - loss: 0.2459 - jaccard_index: 0.0000e+00
106/297 [=========>....................] - ETA: 17s - loss: 0.2456 - jaccard_index: 0.0000e+00
107/297 [=========>....................] - ETA: 17s - loss: 0.2461 - jaccard_index: 0.0000e+00
108/297 [=========>....................] - ETA: 17s - loss: 0.2464 - jaccard_index: 0.0000e+00
109/297 [==========>...................] - ETA: 17s - loss: 0.2474 - jaccard_index: 0.0000e+00
110/297 [==========>...................] - ETA: 17s - loss: 0.2467 - jaccard_index: 0.0000e+00
111/297 [==========>...................] - ETA: 17s - loss: 0.2467 - jaccard_index: 0.0000e+00
112/297 [==========>...................] - ETA: 16s - loss: 0.2474 - jaccard_index: 0.0000e+00
113/297 [==========>...................] - ETA: 16s - loss: 0.2474 - jaccard_index: 0.0000e+00
114/297 [==========>...................] - ETA: 16s - loss: 0.2464 - jaccard_index: 0.0000e+00
115/297 [==========>...................] - ETA: 16s - loss: 0.2459 - jaccard_index: 0.0000e+00
116/297 [==========>...................] - ETA: 16s - loss: 0.2460 - jaccard_index: 0.0000e+00
117/297 [==========>...................] - ETA: 16s - loss: 0.2466 - jaccard_index: 0.0000e+00
118/297 [==========>...................] - ETA: 16s - loss: 0.2469 - jaccard_index: 0.0000e+00
119/297 [===========>..................] - ETA: 16s - loss: 0.2467 - jaccard_index: 0.0000e+00
120/297 [===========>..................] - ETA: 16s - loss: 0.2465 - jaccard_index: 0.0000e+00
121/297 [===========>..................] - ETA: 16s - loss: 0.2478 - jaccard_index: 0.0000e+00
122/297 [===========>..................] - ETA: 16s - loss: 0.2471 - jaccard_index: 0.0000e+00
123/297 [===========>..................] - ETA: 15s - loss: 0.2473 - jaccard_index: 0.0000e+00
124/297 [===========>..................] - ETA: 15s - loss: 0.2476 - jaccard_index: 0.0000e+00
125/297 [===========>..................] - ETA: 15s - loss: 0.2477 - jaccard_index: 0.0000e+00
126/297 [===========>..................] - ETA: 15s - loss: 0.2481 - jaccard_index: 0.0000e+00
127/297 [===========>..................] - ETA: 15s - loss: 0.2477 - jaccard_index: 0.0000e+00
128/297 [===========>..................] - ETA: 15s - loss: 0.2482 - jaccard_index: 0.0000e+00
129/297 [============>.................] - ETA: 15s - loss: 0.2485 - jaccard_index: 0.0000e+00
130/297 [============>.................] - ETA: 15s - loss: 0.2486 - jaccard_index: 0.0000e+00
131/297 [============>.................] - ETA: 15s - loss: 0.2489 - jaccard_index: 0.0000e+00
132/297 [============>.................] - ETA: 15s - loss: 0.2482 - jaccard_index: 0.0000e+00
133/297 [============>.................] - ETA: 15s - loss: 0.2480 - jaccard_index: 0.0000e+00
134/297 [============>.................] - ETA: 14s - loss: 0.2482 - jaccard_index: 0.0000e+00
135/297 [============>.................] - ETA: 14s - loss: 0.2488 - jaccard_index: 0.0000e+00
136/297 [============>.................] - ETA: 14s - loss: 0.2488 - jaccard_index: 0.0000e+00
137/297 [============>.................] - ETA: 14s - loss: 0.2486 - jaccard_index: 0.0000e+00
138/297 [============>.................] - ETA: 14s - loss: 0.2484 - jaccard_index: 0.0000e+00
139/297 [=============>................] - ETA: 14s - loss: 0.2482 - jaccard_index: 0.0000e+00
140/297 [=============>................] - ETA: 14s - loss: 0.2479 - jaccard_index: 0.0000e+00
141/297 [=============>................] - ETA: 14s - loss: 0.2483 - jaccard_index: 0.0000e+00
142/297 [=============>................] - ETA: 14s - loss: 0.2493 - jaccard_index: 0.0000e+00
143/297 [=============>................] - ETA: 14s - loss: 0.2495 - jaccard_index: 0.0000e+00
144/297 [=============>................] - ETA: 14s - loss: 0.2489 - jaccard_index: 0.0000e+00
145/297 [=============>................] - ETA: 13s - loss: 0.2484 - jaccard_index: 0.0000e+00
146/297 [=============>................] - ETA: 13s - loss: 0.2487 - jaccard_index: 0.0000e+00
147/297 [=============>................] - ETA: 13s - loss: 0.2484 - jaccard_index: 0.0000e+00
148/297 [=============>................] - ETA: 13s - loss: 0.2478 - jaccard_index: 0.0000e+00
149/297 [==============>...............] - ETA: 13s - loss: 0.2469 - jaccard_index: 0.0000e+00
150/297 [==============>...............] - ETA: 13s - loss: 0.2464 - jaccard_index: 0.0000e+00
151/297 [==============>...............] - ETA: 13s - loss: 0.2480 - jaccard_index: 0.0000e+00
152/297 [==============>...............] - ETA: 13s - loss: 0.2475 - jaccard_index: 0.0000e+00
153/297 [==============>...............] - ETA: 13s - loss: 0.2473 - jaccard_index: 0.0000e+00
154/297 [==============>...............] - ETA: 13s - loss: 0.2473 - jaccard_index: 0.0000e+00
155/297 [==============>...............] - ETA: 13s - loss: 0.2477 - jaccard_index: 0.0000e+00
156/297 [==============>...............] - ETA: 12s - loss: 0.2473 - jaccard_index: 0.0000e+00
157/297 [==============>...............] - ETA: 12s - loss: 0.2474 - jaccard_index: 0.0000e+00
158/297 [==============>...............] - ETA: 12s - loss: 0.2476 - jaccard_index: 0.0000e+00
159/297 [===============>..............] - ETA: 12s - loss: 0.2474 - jaccard_index: 0.0000e+00
160/297 [===============>..............] - ETA: 12s - loss: 0.2471 - jaccard_index: 0.0000e+00
161/297 [===============>..............] - ETA: 12s - loss: 0.2472 - jaccard_index: 0.0000e+00
162/297 [===============>..............] - ETA: 12s - loss: 0.2466 - jaccard_index: 0.0000e+00
163/297 [===============>..............] - ETA: 12s - loss: 0.2469 - jaccard_index: 0.0000e+00
164/297 [===============>..............] - ETA: 12s - loss: 0.2461 - jaccard_index: 0.0000e+00
165/297 [===============>..............] - ETA: 12s - loss: 0.2459 - jaccard_index: 0.0000e+00
166/297 [===============>..............] - ETA: 12s - loss: 0.2454 - jaccard_index: 0.0000e+00
167/297 [===============>..............] - ETA: 11s - loss: 0.2446 - jaccard_index: 0.0000e+00
168/297 [===============>..............] - ETA: 11s - loss: 0.2438 - jaccard_index: 0.0000e+00
169/297 [================>.............] - ETA: 11s - loss: 0.2435 - jaccard_index: 0.0000e+00
170/297 [================>.............] - ETA: 11s - loss: 0.2443 - jaccard_index: 0.0000e+00
171/297 [================>.............] - ETA: 11s - loss: 0.2442 - jaccard_index: 0.0000e+00
172/297 [================>.............] - ETA: 11s - loss: 0.2441 - jaccard_index: 0.0000e+00
173/297 [================>.............] - ETA: 11s - loss: 0.2437 - jaccard_index: 0.0000e+00
174/297 [================>.............] - ETA: 11s - loss: 0.2446 - jaccard_index: 0.0000e+00
175/297 [================>.............] - ETA: 11s - loss: 0.2448 - jaccard_index: 0.0000e+00
176/297 [================>.............] - ETA: 11s - loss: 0.2451 - jaccard_index: 0.0000e+00
177/297 [================>.............] - ETA: 11s - loss: 0.2447 - jaccard_index: 0.0000e+00
178/297 [================>.............] - ETA: 10s - loss: 0.2444 - jaccard_index: 0.0000e+00
179/297 [=================>............] - ETA: 10s - loss: 0.2439 - jaccard_index: 0.0000e+00
180/297 [=================>............] - ETA: 10s - loss: 0.2440 - jaccard_index: 0.0000e+00
181/297 [=================>............] - ETA: 10s - loss: 0.2436 - jaccard_index: 0.0000e+00
182/297 [=================>............] - ETA: 10s - loss: 0.2434 - jaccard_index: 0.0000e+00
183/297 [=================>............] - ETA: 10s - loss: 0.2432 - jaccard_index: 0.0000e+00
184/297 [=================>............] - ETA: 10s - loss: 0.2427 - jaccard_index: 0.0000e+00
185/297 [=================>............] - ETA: 10s - loss: 0.2425 - jaccard_index: 0.0000e+00
186/297 [=================>............] - ETA: 10s - loss: 0.2431 - jaccard_index: 0.0000e+00
187/297 [=================>............] - ETA: 10s - loss: 0.2426 - jaccard_index: 0.0000e+00
188/297 [=================>............] - ETA: 10s - loss: 0.2423 - jaccard_index: 0.0000e+00
189/297 [==================>...........] - ETA: 9s - loss: 0.2425 - jaccard_index: 0.0000e+00
...
297/297 [==============================] - ETA: 0s - loss: 0.2399 - jaccard_index: 0.0000e+00

Epoch 00002: val_loss improved from 0.25942 to 0.25855, saving model to content\output\reproduce_lucchi_pp_10\checkpoints\model_weights_reproduce_lucchi_pp_10_10.h5

297/297 [==============================] - 29s 96ms/step - loss: 0.2399 - jaccard_index: 0.0000e+00 - val_loss: 0.2585 - val_jaccard_index: 0.0000e+00

Remove imgaug dependency

The imgaug project is no longer being updated. We should re-implement the transformations currently used through imgaug so that the dependency can be removed completely. The transformations are these:

  • #63
  • Random rotation
  • Shear
  • Zoom
  • Shift
  • Vertical flip
  • Horizontal flip
  • Elastic transformation
  • Gaussian blur
  • Median blur
  • Motion blur
  • Gamma contrast
  • Notebook presenting all transformations
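
Most of these can be re-implemented directly on NumPy arrays. As a minimal illustration of the idea (a sketch, not the imgaug-equivalent implementation), gamma contrast on a float image in [0, 1] could look like:

```python
import numpy as np

def gamma_contrast(img, gamma):
    """Gamma contrast without imgaug; assumes a float image in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma
```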

Also, we adapted 3D images to be used as 2D images with imgaug, which produces a confusing warning every time a transformation is applied:

/home/dfranco/anaconda3/envs/BiaPy_env/lib/python3.8/site-packages/imgaug/augmenters/base.py:49: SuspiciousSingleImageShapeWarning: You provided a numpy array of shape (160, 160, 40) as a single-image augmentation input, which was interpreted as (H, W, C). The last dimension however has a size of >=32, which indicates that you provided a multi-image array with shape (N, H, W) instead. If that is the case, you should use e.g. augmenter(imageS=<your input>) or augment_imageS(<your input>). Otherwise your multi-image input will be interpreted as a single image during augmentation.

bug: import error: pytorch_msssim

I was testing the master repo and I got this error when I started training:

from pytorch_msssim import ssim, ms_ssim, SSIM, MS_SSIM
ModuleNotFoundError: No module named 'pytorch_msssim'

Cheers
Pradeep

MitoEM training killed

Hi Daniel,

I am trying to train on MitoEM via BiaPy, following your guidelines in the docs. However, the training gets killed at the phase below:

0) Loading train images . . .
Loading data from /mnt/mito-data/data_organised_biapy/MitoEM-R/train/x_BC_thick
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500/500 [00:52<00:00,  9.59it/s]
*** Loaded data shape is (128000, 256, 256, 1)
1) Loading train GT . . .
Loading data from /mnt/mito-data/data_organised_biapy/MitoEM-R/train/y_BC_thick
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500/500 [01:06<00:00,  7.52it/s]
*** Loaded data shape is (128000, 256, 256, 2)
Creating validation data
Not all samples seem to have the same shape. Number of samples: 115200
*** Loaded train data shape is: (115200, 256, 256, 1)
*** Loaded train GT shape is: (115200, 256, 256, 2)
*** Loaded validation data shape is: (12800, 256, 256, 1)
*** Loaded validation GT shape is: (12800, 256, 256, 2)
### END LOAD ###
2) Loading test images . . .
Loading data from /mnt/mito-data/data_organised_biapy/MitoEM-R/test/x_BC_thick
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500/500 [00:44<00:00, 11.30it/s]
*** Loaded data shape is (500, 4096, 4096, 1)
########################
#  PREPARE GENERATORS  #
########################

Initializing train data generator . . .
Killed

This is how I run it:

python main.py --config /installations/BiaPy/templates/instance_segmentation/2d_instance_segmentation.yaml --result_dir /mnt/mito-data --name atten_unet_mito_r --run_id 1 --gpu 0

I am running it through a docker image that I have built myself and it can definitely access tensorflow gpus. I have a 24GB RTX 3090 and 128GB RAM. I also have reduced the batch_size during training from default 6 to 2.
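
For what it's worth, a back-of-the-envelope estimate from the logged shapes (assuming everything is held in RAM as float32, 4 bytes per value, which may not match BiaPy's internals exactly) suggests the data alone roughly fills the available memory, so the generator's first extra copy would trigger the Linux OOM killer ("Killed"):

```python
# Shapes from the log above, as float32 (4 bytes per value):
GIB = 2**30
train_x = 115200 * 256 * 256 * 1 * 4 / GIB       # 28.125 GiB
train_y = 115200 * 256 * 256 * 2 * 4 / GIB       # 56.25 GiB
val_xy  = 12800 * 256 * 256 * (1 + 2) * 4 / GIB  # 9.375 GiB
test_x  = 500 * 4096 * 4096 * 1 * 4 / GIB        # 31.25 GiB
total = train_x + train_y + val_xy + test_x      # 125.0 GiB
```

Note that 128 GB of RAM is about 119 GiB, so the process is over budget even before augmentation buffers; reducing the batch size does not help here, since the data is loaded up front.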

Any clue what might be happening?

Best,
Samia

Question about Attention U-net

Dear authors, thank you very much for making the code publicly available.

I have a question regarding the attention block implementation in Attention U-Net. Below is a code snippet from your implementation:

g1 = conv(filters, kernel_size = 1)(shortcut)
g1 = BatchNormalization() (g1) if batch_norm else g1
x1 = conv(filters, kernel_size = 1)(x)
x1 = BatchNormalization() (x1) if batch_norm else x1

g1_x1 = Add()([g1,x1])
psi = Activation('relu')(g1_x1)
psi = conv(1, kernel_size = 1)(psi)
psi = BatchNormalization() (psi) if batch_norm else psi
psi = Activation('sigmoid')(psi)
x = Multiply()([x,psi])

In the code above, shortcut features are used as the gating signal, thus the final feature map after concatenation is: upsampled features * attention coefficient + upsampled features

However, in the original paper of Attention U-net, it seems that they used upsampled features as the gating signal. So their feature map after concatenation is: shortcut features * attention coefficient + upsampled features.

image

I conducted a simple experiment by swapping the two variables passed to the attention block, but it gave comparable (or even worse) performance. I am not sure whether I have misunderstood the code.
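
For reference, the gate itself can be sketched independently of which tensor plays each role (a toy NumPy sketch, not BiaPy's code: the 1x1 convolutions become channel-mixing matrix products, and BatchNorm/resampling are omitted); the question above is then only about whether the decoder features or the shortcut features are passed as g:

```python
import numpy as np

def attention_gate(g, x, W_g, W_x, psi_w):
    """Additive attention gate in the spirit of the Attention U-Net paper.

    g: gating signal, x: skip-connection features (same spatial size here).
    """
    g1 = g @ W_g                    # 1x1 conv on the gating signal
    x1 = x @ W_x                    # 1x1 conv on the skip features
    a = np.maximum(g1 + x1, 0.0)    # ReLU
    a = a @ psi_w                   # project down to one channel
    a = 1.0 / (1.0 + np.exp(-a))    # sigmoid -> attention coefficients
    return x * a                    # gated skip features
```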

Best Regards,

running biapy after pip install

Hi
Great package and I love the modularity of it as well as the ease of getting it up and running.

So, I've followed the instructions to pip install BiaPy.
Now, when I want to run training from the command line, it looks like I can only run it via

python -u main.py ?

You can make it executable by adding the entry under project.scripts .
https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#creating-executable-scripts

This way, we wouldn't need to clone the repo again to run BiaPy after a pip install.
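
For example (the exact module path of BiaPy's entry-point callable is an assumption here, not taken from the repo):

```toml
# pyproject.toml -- hypothetical entry point; the real callable may live elsewhere
[project.scripts]
biapy = "biapy:main"
```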

Cheers
Pradeep

Add support for .nii.gz extension

Add support for reading .nii.gz data. Something similar was already done in the load_ct_data_from_dir function; it needs to be incorporated into load_data_from_dir and load_3d_images_from_dir for reading 2D and 3D images, respectively. We can check it using CT datasets from here.
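
As a sketch of what the readers would need (only the NIfTI-1 header is parsed here, assuming little-endian files; a full loader such as nibabel also handles byte order, NIfTI-2, data scaling, etc.):

```python
import gzip
import struct

def read_nii_shape(path):
    """Return the image shape of a NIfTI-1 file (.nii or .nii.gz).

    The 348-byte NIfTI-1 header stores dim[8] as int16 at byte offset 40:
    dim[0] is the number of dimensions, dim[1:] the sizes per axis.
    """
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        header = f.read(348)
    dim = struct.unpack_from("<8h", header, 40)
    return tuple(dim[1 : 1 + dim[0]])
```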

When I try to run main.py, the following error occurs


#####################

TRAIN THE MODEL

#####################

2022-06-29 13:59:43.604993: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
2022-06-29 13:59:43.622141: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 3400045000 Hz
Epoch 1/360
2022-06-29 13:59:45.237194: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.8
2022-06-29 13:59:45.822237: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.11
2022-06-29 13:59:46.034149: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.11
2022-06-29 13:59:46.034483: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at conv_ops.cc:1106 : Not found: No algorithm worked!
Traceback (most recent call last):
File "/data12T/ydaugust/code/EM_domain_adaptation-main/EM_Image_Segmentation-master/main.py", line 154, in <module>
trainer.train()
File "/data12T/ydaugust/code/EM_domain_adaptation-main/EM_Image_Segmentation-master/engine/trainer.py", line 267, in train
self.results = self.model.fit(self.train_generator, validation_data=self.val_generator,
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
tmp_logs = self.train_function(iterator)
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
return self._stateless_fn(*args, **kwds)
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2942, in __call__
return graph_function._call_flat(
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
outputs = execute.execute(
File "/data12T/ydaugust/anaconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[node model/conv2d/Conv2D (defined at /code/EM_domain_adaptation-main/EM_Image_Segmentation-master/engine/trainer.py:267) ]]
[[Greater_1/_38]]
(1) Not found: No algorithm worked!
[[node model/conv2d/Conv2D (defined at /code/EM_domain_adaptation-main/EM_Image_Segmentation-master/engine/trainer.py:267) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_3792]

Function call stack:
train_function -> train_function

2022-06-29 13:59:46.144834: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]

I don't know how to fix it, and I have followed the steps the author described at this link: https://github.com/danifranco/EM_Image_Segmentation/blob/master/utils/env/environment.yml

documentation suggestion

You mentioned we could normalize by image or by dataset, but I couldn't find documentation on what value to give DATA.NORMALIZATION.APPLICATION_MODE to normalize with whole-dataset statistics. I found in the code that the value should be 'dataset'. It would be worth including that in the documentation.

Unless I completely missed it!!
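
Based on what the reporter found in the code, the setting would look like this (key and value names taken from the issue, not verified against every BiaPy version):

```yaml
DATA:
  NORMALIZATION:
    # 'image' normalizes each image with its own statistics;
    # 'dataset' uses statistics computed over the entire dataset
    APPLICATION_MODE: 'dataset'
```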

Error when first detection df_patch is null

Hello!

I had this error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'copy'

coming from

if 'df' not in locals():
    df = df_patch.copy()
    df['file'] = fname
else:
    if df_patch is not None:
        df_patch['file'] = fname
        df = pd.concat([df, df_patch], ignore_index=True)

I suppose it happens when the first df_patch is None, so df is never instantiated.
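
A possible fix is to treat the accumulated frame as unset until the first non-empty patch arrives (a hedged sketch of the idea, not the actual BiaPy code; `accumulate` is a hypothetical helper):

```python
import pandas as pd

def accumulate(df, df_patch, fname):
    """Collect per-patch detections; df starts as None, and df_patch may be
    None for patches with no detections, as in the reported error."""
    if df_patch is None:              # nothing detected in this patch
        return df
    df_patch = df_patch.copy()
    df_patch["file"] = fname
    if df is None:                    # first non-empty patch seen
        return df_patch
    return pd.concat([df, df_patch], ignore_index=True)
```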

Error in 3D classification notebook

Running with default parameters ends up in this error when training the model:

Training configuration finished.
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
[<ipython-input-6-1fe6fcb7389d>](https://vyjefdubo0j-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240201-060115_RC00_603318229#) in <cell line: 117>()
    115 
    116 # Run the code
--> 117 biapy = BiaPy(f'/content/{job_name}.yaml', result_dir=output_path, name=job_name, run_id=1, gpu=0)
    118 biapy.run_job()

4 frames
[/usr/local/lib/python3.10/dist-packages/yacs/config.py](https://vyjefdubo0j-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240201-060115_RC00_603318229#) in _merge_a_into_b(a, b, root, key_list)
    489                 root.raise_key_rename_error(full_key)
    490             else:
--> 491                 raise KeyError("Non-existent config key: {}".format(full_key))
    492 
    493 

KeyError: 'Non-existent config key: MODEL.SPATIAL_DROPOUT'

bug: Biapy crashes with detection workflow on latest branch

Hi
I installed the latest branch of BiaPy from github and ran a workflow for detection.

I got the following error:

[06:59:18.885498] ##############################
[06:59:18.889505] #  PREPARE TRAIN GENERATORS  #
[06:59:18.890497] ##############################
[06:59:18.892500] Initializing train data generator . . .
[06:59:18.908179] Checking which channel of the mask needs normalization . . .
[06:59:20.994035] Normalization config used for X: {'type': 'div', 'orig_dtype': dtype('uint16'), 'reduced_uint16': 1}
[06:59:20.995028] Normalization config used for Y: as_mask
[06:59:20.999033] Initializing val data generator . . .
Traceback (most recent call last):
  File "C:\Users\rajasekhar.p\BiaPy\main.py", line 51, in <module>
    _biapy.run_job()
  File "C:\Users\rajasekhar.p\BiaPy\biapy\_biapy.py", line 412, in run_job
    self.train()
  File "C:\Users\rajasekhar.p\BiaPy\biapy\_biapy.py", line 152, in train
    self.workflow.train()
  File "C:\Users\rajasekhar.p\BiaPy\biapy\engine\base_workflow.py", line 492, in train
    self.prepare_train_generators()
  File "C:\Users\rajasekhar.p\BiaPy\biapy\engine\base_workflow.py", line 315, in prepare_train_generators
    self.num_training_steps_per_epoch = create_train_val_augmentors(self.cfg, self.X_train, self.Y_train,
  File "C:\Users\rajasekhar.p\BiaPy\biapy\data\generators\__init__.py", line 214, in create_train_val_augmentors
    val_generator = f_name(**dic)
  File "C:\Users\rajasekhar.p\BiaPy\biapy\data\generators\pair_data_3D_generator.py", line 24, in __init__
    super().__init__(**kwars)
  File "C:\Users\rajasekhar.p\BiaPy\biapy\data\generators\pair_base_data_generator.py", line 416, in __init__
    if _X.ndim != (self.ndim+2):
AttributeError: 'NoneType' object has no attribute 'ndim'

This was fixed when I installed from the branch associated with released version 3.3.15. I am using a Windows 11 PC.
I used the default 3d_detection.yml configuration file from BiaPy and only added the following augmentation settings:

    DA_PROB: 0.5
    ENABLE: True
    VFLIP: True
    HFLIP: True
    ZFLIP: True
    GAUSSIAN_NOISE: True
    POISSON_NOISE: True
    BRIGHTNESS: True
    BRIGHTNESS_MODE: '3D'

Cheers
Pradeep

Error handling TIFFs created in MIB

Error handling TIFFs from MIB:
ValueError: the ImageJ format does not support data type dtype('<f4') for RGB

MIB software:
https://mib.helsinki.fi/index.html

Of note, I've checked the TIFFs and they are in 8-bit format, but they have LZW compression and I don't know if that is an issue.

Error info from the training step:
[21:09:50.070554] Creating validation data
[21:09:51.261219] *** Loaded train data shape is: (7536, 4, 160, 160, 1)
[21:09:51.261283] *** Loaded train GT shape is: (7536, 4, 160, 160, 1)
[21:09:51.261307] *** Loaded validation data shape is: (1884, 4, 160, 160, 1)
[21:09:51.261326] *** Loaded validation GT shape is: (1884, 4, 160, 160, 1)
[21:09:51.261420] ##############################
[21:09:51.261436] # PREPARE TRAIN GENERATORS #
[21:09:51.261452] ##############################
[21:09:51.261677] Initializing train data generator . . .
[21:09:52.987917] Normalization config used for X: {'type': 'div', 'orig_dtype': dtype('uint8'), 'div_255': 1}
[21:09:52.987972] Normalization config used for Y: as_mask
[21:09:52.991121] Initializing val data generator . . .
[21:09:53.432304] Normalization config used for X: {'type': 'div', 'orig_dtype': dtype('uint8'), 'div_255': 1}
[21:09:53.432401] Normalization config used for Y: as_mask
[21:09:53.432800] Creating generator samples . . .
[21:09:53.432853] 0) Creating samples of data augmentation . . .
0% 0/10 [00:00<?, ?it/s]
0% 0/1 [00:00<?, ?it/s]
0% 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/content/BiaPy/utils/util.py", line 240, in save_tif
imsave(f, aux, imagej=True, metadata={'axes': 'ZCYXS'}, check_contrast=False, compression=('zlib', 1))
File "/usr/local/lib/python3.10/dist-packages/skimage/io/_io.py", line 143, in imsave
return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)
File "/usr/local/lib/python3.10/dist-packages/skimage/io/manage_plugins.py", line 205, in call_plugin
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/skimage/io/_plugins/tifffile_plugin.py", line 47, in imsave
return tifffile_imwrite(fname, arr, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/tifffile/tifffile.py", line 1280, in imwrite
result = tif.write(
File "/usr/local/lib/python3.10/dist-packages/tifffile/tifffile.py", line 2332, in write
raise ValueError(
ValueError: the ImageJ format does not support data type dtype('<f4') for RGB

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/content/BiaPy/main.py", line 137, in <module>
workflow.train()
File "/content/BiaPy/engine/base_workflow.py", line 454, in train
self.prepare_train_generators()
File "/content/BiaPy/engine/base_workflow.py", line 285, in prepare_train_generators
self.num_training_steps_per_epoch = create_train_val_augmentors(self.cfg, self.X_train, self.Y_train,
File "/content/BiaPy/data/generators/__init__.py", line 214, in create_train_val_augmentors
train_generator.get_transformed_samples(
File "/content/BiaPy/data/generators/pair_base_data_generator.py", line 1266, in get_transformed_samples
self.save_aug_samples(sample_x[i], sample_y[i], orig_images, i, pos, out_dir, point_dict)
File "/content/BiaPy/data/generators/pair_data_3D_generator.py", line 88, in save_aug_samples
save_tif(aux, out_dir, [str(i)+"orig_x"+str(pos)+"_"+self.trans_made+'.tif'], verbose=False)
File "/content/BiaPy/utils/util.py", line 242, in save_tif
imsave(f, aux, imagej=True, metadata={'axes': 'ZCYXS'}, check_contrast=False)
File "/usr/local/lib/python3.10/dist-packages/skimage/io/_io.py", line 143, in imsave
return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)
File "/usr/local/lib/python3.10/dist-packages/skimage/io/manage_plugins.py", line 205, in call_plugin
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/skimage/io/_plugins/tifffile_plugin.py", line 47, in imsave
return tifffile_imwrite(fname, arr, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/tifffile/tifffile.py", line 1280, in imwrite
result = tif.write(
File "/usr/local/lib/python3.10/dist-packages/tifffile/tifffile.py", line 2332, in write
raise ValueError(
ValueError: the ImageJ format does not support data type dtype('<f4') for RGB

Incorrect number of epochs on metric plot

The number of epochs in the metric chart appears to be larger than that of the loss chart when using several metrics. See for instance:
snemi3d_resunet_A_2_jaccard_index
snemi3d_resunet_A_2_loss
My guess is it is related to the number of metrics. In this example, I used the Jaccard index three times, one per channel.

Incorporate Zarr/H5 as a source of training data

Currently, Zarr/H5 files are only considered for inference: test images can be read as Zarr/H5, and output images can also be created in those formats using the TEST.BY_CHUNKS variable. The idea is to support training data in these formats as well, not only as multiple images but also as entire datasets stored in a single file.
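
The key ingredient is lazy slicing: h5py.Dataset and zarr.Array both support NumPy-style indexing, so a training generator could sample patches without materializing the full volume (illustrated here with a plain array standing in for the on-disk dataset):

```python
import numpy as np

def sample_patches(arr, patch_shape, n, seed=None):
    """Sample n random patches from an array-like via slicing only.

    arr can be an np.memmap, h5py.Dataset or zarr.Array: only each
    requested patch is read, never the whole volume.
    """
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n):
        starts = [int(rng.integers(0, s - p + 1))
                  for s, p in zip(arr.shape, patch_shape)]
        sl = tuple(slice(st, st + p) for st, p in zip(starts, patch_shape))
        patches.append(np.asarray(arr[sl]))
    return patches
```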

Enable Bioimage Model Zoo pretrained models for training

Add the option to retrain BMZ models, not just run inference with them. There are two options:

  • Use the directly loaded TorchScript models and work with those (possible errors in DDP? need to ensure all functionalities).
  • Create a "regular" PyTorch model by calling the function that returns the model in the provided .py file (the args are stored in the RDF), and then load the pytorch_state_dict weights. This is more complicated, but it ensures all PyTorch functionality.

Wrong net architecture in 3D Classification notebook

When selecting EfficientNetB0 as the model, training throws the following error:

[07:11:28.777149] Training configuration finished.
[07:11:28.786321] Date: 2024-02-05 07:11:28
[07:11:28.786428] Arguments: Namespace(config='/content/my_3d_classification.yaml', result_dir='/content/output', name='my_3d_classification', run_id=1, gpu=0, world_size=1, local_rank=-1, dist_on_itp=False, dist_url='env://', dist_backend='nccl')
[07:11:28.787806] Job: my_3d_classification_1
[07:11:28.787876] Python       : 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
[07:11:28.789216] PyTorch:  2.1.0+cu121
[07:11:28.790429] Not using distributed mode
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
[<ipython-input-28-a88867eca5b3>](https://vyjefdubo0j-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240201-060115_RC00_603318229#) in <cell line: 116>()
    114 
    115 # Run the code
--> 116 biapy = BiaPy(f'/content/{job_name}.yaml', result_dir=output_path, name=job_name, run_id=1, gpu=0)
    117 biapy.run_job()

1 frames
[/usr/local/lib/python3.10/dist-packages/biapy/engine/check_configuration.py](https://vyjefdubo0j-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240201-060115_RC00_603318229#) in check_configuration(cfg, jobname, check_data_paths)
    506     ### Model ###
    507     if cfg.MODEL.SOURCE == "biapy":
--> 508         assert model_arch in ['unet', 'resunet', 'resunet++', 'attention_unet', 'multiresunet', 'seunet', 'simple_cnn', 'efficientnet_b0', 
    509             'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4','efficientnet_b5','efficientnet_b6','efficientnet_b7',
    510             'unetr', 'edsr', 'rcan', 'dfcan', 'wdsr', 'vit', 'mae'],\

AssertionError: MODEL.ARCHITECTURE not in ['unet', 'resunet', 'resunet++', 'attention_unet', 'multiresunet', 'seunet', 'simple_cnn', 'efficientnet_b[0-7]', 'unetr', 'edsr', 'rcan', 'dfcan', 'wdsr', 'vit', 'mae']

Detection could be multi-threaded

Hello Biapyople,

I think it would be great to speed up the detection part of the workflow when using per_chunk.
On my setup, detection alone for a 5.88 GB volume takes >12 minutes (with 1472 chunks).
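
Since per-chunk detections are independent until the final merge, one direction is simply mapping the chunk worker over a pool (a sketch; `detect_chunk` is a placeholder for the real per-chunk routine, and threads only help if it releases the GIL, e.g. in NumPy/SciPy calls):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_chunk(chunk_id):
    # placeholder: run point detection on one chunk, return its points
    return [(chunk_id, 0, 0)]

def detect_all(chunk_ids, workers=8):
    """Process chunks concurrently and concatenate their detections."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_chunk = pool.map(detect_chunk, chunk_ids)
    return [pt for pts in per_chunk for pt in pts]
```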

Possible Patch related errors for detection

Hello !

I encountered some problems while working with patches.

  • here, the local patch coordinates are added to the global coordinate system without being shifted by the actual patch origin:
    df = pd.concat([df, df_patch], ignore_index=True)
  • here, when using multiple GPUs, I think there is a problem when slicing the data along z, as it is not done exactly the same way as here.
  • Finally, I think the patch detection should be done on padded data, otherwise an edge effect appears (red and magenta arrows in the following figure):
    image

I am working on a fix in my local branch. I propose to open a PR once #48 and #49 are merged.
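
For the first point, the fix amounts to offsetting each local detection by its patch origin before concatenating (a sketch of the idea, not the actual patch; `to_global` is a hypothetical helper):

```python
def to_global(points, patch_origin):
    """Shift local (z, y, x) detections by the patch's global origin."""
    oz, oy, ox = patch_origin
    return [(z + oz, y + oy, x + ox) for (z, y, x) in points]
```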

TTA fails on semantic segmentation workflow

I tested the 2D semantic segmentation workflow with the Lucchi dataset in its Colab notebook and it fails during testing when using TTA.

This is the output I get:

[11:12:43.377433] ############################
[11:12:43.377476] #  PREPARE TEST GENERATOR  #
[11:12:43.377489] ############################
[11:12:43.383550] Loading checkpoint from file /content/output/my_2d_semantic_segmentation/checkpoints/my_2d_semantic_segmentation_1-checkpoint-best.pth
[11:12:43.462119] Model weights loaded!
[11:12:43.463007] ###############
[11:12:43.463039] #  INFERENCE  #
[11:12:43.463050] ###############
[11:12:43.463070] Making predictions on test data . . .
  0% 0/165 [00:00<?, ?it/s]
  0% 0/1 [00:00<?, ?it/s][11:12:43.468880] Processing image(s): ['testing-0001.tif']
[11:12:43.469306] ### OV-CROP ###
[11:12:43.469339] Cropping (1, 768, 1024, 1) images into (256, 256, 1) with overlapping. . .
[11:12:43.469353] Minimum overlap selected: (0, 0)
[11:12:43.469372] Padding: (32, 32)
[11:12:43.470362] Real overlapping (%): 0.13020833333333334
[11:12:43.470399] Real overlapping (pixels): 25.0
[11:12:43.470413] 6 patches per (x,y) axis
[11:12:43.474004] **** New data shape is: (24, 256, 256, 1)
[11:12:43.474055] ### END OV-CROP ###


  0% 0/4 [00:00<?, ?it/s]
 25% 1/4 [00:00<00:00,  3.18it/s]
  0% 0/24 [00:00<?, ?it/s]
...
100% 24/24 [00:01<00:00, 19.26it/s]

                                   [11:12:45.268981] ### MERGE-OV-CROP ###
[11:12:45.269026] Merging (24, 256, 256, 1) images into (1, 768, 1024, 1) with overlapping . . .
[11:12:45.269058] Minimum overlap selected: (0, 0)
[11:12:45.269085] Padding: (32, 32)
[11:12:45.269741] Real overlapping (%): (0.13020833333333334, 0.0)
[11:12:45.269765] Real overlapping (pixels): (25.0, 0.0)
[11:12:45.269779] (6, 4) patches per (x,y) axis
[11:12:45.280197] **** New data shape is: (1, 768, 1024, 1)
[11:12:45.280244] ### END MERGE-OV-CROP ###
[11:12:45.280723] Saving (1, 1, 768, 1024, 1) data as .tif in folder: /content/output/my_2d_semantic_segmentation/results/my_2d_semantic_segmentation_1/per_image


  0% 0/1 [00:00<?, ?it/s]
[11:12:45.297158] Saving (1, 768, 1024, 1) data as .tif in folder: /content/output/my_2d_semantic_segmentation/results/my_2d_semantic_segmentation_1/per_image_binarized
  0% 0/1 [00:00<?, ?it/s]
  0% 0/165 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/content/BiaPy/main.py", line 140, in <module>
    workflow.test()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/BiaPy/engine/base_workflow.py", line 645, in test
    self.process_sample(self.test_filenames[(i*l_X)+j:(i*l_X)+j+1], norm=(X_norm, Y_norm))
  File "/content/BiaPy/engine/base_workflow.py", line 1123, in process_sample
    pred = ensemble8_2d_predictions(
  File "/content/BiaPy/data/post_processing/post_processing.py", line 558, in ensemble8_2d_predictions
    r_aux = pred_func(total_img[i*batch_size_value:top])
  File "/content/BiaPy/engine/base_workflow.py", line 1128, in <lambda>
    to_numpy_format(self.apply_model_activations(img_batch_subdiv), self.axis_order_back)
  File "/content/BiaPy/engine/base_workflow.py", line 555, in apply_model_activations
    pred = act(pred)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/activation.py", line 292, in forward
    return torch.sigmoid(input)
TypeError: sigmoid(): argument 'input' (position 1) must be Tensor, not numpy.ndarray
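The error comes from the TTA path (ensemble8_2d_predictions) handing a numpy array to apply_model_activations, while torch.sigmoid requires a tensor. A sketch of one possible workaround, not BiaPy's actual fix, converting numpy inputs before applying the activation:

```python
import numpy as np
import torch

def apply_activation_safe(pred, act=torch.sigmoid):
    """Apply a torch activation, converting numpy inputs to tensors first
    so the TTA path does not crash on numpy arrays."""
    if isinstance(pred, np.ndarray):
        pred = torch.from_numpy(pred)
    return act(pred)
```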

Failed to load a model

Description

Follow-up of #21. It looks like either I have the wrong model weights or I need to use a model other than unet? Thank you for your help in advance!

What I did

The config is the same as in #21. I checked that cfg merged the settings specified in the given config file.

config

PROBLEM:
    TYPE: DETECTION
    NDIM: 3D

DATA:
    PATCH_SIZE: (256, 256, 20, 3)
    REFLECT_TO_COMPLETE_SHAPE: True
    CHECK_GENERATORS: False
    EXTRACT_RANDOM_PATCH: False
    PROBABILITY_MAP: False
    TRAIN:
        PATH: 'path_to_train_data'
        MASK_PATH: 'path_to_train_data_gt'
        IN_MEMORY: True
        PADDING: (0,0,0)
        OVERLAP: (0,0,0)
    VAL:
        FROM_TRAIN: True
        SPLIT_TRAIN: 0.1
        IN_MEMORY: True
        PADDING: (0,0,0)
        OVERLAP: (0,0,0)
    TEST:
        PATH: './ChroMS/2021-03-24 P14 6520-2 Mcol cortex/test/x'
        MASK_PATH: ''
        IN_MEMORY: True
        LOAD_GT: False
        PADDING: (16,16,0)
        OVERLAP: (0,0,0)

AUGMENTOR:
    ENABLE: True
    AUG_SAMPLES: True
    DRAW_GRID: True
    DA_PROB: 1.0
    CHANNEL_SHUFFLE: False
    MISALIGNMENT: False
    CUTOUT: False
    GRIDMASK: False
    CUTNOISE: False
    CNOISE_SCALE: (0.05, 0.1)
    CUTBLUR: False
    RANDOM_ROT: False
    ROT90: False
    VFLIP: False
    HFLIP: False
    CONTRAST: False
    CONTRAST_FACTOR: (-0.3, 0.3)
    BRIGHTNESS: False
    BRIGHTNESS_FACTOR: (0.1, 0.3)
    GAMMA_CONTRAST: False
    GC_GAMMA: (0.5, 1.5)
    ELASTIC: False
    GRAYSCALE: True
    AFFINE_MODE: 'reflect'

MODEL:
    ARCHITECTURE: unet
    FEATURE_MAPS: [36, 48, 64]
    DROPOUT_VALUES: [0.1, 0.2, 0.3]
    Z_DOWN: 1
    LOAD_CHECKPOINT: False

LOSS:
  TYPE: CE

TRAIN:
    ENABLE: False
    OPTIMIZER: ADAM
    LR: 1.E-4
    BATCH_SIZE: 1
    EPOCHS: 500
    PATIENCE: 20

TEST:
    ENABLE: True
    AUGMENTATION: False

    DET_LOCAL_MAX_COORDS: True
    DET_MIN_TH_TO_BE_PEAK: [0.1]
    DET_VOXEL_SIZE: (0.4,0.4,2)
    DET_TOLERANCE: [10]
    DET_MIN_DISTANCE: [10]

    STATS:
        PER_PATCH: True
        MERGE_PATCHES: True
        FULL_IMG: False

PATHS:
    CHECKPOINT_FILE: './brainbow_3d_detection/model_weights_unet_3d_detection_P14_big_1.h5'

Script

from config.config import Config
from engine.engine import Engine

job_identifier = '1'
dataroot = './'
jobdir = './'

# Build the default config and merge the job-specific YAML on top
cfg = Config(jobdir,
             job_identifier=job_identifier,
             dataroot=dataroot)

cfg._C.merge_from_file('./brainbow_3d_detection/brainbow_3d_detection.yaml')
cfg.update_dependencies()
cfg = cfg.get_cfg_defaults()

# Run inference only (TRAIN.ENABLE is False in the config)
engine = Engine(cfg, job_identifier)  # had an error, see issue #21
engine.test()

Check loaded cfg

from pprint import pprint
>>> pprint(cfg)

{'AUGMENTOR': {'AFFINE_MODE': 'reflect',
               'AUG_NUM_SAMPLES': 10,
               'AUG_SAMPLES': True,
               'BRIGHTNESS': False,
               'BRIGHTNESS_EM': False,
               'BRIGHTNESS_EM_FACTOR': (0.1, 0.3),
               'BRIGHTNESS_EM_MODE': '3D',
               'BRIGHTNESS_FACTOR': (0.1, 0.3),
               'BRIGHTNESS_MODE': '3D',
               'CBLUR_DOWN_RANGE': (2, 8),
               'CBLUR_INSIDE': True,
               'CBLUR_SIZE': (0.2, 0.4),
               'CHANNEL_SHUFFLE': False,
               'CMIX_SIZE': (0.2, 0.4),
               'CNOISE_NB_ITERATIONS': (1, 3),
               'CNOISE_SCALE': (0.05, 0.1),
               'CNOISE_SIZE': (0.2, 0.4),
               'CONTRAST': False,
               'CONTRAST_EM': False,
               'CONTRAST_EM_FACTOR': (0.1, 0.3),
               'CONTRAST_EM_MODE': '3D',
               'CONTRAST_FACTOR': (-0.3, 0.3),
               'CONTRAST_MODE': '3D',
               'COUT_APPLY_TO_MASK': False,
               'COUT_CVAL': 0,
               'COUT_NB_ITERATIONS': (1, 3),
               'COUT_SIZE': (0.05, 0.3),
               'CUTBLUR': False,
               'CUTMIX': False,
               'CUTNOISE': False,
               'CUTOUT': False,
               'DA_PROB': 1.0,
               'DRAW_GRID': True,
               'DROPOUT': False,
               'DROP_RANGE': (0, 0.2),
               'ELASTIC': False,
               'ENABLE': True,
               'E_ALPHA': (12, 16),
               'E_MODE': 'constant',
               'E_SIGMA': 4,
               'GAMMA_CONTRAST': False,
               'GC_GAMMA': (0.5, 1.5),
               'GRAYSCALE': True,
               'GRIDMASK': False,
               'GRID_D_RANGE': (0.4, 1),
               'GRID_INVERT': False,
               'GRID_RATIO': 0.6,
               'GRID_ROTATE': 1,
               'G_BLUR': False,
               'G_SIGMA': (1.0, 2.0),
               'HFLIP': False,
               'MB_KERNEL': (3, 7),
               'MEDIAN_BLUR': False,
               'MISALIGNMENT': False,
               'MISSING_PARTS': False,
               'MISSP_ITERATIONS': (10, 30),
               'MOTB_K_RANGE': (8, 12),
               'MOTION_BLUR': False,
               'MS_DISPLACEMENT': 16,
               'MS_ROTATE_RATIO': 0.5,
               'RANDOM_CROP_SCALE': 1,
               'RANDOM_ROT': False,
               'RANDOM_ROT_RANGE': (-180, 180),
               'ROT90': False,
               'SHEAR': False,
               'SHEAR_RANGE': (-20, 20),
               'SHIFT': False,
               'SHIFT_RANGE': (0.1, 0.2),
               'SHUFFLE_TRAIN_DATA_EACH_EPOCH': True,
               'SHUFFLE_VAL_DATA_EACH_EPOCH': False,
               'VFLIP': False,
               'ZFLIP': False,
               'ZOOM': False,
               'ZOOM_RANGE': (0.8, 1.2)},
 'DATA': {'CHANNELS': 'B',
          'CHANNEL_WEIGHTS': (1, 0.2),
          'CHECK_GENERATORS': False,
          'CHECK_MW': True,
          'CONTOUR_MODE': 'thick',
          'EXTRACT_RANDOM_PATCH': False,
          'MW_OPTIMIZE_THS': False,
          'MW_TH1': 0.2,
          'MW_TH2': 0.1,
          'MW_TH3': 0.3,
          'MW_TH4': 1.2,
          'MW_TH5': 1.5,
          'PATCH_SIZE': (256, 256, 20, 3),
          'PROBABILITY_MAP': False,
          'REFLECT_TO_COMPLETE_SHAPE': True,
          'REMOVE_BEFORE_MW': True,
          'REMOVE_SMALL_OBJ': 30,
          'ROOT_DIR': './',
          'TEST': {'ARGMAX_TO_OUTPUT': False,
                   'BINARY_MASKS': './ChroMS/2021-03-24 P14 6520-2 Mcol '
                                   'cortex/test/x/../bin_mask',
                   'CHECK_DATA': True,
                   'INSTANCE_CHANNELS_DIR': './ChroMS/2021-03-24 P14 6520-2 '
                                            'Mcol cortex/test/x_B_thick',
                   'INSTANCE_CHANNELS_MASK_DIR': '_B_thick',
                   'IN_MEMORY': True,
                   'LOAD_GT': False,
                   'MASK_PATH': '',
                   'MEDIAN_PADDING': False,
                   'OVERLAP': (0, 0, 0),
                   'PADDING': (16, 16, 0),
                   'PATH': './ChroMS/2021-03-24 P14 6520-2 Mcol cortex/test/x',
                   'RESOLUTION': (-1,),
                   'USE_VAL_AS_TEST': False},
          'TRAIN': {'CHECK_CROP': False,
                    'CHECK_DATA': True,
                    'INSTANCE_CHANNELS_DIR': 'path_to_train_data_B_thick',
                    'INSTANCE_CHANNELS_MASK_DIR': 'path_to_train_data_gt_B_thick',
                    'IN_MEMORY': True,
                    'MASK_PATH': 'path_to_train_data_gt',
                    'OVERLAP': (0, 0, 0),
                    'PADDING': (0, 0, 0),
                    'PATH': 'path_to_train_data',
                    'REPLICATE': 0,
                    'RESOLUTION': (-1,)},
          'VAL': {'BINARY_MASKS': './val/x/../bin_mask',
                  'CROSS_VAL': False,
                  'CROSS_VAL_FOLD': 1,
                  'CROSS_VAL_NFOLD': 5,
                  'FROM_TRAIN': True,
                  'INSTANCE_CHANNELS_DIR': './val/x_B_thick',
                  'INSTANCE_CHANNELS_MASK_DIR': './val/y_B_thick',
                  'IN_MEMORY': True,
                  'MASK_PATH': './val/y',
                  'MEDIAN_PADDING': False,
                  'OVERLAP': (0, 0, 0),
                  'PADDING': (0, 0, 0),
                  'PATH': './val/x',
                  'RANDOM': True,
                  'RESOLUTION': (-1,),
                  'SPLIT_TRAIN': 0.1},
          'W_BACKGROUND': 0.06,
          'W_FOREGROUND': 0.94},
 'LOSS': CfgNode({'TYPE': 'CE'}),
 'MODEL': {'ACTIVATION': 'elu',
           'ARCHITECTURE': 'unet',
           'BATCH_NORMALIZATION': False,
           'DEPTH': 12,
           'DROPOUT_VALUES': [0.1, 0.2, 0.3],
           'EMBED_DIM': 768,
           'FEATURE_MAPS': [36, 48, 64],
           'KERNEL_INIT': 'he_normal',
           'LAST_ACTIVATION': 'sigmoid',
           'LOAD_CHECKPOINT': False,
           'MLP_HIDDEN_UNITS': [2048, 1024],
           'NUM_HEADS': 6,
           'N_CLASSES': 1,
           'OUT_DIM': 1,
           'SPATIAL_DROPOUT': False,
           'TOKEN_SIZE': 16,
           'Z_DOWN': 1},
 'PATHS': {'CHARTS': './results/1/charts',
           'CHECKPOINT': './h5_files',
           'CHECKPOINT_FILE': './brainbow_3d_detection/model_weights_unet_3d_detection_P14_big_1.h5',
           'CROP_CHECKS': './results/1/check_crop',
           'DA_SAMPLES': './results/1/aug',
           'GEN_CHECKS': './results/1/gen_check',
           'GEN_MASK_CHECKS': './results/1/gen_mask_check',
           'LOSS_WEIGHTS': './results/1/loss_weights',
           'MAP_CODE_DIR': '',
           'MAP_H5_DIR': './results/1/mAP_h5_files',
           'PROB_MAP_DIR': './prob_map',
           'PROB_MAP_FILENAME': 'prob_map.npy',
           'RESULT_DIR': {'DET_LOCAL_MAX_COORDS_CHECK': './results/1/per_image_local_max_check',
                          'FULL_IMAGE': './results/1/full_image',
                          'FULL_POST_PROCESSING': './results/1/full_post_processing',
                          'PATH': './results/1',
                          'PER_IMAGE': './results/1/per_image',
                          'PER_IMAGE_INSTANCES': './results/1/per_image_instances',
                          'PER_IMAGE_INST_VORONOI': './results/1/per_image_instances_voronoi',
                          'PER_IMAGE_POST_PROCESSING': './results/1/per_image_post_processing'},
           'TEST_FULL_GT_H5': 'h5',
           'TEST_INSTANCE_CHANNELS_CHECK': './results/1/test_instance_channels',
           'TRAIN_INSTANCE_CHANNELS_CHECK': './results/1/train_instance_channels',
           'VAL_INSTANCE_CHANNELS_CHECK': './results/1/val_instance_channels',
           'WATERSHED_DIR': './results/1/watershed'},
 'PROBLEM': CfgNode({'TYPE': 'DETECTION', 'NDIM': '3D'}),
 'SYSTEM': CfgNode({'NUM_GPUS': 1, 'NUM_CPUS': 1, 'SEED': 0}),
 'TEST': {'APPLY_MASK': False,
          'AUGMENTATION': False,
          'DET_LOCAL_MAX_COORDS': True,
          'DET_MIN_DISTANCE': [10],
          'DET_MIN_TH_TO_BE_PEAK': [0.1],
          'DET_TOLERANCE': [10],
          'DET_VOXEL_SIZE': (0.4, 0.4, 2),
          'ENABLE': True,
          'EVALUATE': True,
          'MAP': False,
          'MATCHING_SEGCOMPARE': False,
          'MATCHING_STATS': False,
          'MATCHING_STATS_THS': [0.3, 0.5, 0.75],
          'POST_PROCESSING': {'BLENDING': False,
                              'YZ_FILTERING': False,
                              'YZ_FILTERING_SIZE': 5,
                              'Z_FILTERING': False,
                              'Z_FILTERING_SIZE': 5},
          'REDUCE_MEMORY': False,
          'STATS': {'FULL_IMG': False,
                    'MERGE_PATCHES': True,
                    'PER_PATCH': True},
          'VERBOSE': True,
          'VORONOI_ON_MASK': False},
 'TRAIN': {'BATCH_SIZE': 1,
           'CHECKPOINT_MONITOR': 'val_loss',
           'EARLYSTOPPING_MONITOR': 'val_loss',
           'ENABLE': False,
           'EPOCHS': 500,
           'LR': 0.0001,
           'LR_SCHEDULER': CfgNode({'ENABLE': False, 'NAME': ''}),
           'OPTIMIZER': 'ADAM',
           'PATIENCE': 20}}

Error msg

Loading model weights from h5_file: ./brainbow_3d_detection/model_weights_unet_3d_detection_P14_big_1.h5

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [16], in <cell line: 1>()
----> 1 engine.test()

File ~/workspace/BiaPy/engine/engine.py:208, in Engine.test(self)
    205 def test(self):
    207     print("Loading model weights from h5_file: {}".format(self.cfg.PATHS.CHECKPOINT_FILE))
--> 208     self.model.load_weights(self.cfg.PATHS.CHECKPOINT_FILE)
    210     image_counter = 0
    211     if self.cfg.TEST.POST_PROCESSING.BLENDING or self.cfg.TEST.POST_PROCESSING.YZ_FILTERING or \
    212        self.cfg.TEST.POST_PROCESSING.Z_FILTERING:

File ~/miniconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:2234, in Model.load_weights(self, filepath, by_name, skip_mismatch, options)
   2231   hdf5_format.load_weights_from_hdf5_group_by_name(
   2232       f, self.layers, skip_mismatch=skip_mismatch)
   2233 else:
-> 2234   hdf5_format.load_weights_from_hdf5_group(f, self.layers)

File ~/miniconda3/envs/EM_tools/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py:685, in load_weights_from_hdf5_group(f, layers)
    683 layer_names = filtered_layer_names
    684 if len(layer_names) != len(filtered_layers):
--> 685   raise ValueError('You are trying to load a weight file '
    686                    'containing ' + str(len(layer_names)) +
    687                    ' layers into a model with ' + str(len(filtered_layers)) +
    688                    ' layers.')
    690 # We batch weight value assignments in a single backend call
    691 # which provides a speedup in TensorFlow.
    692 weight_value_tuples = []

ValueError: You are trying to load a weight file containing 18 layers into a model with 13 layers.
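The mismatch (18 layers in the file vs. 13 in the model) suggests the architecture built from this config (e.g. FEATURE_MAPS: [36, 48, 64]) is shallower than the one that produced the checkpoint. A small diagnostic, assuming the checkpoint follows the standard Keras HDF5 layout with a top-level layer_names attribute (the helper name is mine):

```python
import h5py

def checkpoint_layer_names(h5_path):
    """List the layer names stored in a Keras .h5 weight file, to compare
    against [l.name for l in model.layers] of the instantiated model."""
    with h5py.File(h5_path, "r") as f:
        return [n.decode("utf8") if isinstance(n, bytes) else str(n)
                for n in f.attrs["layer_names"]]
```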

pypi / conda package

Hi @danifranco and @iarganda ,

BiaPy looks fantastic. Congratulations!

I was just wondering if there are any plans to make the installation easier and more reproducible by uploading biapy to PyPI and/or conda-forge?

At the moment, the installation instructions say to download the source with git clone https://github.com/danifranco/BiaPy.git and then install it using pip install --editable . Thus, if I download it today and again next year, I might retrieve different code, which could potentially break my downstream analysis. If I could run pip install biapy==3.0, I would retrieve the same code in both cases.

I'd be happy to actively support you with this. In case you need assistance, just let me know!

Best,
Robert

Dockerfile build fails: base image missing

Hi Daniel,

Building the Docker image from utils/env/Dockerfile fails due to the missing base image nvidia/cuda:11.4.0-base-ubuntu20.04.
I could build a Docker image for BiaPy after switching to an NVIDIA base image with TF2 installed.

Could you kindly test if the existing Docker build works at your end?
Happy to share my Dockerfile via pull request if it would be of benefit.

Best,
Samia
