
AICS Transfer Function


Python package for building 3D computational transfer function models for light microscopy images.


Quick Start

To build a transfer function from a source domain (e.g., low-resolution images) to a target domain (e.g., high-resolution images), we assume the training data (pairs of images in the source and target domains) have been prepared by the registration step (see: https://github.com/AllenCell/aics_tf_registration). After that, there are three main steps: (1) calculating intensity normalization parameters, (2) training, and (3) testing.

(1) intensity normalization parameters

python scripts/pre_process_calc.py --source_domain /path/to/source/domain/data --target_domain /path/to/target/domain/data

The parameters will be printed to the command line.

(2) training

Take the intensity normalization parameters calculated in step 1 and prepare a training configuration file (Example). In most cases, you only need to update the normalization parameters, data paths, model save path, experiment name, etc. Then run

TF_run --config /path/to/training/config.yaml --model train

(3) testing

The testing step can be done in three different ways:

  • validation: both source domain images and target domain images (ground truth) are available. This mode can be used to validate model performance. Example configuration file. Then, TF_run --config /path/to/validation/config.yaml --model validation
  • inference: only source domain images are available. This mode is used to make predictions on new data. Example configuration file. Then, TF_run --config /path/to/inference/config.yaml --model inference
  • API for applying the transfer function on a numpy array: this is useful when you want to use the transfer function as part of a larger workflow. Demo Jupyter Notebook

Note about parameters in the configuration yaml file: In general, only the datapath and normalization sections need to change, whether for training or testing.

  • datapath: the directories of the source domain images and target domain images (no target is needed when doing inference), as well as the directory to save predictions (only when testing).
  • normalization: only simple_norm using middle_otsu is currently supported. simple_norm refers to intensity normalization by rescaling the intensity into [m - a * s, m + b * s], where m and s are the mean intensity and standard deviation of all "valid" voxels. Here, "valid" voxels are the middle chunk of the image stack, where the middle chunk is estimated by applying Otsu thresholding to roughly identify where the signals are. This reduces the impact of empty z-slices near the bottom and top of the stack, and is also where the name middle_otsu comes from.
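The middle_otsu idea above can be sketched in a few lines of numpy. Note this is an illustrative re-implementation of the description, not the package's actual code; in particular, the per-slice signal-fraction heuristic used to pick the middle chunk is an assumption.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Plain Otsu thresholding on a flat array of intensities."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w1 = np.cumsum(hist)                            # voxels at or below each bin
    w2 = w1[-1] - w1                                # voxels above each bin
    csum = np.cumsum(hist * centers)
    m1 = csum / np.maximum(w1, 1e-12)               # mean of the lower class
    m2 = (csum[-1] - csum) / np.maximum(w2, 1e-12)  # mean of the upper class
    between = w1 * w2 * (m1 - m2) ** 2              # between-class variance
    return centers[np.argmax(between)]

def simple_norm_middle_otsu(stack, a=2.0, b=2.0):
    """Sketch of simple_norm with middle_otsu voxel selection.

    stack: 3D array (z, y, x). Otsu thresholding roughly locates the
    z-slices that contain signal (the "middle chunk"); the mean m and
    standard deviation s are computed on that chunk only, and the whole
    stack is rescaled so [m - a*s, m + b*s] maps onto [0, 1].
    """
    thresh = otsu_threshold(stack.ravel())
    # fraction of above-threshold voxels in each z-slice
    frac = (stack > thresh).reshape(stack.shape[0], -1).mean(axis=1)
    # heuristic (assumption): slices with a substantial signal fraction
    # bound the middle chunk
    signal_z = np.where(frac > 0.5 * frac.max())[0]
    middle = stack[signal_z.min(): signal_z.max() + 1]
    m, s = middle.mean(), middle.std()
    lo, hi = m - a * s, m + b * s
    return np.clip((stack - lo) / (hi - lo), 0.0, 1.0)
```

On a stack with empty top and bottom slices, the empty slices contribute nothing to m and s, so the rescaling is driven by the signal-bearing middle chunk only.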

Besides the above two, there is one section (called save) specific to training. Users can specify where to save the model (results_folder), whether to save example predictions periodically (save_training_inspections), and, if so, how frequently to save them (print_freq). It is also recommended to set save_latest_freq (how frequently to save the model) to the same value as print_freq, so that the example predictions you observe match the checkpoints being saved.

One last important parameter is path under the load_trained_model section. If it is set during training, training will start from that model as the initial model. If it is set during testing, it is the path to the model you want to apply.
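Pulling the sections above together, a configuration file might be organized along these lines. This is a hedged sketch: the section and key names are the ones mentioned in this README, but the exact nesting and the values shown are illustrative assumptions, not a verified schema; refer to the linked example configuration files for the real layout.

```yaml
# Illustrative sketch only -- section/key names from this README;
# values and exact nesting are assumptions, not the verified schema.
datapath:
  source: /path/to/source/domain/data
  target: /path/to/target/domain/data     # not needed for inference
  prediction: /path/to/save/predictions   # testing only
normalization:
  method: simple_norm                     # only simple_norm with middle_otsu is supported
  a: 2.0                                  # rescale into [m - a*s, m + b*s]
  b: 2.0
save:                                     # training only
  results_folder: /path/to/save/model
  save_training_inspections: true
  print_freq: 500                         # how often to save example predictions
  save_latest_freq: 500                   # recommended: same as print_freq
load_trained_model:
  path: /path/to/initial/or/trained/model
```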

(4) Auto-Alignment module

The Auto-Alignment (AA) module is an optional training step designed to improve model performance when the training pairs are known to be prone to mis-alignment. To use the AA module, we assume a regular training run (see section (2) above) has finished and the trained model is saved at /path/to/basic/model. The AA module has two stages: estimating the mis-alignment, and fine-tuning the model after adjusting the training data with the estimated mis-alignment.

Stage 1: Update the configuration file (example), especially setting path (under load_trained_model) to /path/to/basic/model (i.e., the model trained with the regular steps) and using the same normalization parameters, data paths, and model saving path as when training the basic model. Notice that model becomes stn instead of pix2pix, and there are more parameters compared to regular training. In most cases, these new settings do not need to change. Then, run TF_run --model train --config /path/to/new/AA/config_stage_1.yaml. After training, a new file offset.log will be generated in the results_folder.

Stage 2: Update the configuration file (example), especially setting path (under load_trained_model) to /path/to/stage_1/model (i.e., the model from the previous training stage), setting the new parameter readoffsetfrom to /path/to/stage_1/offset.log, and using the same normalization parameters, data paths, and model saving path as before. In most cases, the other settings do not need to change. Then, run TF_run --model train --config /path/to/new/AA/config_stage_2.yaml. After training, the new model in results_folder can be used for inference.
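The two AA stages differ from the basic training configuration mainly in the handful of keys named above. As a rough sketch (key names from this README; the layout and everything else is an assumption, so consult the linked example files):

```yaml
# Stage 1 (sketch): switch the model type and warm-start from the basic model
model: stn                        # instead of pix2pix
load_trained_model:
  path: /path/to/basic/model
---
# Stage 2 (sketch): warm-start from the stage-1 model and feed in the offsets
model: stn
load_trained_model:
  path: /path/to/stage_1/model
readoffsetfrom: /path/to/stage_1/offset.log
```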

Installation

Stable Release: pip install aics_transfer_function
Development Head: pip install git+https://github.com/AllenCell/aics_transfer_function.git

Documentation

For full package documentation please visit AllenCell.github.io/aics_transfer_function.

Development

See CONTRIBUTING.md for information related to developing the code.

The implementation of this repo was partially inspired by https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix. A few core functions were reused.

Free software: Allen Institute Software License

