na-mic / projectweek

Website for NA-MIC Project Weeks

Home Page: https://projectweek.na-mic.org

Ruby 0.15% HTML 7.70% Jupyter Notebook 5.41% CMake 9.48% Python 62.67% C++ 13.36% SCSS 0.35% Batchfile 0.89%
3d-slicer reproducible-research international-conference medical-imaging open-healthcare open-source hackathon

projectweek's People

Contributors

acetylsalicyl, adamrankin, aliciaposediezdelalastra, bextia, carlos-luque, cpinter, deepakri201, dgmato, drouin-simon, fedorov, franklinwk, jcfr, lassoan, lorifranke, mariannaj, marilolamacbioidi, mschumak, narteagam, pieper, piiq, punzo, pzaffino, rafaelpalomar, rbumm, sarafv, sjh26, smrolfe, spujol, sunderlandkyl, tkapur

projectweek's Issues

Project: AR in Slicer

Category

VR/AR and Rendering

Key Investigators

  • Alicia Pose Díez de la Lastra (Universidad Carlos III de Madrid, Madrid, Spain) - [On site, Presenter]
  • Simon Drouin (École de Technologie Supérieure , Montreal , Canada)

Project Description

Microsoft HoloLens 2 has proven to be an excellent device for many clinical applications. It is mainly used to display 3D patient-related virtual information overlaid on the real world. However, its processing capacity is quite limited, which makes developing complex applications that require medical image processing difficult.

A good solution is to perform the heavy computations in specialized software on a computer (e.g. 3D Slicer) and send the results in real time to HoloLens 2, so that the headset can focus solely on visualization.
To date, there has been a lack of software infrastructure to connect 3D Slicer to any augmented reality (AR) device.

During the last year, Universidad Carlos III de Madrid (Madrid, Spain) and the Perk Lab at Queen's University have worked together to develop a novel approach to connecting Microsoft HoloLens 2 and 3D Slicer using OpenIGTLink.

The results of that work are publicly available at this GitHub repository.

The current solution is a three-element system composed of a Microsoft HoloLens 2 headset, the Unity engine, and the 3D Slicer platform.
The HoloLens 2 application is not built directly on the device; instead, it is streamed from Unity in real time using Holographic Remoting.


Objective

Evaluate the transferability of the aforementioned project to other AR devices. Specifically, we will focus on the Varjo XR-3 headset.
[Image: Varjo XR-3 headset]

Approach and Plan

  1. Connect Varjo headset to Unity.
  2. Find a way to remotely render information from Unity to the headset.
  3. 3D Slicer creates an OpenIGTLink server (see the sketch after this list).
  4. Unity, containing the AR application, creates an OpenIGTLink client that connects to the server.
  5. Currently, when the application is executed in the Unity editor, it starts sending and receiving messages from 3D Slicer. Simultaneously, it wirelessly streams the app to Microsoft HoloLens 2 using Holographic Remoting. Try to replicate the same with Varjo.
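
For reference, here is a minimal sketch of the Slicer side of step 3, assuming the SlicerOpenIGTLink extension is installed; the port and the transform node name are illustrative, not part of the current implementation.

import slicer

# Create an OpenIGTLink server listening on the default port (18944)
connector = slicer.vtkMRMLIGTLConnectorNode()
slicer.mrmlScene.AddNode(connector)
connector.SetTypeServer(18944)
connector.Start()

# Stream a transform to any connected client (e.g. the Unity AR app)
transformNode = slicer.mrmlScene.AddNewNodeByClass(
    "vtkMRMLLinearTransformNode", "ImageToReference")  # hypothetical name
connector.RegisterOutgoingMRMLNode(transformNode)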

Progress and Next Steps

So far, everything works for HoloLens 2. Our current application transfers geometrical transform and image messages between the platforms.
It displays CT reslices of a patient in the AR device. The user wearing the glasses can manipulate the CT plane to see different perspectives.
The application was built for pedicle screw placement planning.

[Image: 20221213_161232_HoloLens]

Our main goal for this week is to replicate the exact same application in the new device.

Illustrations

No response

Background and References

Check out our app in this GitHub repository.
This repository contains all the resources and code needed to replicate our work on your computer.

Transfer of geometrical transforms from HoloLens 2 to 3D Slicer:

[Animation: MovingSpine_GIF]

Transfer of images from 3D Slicer to HoloLens 2:

[Animation: MovingCT_GIF]

Speed up deployment

Is your feature request related to a problem? Please describe.

After integrating updates into the main branch, it takes ~10 minutes to fully deploy the updated website.

[Images: workflow overview, upload-artifact step, deploy step]

Describe the solution you'd like

Some of the large files copied during each deployment could be moved into a "Resource" release.

Files associated with Project Weeks 28, 35, 37, and 38 take close to 0.5 GB.


$ du -ah . | sort -hr | head -n 20
1003M	.
244M	./PW38_2023_GranCanaria/Projects
244M	./PW38_2023_GranCanaria
174M	./PW37_2022_Virtual
173M	./PW37_2022_Virtual/Projects
136M	./PW28_2018_GranCanaria
121M	./PW28_2018_GranCanaria/Projects
113M	./PW35_2021_Virtual/Projects
113M	./PW35_2021_Virtual
100M	./PW38_2023_GranCanaria/Projects/SlicerLiver
100M	./PW37_2022_Virtual/Projects/SlicerLiver
83M	./PW33_2020_GranCanaria
72M	./PW31_2019_Boston
70M	./PW33_2020_GranCanaria/Projects
69M	./PW30_2019_GranCanaria
62M	./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration
59M	./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration/US-CTAlignment.gif
54M	./PW30_2019_GranCanaria/Projects
52M	./PW31_2019_Boston/Projects
44M	./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration
$ find . -type f -printf '%s %p\n'| sort -nr | head -20 | while IFS= read -r line; do
   size=$(echo $line | cut -d" " -f1);
   file=$(echo $line | cut -d" " -f2);
   printf $size | numfmt --to=iec;
   echo " $file";
done
59M ./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration/US-CTAlignment.gif
33M ./PW38_2023_GranCanaria/Projects/SlicerLiver/distance-tumor.webm
33M ./PW37_2022_Virtual/Projects/SlicerLiver/distance-tumor.webm
32M ./PW38_2023_GranCanaria/Projects/SlicerLiver/planning.webm
32M ./PW37_2022_Virtual/Projects/SlicerLiver/planning.webm
24M ./PW28_2018_GranCanaria/Projects/3DViewsLinking/myimage.gif
24M ./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration/TEEV2+PCOUV.gif
23M ./PW37_2022_Virtual/Projects/StreamlinedROIAnnotationTool/FinalROITool_1.gif
21M ./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration/TEEV2PCOUV-2.gif
20M ./PW38_2023_GranCanaria/Projects/SlicerLiver/distance-vessels.webm
20M ./PW37_2022_Virtual/Projects/SlicerLiver/distance-vessels.webm
20M ./PW30_2019_GranCanaria/Projects/Data-glove_for_virtual_operations/20190201_095221.gif
16M ./PW31_2019_Boston/Breakouts/DataManagement/XNAT
15M ./PW38_2023_GranCanaria/Projects/MONAILabel2bundle/monai_bundle_vs_total_seg_spleen.gif
15M ./PW33_2020_GranCanaria/Projects/ClubFoot/Models/stage3.vtk
15M ./PW31_2019_Boston/Projects/ClubfootCasts/Models/stage3.vtk
14M ./PW38_2023_GranCanaria/Projects/MONAILabel2bundle/monai_bundle_vs_total_seg_idc.gif
14M ./PW37_2022_Virtual/Projects/SlicerTMS/tms_vis.gif
14M ./PW38_2023_GranCanaria/Projects/KaapanaFastViewingAndTaggingOfDICOMImages/NA-MIC.gif
14M ./PW38_2023_GranCanaria/Projects/SlicerLiver/liver_resection.mp4
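
One possible mechanism, sketched with the GitHub CLI; the gh tool and a release tag named "resources" are assumptions, not an agreed design:

$ gh release create resources --title "Website resources" --notes "Large media served outside the deployment artifact"
$ gh release upload resources PW35_2021_Virtual/Projects/US_CT_VertebraRegistration/US-CTAlignment.gif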

Project: 3D Slicer Internationalization

Category

Infrastructure

Key Investigators

  • Sonia Pujol (Brigham and Women's Hospital, Harvard Medical School, USA)
  • Steve Pieper (Isomics Inc., USA)
  • Andras Lasso (Queen's University, Canada)

Project Description

The goal of the project is to facilitate access to 3D Slicer in non-English speaking countries and foster global community engagement.

Objective

To identify members of the global Slicer community interested in new Slicer activities in their language

Approach and Plan

Slicer Internationalization Breakout session:

  • Monday, June 12, 2-4 pm EST

Daily Slicer internationalization sessions with members of the Slicer community

  • Tuesday, June 13, 9-11 am EST
  • Wednesday, June 14, 10-11 am EST
  • Thursday, June 15, 10-11 am EST

Progress and Next Steps

Illustrations

No response

Background and References

No response

Project: MHub-Slicer Integration

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Leonard Nürnberg (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Dennis Bontempi (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Justin Johnson (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Andrey Fedorov (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Hugo Aerts (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)

Project Description

MHub is a repository of self-contained deep-learning models trained for a wide variety of applications in the medical and medical imaging domain. MHub provides the community with reproducible and transparent AI pipelines that work out of the box as intended by the developers.

As part of our efforts, we developed a first version of a Slicer MHub extension that allows users to run different AI models directly in Slicer without the need to install potentially conflicting dependencies as part of their Slicer Python installation.

Objective

The goal of this project is to polish the extension, publish it, and further explore its potential applications and user feedback to expand the extension's capabilities, address its limitations, and ensure its seamless integration with Slicer.

Approach and Plan

Work on identified issues/enhancements, and collect feedback from the Slicer community.

Progress and Next Steps

No response

Illustrations

image

Background and References

No response

Project: Improve TCIA Browser extension

Category

Infrastructure

Key Investigators

  • Justin Kirby (Frederick National Laboratory for Cancer Research, USA)
  • Adam Li (Georgetown University, USA)

Project Description

The Cancer Imaging Archive (TCIA) is an NCI-funded service that de-identifies and publishes cancer imaging datasets. The imaging data are organized as "collections" or "analysis result" datasets defined by a common disease (e.g. lung cancer), image modality or type (MRI, CT, digital histopathology, etc.), or research focus. Emphasis is placed on providing supporting data related to the images, such as patient outcomes, treatment details, genomics, and expert analyses, where available.

TCIA Browser is an extension that lets users easily download and import TCIA data into 3D Slicer. This project seeks to improve the TCIA Browser extension for 3D Slicer by updating it to leverage TCIA-Utils to access TCIA's APIs.

Objective

The major improvements we'd like to address with TCIA Browser include:

  1. Currently TCIA Browser is using a TCIA API that was deprecated in June 2022. The extension needs to be updated to use the new APIs in order to retrieve the full catalog of data in TCIA.
  2. TCIA Browser lacks the ability to access datasets that require logging in.
  3. Some of the metadata TCIA Browser returns in the UI are rarely populated. We would like to explore updating the metadata fields that describe the subjects and scans to better assist Slicer users in deciding which scans they want to download.
  4. TCIA provides “manifest” files that allow full collection downloads and the ability to use our radiology portal to create custom manifest files. It would be useful to extend TCIA Browser to ingest a manifest file and download the data from it.

Approach and Plan

  1. Identify locations in the code that use the older API to download or query data and update them to leverage TCIA-Utils functions such as downloadSeries(), getCollections(), getPatients(), getStudies() and getSeries() (see the sketch after this list).
  2. Implement a new feature to support logging in to TCIA Browser using the getToken() function in TCIA-Utils.
  3. Review the existing metadata fields in the Browser GUI. Perform queries of the TCIA database to determine how often these fields are populated.
  4. Discuss and agree on other available metadata fields that may be useful to Slicer users. Run queries to find out how often they're populated. Include external sources from NCI's Cancer Research Data Commons that may include genomic, proteomic and clinical data on the same subjects that TCIA hosts.
  5. Update the GUI with a "Download TCIA Manifest" button and leverage the TCIA-Utils downloadSeries() function with the input_type = "manifest" option to pass the path of a *.TCIA manifest file as the series_data parameter.
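
A rough sketch of the intended flow, using the TCIA-Utils function names listed above; the exact signatures and the collection name are assumptions to be checked against the TCIA-Utils documentation.

from tcia_utils import nbia

# Query the catalog through the new APIs
collections = nbia.getCollections()
series = nbia.getSeries(collection="4D-Lung")  # hypothetical collection

# Download by series metadata, or from a *.TCIA manifest file
nbia.downloadSeries(series)
nbia.downloadSeries(series_data="cohort.tcia", input_type="manifest")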

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Project: Create Agatston Cardiac Scoring Module

Category

Quantification and Computation

Key Investigators

  • Curtis Lisle (KnowledgeVis, USA)
  • Andras Lasso (Queen's University, Canada)

Project Description

The algorithm for calculating the Agatston cardiac score (a clinical measure of coronary artery calcification) was previously written by Hans Johnson et al. The script was recently tested by members of the community, but it would be more useful if a Slicer module to run Agatston scoring were available. This project is a start toward creating the module and eventually a Slicer Extension.
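
For orientation, here is a minimal sketch of the standard Agatston rule the module would wrap; for brevity it weights each slice by its peak attenuation, whereas the full algorithm weights each connected lesion separately.

import numpy as np

def agatston_score(slices_hu, pixel_area_mm2):
    """Sum over axial slices of calcified area (mm^2) x density weight."""
    total = 0.0
    for sl in slices_hu:              # each sl: 2D array of HU values
        calcium = sl >= 130           # standard Agatston threshold
        if not calcium.any():
            continue
        peak = sl[calcium].max()
        # Weight: 1 (130-199 HU), 2 (200-299), 3 (300-399), 4 (>=400)
        weight = min(int(peak // 100), 4)
        total += calcium.sum() * pixel_area_mm2 * weight
    return total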

Objective

  1. Start building a Slicer Extension to run the existing Agatston scoring algorithm.

Approach and Plan

  1. Create an Extension stub with the Extension Wizard
  2. Refactor the Python code to fit in the extension
  3. Add GUI elements and a description to guide the user in preparing data
  4. Test the Extension
  5. Work with the Slicer core community to publish the Agatston Cardiac Scoring Extension

Progress and Next Steps

  1. Reviewed the existing algorithm
  2. Acquired a reference cardiac scan with corresponding Agatston score for testing

Illustrations

No response

Background and References

Sample Masked Image as input: https://github.com/lassoan/PublicTestingData/releases/download/data/CardiacAgatstonScore.mrb

Existing Algorithm to refactor:
https://github.com/BRAINSia/CardiacAgatstonMeasures

A recent update to interpreting Agatston scoring:
https://pubs.rsna.org/doi/10.1148/ryct.2021200484

Proposal: Translation/rotation of select points in a list.

Project Description

The goal of this project is to enable creation of synthetic data from landmark transforms.

Given a point list, the user will select points to be operated on. The selected points will be moved independently to create the target landmark set for the transform.

This can currently be done in the Markups module by copying the points to a new list, translating/rotating the points, and copying the point positions back to the original node. However, this process can be tedious and error-prone. We plan to implement this function as an option in the Markup Editor module of the SlicerMorph extension.

This project proposal is related to the forum post here.

Project: Docker based system to assess challenge submissions

Project

Designing a Docker-based system to assess the submissions of challenge participants.

Category

Infrastructure

Presenter Location

Online

Key Investigators

  • Roya Khajavibajestani (Brigham and Women's Hospital, USA)
  • Ron Kikinis (Harvard Medical School, USA)
  • Steve Pieper (Isomics, USA)

Project Description

Our project is focused on developing a Docker-based submission mechanism for challenge participants. To maintain fairness and make sure that the test set is not used in the training process, the test set will not be released to the participants. Instead, participants will be required to containerize their methods using Docker and submit their Docker containers for evaluation.

Docker provides an excellent solution for running algorithms in isolated environments known as containers. In our project, we will leverage Docker to create a container that replicates the participants' pipeline requirements and executes their inference script. By encapsulating the entire environment within a container, we can ensure consistent execution and reproducibility.
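
As an illustration of the input/output mechanism to be designed, here is a hypothetical skeleton of the inference script a participant would containerize; the /input and /output mount points and the file pattern are assumptions, not the final specification.

import argparse
import pathlib
import shutil

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", default="/input")    # mounted read-only by the evaluator
    parser.add_argument("--output", default="/output")  # collected for scoring
    args = parser.parse_args()

    out_dir = pathlib.Path(args.output)
    out_dir.mkdir(parents=True, exist_ok=True)
    for case in sorted(pathlib.Path(args.input).glob("*.nii.gz")):
        # Placeholder: a real submission would run model inference here
        shutil.copy(case, out_dir / case.name)

if __name__ == "__main__":
    main()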

Objective

  1. Create a sample Docker container for submission
  2. Create an evaluation mechanism on the challenge website
  3. Create documentation, guidelines, and tutorials for participants

Approach and Plan

  1. Design the Docker container, input/output mechanism, requirements, and methods to perform inference using a subset of the validation set
  2. Create an evaluation mechanism on the challenge website
  3. Create a sample submission Docker image for the test phase and test it on the challenge website
  4. Create documentation to publish in phase 2 of the challenge

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Project: Deep learning model for B-line detection in lung ultrasound videos using crowdsourced labels

Category

Segmentation / Classification / Landmarking

Presenter Location

In-person

Key Investigators

  • Mike Jin (Brigham and Women's Hospital, USA)
  • Tamas Ungi (Queen's University, Canada)
  • Colton Barr (Queen's University, Canada / Brigham and Women's Hospital, USA)
  • Ameneh Asgari-Targhi (Brigham and Women's Hospital, USA)
  • Tina Kapur (Brigham and Women's Hospital, USA)

Project Description

Automated B-line detection in lung ultrasound videos has been demonstrated before, most recently by Lucassen 2023. However, acquiring the many labels necessary can be a resource-intensive process, limited by the availability of expert clinicians capable of producing high-quality labels. Recently, gamified crowdsourcing with a new quality-control mechanism and built-in learning for labelers has been shown to produce annotations on lung ultrasound videos comparable in quality to those of expert clinicians (with analogous results for EEG and skin lesion classification tasks), which can greatly shorten the time required to acquire high-quality labels for model training. Though these crowd labels have been shown to have expert-level quality, it has yet to be demonstrated whether crowd-produced labels are capable of training high-performance models.

Objective

  1. Train a deep learning model to classify lung ultrasound videos as having B-lines or having no B-lines.

Approach and Plan

  1. Create a data file associating all 3000+ clips with filepath, crowd label, and expert labels (for those that have expert labels).
  2. Adapt the model (ResNet(2+1)D-18 or similar pretrained model) and training procedure used in Lucassen 2023 to train a new model on a new crowd-labeled dataset of 3000+ lung ultrasound videos from 500 patients (see the sketch after this list).
  3. Evaluate the model performance and compare to previously reported model performance for ultrasound video classification of B-line presence.
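
A minimal sketch of the model setup in step 2, assuming torchvision's video models; the clip dimensions are illustrative.

import torch
import torchvision

# ResNet(2+1)D-18 pretrained on Kinetics-400, with a binary head
model = torchvision.models.video.r2plus1d_18(weights="KINETICS400_V1")  # torchvision >= 0.13
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # B-lines vs. no B-lines

clip = torch.randn(1, 3, 16, 112, 112)  # (batch, channels, frames, height, width)
logits = model(clip)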

Progress and Next Steps

  1. De-identified and masked 3000+ lung ultrasound clips
  2. Uploaded 3000+ clips with standard filename format to a GPU cluster.
  3. Crowd-labeled all 3000+ lung ultrasound clips using 193 clips from ~70 patients for crowd training.

Illustrations

No response

Background and References

https://pubmed.ncbi.nlm.nih.gov/37276107/

Project: SystoleOS

Category

Infrastructure

Presenter Location

In-person

Key Investigators

  • Rafael Palomar (Oslo University Hospital, Norway)
  • Steve Pieper (Isomics, USA)

Project Description

Over a span of more than ten years, 3D Slicer has paved the way for cutting-edge biomedical research. Its unprecedented success is pushing the frontiers of research, leading numerous research groups and corporations to recognize 3D Slicer as credible software for designing medical devices. These devices not only have the potential to support routine clinical workflows but may also evolve into marketable products. Although 3D Slicer's development has been largely research-focused, its modular architecture fosters the creation of industrial prototypes.

Systole OS envisions a harmonious integration of 3D Slicer and its associated software, such as the Plus Toolkit, MONAI Label, and others, within a freely accessible, open-source operating system based on GNU/Linux. This aims to facilitate the development and deployment of medical devices.

The following are key features we aim to leverage with Systole OS:

  1. State-of-the-Art Software: Built on the foundation of Gentoo Linux, Systole OS follows a rolling-release model, ensuring continuously up-to-date software.

  2. Easy Installation of Slicer: With Systole OS, installing Slicer and all its necessary dependencies is as easy as executing a single command (e.g., 'emerge sci-medical/slicer').

  3. Modular Slicer: The core installation of 3D Slicer will only encompass essential components to run the application, enabling additional modules to be installed separately as needed (e.g., 'emerge slicer-modules/models').

  4. Source-Based Distribution: Systole OS is derived directly from source code, allowing all packages to be built from source. This gives users the flexibility to make decisions at compile-time, leading to:

    • Improved hardware integration
    • Highly customizable packages (enable/disable certain features)
    • Compatibility with hardware architectures beyond amd64, like ARM and RISC-V.
  5. Extensibility: Systole OS utilizes the Gentoo overlay system, offering the ability to expand the system with your personal overlay or supersede packages supplied by Systole.

Objective

  1. Updating Packages: Ensure the timely update and maintenance of existing packages, targeting specifically the release Slicer-5.3.0.

  2. Integration and Testing Infrastructure: Develop a robust infrastructure that supports seamless integration and rigorous testing to maintain the highest quality standards.

  3. Generation of Containers and VMs: Establish a systematic approach for generating containers and Virtual Machines (VMs) that can effectively support both development and testing processes.

Approach and Plan

  1. Package Assessment: Review the status of existing packages and identify necessary updates for the release Slicer-5.3.0.

  2. Update Planning: Develop a plan and timeline for implementing the necessary updates.

  3. Update Implementation: Carry out the plan to update packages in line with the established timeline.

  4. Kubernetes Infrastructure Setup: Begin the process of setting up a Kubernetes-based infrastructure to support our integration and testing needs.

  5. Testing Protocol Development: With the Kubernetes infrastructure ready, establish systematic protocols for integration and testing to ensure high quality standards.

  6. Container and VM Generation: Implement a systematic approach for creating containers and Virtual Machines (VMs) for development and testing, ensuring this approach is scalable as needed.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Proposal: Extension for recurrent lung infections

Project Description

The clinical problem is that, in our context, we often receive patients who have repeated infectious episodes. When assessing the severity of pulmonary involvement on the X-ray, it is difficult to know the boundary between old and new lesions, especially since patients often lose their previous images.

Thus, with 3D Slicer, by making a comparative study between the old and recent lesions, one could create an extension capable of coloring the zones differently based on thresholding.

This can be of great use to us!

Project: 3D Medical Registration and Segmentation with Elastix and MONAI Label

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Konstantinos Ntatsis (Leiden University Medical Center, the Netherlands)
  • Andres Diaz-Pinto (NVIDIA & King's College London, United Kingdom)

Project Description

This project aims to investigate the application of itk-elastix (a Python wrapping of Elastix) for image registration in combination with MONAI Label for segmentation/classification. Depending on time and people availability, we will work on one or more sub-projects.

Initial sub-project:
We will start by training a single-modality MONAI Label model on Elastix-aligned brain images (T1, T2, FLAIR, etc.) using SynthSeg as the source of annotations. SynthSeg is a TensorFlow-based deep learning segmentation tool for brain MRIs. It consists of a generative network that produces synthetic images and a 3D U-Net trained to do the segmentation. The only input (training data) is the training labels, so no real images are used.

We will use SynthSeg to produce annotations as “ground truth” on a publicly available dataset like BRATS (multimodal + non-healthy brains) or OASIS (temporal/monomodal + healthy brains). Elastix will be used for the co-registration of the different modalities or temporal images and achieve segmentation via registration.

Other possible sub-projects:

  • Extend the whole brain segmentation model available in the Model Zoo: use Elastix to perform affine registration of the data to the MNI305 space.
  • Compare registration performance between cross-modal registration (CT-MRI) and intra-modal registration via synthesised MRI (MRI_syn - MRI): MONAI for the synthesis and Elastix for the registration. What would a suitable dataset be?
  • Train a MONAI Label model for automatic landmark identification in, e.g., lung images (dataset). Landmarks can be used either to assist registration with Elastix, or Elastix can be used to validate the landmark accuracy. 3D Slicer can be used to visualize the landmarks and ease the qualitative evaluation.
  • ... any other idea that is interesting to people, feel free to propose it!

Objective

  1. Working code, Jupyter notebooks, and other artifacts that demonstrate the combination of itk-elastix and MONAI Label. These will be helpful for users who would like to solve similar problems.

Approach and Plan

  1. Configure and run Elastix (see the sketch after this list)
  2. Setup and run MONAI Label
  3. Make sure they work together nicely (e.g. output of Elastix should be suitable for MONAI, or the reverse)
  4. Improve the results (a bit)
  5. Polish and store the code/documentation/results so that they are helpful for future generations
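
A minimal sketch of step 1 with itk-elastix default parameters; file names are placeholders.

import itk

fixed = itk.imread("fixed_T1.nii.gz", itk.F)
moving = itk.imread("moving_T2.nii.gz", itk.F)

# Run elastix with its default parameter maps; custom rigid/affine/bspline
# stages can be supplied via parameter objects instead
registered, transform_parameters = itk.elastix_registration_method(fixed, moving)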

Progress and Next Steps

  1. Preliminary registration of the BRATS dataset. Several details still need to be sorted out.
  2. ...

Illustrations

No response

Background and References

Project: PRISM Volume Renderer – Refactoring and bug fixing

Category

VR/AR and Rendering

Key Investigators

  • Andrey Titov (ÉTS, Canada)
  • Camille Hascoët (ÉTS, Canada)
  • Simon Drouin (ÉTS, Canada)

Project Description

The goal of this project is to enable the development of advanced 3D rendering techniques in Slicer by facilitating access to GPU shaders and enabling GPU-based filtering, through improved shader access and multipass rendering in VTK and Slicer. The PRISM module in Slicer will serve as a test environment for the new capabilities.

PRISM contains a significant amount of legacy code written for Slicer 4.11 that is no longer used. The goal of this project is to simplify the PRISM volume renderer, making it easier to work with, and to remove as many bugs as possible.

Objective

  1. Objective A. Describe what you plan to achieve in 1-2 sentences.

Approach and Plan

  1. Describe specific steps of what you plan to do to achieve the above described objectives.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations


Background and References

https://projectweek.na-mic.org/PW35_2021_Virtual/Projects/PRISM_volume_rendering/

Project: AMPSCZ First Data Release Documentation

Category

VR/AR and Rendering

Key Investigators

  • Sylvain Bouix (ÉTS, Canada)
  • Tina Kapur (BWH, USA)
  • Ameneh Asgari-Targhi (BWH, USA)

Project Description

The AMPSCZ project will have its first public data release in July, and we want to finalize documentation and "customer-facing" material.

Objective

  1. Generate documentation for the AMPSCZ data release.

Approach and Plan

  1. Describe specific steps of what you plan to do to achieve the above described objectives.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Proposal: Defining and Prototyping "Labelmap" Segmentations in DICOM Format

Project Description

The DICOM Segmentation format is used to store image segmentations in DICOM format. Using DICOM Segmentations, which use the DICOM information model and can be communicated over DICOM interfaces, has many advantages when it comes to deploying automated segmentation algorithms in practice. However, DICOM Segmentations are criticized for being inefficient, both in terms of their storage utilization and in terms of the speed at which they can be read and written, in comparison to other widely used segmentation formats within the medical imaging community such as NIfTI and NRRD.

While improvements in tooling may alleviate this to some extent, there appears to be an emerging consensus that changes to the standard are also necessary to allow DICOM Segmentations to compete with other formats. One of the major reasons for poor performance is that in segmentation images containing multiple segments (sometimes referred to as "classes"), each segment must be stored as an independent set of binary frames. This is in contrast to formats like NIfTI and NRRD, which store "labelmap"-style arrays in which a pixel's value represents its segment membership, so that many (non-overlapping) segments can be stored in the same array. While DICOM Segmentation has the advantage that it allows for overlapping segments, in my experience the overwhelming majority of segmentations consist of non-overlapping segments, and thus this representation is very inefficient when there are a large number of segments.
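
To make the overhead concrete, a small numpy illustration (sizes are arbitrary): nine non-overlapping segments stored as one labelmap array versus the one-set-of-binary-frames-per-segment layout that DICOM Segmentation currently requires. DICOM bit-packs binary frames to 1 bit per pixel on disk, but the frame count, and the in-memory cost after unpacking, still scales with the number of segments.

import numpy as np

labelmap = np.random.randint(0, 10, size=(100, 512, 512), dtype=np.uint8)

# Current DICOM Segmentation layout: one stack of binary frames per segment
binary_frames = np.stack([labelmap == s for s in range(1, 10)])

print(labelmap.nbytes)       # 26_214_400 bytes, 100 frames
print(binary_frames.nbytes)  # 235_929_600 bytes unpacked, 900 frames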

The goal of this project is to gather a team of relevant experts to formulate changes to the standard to address some issues with DICOM Segmentation. I propose to focus primarily on "labelmap" style segmentations, but I am open to other suggestions for focus.

The specific goals would be to complete or make significant progress on the following:

  • Formulate changes to the standard to allow for labelmap segmentations (@dclunie)
  • Complete prototype implementations within the highdicom library (of which I am a maintainer), dcmjs (@pieper) and possibly dcmqi (@fedorov )
  • Create example datasets for dissemination to others wishing to implement the changes
  • Begin the process of reaching out to others in the open source community to accelerate other implementations, particularly viewers such as Slicer (@pieper) and OHIF

Open questions:

  • Should we implement a new IOD or a new SegmentationType within the existing Segmentation IOD?
  • Should we implement "instance" segmentations, in which each segment is assumed to be a different instance of the same type and thus need not be described separately, in addition to labelmap-style semantic segmentations?
  • Should we also allow 16 bit pixels to allow for more segments? How does this interact with the choice of new IOD vs new SegmentationType?

Other possible (alternative) topics:

  • Single bit compression to allow for more space-efficient storage
  • Omitting the per-frame functional group (like TILED FULL) for other types of segmentation image.
  • The inefficiency of pydicom in parsing long sequences, such as the per-frame functional groups sequence in segmentations, is a key bottleneck in Python. We could think through how to overcome this.

Relevant team members: @fedorov, @dclunie, @pieper, @hackermd. Please give your feedback to help shape this project!

Project: Histology AI models imported into IDC

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Curtis Lisle, KnowledgeVis (USA)
  • Daniela Schacherer, MEVIS (Germany)
  • David Clunie, PixelMed (USA)
  • Maximillian Fischer, DKFZ (Germany)

Project Description

This project focuses on importing whole slide image (WSI) histology images and trained deep learning models into the Imaging Data Commons for access by others. We have developed tissue-level segmentation models for detecting subtypes of rhabdomyosarcoma (RMS) in whole slides. Our project is releasing WSIs and the corresponding models trained on the slide images.

This project will test reading DICOM-WSI imagery (including compression) and focus on how to write out model segmentation results as DICOM-WSI annotations for upload to IDC. We also have classification and regression models, so we need to decide how to write non-imagery classification results as DICOM, as well.

Objective

  • Write out model segmentation image results as DICOM-WSI Segmentation or Parametric Map objects
  • Test models on sample DICOM-WSI images
  • Determine where and how to store regression and classification model results as DICOM annotations

Approach and Plan

  • Verify the algorithms run on DICOM-WSI source images (including compression)
  • Understand the semantics associated with DICOM Segmentation and Parametric Map objects
  • Write output formatter to generate proper DICOM for single class and multi-class segmentation images

Progress and Next Steps

  • Gather source images in DICOM-WSI format
  • Gather model source and pre-trained weights for inferencing

Illustrations

No response

Background and References

Models wrapped in a Girder 3 web application: https://github.com/knowledgevis/rms_infer_web

Project: Translation/rotation of select points in a list

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Sara Rolfe (Seattle Children's Research Institute, USA)
  • Murat Maga (University of Washington, USA)
  • Gabriella D'Albenzio (Oslo University Hospital, Norway)
  • Rafael Palomar (Oslo University Hospital, Norway)

Project Description

The goal of this project is to facilitate selection and independent manipulation of points in a list.

This can currently be done in the Markups module by copying the points to a new list, translating/rotating the points, and copying the point positions back to the original node. However, this process is tedious and error-prone.

The initial motivation for this project was to simplify creation of synthetic data from landmark transforms by transforming an original set of landmarks into the target landmark set.
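
For context, the kind of operation the new option would perform can be sketched with Slicer's scripted API; the node name and point indices are placeholders.

import numpy as np
import slicer

node = slicer.util.getNode("F")  # hypothetical point list node
points = slicer.util.arrayFromMarkupsControlPoints(node)  # (N, 3), RAS coordinates

selected = [2, 5, 7]                  # indices of the points to move
points[selected] += [10.0, 0.0, 0.0]  # translate the selection 10 mm along R
slicer.util.updateMarkupsControlPointsFromArray(node, points)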

Objective

  1. Discuss overlapping goals between related projects (SlicerMorph, Slicer-Liver)
  2. Develop strategy for implementation

Approach and Plan

Two possible solutions have been discussed for the implementation:

  1. Add as options in the Markups Editor in the SlicerMorph extension: this module currently manages custom point interactions required by the SlicerMorph extension.
  2. Update the Markups module to allow translation/rotation of unlocked points in a list while locked points remain fixed.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

  1. Forum post here discussing the issue and possible solutions.

Project: Defining and Prototyping "Labelmap" Segmentations in DICOM Format

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Chris Bridge (MGH/Harvard, USA)
  • Steve Pieper (Isomics, USA)
  • David Clunie (PixelMed, USA)
  • Andrey Fedorov (BWH/Harvard, USA)

Project Description

The DICOM Segmentation format is used to store image segmentations in DICOM format. Using DICOM Segmentations, which use the DICOM information model and can be communicated over DICOM interfaces, has many advantages when it comes to deploying automated segmentation algorithms in practice. However, DICOM Segmentations are criticized for being inefficient, both in terms of their storage utilization and in terms of the speed at which they can be read and written, in comparison to other widely used segmentation formats within the medical imaging community such as NIfTI and NRRD.

While improvements in tooling may alleviate this to some extent, there appears to be an emerging consensus that changes to the standard are also necessary to allow DICOM Segmentations to compete with other formats. One of the major reasons for poor performance is that in segmentation images containing multiple segments (sometimes referred to as "classes"), each segment must be stored as an independent set of binary frames. This is in contrast to formats like NIfTI and NRRD, which store "labelmap"-style arrays in which a pixel's value represents its segment membership, so that many (non-overlapping) segments can be stored in the same array. While DICOM Segmentation has the advantage that it allows for overlapping segments, in my experience the overwhelming majority of segmentations consist of non-overlapping segments, and thus this representation is very inefficient when there are a large number of segments.

The goal of this project is to gather a team of relevant experts to formulate changes to the standard to address some issues with DICOM Segmentation. We will focus primarily on "Labelmap" style segmentations and issues surrounding frame compression. Other objectives for further discussion include simplifying per-frame metadata. Although we do not speak for the DICOM standards committee, we hope to put forward a complete proposal that can be considered by the committee. Ideally, the proposal will be backed by multiple interoperable implementations of the proposed objects and demonstrations of their value in reducing object size and complexity.

The proposal for this project received a considerable amount of constructive feedback from the community: #643

@pieper @fedorov @dclunie

Objective

  1. Put forward a proposal for changes to the DICOM Segmentation object that addresses the needs of the medical image computing community

Approach and Plan

  1. Gather relevant experts to discuss and appraise potential changes to the DICOM standard for Segmentations
  2. Compile a full proposal based on the resulting consensus from the team
  3. Implement prototypes of the new proposed objects in the highdicom (python) and dcmjs (javascript) libraries
  4. Use the prototype implementations to demonstrate the advantages of the proposed changes on realistic data (e.g. in terms of file size, read/write times)

Progress and Next Steps

  1. Solicited feedback and items for discussion on proposal #643

Illustrations

No response

Background and References

No response

Project: Slicer Pipelines v2

Category

Infrastructure

Key Investigators

  • Harald Scheirich (Kitware, United States)

Project Description

Slicer Pipelines is a framework to support the creation of workflows (pipelines) inside of Slicer. It allows users to chain together a variety of Slicer operations that have pipeline support and to create a module that can then be executed on its own. Pipelines v2 is based on the work that Connor and others did with the ParameterWrapper.

Objective

  1. Adapt the PipelineCaseIterator to the new pipeline architecture

Approach and Plan

  1. Basic refactoring so that PipelineCaseIterator runs with a simple test case
  2. Move from allowing a single input directory to driving input through a CSV file
  3. Adapt the output side of the case iterator to support multiple values
  4. Write output data into a CSV file
  5. Test with different pipelines

Progress and Next Steps

  1. Refactoring has been done; the basic CaseIterator runs with a test pipeline
  2. CSV files can be read to drive input parameters

Illustrations

No response

Background and References

Slicer Pipelines Module Repository: https://github.com/KitwareMedical/SlicerPipelines
Project Week 36: https://projectweek.na-mic.org/PW36_2022_Virtual/Projects/SlicerPipelines/
Project Week 38: https://projectweek.na-mic.org/PW38_2023_GranCanaria/Projects/SlicerPipelines/

Project: Efficient Handling and Progressive Loading of Compressed Multiframe DICOM Images

Category

Cloud / Web

Key Investigators

  • Ozge Yurtsever (Stanford, USA)
  • Emel Alkim (Stanford, USA)

Project Description

Loading compressed multiframe DICOM images as a whole causes frequent browser crashes, particularly on machines running Microsoft Windows. This issue arises because the large file size of the DICOM images exceeds the browser's memory capacity.

The browser's rendering engine attempts to load the entire file into memory; given the significant size of these images, the browser can quickly exhaust its allocated memory, leading to crashes or unresponsive behavior.

This issue affects both ePAD and OHIF with the latest WADO-loader version.

Objective

Initiate a discourse about the methodologies for saving, storing, and reading DICOM data, and explore strategies for optimizing the handling of compressed multiframe images to achieve enhanced efficiency and avoid browser crashing.

Approach and Plan

Instead of loading the entire DICOM file at once, the image loading process can be modified to load the image in smaller chunks or frames progressively. This approach may allow the browser to handle smaller portions of the image, reducing the memory burden and enhancing overall stability.
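
The fix itself belongs in the JavaScript loader stack, but the frame-wise idea can be sketched in Python with pydicom 2.x; the file name is a placeholder.

import pydicom
from pydicom.encaps import generate_pixel_data_frame

ds = pydicom.dcmread("multiframe_us.dcm")
n_frames = int(ds.NumberOfFrames)

# Iterate over the encapsulated (compressed) frames one at a time instead of
# decompressing the entire Pixel Data element in one step
for frame_bytes in generate_pixel_data_frame(ds.PixelData, n_frames):
    pass  # decode and render a single frame here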

Progress and Next Steps

We attempted to adapt a solution approach inspired by the PR linked below. That solution specifically addresses uncompressed images. In our case, we tried a similar method to handle compressed images within the dicom-parser library; unfortunately, the attempted solution did not yield the desired outcome.

PR link: cornerstonejs/cornerstoneWADOImageLoader#454 (comment)
Ticket link: cornerstonejs/dicomParser#248

Illustrations

[Image: crash-image]

Background and References

Unfortunately, the ultrasound images are not de-identified, so we cannot provide sample data yet. We are working on obtaining a dataset.

Related libraries:
https://github.com/cornerstonejs/cornerstoneWADOImageLoader
https://github.com/cornerstonejs/dicomParser

Project: Improving Project Page infrastructure

Category

Infrastructure

Presenter Location

In-person

Key Investigators

  • Sam Horvath (Kitware, USA)
  • Jean-Christophe Fillion-Robin (Kitware, USA)

Project Description

The Project Week team will continue to make improvements to the project page generation process.

Objective

  1. Decrease complexity of project page creation
  2. Increase speed of site deployment

Approach and Plan

No response

Progress and Next Steps

No response

Illustrations

No response

Background and References

No response

Project: AMPSCZ Collaboration Space Tutorials

Category

Quantification and Computation

Key Investigators

  • Sylvain Bouix (ÉTS, Canada)
  • Tina Kapur (BWH, USA)
  • Ofer Pasternak (BWH, USA)
  • Nora Penzel (MGH, USA)
  • Kevin Cho (BWH, USA)
  • Ameneh Asgari-Targhi (BWH, USA)

Project Description

The AMPSCZ project allows consortium researchers to access an AWS WorkSpaces virtual desktop with direct access to the AMPSCZ data lake hosted at the NIMH Data Archive (NDA).
This project will consist of generating R and Python notebooks that illustrate how to access and analyze datasets using this collaboration space.

Objective

  1. Build Python and R notebooks showing how to access and interact with AMPSCZ data

Approach and Plan

  1. Build Python and R notebooks to access the data lake
  2. Build cross-instrument data analyses of tabular data
  3. Build example of loading and inspecting raw non-tabular data (e.g., MRI data with Slicer).

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Project: Tutorials on working with DICOM annotations in pathology whole-slide images

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Daniela Schacherer (Fraunhofer MEVIS, Germany)
  • Chris Bridge (MGH, USA)
  • David Clunie (PixelMed, USA)
  • Curtis Lisle (KnowledgeVis, USA)
  • Maximillian Fischer (DKFZ, Germany)
  • Andrey Fedorov (BWH, USA)

Project Description

This project aims to create tutorials on how to work with DICOM annotations in pathology whole-slide images (WSIs).
At first, we will focus on nuclei annotations stored as DICOM Microscopy Bulk Simple Annotations and compute nuclei density (cellularity) at the tile level from them. The computed cellularity values are then stored as DICOM Parametric Maps.
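
A minimal sketch of the tile-level density computation, assuming the nucleus centroids have already been read from the Bulk Simple Annotations object; the image size, tile size, and stand-in centroids are illustrative.

import numpy as np

width, height = 40_000, 30_000  # WSI dimensions in pixels (illustrative)
tile = 500                      # tile edge length in pixels (illustrative)
centroids = np.random.rand(100_000, 2) * [width, height]  # stand-in centroids

counts, _, _ = np.histogram2d(
    centroids[:, 0], centroids[:, 1],
    bins=[width // tile, height // tile],
    range=[[0, width], [0, height]])
cellularity = counts / counts.max()  # normalized nuclei density per tile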

Objective

  1. Objective A: Have a Colaboratory notebook ready that at least reads DICOM Microscopy Bulk Simple Annotation files (currently from a Google Storage bucket, ideally later from the IDC directly) and computes cellularity values.
  2. Objective B: Encode computed cellularity values as DICOM parametric map that can be stored back to the IDC.

Approach and Plan

  1. Investigate nuclei annotations for plausibility
  2. Read nuclei annotations
  3. Efficiently compute cellularity values
  4. Encode cellularity values as DICOM parametric maps

Progress and Next Steps

  1. Set up / re-used an existing Google Cloud Platform (GCP) project: idc-external-031
  2. Gave key investigators access to the GCP project: if anyone else is interested in being added, please send an e-mail to [email protected].
  3. Currently creating a DICOM store within idc-external-031 containing example images and annotations
  4. Currently working on deployment of Slim using Firebase

Illustrations

No response

Background and References

No response

Project: Optimizing Bundle Size of PolySeg-WASM for Web Applications

Category

Cloud / Web

Presenter Location

Online

Key Investigators

  • Alireza Sedghi (OHIF, Canada)

Project Description

The Institute of Cancer Research (ICR) has created PolySeg-WASM, an extended WASM wrapper for the PerkLab/PolySeg library, including C++ code repurposed from Slicer and SlicerRT.

In last year's project we created the contour segmentation representation for the Cornerstone3D library; this year we want to use PolySeg to convert the contours to closed surfaces.

The ICR repository does the job; however, the bundle is huge (3 MB), which is not optimal for web applications. This project aims to find out how to reduce the bundle size by selectively choosing the VTK dependencies.

Objective

  1. Analyze VTK dependencies: Identify the specific VTK components used in the PolySeg-WASM library that contribute the most to the bundle size in order to determine areas for potential optimization.
  2. Optimize VTK bundle size: Reduce the bundle size of PolySeg-WASM by selectively choosing essential VTK dependencies, excluding or replacing components with lightweight alternatives, while maintaining the required functionality.
  3. Evaluate performance and functionality: Assess the performance and functionality of the optimized PolySeg-WASM library to ensure that the reduction in bundle size does not compromise accuracy or efficiency in converting contours to closed surfaces for web applications.

Approach and Plan

  1. Perform a detailed code analysis to identify the specific VTK components used in PolySeg-WASM.
  2. Measure the size contribution of each VTK component to the overall bundle size of PolySeg-WASM.
  3. Document the findings, including a breakdown of the size contribution of each component.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations


Background and References

PolySEG repo
ICR Wrapper

Proposal: Using large language AI models to invoke Slicer modules and workflows

Project Description

3D Slicer is built with a powerful core to load, transform, store, and manage medical images and derived datasets. Slicer has a catalog of loadable extensions that assist with or automate task-specific workflows. Slicer's web API provides remote access to invoke many of the processing steps that are available through its very complete user interface.

The recent explosion of generative LLMs (large language models) from the AI community has demonstrated that these language models can, in some cases, translate task or problem descriptions into sequences of operations. Wouldn't it be powerful if 3D Slicer could be verbally instructed to perform operations or process datasets as requested? Theoretically, an embedded LLM could be trained on Slicer's modules, including under what circumstances the modules could be applied to transform a Slicer scene as needed to solve a problem presented by the user.

As one example, in Operating Theaters during surgical procedures, the Slicer user interface is hard or impossible to access due to sterility restrictions and other factors. It would be helpful if clinicians could control Slicer's functions through an alternative method than the interactive user interface. For example, "Let me see the lung lesions more clearly" could be translated into increased transparency of the lung segmentation and an orientation repositioning to make a lesion segmentation visible in-situ.

A goal of this project proposal would be to schedule a meeting during Project Week to discuss this idea, assess the level of interest in the Slicer community, discuss early technical approaches, and decide who might be interested in working together to seek funding to pursue this together. Both clinicians with a problem to solve and AI technicians would be invited to participate.

General model registration and merging tool

Category

VR/AR and Rendering

Key Investigators

  • Chi Zhang (Seattle Children's Research Institute, USA)
  • Arthur Porto (Louisiana State University, USA)
  • Sara Rolfe (Seattle Children's Research Institute, USA)
  • Murat Maga (University of Washington, USA)

Project Description

We are working on developing a general-purpose model registration tool in Slicer. At the moment, we have a simple test module (https://github.com/chz31/registration_test) using rigid registration functions (RANSAC + ICP) from Open3D and the new ITK-based ALPACA module. This allows people to test registration for purposes such as ALPACA automated landmarking.

We are thinking about expanding this module into its own system for other purposes related to model registration. One purpose is to register and align models that represent different parts of an object with overlapping areas, and fuse them together. For example, this would allow aligning and fusing models acquired from different angles, such as different parts of an object captured by photogrammetry. It would also enable virtual fossil reconstruction, which is usually done in commercial software such as Geomagic Studio.
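
For reference, the ICP refinement step used in the test module looks roughly like this in Open3D; the file names and distance threshold are placeholders, and the module additionally runs RANSAC-based global registration first.

import open3d as o3d

source = o3d.io.read_point_cloud("part_top.ply")
target = o3d.io.read_point_cloud("part_bottom.ply")

# Point-to-point ICP refinement, assuming a rough initial alignment
# (e.g. from RANSAC-based global registration)
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=1.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # 4x4 rigid transform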

Objective

  1. Develop a general-purpose model registration tool in Slicer, adding more utilities such as a parameter adjustment tab.
  2. Add new functions for other purposes related to model registration. At the moment, we are thinking about how to align models that represent different parts of an object and fuse them together. This could be useful for photogrammetry and virtual fossil reconstruction.

Approach and Plan

  1. Add parameter adjustment tab for the current test version
  2. Merge registered models that represent different parts of an object into one. One way to aid the alignment is to allow users to place a few matching landmarks on two or more models.

Progress and Next Steps

  1. The current testing version is here: https://github.com/chz31/registration_test. It uses rigid registration functions (RANSAC + ICP) from Open3D and the new ITK-based ALPACA module.

Illustrations

[Images: three screenshots, 2023-06-05]

These are models acquired by photogrammetry from two angles. The yellow one has no top, and the red one has no bottom. Rigid registration from Open3D can align them fairly well, though not perfectly.

[Image: screenshot, 2023-06-05]

A sample virtual reconstruction in Geomagic Studio. The skull is missing a part on the right side. The yellow part is the mirror image of the counterpart on the left side.

Background and References

The current testing version is here: https://github.com/chz31/registration_test. It uses rigid registration functions (RANSAC + ICP) from Open3D and the new ITK-based ALPACA module.

ALPACA module (including the ITK version) repository: https://github.com/SlicerMorph/SlicerMorph/tree/master/ALPACA

ALPACA tutorial: https://github.com/SlicerMorph/Tutorials/tree/main/ALPACA

Open3D rigid registration utilized in ALPACA: http://www.open3d.org/docs/release/tutorial/pipelines/global_registration.html

Project: Training AI algorithms on IDC data

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Cosmin Ciausu (Brigham and Women's Hospital, USA)
  • Andrey Fedorov (Brigham and Women's Hospital, USA)

Project Description

Imaging Data Commons provides publicly available cancer imaging data.

Previous works (IDC prostate segmentation, NLST body part regression) demonstrated inference and analysis of AI algorithms on IDC data through several use cases.
Downloading IDC data, converting between imaging file standards, setting up cloud environments, and image pre-processing steps were studied through these inference and analysis use cases.

During this project week, our goal is to develop use cases for training AI algorithms on IDC data. We welcome any Project Week participants who are interested in leveraging IDC data for training (or evaluating) AI algorithms to collaborate with us!

Objective

  1. Leverage IDC data for SOTA segmentation algorithms (nnU-Net, MONAI)
  2. Collaborate with other members to study the feasibility of using IDC data for training AI algorithms.

Approach and Plan

  1. Use the nnU-Net segmentation framework for training prostate segmentation on IDC data (ProstateX/QIN collections).
  2. Expand AI training use cases beyond SOTA algorithms.

Progress and Next Steps

  1. Leverage information gained from applying nnU-Net prostate segmentation inference on several prostate imaging collections to build training pipelines.

Illustrations

No response

Background and References

Project: Localizing 3D Slicer to Spanish and Portuguese

Category

Infrastructure

Key Investigators

  • Sonia Pujol (Brigham and Women's Hospital, Harvard Medical School, USA)
  • Steve Pieper (Isomics Inc., USA)
  • Andras Lasso (Queen's University, Canada)

Project Description

The goal of this project is to empower the biomedical research community in Latin America by localizing 3D Slicer to Spanish and Portuguese and improving tutorial localization infrastructure.

Objective

  1. To identify members of the Latin American community interested in 3D Slicer activities in Spanish and in Portuguese
  2. To run daily translation hackathons at PW39

Approach and Plan

Slicer Internationalization Breakout session:

  • Monday, June 12, 2-4 pm EST

Daily Slicer internationalization sessions with members of the Slicer community:

  • Tuesday, June 13, 9-11 am EST
  • Wednesday, June 14, 10-11 am EST
  • Thursday, June 15, 10-11 am EST

Progress and Next Steps

No response

Illustrations

No response

Background and References

No response

Project: Test project, please ignore

Category

VR/AR and Rendering

Key Investigators

  • Sam Horvath (Kitware, USA)

Project Description

Stuff

Objective

  1. Objective A. Describe what you plan to achieve in 1-2 sentences.

Approach and Plan

  1. Describe specific steps of what you plan to do to achieve the above described objectives.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Project: Tracked ultrasound integration into NousNav, a low-cost neuronavigation system

Category

IGT and Training

Key Investigators

  • Colton Barr (Queen's University / BWH)
  • Sarah Frisken (BWH)
  • Sonia Pujol (BWH)
  • Steve Pieper (Isomics)
  • Tina Kapur (BWH)
  • Tamas Ungi (Queen's University)
  • Sam Horvath (Kitware)

Project Description

NousNav is an ongoing project led by Dr. Alex Golby at Brigham and Women's Hospital to build and disseminate a low-cost neuronavigation system. Built as a 3D Slicer custom app, NousNav uses low-cost optical tracking (OptiTrack Duo) in combination with custom optically-tracked tools and reference arrays to facilitate patient registration, procedure planning, and navigation.

The system is being continually updated based on user feedback. An important next step in development is the integration of tracked ultrasound data.

Objective

  1. Gather user feedback on the current iteration of the system and establish potential next steps for development.
  2. Discuss approaches for integrating tracked ultrasound data into the navigation workflow.
  3. Create a NousNav prototype that includes tracked ultrasound.

Approach and Plan

  1. Set up a demo of the NousNav system for participants to try, and systematically collect user feedback.
  2. Collaborate with colleagues working on tracked neurosurgical ultrasound to establish best practices for integrating ultrasound into the system.
  3. Create a custom build of NousNav with basic tracked ultrasound workflow elements integrated.

Progress and Next Steps

No response

Illustrations

No response

Background and References

No response

Project: Facial expression feature extraction for video interviews

Category

Quantification and Computation

Key Investigators

  • Eduardo Castro (IBM Research, USA)
  • Kevin Cho (BWH, USA)

Project Description

Put together code that 1) performs facial expression feature extraction for video interviews stored on a data aggregation server, and 2) transfers the extracted features to a local directory. It will be based on existing scripts for facial expression feature extraction and an existing data management tool.

Objective

  1. Objective 1: Adapt our existing code for facial expression analysis to extract features through a proper video pipeline, including running this task for upcoming videos on the aggregation server.
  2. Objective 2: Adapt lochness to incorporate the files generated by this pipeline for data transfer.

Approach and Plan

  1. Describe specific steps of what you plan to do to achieve the above described objectives.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

Facial expression code: https://github.com/ecastrow/face-feats
Data Management tool: https://github.com/AMP-SCZ/lochness

Project: SlicerROS2

Category

IGT and Training

Key Investigators

  • Junichi Tokuda (Brigham and Women's Hospital, USA)
  • Laura Connolly (Queen's University, Canada) (Online)
  • Anton Deguet (Johns Hopkins University) (Online)
  • Arvind S. Kumar (Johns Hopkins University) (Online)

Project Description

The goal of SlicerROS2 is to provide an open-source software platform for medical robotics research. Specifically, the project focuses on architectures to seamlessly integrate a robot system with medical image computing software using two popular open-source software packages: Robot Operating System (ROS) and 3D Slicer.

Objective

  1. Demo - Set up a live demo using SlicerROS2 and myCobot.
  2. Dissemination - Review and improve online documentation for rosmed.github.io
  3. Plan - Discuss future directions and maintenance (other potential projects, integration into nightly build, etc).

Approach and Plan

  1. Set up a build environment on a laptop with Ubuntu 22.04
  2. Set up myCobot

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

Acknowledgement:

The National Institute of Biomedical Imaging and Bioengineering of the U.S. National Institutes of Health (NIH) under award numbers R01EB020667 and 3R01EB020667-05S1 (MPI: Tokuda, Krieger, Leonard, and Fuge). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

The Natural Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research, the Walter C. Sumner Memorial Award, the Mitacs Globalink Award, and the Michael Smith Foreign Study Supplement.

Project: Live tracked ultrasound processing with PyTorch

Category

IGT and Training

Key Investigators

  • Tamas Ungi (Queen's University)
  • Colton Barr (Queen's University / BWH)
  • Tina Kapur (Brigham and Women's Hospital)

Project Description

Our past code for training and deploying ultrasound segmentation in real time was based on TensorFlow. Example project:
https://youtu.be/WyscpAee3vw

The goal for this project week is to provide a new open-source implementation using PyTorch and modern AI tools like MONAI and wandb. A Slicer module will also be provided to deploy trained AI on recorded or live ultrasound streams.

Objective

  1. Export annotated ultrasound+tracking data for training
  2. Example code for training
  3. Slicer module to use trained models on ultrasound data in Slicer

Approach and Plan

  1. All data processing and training code will be here: https://github.com/SlicerIGT/aigt/tree/master/UltrasoundSegmentation
  2. Slicer module will be here: https://github.com/SlicerIGT/aigt/tree/master/SlicerExtension/LiveUltrasoundAi/TorchLiveUs
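
To make the plan concrete, here is a minimal training-loop sketch in the spirit of the UltrasoundSegmentation code in item 1, using PyTorch and MONAI; the network configuration and the placeholder batch are assumptions, not the repository's actual code:

```python
# Minimal 2D ultrasound segmentation training-loop sketch with PyTorch and
# MONAI; the network configuration and placeholder batch are assumptions.
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = UNet(
    spatial_dims=2, in_channels=1, out_channels=1,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
).to(device)
loss_fn = DiceLoss(sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: replace with a DataLoader over the exported
# ultrasound+tracking annotations from Objective 1.
images = torch.rand(4, 1, 128, 128, device=device)
labels = (torch.rand(4, 1, 128, 128, device=device) > 0.5).float()

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

In the real pipeline, the placeholder batch would come from the exported scans and labels, and wandb logging would be added around the loop.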

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Proposals: Registration and Segmentation with Elastix & MONAI Label

Hi project-week community,

We have discussed a bit with Andres Diaz-Pinto (@diazandr3s) about possible projects for the upcoming Project Week that fall under the general title of 3D Medical Image Registration and Segmentation using Elastix and MONAI Label.

Proposals

Here are our tentative proposals for the moment:

Proposal 1

Train single-modality MONAI Label models on Elastix-aligned brain images (T1, T2, FLAIR, etc.) using SynthSeg (https://github.com/BBillot/SynthSeg) as the source of annotated datasets - for normal brains

SynthSeg is a TensorFlow-based deep learning segmentation tool for brain MRIs. It consists of a generative network that produces the synthetic images and a 3D U-Net trained to do the segmentation. The only training input is the label maps, so no real images are used.

We will use SynthSeg to produce annotations as “ground truth” on a publicly available dataset like BRATS (multimodal + non-healthy brains) or OASIS (temporal/monomodal + healthy brains). Elastix will be used to co-register the different modalities or temporal images, achieving segmentation via registration.

Proposal 2

Train a MONAI Label model using the raw BRATS dataset (https://www.kaggle.com/competitions/rsna-miccai-brain-tumor-radiogenomic-classification/data). Elastix will be used to co-register the 4 modalities.

This is a classification task. Hypothesis: registration with Elastix might improve classification accuracy, so we can compare the classification results with and without pre-alignment (see the registration sketch below).
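
A minimal sketch of how the pairwise co-registration could look with ITKElastix (the itk-elastix Python package); the file names and the choice of a rigid transform are assumptions:

```python
# Sketch of pairwise co-registration with ITKElastix (itk-elastix); file
# names are placeholders and the rigid transform is an assumption.
import itk

fixed = itk.imread("t1.nii.gz", itk.F)
moving = itk.imread("flair.nii.gz", itk.F)

# Use elastix's default rigid parameter map as a starting point.
params = itk.ParameterObject.New()
params.AddParameterMap(params.GetDefaultParameterMap("rigid"))

registered, transform_params = itk.elastix_registration_method(
    fixed, moving, parameter_object=params)
itk.imwrite(registered, "flair_in_t1_space.nii.gz")
```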

Proposal 3

Extend the whole brain segmentation model available in the Model Zoo (https://github.com/Project-MONAI/model-zoo/tree/dev/models/wholeBrainSeg_Large_UNEST_segmentation)

The data used for the training were affinely registered to the MNI305 space. Hence, Elastix can be used to register any data used for inference into the same space. We could also store all the resulting transform parameters so that users could do the resampling directly without registering again (this holds true for the training data - unseen data used for inference would still need to be registered).

Proposal 4

Compare registration performance of cross-modal registration (CT-MRI) versus intra-modal registration via synthesised MRI (MRI_syn - MRI): MONAI for the synthesis and Elastix for the registration. What would a suitable dataset be?

Proposal 5

Train a MONAI Label model for automatic landmark identification in e.g. lung images (dataset: https://med.emory.edu/departments/radiation-oncology/research-laboratories/deformable-image-registration/index.html). Landmarks can be used either to assist registration with Elastix, or Elastix can be used to validate the landmark accuracy. 3D Slicer can be used to visualize the landmarks and ease the qualitative evaluation.

Relevant resources


Looking forward to your feedback, declaration of interest, ideas or proposals. Projects can be adjusted, added or removed. Thanks! 🙏

Project: Test Project, please ignore

Category

VR/AR and Rendering

Presenter Location

In-person

Key Investigators

  • Sam Horvath (Kitware, USA)

Project Description

Stuff!

Objective

  1. Objective A. Describe what you plan to achieve in 1-2 sentences.

Approach and Plan

  1. Describe specific steps of what you plan to do to achieve the above described objectives.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Proposal: NCI Imaging Data Commons browser Slicer extension

Project Description

I would like to work on a Slicer extension that allows users to browse and download content from NCI Imaging Data Commons. It would work similarly to the TCIABrowser extension. In order to eliminate the need to log in to query IDC via BigQuery SQL, I am planning to use the IDC API. Once the file URLs are identified, I would download them using s5cmd, which can be installed by downloading a single executable precompiled for all platforms.
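
A rough sketch of the proposed download step, assuming the IDC API has already resolved a series to storage URLs; the bucket path below is a placeholder, not a real series:

```python
# Rough sketch of the download step: once the IDC API has resolved a series
# to storage URLs, hand them to s5cmd. The bucket path is a placeholder.
import subprocess

series_url = "s3://idc-open-data/series-uuid-placeholder/*"
subprocess.run(
    ["s5cmd", "--no-sign-request",
     "--endpoint-url", "https://s3.amazonaws.com",
     "cp", series_url, "downloads/"],
    check=True)
```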

Slicer-Liver

Category

IGT and Training

Key Investigators

  • Gabriella D'Albenzio (Oslo University Hospital, Norway)
  • Ruoyan Meng (NTNU, Norway)
  • Ole V. Solberg (SINTEF, Norway)
  • Geir A. Tangen (SINTEF, Norway)
  • Rafael Palomar (Oslo University Hospital, Norway)

Project Description

Slicer-Liver is an advanced 3D Slicer extension developed for liver therapy planning. The extension currently offers essential features for liver resection planning and accurate computation of vascular territories. As part of an ongoing project, our aim is to further enhance the existing functionalities and introduce new tools for volumetry computation. Our objective is to provide a comprehensive and user-friendly solution for liver therapy planning within the Slicer platform.

Objective

  1. Advanced manipulation of deformable surfaces for resection planning. Our current solution for resection planning involves the deformation of Bezier surfaces in a 4x4 grid implemented by means of Slicer Markups (https://slicer.readthedocs.io/en/latest/user_guide/modules/markups.html). We are planning to include advanced features such as coloring and grouping of markups for a more effective manipulation.
  2. Volumetry computation. Planning of liver therapies largely relies on a volumetry analysis derived from the therapy plan. We are planning to include versatile tools for volume computations.
  3. Release of Slicer-Liver 1.0. As Slicer-Liver is becoming a feature-rich extension, we aim to release the latest developments achieved during this and the last Project Week in the extension manager (currently, the version released in the Extension Manager does not contain the latest advances).

Approach and Plan

  1. Discuss and define a strategy to improve our Markups-based resection interaction (custom C++ markups vs. Python logic)
  2. Implementation of the new features (new markups interaction and volumetric computation tools).
  3. Testing of the new features and release of the new extension.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

Project: Using large language AI models to invoke Slicer modules and workflows

Category

Infrastructure

Presenter Location

In-person

Key Investigators

  • Curtis Lisle (KnowledgeVis, USA)
  • Steve Pieper (Isomics, USA)
  • Andrey Fedorov (Brigham and Women's Hospital, USA)

Project Description

3D Slicer is built with a powerful core to load, transform, store, and manage medical images and derived datasets. Slicer has a catalog of loadable extensions that assist with or automate task-specific workflows. Slicer's web API provides remote access to invoke many of the processing steps that are available through its comprehensive user interface.

The recent explosion of generative LLMs (large language models) from the AI community has demonstrated that these language models can, in some cases, translate task or problem descriptions into sequences of operations. Wouldn't it be powerful if 3D Slicer could be verbally instructed to perform operations or process datasets as requested? Theoretically, an embedded LLM could be trained on Slicer's modules, including under what circumstances the modules could be applied to transform a Slicer scene as needed to solve a problem presented by the user.

As one example, in operating theaters during surgical procedures, the Slicer user interface is hard or impossible to access due to sterility restrictions and other factors. It would be helpful if clinicians could control Slicer's functions through a method other than the interactive user interface. For example, "Let me see the lung lesions more clearly" could be translated into increased transparency of the lung segmentation and an orientation repositioning to make a lesion segmentation visible in-situ.
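
As a thought experiment, the kind of Slicer Python snippet an LLM might emit for that request could look like the following (the node name "Lung" is hypothetical):

```python
# Hypothetical example of a snippet an LLM might emit for
# "let me see the lung lesions more clearly": lower the 3D opacity of a
# segmentation named "Lung" (the node name is an assumption).
import slicer

segmentationNode = slicer.util.getNode("Lung")
displayNode = segmentationNode.GetDisplayNode()
displayNode.SetOpacity3D(0.3)  # make the lung surface more transparent
```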

Objective

A goal of this project proposal is to schedule a meeting during Project Week to discuss this idea, assess the level of interest in the Slicer community, discuss early technical approaches, and decide who might be interested in working together to seek funding to pursue it. Both clinicians with a problem to solve and AI technicians are invited to participate.

Approach and Plan

  1. Schedule a meeting of interested parties during PW39
  2. Discuss applicable existing open-source tools
  3. Assess the value to the community and define a plan to continue if this idea has merit
  4. Possibly experiment with a proof of concept connecting to Slicer's API

Progress and Next Steps

  1. Researched several open-source LLM repositories that allow connection to external APIs (Application Programming Interfaces), such as the one Slicer provides through its web interface

Illustrations

No response

Background and References

Hugging Face has a new API called "Agents" that is designed to use tools according to their descriptions of the I/O they handle. The Agent API puts together a workflow of tools to accomplish the user's request. This is not exactly what I was thinking, as there are issues related to how to identify and return a changed MRML scene, but it inspired my thinking somewhat: https://huggingface.co/docs/transformers/transformers_agents.

Work that seems more directly aimed at a way to invoke Slicer modules via API is Gorilla, a LLaMA model fine-tuned to invoke external APIs to accomplish a requested task: https://github.com/ShishirPatil/gorilla. I just started reading the paper referenced on the repository site.

Here's a related post: https://nickarner.com/notes/llm-powered-assistants-for-complex-interfaces-february-26-2023/

Somewhat related development applied to selection of data in IDC using LLM: https://discourse.canceridc.dev/t/text2cohort-a-new-llm-toolkit-to-query-idc-database-using-natural-language-queries/.

The GuardRails repository provides validation of LLM output. This might help enforce Slicer API structure: https://github.com/ShreyaR/guardrails?utm_source=tldrnewsletter

Nvidia NeMo is also a potentially useful tool in this domain.

Proposal: Working with DICOM annotations

Project Description

I plan to create tutorials on how to work with DICOM annotations in pathology whole-slide images (WSIs). Two formats of DICOM annotation objects, i.e. Microscopy Bulk Simple Annotations and Segmentation, both currently under development, will be used, and I will eventually try to summarize the advantages and disadvantages of each format for different use cases.

Most likely, @CPBridge and @dclunie, who develop the DICOM annotations, will support with their expertise and there might be some synergies with the project of @maxfscher and the project of @curtislisle.

Project: ChatIDC: Navigating DICOM and IDC using Natural Language

Category

Infrastructure

Presenter Location

In-person

Key Investigators

  • Justin Johnson (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Suraj Pai (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)
  • Andrey Fedorov (Department of Radiology, Brigham and Women’s Hospital, Boston, MA)

Project Description

ChatIDC is a natural-language interface tool for exploring the rich ecosystem of DICOM tags and IDC. It is intended to filter and download highly specific cohorts of imaging data and to discover relevant information pertaining to the DICOM standard, IDC documentation, and data consisting of DICOM tags.

Objective

The goal of this project is to reduce some technical barriers for clinical researchers to filter and download highly specific cohorts of imaging data. As a result, the project is poised to make the retrieval of data more efficient and encourage the widespread adoption of the platforms in which it is integrated.

For IDC, you can currently filter cohorts by some of the most common tags with sliders and buttons, but this eventually reaches a limit when a researcher has to gather data that is highly tailored to their use case, which may be highly compositional and rely on more esoteric DICOM tags. When the number of filter parameters is too large, manual selection and query construction may become infeasible for anyone who is not an expert in both DICOM and SQL.

Approach and Plan

We will prepare a list of queries to motivate and test the development of the project. The list will contain pairs of free-text requests and the matching SQL queries. We will work with IDC/SQL domain “experts” to confirm that the SQL queries on this list are both syntactically and semantically correct. This list will be shared at the end of the project week. An illustrative pair is sketched below.
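
An illustrative request/SQL pair of the kind we plan to collect; the table and column names follow the public IDC BigQuery views (bigquery-public-data.idc_current.dicom_all) and should be treated as assumptions until verified with the domain experts:

```python
# Illustrative "free-text request" / SQL pair. Table and column names follow
# the public IDC BigQuery views and are assumptions to be verified.
from google.cloud import bigquery

request_text = "Give me all MR series in the ProstateX collection"
sql = """
    SELECT DISTINCT SeriesInstanceUID
    FROM `bigquery-public-data.idc_current.dicom_all`
    WHERE collection_id = 'prostatex' AND Modality = 'MR'
"""

client = bigquery.Client()  # requires Google Cloud credentials
series = [row.SeriesInstanceUID for row in client.query(sql).result()]
print(f"{request_text!r} -> {len(series)} series")
```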

We will implement semantic search over DICOM tags based on the user's input, which is then used as prompt context for the language model. We will work with IDC/DICOM experts to confirm that this curated list is meaningful and comprehensive. This list will be shared at the end of the project week.

We plan to document our current experience and recommendations on what prompts users should use to improve the quality of the responses generated by the existing LLM interfaces.
We will document our experience observing the syntactic accuracy of generated queries to motivate future development (i.e., what worked, what didn't work, what can be fixed with refinements to the prompt, and what can be improved with the approach used in the text2cohort project).

We would like to conduct interviews with the AI developers attending project week to gather the list of requests/ideas for queries that the users would like to see addressed.

Progress and Next Steps

No response

Illustrations

No response

Background and References

No response

Project: Integration of Haptic Device in 3D Slicer for Lumbar Puncture

Category

IGT and Training

Key Investigators

  • Pablo Sergio Castellano Rodríguez (Universidad de Las Palmas de Gran Canaria, Spain)
  • Jose Carlos Mateo Pérez (Universidad de Las Palmas de Gran Canaria, Spain)
  • Juan Bautista Ruiz Alzola (Universidad de Las Palmas de Gran Canaria, Spain)

Project Description

The main objective of the project is to integrate the Touch haptic device (3D Systems) into 3D Slicer through an OpenIGTLink connection with the Unity platform. Slicer To Touch is the 3D Slicer module that contains the scene with the 3D models of the spine and the needle. This module has an interface where the user can configure the number, position, and value of the resistances to be exerted by the haptic device. These values are written to a .json file that is later transferred to Unity, which processes this data and configures the forces of the haptic device within the Unity environment. Finally, through the OpenIGTLink connection bridge, a real-time connection is created in which the transformations and the resistances of the haptic device are shared with the 3D Slicer scene.

This idea comes from a lumbar puncture training system that uses this device, but with generic body tissues, locations, and thicknesses. With this project, one can segment a real patient's back, with its own characteristics, and practice the lumbar puncture before performing it on the patient. Given the way it works, it could also be used in other procedures.

Objective

  1. Create a module that, with the help of Unity and OpenIGTLink, allows us to interact with a back model of a real patient obtained by segmentation of medical images. In this way we can train the lumbar puncture on the model of a real patient while feeling the resistance of the body tissues.
  2. Automate the process of creating resistances on segmentation-generated models so that clinicians can easily rehearse lumbar puncture and other procedures with a sense of realism.

Approach and Plan

  1. Create the 3D Slicer module with fields to enter the number of resistances, their positions, and their values.
  2. Generate a .json file with all the information entered in the module (see the sketch after this list).
  3. Create a Unity project with a script that reads the generated .json file and creates a scene with the resistances at those positions and with those values.
  4. Connect Unity to 3D Slicer through OpenIGTLink to send the transforms and see the needle movements in 3D Slicer.
  5. Generate an executable application from the Unity project with a simple look and feel that performs the procedure automatically, so that clinical users find it easy to use and do not have to deal with the Unity interface.
  6. Do a documentation search for other procedures to check that the project works correctly for them. We are looking for other clinical procedures for which this project may be useful and for which there is information available.
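
A minimal sketch of step 2, writing the user-entered resistances to the .json file that the Unity script reads; all keys, tissue names, and values are assumptions, not the module's final format:

```python
# Minimal sketch of writing the user-entered resistances to the .json file
# that the Unity script reads; keys, tissue names, and values are assumptions.
import json

resistances = [
    {"name": "skin",     "position_mm": [0.0, 0.0, 12.5], "value": 0.2},
    {"name": "ligament", "position_mm": [0.0, 0.0, 30.0], "value": 0.7},
]

with open("resistances.json", "w", encoding="utf-8") as f:
    json.dump({"resistances": resistances}, f, indent=2)
```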

Progress and Next Steps

  1. Creation of the 3D Slicer module with fields to enter the number of resistances, positions and values. (DONE)
  2. Generate a .json file with all the information entered in the module. (DONE)
  3. Create a Unity project that reads the generated .json file and creates a scene with the resistances in that position and with those values. (DONE)
  4. Connect Unity to 3D Slicer through OpenIGTLink. We are working on this step based on a NA-MIC project (link) by Alicia Pose.

Illustrations

3D Slicer module in which the resistances are entered (left) and the .json file with the information on these resistances (right) (Picture1.png)

Unity interface after reading the information from the .json file, with the resistances created at their positions and the needle as the visual mesh of the haptic device (left), and the script that makes it work (right) (Picture2.png)

Background and References

Project: Rendering support for multiple views

Category

VR/AR and Rendering

Key Investigators

  • Sara Rolfe (Seattle Children's Research Institute, USA)
  • Murat Maga (University of Washington, USA)
  • Chi Zhang (Seattle Children's Research Institute, USA)

Project Description

The goal of this project is to extend the Volume Rendering interface to improve the convenience of multiple volume comparisons. We aim to create and test prototypes of features that will be added to the SlicerMorph extension in the short term and discuss appropriateness of integration into Slicer core.

Objective

Features to support multiple volume comparisons:

  1. Objective A: Option to link views in relative orientations defined by the user.
  2. Objective B. Option to link volume rendering properties for images in a folder.

Approach and Plan

  1. Objective A: Create module to manage two relative views, manage nodes displayed/transformed in each
  2. Objective B: Create prototype that links individual rendering properties of each volume in a folder.

Progress and Next Steps

  1. Created a Python function to link/unlink relative views; see the sketch below.
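
A minimal sketch of such a link function using the Cameras module logic; the view indices and the fixed 90-degree relative rotation are assumptions:

```python
# Minimal sketch of linking two 3D views: whenever the camera of view 0
# moves, copy it to view 1 and apply a fixed relative rotation. View indices
# and the 90-degree offset are assumptions.
import vtk
import slicer

lm = slicer.app.layoutManager()
camerasLogic = slicer.modules.cameras.logic()
cam0 = camerasLogic.GetViewActiveCameraNode(lm.threeDWidget(0).mrmlViewNode())
cam1 = camerasLogic.GetViewActiveCameraNode(lm.threeDWidget(1).mrmlViewNode())

def syncViews(caller=None, event=None):
    # Copy position/focal point/view-up from view 0, then rotate view 1.
    cam1.GetCamera().DeepCopy(cam0.GetCamera())
    cam1.GetCamera().Azimuth(90)

tag = cam0.AddObserver(vtk.vtkCommand.ModifiedEvent, syncViews)
# To unlink the views again:
# cam0.RemoveObserver(tag)
```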

Illustrations

No response

Background and References

No response

Project: Longitudinal model of psychosis conversion

Category

Quantification and Computation

Key Investigators

  • Pablo Polosecki (IBM Research, USA)
  • Nora Penzel (MGH, USA)
  • Ofer Pasternak (BWH, USA)
  • Guillermo Cecchi (IBM Research, USA)

Project Description

This project is part of the AMP SCZ program, an initiative for early detection of risk for schizophrenia (https://www.ampscz.org).

A key goal in AMP SCZ is to predict which patients who initially present mild or sub-threshold symptoms will eventually develop psychosis. Most predictive models are based on data acquired at the first medical visit (the baseline visit). An important question is how much is gained by following patients over time (longitudinal data). In this project we will implement predictive models that make use of this longitudinal information for psychosis prediction. We will focus on implementing a type of model called "joint models", which incorporate time-varying predictors into well-known survival analyses.

Objective

  1. Objective A. Implement a Python-based version of longitudinal models adapted for common best practices in machine learning (separate train/test, scikit-learn compatible methods).
  2. Objective B. Quantify the advantage of longitudinal models vs. baseline predictors in a legacy dataset.

Approach and Plan

  1. Write a Python wrapper, using rpy2, for the R library JM that implements longitudinal analysis.
  2. Use synthetic and legacy datasets to test the predictions.
  3. Use Python libraries such as lifelines or scikit-survival to implement survival analysis with baseline predictors only (see the sketch after this list).
  4. Implement permutation tests in time to assess the significance of prediction improvements due to longitudinal change.
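
A minimal sketch of the baseline-only survival model from step 3, using lifelines; the toy data frame and column names are placeholders:

```python
# Minimal sketch of a baseline-only Cox proportional hazards model with
# lifelines; the toy data and column names are placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_event":  [12.0, 24.0, 6.0, 18.0, 9.0, 30.0],  # months of follow-up
    "converted":      [1, 0, 1, 0, 1, 0],                  # 1 = psychosis conversion
    "baseline_score": [3.2, 2.9, 4.0, 0.7, 1.1, 0.9],      # baseline predictor
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="converted")
cph.print_summary()  # hazard ratio for the baseline predictor
```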

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

Project: mpReview: Development of a streamlined Slicer module for manual image annotation

Category

Cloud / Web

Key Investigators

  • Deepa Krishnaswamy (BWH)
  • Nadya Shusharina (MGH)
  • Andrey Fedorov (BWH)

Project Description

The 3D Slicer module mpReview (part of the SlicerProstate extension) was previously developed to assist with manual annotation of the prostate and related anatomical regions. In previous project weeks, we streamlined the extension, updated the module to use the latest Segment Editor, and incorporated the use of Google Cloud Platform.

However, there are improvements that can be made in terms of functionality. For instance, we would like to allow the user to access multiple types of servers and to perform annotation of body parts other than the prostate.

In this project week we'll focus on using a JSON file as input, which will allow users to customize the module to their annotation needs. Our goal is to streamline the user's interaction with the module, allowing them to annotate large datasets efficiently and quickly using either the cloud (e.g. GCP or Kaapana) or a local DICOM database.

Objective

  1. Discuss the current multiple_server branch of the module.

  2. Brainstorm the JSON format specification necessary for streamlining the annotation workflow. Generate examples of JSON files for different use cases: local Slicer DICOM database, Google Cloud Platform, Kaapana, etc. (see the example sketch after this list).

  3. Define the steps that are needed to accomplish the changes.
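
As a starting point for item 2, here is a hypothetical JSON input expressed as a Python dict; every key and value is an assumption to seed the discussion, not an agreed specification:

```python
# Hypothetical JSON input for mpReview, expressed as a Python dict so it can
# be dumped to file; all keys and values are assumptions for discussion.
import json

config = {
    "server": {
        "type": "gcp",                                   # or "kaapana", "local"
        "dicomweb_url": "https://example.com/dicomweb",  # placeholder endpoint
    },
    "annotation": {
        "body_part": "prostate",
        "segment_names": ["WholeGland", "PeripheralZone"],
    },
    "output": {
        "format": "DICOM SEG",                           # how annotations are stored
    },
}

with open("mpreview_config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2)
```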

Approach and Plan

  1. We will discuss the state of the current branch, and identify the needs of users.
  2. After talking to researchers and clinicians about their annotation needs and concerns, we will develop JSON format specifications for a variety of use cases.

Progress and Next Steps

  1. We have started discussing the shortcomings of the current module.
  2. We have started to draft possible JSON file specifications.

Illustrations

Current screenshot of the module


Background and References

We have worked on this during multiple project weeks, PW35 and PW37. The code from PW37 is available here.

Slicer Flatpak

Category

Infrastructure

Key Investigators

  • Rafael Palomar (Oslo University Hospital, Norway)
  • Steve Pieper (Isomics, USA)
  • Jean-Christophe Fillion-Robin (Kitware, USA)
  • Andras Lasso (Queen's University, Canada)

Project Description

Slicer Flatpak is a project focused on packaging the 3D Slicer software as a Flatpak. This initiative aims to offer an easy and universal way to install and run 3D Slicer on any Linux distribution that supports Flatpak. By doing this, it seeks to reduce installation complexities and improve compatibility across different systems. The distribution of 3D Slicer as a Flatpak has potential benefits:

  1. Cross-Distribution Compatibility: Flatpak applications can run on any Linux distribution that supports Flatpak, which means users of different distributions can use the same package.
  2. Simplified Installation: Flatpaks bundle most of the libraries and resources an application needs, reducing the complexity of the installation process.
  3. Isolation: Flatpak applications are isolated from the rest of the system, minimizing the risk of library conflicts or system destabilization.
  4. Security: Flatpaks are sandboxed, limiting their access to the host system. This feature enhances security by preventing malicious software from accessing data they shouldn't have access to.
  5. Easy Updates: Flatpak applications can be easily updated to newer versions, often without the need for manual intervention.
  6. Parallel Installation: Different versions of the same application (like 3D Slicer) can be installed in parallel without conflicts, useful for testing or development purposes.
  7. Consistent Environment: Flatpaks ensure that the software runs in the same environment regardless of the host system configuration, reducing the problems related to "it works on my machine" scenarios.
  8. Long-Term Stability: Even if the libraries in the host operating system change, the Flatpak application will still work because it's using its own bundled libraries, ensuring long-term stability for the application.

The convenience of having a 3D Slicer Flatpak has long been discussed on the 3D Slicer Discourse platform (https://discourse.slicer.org/t/interest-to-create-flatpak-for-3d-slicer-have-issue-with-guisupportqtopengl-not-found/16532). Soon after PW38, we started a renewed discussion on the topic and a new initiative to make the 3D Slicer Flatpak happen. Right now, our efforts have been focused on getting a first feasible 3D Slicer Flatpak (https://github.com/RafaelPalomar/Slicer-Flatpak/tree/feature/slicer-flatpak-generator and https://github.com/RafaelPalomar/org.slicer.Slicer/tree/development). With this project we want to consolidate this effort and discuss the potential distribution of the 3D Slicer Flatpak.

Objective

  1. Consolidation of the 3D Slicer Flatpak build infrastructure.
  2. Add support for deployment of SimpleITK along with 3D Slicer Flatpak.
  3. Testing and verification of 3D Slicer extensions.
  4. Discussion about the release model (flathub, own repository, etc.).

Approach and Plan

  1. Continue the current development and obtain a first version (even if with limited functionality) of the 3D Slicer Flatpak
  2. Fix a dependencies issue with SimpleITK
  3. Enable the use of the Slicer Extension Manager and discuss possibilities for deploying extensions (sandboxed vs. local)
  4. Strategy to build/deploy 3D Slicer Flatpak.

Progress and Next Steps

  1. Describe specific steps you have actually done.

Illustrations

No response

Background and References

No response

Usage of `light-the-torch`

It seems that you want to use light-the-torch after @fepegar submitted a patch, and in other projects as well. Is everything now working as intended, or is there something else that needs to be fixed on my end?

If not, I would push a release so that you can simply pip install light-the-torch.

Project: extension for recurrent lung infections

Category

Segmentation / Classification / Landmarking

Key Investigators

  • Pape Mady Thiao (École militaire de santé de Dakar, Sénégal)

Project Description

The objective is to create an extension capable of identifying lung lesions of different ages following repetitive infections.

Objective

The aim is to use the extension in pulmonology to correlate recent symptomatology with an X-ray image that also contains findings related to old infections.

Approach and Plan

1. Collect radiographs from 2 groups of patients (A, with an ongoing infection; B, recovered from an infection but with sequelae on imaging).
2. Compare the Hounsfield unit values of the different lesions and establish a threshold separating the 2 groups (see the sketch after this list).
3. Build an extension to automate the procedure.
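
A minimal sketch of the thresholding idea in step 2, using Slicer's numpy bridge; the node name and the HU bands are assumptions that the collected data would have to calibrate:

```python
# Sketch of the Hounsfield-unit thresholding idea; the node name and the HU
# bands are assumptions to be calibrated on the collected data.
import slicer

volumeNode = slicer.util.getNode("LungCT")    # hypothetical volume name
hu = slicer.util.arrayFromVolume(volumeNode)  # numpy array of HU values

acute_mask = (hu > -300) & (hu < 50)  # assumed band for recent infiltrates
sequela_mask = hu >= 50               # assumed band for older, denser scars
print("acute voxels:", int(acute_mask.sum()),
      "sequela voxels:", int(sequela_mask.sum()))
```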

Progress and Next Steps

I am looking for CT images of patients meeting my criteria.

Illustrations

Not yet

Background and References

No response
