Website for NA-MIC Project Weeks
Home Page: https://projectweek.na-mic.org
@gulamhus Would you mind adding your affiliation on the following page: https://github.com/NA-MIC/ProjectWeek/tree/master/PW27_2018_Boston/Projects/SlicerVR
Thanks
VR/AR and Rendering
Microsoft HoloLens 2 has proven to be an excellent device in many clinical applications. It is mainly used to display 3D patient-related virtual information overlaid on the real world. However, its processing capacity is quite limited, so developing complex applications that require medical image processing is difficult.
A good solution is to perform the heavy computations in specialized software on a computer (e.g., 3D Slicer) and send the results in real time to the HoloLens 2, so that it can focus solely on visualization.
To date, there has been a lack of software infrastructure to connect 3D Slicer to augmented reality (AR) devices.
Over the last year, Universidad Carlos III de Madrid (Madrid, Spain) and the Perk Lab at Queen's University have worked together to develop a novel connection approach between Microsoft HoloLens 2 and 3D Slicer using OpenIGTLink.
The results of that work are publicly available at this GitHub repository.
The current solution is a three-element system composed of a Microsoft HoloLens 2 headset, the Unity engine, and the 3D Slicer platform.
The HoloLens 2 application is not built directly on the device; it is streamed from Unity in real time using Holographic Remoting.
Evaluate the transferability of the aforementioned project to other AR devices, focusing specifically on the Varjo XR-3 headset.
So far, everything works for HoloLens 2. Our current application transfers geometrical transform and image messages between the platforms.
It displays CT reslices of a patient in the AR device. The user wearing the glasses can manipulate the CT plane to see different perspectives.
The application was built for pedicle screw placement planning.
Our main goal for this week is to replicate the exact same application in the new device.
No response
Check out our app in this GitHub repository.
This repository contains all the resources and code needed to replicate our work in your computer.
Transfer of geometrical transforms from HoloLens 2 to 3D Slicer:
Transfer of images from 3D Slicer to HoloLens 2:
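The transforms exchanged above are ordinary 4x4 homogeneous matrices. As a minimal numpy sketch (the device name in the pyigtl comment is a hypothetical example, not the name used in our application), composing and sending one might look like:

```python
import numpy as np

def make_pose(rotation_deg, translation_mm):
    """Compose a 4x4 homogeneous transform: rotation about z, then translation."""
    t = np.radians(rotation_deg)
    pose = np.eye(4)
    pose[:2, :2] = [[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]]
    pose[:3, 3] = translation_mm
    return pose

# Pose of the CT reslice plane as the user manipulates it in AR
plane_pose = make_pose(90, [10.0, 0.0, 5.0])

# With pyigtl (https://github.com/lassoan/pyigtl) such a matrix could be
# packaged as an OpenIGTLink TRANSFORM message, e.g.:
#   message = pyigtl.TransformMessage(plane_pose, device_name="PlaneToTracker")
#   client.send_message(message)
```

3D Slicer applies the received matrix to the reslice node, which updates the CT plane shown in the headset.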
After integrating updates into the main branch, it takes about 10 minutes to fully deploy the updated website.
GitHub Actions workflow jobs: Overview, Upload artifact, deploy.
Some of the large files copied during each deployment could be moved into a "Resource" release.
Files associated with Project Weeks 28, 35, 37, and 38 take close to 0.5 GB.
$ du -ah . | sort -hr | head -n 20
1003M .
244M ./PW38_2023_GranCanaria/Projects
244M ./PW38_2023_GranCanaria
174M ./PW37_2022_Virtual
173M ./PW37_2022_Virtual/Projects
136M ./PW28_2018_GranCanaria
121M ./PW28_2018_GranCanaria/Projects
113M ./PW35_2021_Virtual/Projects
113M ./PW35_2021_Virtual
100M ./PW38_2023_GranCanaria/Projects/SlicerLiver
100M ./PW37_2022_Virtual/Projects/SlicerLiver
83M ./PW33_2020_GranCanaria
72M ./PW31_2019_Boston
70M ./PW33_2020_GranCanaria/Projects
69M ./PW30_2019_GranCanaria
62M ./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration
59M ./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration/US-CTAlignment.gif
54M ./PW30_2019_GranCanaria/Projects
52M ./PW31_2019_Boston/Projects
44M ./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration
$ find . -type f -printf '%s %p\n' | sort -nr | head -n 20 | while read -r size file; do
    printf '%s' "$size" | numfmt --to=iec
    echo " $file"
  done
59M ./PW35_2021_Virtual/Projects/US_CT_VertebraRegistration/US-CTAlignment.gif
33M ./PW38_2023_GranCanaria/Projects/SlicerLiver/distance-tumor.webm
33M ./PW37_2022_Virtual/Projects/SlicerLiver/distance-tumor.webm
32M ./PW38_2023_GranCanaria/Projects/SlicerLiver/planning.webm
32M ./PW37_2022_Virtual/Projects/SlicerLiver/planning.webm
24M ./PW28_2018_GranCanaria/Projects/3DViewsLinking/myimage.gif
24M ./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration/TEEV2+PCOUV.gif
23M ./PW37_2022_Virtual/Projects/StreamlinedROIAnnotationTool/FinalROITool_1.gif
21M ./PW38_2023_GranCanaria/Projects/MultiSpectralSensorIntegration/TEEV2PCOUV-2.gif
20M ./PW38_2023_GranCanaria/Projects/SlicerLiver/distance-vessels.webm
20M ./PW37_2022_Virtual/Projects/SlicerLiver/distance-vessels.webm
20M ./PW30_2019_GranCanaria/Projects/Data-glove_for_virtual_operations/20190201_095221.gif
16M ./PW31_2019_Boston/Breakouts/DataManagement/XNAT
15M ./PW38_2023_GranCanaria/Projects/MONAILabel2bundle/monai_bundle_vs_total_seg_spleen.gif
15M ./PW33_2020_GranCanaria/Projects/ClubFoot/Models/stage3.vtk
15M ./PW31_2019_Boston/Projects/ClubfootCasts/Models/stage3.vtk
14M ./PW38_2023_GranCanaria/Projects/MONAILabel2bundle/monai_bundle_vs_total_seg_idc.gif
14M ./PW37_2022_Virtual/Projects/SlicerTMS/tms_vis.gif
14M ./PW38_2023_GranCanaria/Projects/KaapanaFastViewingAndTaggingOfDICOMImages/NA-MIC.gif
14M ./PW38_2023_GranCanaria/Projects/SlicerLiver/liver_resection.mp4
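The "Resource" release idea above could be prototyped with a small script that lists the offending files and prints the corresponding GitHub CLI commands. This is only a sketch: the "Resources" release tag and the 10 MB threshold are arbitrary choices, and the commands are printed rather than executed.

```python
import os

def release_upload_commands(root, threshold=10 * 1024 * 1024):
    """Walk `root` and suggest a `gh release upload` command for every file
    at or above `threshold` bytes; the files would then be replaced in the
    repository by links to the release assets."""
    commands = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) >= threshold:
                commands.append(f"gh release upload Resources '{path}'")
    return sorted(commands)
```

Run at the repository root, this would print one upload command per large file, which could then be reviewed before running.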
Infrastructure
The goal of the project is to facilitate access to 3D Slicer in non-English speaking countries and foster global community engagement.
To identify members of the global Slicer community interested in new Slicer activities in their language
Slicer Internationalization Breakout session:
Daily Slicer internationalization sessions with members of the Slicer community
No response
No response
Segmentation / Classification / Landmarking
MHub is a repository of self-contained deep-learning models trained for a wide variety of applications in the medical and medical imaging domain. MHub provides the community with reproducible and transparent AI pipelines that work out of the box as intended by the developers.
As part of our efforts, we developed a first version of a Slicer MHub extension that allows users to run different AI models directly in Slicer without the need to install potentially conflicting dependencies as part of their Slicer Python installation.
The goal of this project is to polish the extension, publish it, and further explore its potential applications and user feedback to expand the extension's capabilities, address its limitations, and ensure its seamless integration with Slicer.
Work on identified issues/enhancements, and collect feedback from the Slicer community.
No response
No response
Infrastructure
The Cancer Imaging Archive (TCIA) is an NCI-funded service which de-identifies and publishes cancer imaging datasets. The imaging data are organized as "collections" or "analysis result" datasets defined by a common disease (e.g., lung cancer), image modality or type (MRI, CT, digital histopathology, etc.), or research focus. Emphasis is placed on providing supporting data related to the images, such as patient outcomes, treatment details, genomics, and expert analyses, where available.
TCIA Browser is an extension that lets users easily download and import TCIA data into 3D Slicer. This project seeks to improve the TCIA Browser extension for 3D Slicer by updating it to leverage TCIA-Utils to access TCIA's APIs.
The major improvements we'd like to address with TCIA Browser include:
No response
No response
Quantification and Computation
The algorithm for calculating the Agatston cardiac score (a clinical measure of coronary artery calcification) was previously written by Hans Johnson et al. The script was recently tested by members of the community, but it would be more useful if a Slicer module to run Agatston scoring were available. This project is a start toward creating the module and eventually a Slicer extension.
No response
Sample Masked Image as input: https://github.com/lassoan/PublicTestingData/releases/download/data/CardiacAgatstonScore.mrb
Existing Algorithm to refactor:
https://github.com/BRAINSia/CardiacAgatstonMeasures
A recent update to interpreting Agatston scoring:
https://pubs.rsna.org/doi/10.1148/ryct.2021200484
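For orientation before refactoring, the scoring rule itself is compact: threshold at 130 HU, find connected lesions, and sum area times a density weight per lesion. The following is a minimal per-slice numpy sketch using the standard Agatston density weights; it is not the BRAINSia implementation, and a real module would operate on the calibrated CT and mask from the sample data above.

```python
import numpy as np

def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_slice_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    """Simplified per-slice Agatston score: sum over 4-connected lesions of
    (lesion area in mm^2) * (weight of the lesion's peak HU)."""
    mask = slice_hu >= 130
    visited = np.zeros_like(mask, dtype=bool)
    rows, cols = mask.shape
    score = 0.0
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # flood-fill one connected lesion
                stack, lesion = [(r, c)], []
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    lesion.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                area = len(lesion) * pixel_area_mm2
                if area >= min_area_mm2:
                    peak = max(slice_hu[py, px] for py, px in lesion)
                    score += area * agatston_weight(peak)
    return score
```

The total score is the sum over all axial slices; scipy.ndimage.label would replace the hand-rolled flood fill in practice.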
The goal of this project is to enable creation of synthetic data from landmark transforms.
Given a point list, the user will select points to be operated on. The selected points will be moved independently to create the target landmark set for the transform.
This can currently be done in the Markups module by copying the points to a new list, translating/rotating the points, and copying the point positions back to the original node. However this process can be tedious and error-prone. We plan to implement this function as an option in the Markup Editor module in the SlicerMorph extension.
This project proposal is related to the forum post here
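The intended behavior can be sketched with plain numpy, selected points in a list being moved independently while the rest stay fixed. In Slicer the points would be markups control points; the rotation axis and function name here are illustrative only.

```python
import numpy as np

def transform_selected(points, selected, rotation_deg=0.0, translation=(0, 0, 0)):
    """Return a copy of `points` (N x 3) in which only the rows in `selected`
    are rotated about the z axis and translated, producing the target
    landmark set for a synthetic landmark transform."""
    t = np.radians(rotation_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0],
                    [np.sin(t),  np.cos(t), 0],
                    [0,          0,         1]])
    out = np.array(points, dtype=float)
    out[selected] = out[selected] @ rot.T + np.asarray(translation, dtype=float)
    return out
```

The original list and the returned list together define the source/target pair for the landmark transform.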
Designing a Docker-based system to assess the submissions of challenge participants.
Infrastructure
Online
Project Description:
Our project is focused on developing a Docker-based submission mechanism for challenge participants. To maintain fairness and ensure that the test set is not used in the training process, the test set will not be released to the participants. Instead, participants will be required to containerize their methods using Docker and submit their Docker containers for evaluation.
Docker provides an excellent solution for running algorithms in isolated environments known as containers. In our project, we will leverage Docker to create a container that replicates the participants' pipeline requirements and executes their inference script. By encapsulating the entire environment within a container, we can ensure consistent execution and reproducibility.
Create a sample docker container for submission
Create an evaluation mechanism on the challenge website
Create documentation, guidelines, and tutorial for participants
Design the docker container, input/output mechanism, requirements, and methods to perform inference using a subset of the validation set.
Create an evaluation mechanism on the challenge website
Create a sample submission docker for the test phase and test it on the challenge website
Create documentation to publish in phase 2 of the challenge.
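A sample submission container along these lines might look like the following sketch. The base image, file names, and the /input → /output convention are all placeholders that the challenge organizers would fix in the participant guidelines.

```dockerfile
# Hypothetical submission container: organizers mount the hidden test set
# at /input and collect predictions from /output.
FROM python:3.10-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY inference.py model_weights.pt ./

# Run the participant's inference script over the mounted test set
ENTRYPOINT ["python", "inference.py", "--input", "/input", "--output", "/output"]
```

The evaluation service would then run something like `docker run --rm -v /data/test:/input -v /data/pred:/output submission`, keeping the test set inside the organizers' infrastructure.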
No response
No response
Segmentation / Classification / Landmarking
In-person
Automated B-line detection in lung ultrasound videos has been demonstrated before, most recently by Lucassen 2023. However, acquiring the many labels necessary can be a resource-intensive process, limited by the availability of expert clinicians capable of producing high-quality labels. Recently, gamified crowdsourcing with a new quality control mechanism and built-in learning for labelers has been demonstrated to be capable of producing annotations on lung ultrasound videos comparable in quality to expert clinicians (as well as analogous results for EEG and skin lesion classification tasks), which can greatly shorten the time required to acquire high-quality labels for model training. Though these crowd labels have been shown to have expert-level quality, it has yet to be demonstrated whether crowd-produced labels are capable of training high-performance models.
No response
Infrastructure
In-person
Over a span of more than ten years, 3D Slicer has paved the way for cutting-edge biomedical research. Its unprecedented success is pushing the frontiers of research, leading numerous research groups and corporations to recognize 3D Slicer as a credible software for designing medical devices. These devices not only have the potential to support routine clinical workflows but may also evolve into marketable products. Although 3D Slicer's development has been largely research-focused, its modular architecture fosters the creation of industrial prototypes.
Systole OS envisions a harmonious integration of 3D Slicer and its associated software, such as the Plus Toolkit, MONAI Label, and others, within a freely accessible, open-source operating system based on GNU/Linux. This aims to facilitate the development and deployment of medical devices.
The following are key features we aim to leverage with Systole OS:
State-of-the-Art Software: Built on the foundation of Gentoo Linux, Systole OS operates on a rolling-release model, ensuring continual, up-to-the-minute updated software.
Easy Installation of Slicer: With Systole OS, installing Slicer and all its necessary dependencies is as easy as executing a single command (e.g., 'emerge sci-medical/slicer').
Modular Slicer: The core installation of 3D Slicer will only encompass essential components to run the application, enabling additional modules to be installed separately as needed (e.g., 'emerge slicer-modules/models').
Source-Based Distribution: Systole OS is derived directly from source code, allowing all packages to be built from source. This gives users the flexibility to make decisions at compile-time, leading to:
Extensibility: Systole OS utilizes the Gentoo overlay system, offering the ability to expand the system with your personal overlay or supersede packages supplied by Systole.
Updating Packages: Ensure the timely update and maintenance of existing packages, targeting specifically the release Slicer-5.3.0.
Integration and Testing Infrastructure: Develop a robust infrastructure that supports seamless integration and rigorous testing to maintain the highest quality standards.
Generation of Containers and VMs: Establish a systematic approach for generating containers and Virtual Machines (VMs) that can effectively support both development and testing processes.
Package Assessment: Review the status of existing packages and identify necessary updates for the release Slicer-5.3.0.
Update Planning: Develop a plan and timeline for implementing the necessary updates.
Update Implementation: Carry out the plan to update packages in line with the established timeline.
Kubernetes Infrastructure Setup: Begin the process of setting up a Kubernetes-based infrastructure to support our integration and testing needs.
Testing Protocol Development: With the Kubernetes infrastructure ready, establish systematic protocols for integration and testing to ensure high quality standards.
Container and VM Generation: Implement a systematic approach for creating containers and Virtual Machines (VMs) for development and testing, ensuring this approach is scalable as needed.
No response
No response
Yes. In fact, the clinical problem is that in our context we often receive patients who have repeated infectious episodes. When assessing the severity of the pulmonary involvement on the X-ray, it is difficult for us to distinguish old lesions from new ones, especially since patients often lose their previous images.
Thus, with 3D Slicer, by making a comparative study of the old and recent lesions, one could create an extension capable of coloring the zones differently based on thresholding.
This could be of great use to us!
Segmentation / Classification / Landmarking
This project aims to investigate the application of itk-elastix (a Python wrapping of Elastix) for image registration in combination with MONAI Label for segmentation/classification. Depending on time and people availability, we will work on one or more sub-projects.
Initial sub-project:
We will start by training a single-modality MONAI Label model on Elastix-aligned brain images (T1, T2, FLAIR, etc.) using SynthSeg as the source of annotations. SynthSeg is a TensorFlow-based deep learning segmentation tool for brain MRIs. It consists of a generative network that produces synthetic images and a 3D U-Net trained to perform the segmentation. The only input (training data) is the training labels, so no real images are used.
We will use SynthSeg to produce annotations as “ground truth” on a publicly available dataset like BRATS (multimodal + non-healthy brains) or OASIS (temporal/monomodal + healthy brains). Elastix will be used for the co-registration of the different modalities or temporal images and achieve segmentation via registration.
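The "segmentation via registration" idea can be illustrated with a deliberately tiny numpy toy: recover the transform that aligns a moving image to a fixed one, then apply that same transform to the moving image's labels. Integer circular shifts stand in for the full deformable transforms that itk-elastix would actually estimate.

```python
import numpy as np

def register_shift(fixed, moving, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift that best aligns
    `moving` to `fixed` (a toy stand-in for Elastix registration)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(moving, (dy, dx), axis=(0, 1)) - fixed) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def propagate_labels(labels, shift):
    """Apply the recovered transform to carry atlas labels onto the fixed image."""
    return np.roll(labels, shift, axis=(0, 1))
```

In the real sub-project, SynthSeg labels play the role of the atlas labels and Elastix provides the transform between modalities or time points.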
Other possible sub-projects:
No response
Would it help to create a .github/PULL_REQUEST_TEMPLATE.md
?
See https://github.com/blog/2111-issue-and-pull-request-templates
VR/AR and Rendering
The goal of this project is to enable the development of advanced 3D rendering techniques in Slicer: facilitating access to GPU shaders and enabling GPU-based filtering by improving shader access and multipass rendering in VTK and Slicer. The PRISM module in Slicer will serve as a test environment for the new capabilities.
PRISM has a significant amount of unused and/or legacy code written for Slicer 4.11 that is no longer used. The goal of the project is to simplify the PRISM volume renderer, making it easier to work with, and to remove as many bugs as possible.
https://projectweek.na-mic.org/PW35_2021_Virtual/Projects/PRISM_volume_rendering/
VR/AR and Rendering
The AMPSCZ project will have its first public data release in July, and we want to finalize documentation and "customer-facing" material.
No response
No response
The DICOM Segmentation format is used to store image segmentations in DICOM format. Using DICOM Segmentations, which use the DICOM information model and can be communicated over DICOM interfaces, has many advantages when it comes to deploying automated segmentation algorithms in practice. However, DICOM Segmentations are criticized for being inefficient, both in terms of their storage utilization and in terms of the speed at which they can be read and written. This is in comparison to other widely-used segmentation formats within the medical imaging community such as NIfTI and NRRD.
While improvements in tooling may alleviate this to some extent, there appears to be an emerging consensus that changes to the standard are also necessary to allow DICOM Segmentations to compete with other formats. One of the major reasons for poor performance is that in segmentation images containing multiple segments (sometimes referred to as "classes"), each segment must be stored as an independent set of binary frames. This is in contrast to formats like NIfTI and NRRD that store "labelmap" style arrays in which a pixel's value represents its segment membership and thus many (non-overlapping) segments can be stored in the same array. While the DICOM Segmentation has the advantage that it allows for overlapping segments, in my experience the overwhelming majority of segmentations consists of non-overlapping segments, and thus this representation is very inefficient when there are a large number of segments.
The goal of this project is to gather a team of relevant experts to formulate changes to the standard to address some issues with DICOM Segmentation. I propose to focus primarily on "labelmap" style segmentations, but I am open to other suggestions for focus.
The specific goals would be to complete or make significant progress on the following:
Open questions:
Other possible (alternative) topics:
Relevant team members: @fedorov @dclunie @pieper (@hackermd ) please give your feedback to help shape this project!
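The storage argument above can be made concrete with a small numpy demonstration comparing in-memory sizes. This compares uncompressed arrays only; actual DICOM Segmentations pack binary frames to 1 bit per pixel and may compress them, but the linear growth with segment count remains.

```python
import numpy as np

rng = np.random.default_rng(0)
n_segments = 10
frame_shape = (512, 512)

# "Labelmap" style (NIfTI/NRRD): one array, pixel value = segment membership.
# Its size is independent of the number of (non-overlapping) segments.
labelmap = rng.integers(0, n_segments + 1, size=frame_shape, dtype=np.uint8)

# Current DICOM Segmentation: one binary frame per segment.
binary_frames = np.stack([(labelmap == s) for s in range(1, n_segments + 1)])

print(labelmap.nbytes)       # 262144 bytes regardless of segment count
print(binary_frames.nbytes)  # grows linearly with the number of segments
```

With 10 segments the per-segment binary representation is already an order of magnitude larger than the labelmap, before considering the per-frame metadata overhead.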
Segmentation / Classification / Landmarking
This project focuses on importing whole slide image (WSI) histology images and trained deep learning models into the Imaging Data Commons for access by others. We have developed tissue-level segmentation models for detecting subtypes of rhabdomyosarcoma (RMS) in whole slides. Our project is releasing WSIs and the corresponding models trained on the slide images.
This project will test reading DICOM-WSI imagery (including compression) and focus on how to write out model segmentation results as DICOM-WSI annotations for upload to IDC. We also have classification and regression models, so we need to decide how to write non-imagery classification results as DICOM, as well.
No response
models wrapped in a girder3 web application: https://github.com/knowledgevis/rms_infer_web
Segmentation / Classification / Landmarking
The goal of this project is to facilitate selection and independent manipulation of points in a list.
This can currently be done in the Markups module by copying the points to a new list, translating/rotating the points, and copying the point positions back to the original node. However this process is tedious and error-prone.
The initial motivation for this project was to simplify creation of synthetic data from landmark transforms by transforming an original set of landmarks into the target landmark set.
Two possible solutions have been discussed for the implementation:
No response
Segmentation / Classification / Landmarking
The DICOM Segmentation format is used to store image segmentations in DICOM format. Using DICOM Segmentations, which use the DICOM information model and can be communicated over DICOM interfaces, has many advantages when it comes to deploying automated segmentation algorithms in practice. However, DICOM Segmentations are criticized for being inefficient, both in terms of their storage utilization and in terms of the speed at which they can be read and written. This is in comparison to other widely-used segmentation formats within the medical imaging community such as NIfTI and NRRD.
While improvements in tooling may alleviate this to some extent, there appears to be an emerging consensus that changes to the standard are also necessary to allow DICOM Segmentations to compete with other formats. One of the major reasons for poor performance is that in segmentation images containing multiple segments (sometimes referred to as "classes"), each segment must be stored as an independent set of binary frames. This is in contrast to formats like NIfTI and NRRD that store "labelmap" style arrays in which a pixel's value represents its segment membership and thus many (non-overlapping) segments can be stored in the same array. While the DICOM Segmentation has the advantage that it allows for overlapping segments, in my experience the overwhelming majority of segmentations consists of non-overlapping segments, and thus this representation is very inefficient when there are a large number of segments.
The goal of this project is to gather a team of relevant experts to formulate changes to the standard to address some issues with DICOM Segmentation. We will focus primarily on "Labelmap" style segmentations and issues surrounding frame compression. Other objectives for further discussion include simplifying per-frame metadata. Although we do not speak for the DICOM standards committee, we hope to put forward a complete proposal that can be considered by the committee. Ideally, the proposal will be backed by multiple interoperable implementations of the proposed objects and demonstrations of their value in reducing object size and complexity.
The proposal for this project received a considerable amount of constructive feedback from the community: #643
No response
No response
Infrastructure
Slicer Pipelines is a framework to support the creation of workflows (pipelines) inside of Slicer. It allows users to chain together a variety of Slicer operations with pipeline support and create a module that can then be executed on its own. Pipelines v2 is based on the work that Connor and others did with the ParameterWrapper.
No response
Slicer Pipelines Module Repository: https://github.com/KitwareMedical/SlicerPipelines
Project Week 36: https://projectweek.na-mic.org/PW36_2022_Virtual/Projects/SlicerPipelines/
Project Week 38: https://projectweek.na-mic.org/PW38_2023_GranCanaria/Projects/SlicerPipelines/
Cloud / Web
Loading compressed multiframe DICOM images as a whole causes frequent browser crashes, particularly on Windows machines. This issue arises because the large DICOM files exceed the browser's memory capacity.
The browser's rendering engine attempts to load the entire file into memory; given the significant size of these images, the browser can quickly exhaust its allocated memory, leading to crashes or unresponsive behavior.
This issue affects both ePAD and OHIF with the latest WADO-loader version.
Initiate a discourse about the methodologies for saving, storing, and reading DICOM data, and explore strategies for optimizing the handling of compressed multiframe images to achieve enhanced efficiency and avoid browser crashing.
Instead of loading the entire DICOM file at once, the image loading process can be modified to load the image in smaller chunks or frames progressively. This approach may allow the browser to handle smaller portions of the image, reducing the memory burden and enhancing overall stability.
We attempted to adapt a solution inspired by the PR linked below. That solution specifically addresses uncompressed images. We tried a similar method to handle compressed images within the dicom-parser library; unfortunately, the attempted solution did not yield the desired outcome.
PR link: cornerstonejs/cornerstoneWADOImageLoader#454 (comment)
Ticket link: cornerstonejs/dicomParser#248
Unfortunately, the ultrasound images are not de-identified, so we cannot provide sample data yet. We are working on getting a dataset.
Related libraries:
https://github.com/cornerstonejs/cornerstoneWADOImageLoader
https://github.com/cornerstonejs/dicomParser
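The progressive-loading principle is simple to state in code: read one frame at a time so only one frame is resident in memory. The Python sketch below assumes a fixed frame size for illustration; a real implementation would live in the WADO loader in JavaScript, and compressed DICOM frames vary in size, so it would need the encapsulated pixel data's Basic Offset Table to locate frame boundaries.

```python
def iter_frames(path, header_size, frame_size, n_frames):
    """Yield frames of a (hypothetical) fixed-frame-size multiframe file one
    at a time, so only `frame_size` bytes are resident at once instead of
    the entire file."""
    with open(path, "rb") as f:
        f.seek(header_size)
        for _ in range(n_frames):
            yield f.read(frame_size)
```

A viewer consuming this generator can decode and render frames as they arrive, bounding peak memory use by the size of a single frame.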
Infrastructure
In-person
The Project Week team will continue to make improvements to the project page generation process
No response
No response
No response
No response
Quantification and Computation
The AMPSCZ project allows consortium researchers to access an AWS workspaces virtual desktop with direct access to the AMPCZ data lake hosted at the NIMH data archive (NDA).
This project will consist of generating R and Python notebooks to illustrate how to access and analyze datasets using this collaboration space.
No response
No response
Segmentation / Classification / Landmarking
This project aims to create tutorials on how to work with DICOM annotations in pathology whole-slide images (WSIs).
At first, we will focus on nuclei annotations stored as DICOM Microscopy Bulk Simple Annotations and compute nuclei density (cellularity) on tile-level from them. The computed cellularity values are then stored as DICOM parametric maps.
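The tile-level cellularity computation can be sketched in a few lines of numpy once the nuclei centroids have been extracted from the DICOM Microscopy Bulk Simple Annotations; the function name and the (x, y) centroid convention here are illustrative.

```python
import numpy as np

def tile_density(centroids, slide_shape, tile_size):
    """Count nuclei centroids (x, y) per tile to produce a tile-level
    cellularity map; the resulting values could then be encoded as a
    DICOM Parametric Map."""
    xs, ys = np.asarray(centroids, dtype=float).T
    nx = int(np.ceil(slide_shape[0] / tile_size))
    ny = int(np.ceil(slide_shape[1] / tile_size))
    counts, _, _ = np.histogram2d(
        xs, ys,
        bins=[nx, ny],
        range=[[0, nx * tile_size], [0, ny * tile_size]],
    )
    return counts
```

Dividing the counts by the tile area (in mm², from the WSI pixel spacing) would convert them to densities.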
No response
No response
Cloud / Web
Online
The Institute of Cancer Research (ICR) has created PolySeg-WASM, an extended WASM wrapper for the PerkLab/PolySeg library, including C++ code repurposed from Slicer and SlicerRT.
In last year's project we created the contour segmentation representation for the Cornerstone3D library; this year we want to use PolySeg to convert the contours to closed surfaces.
The ICR repository does the job. However, the bundle is huge (3 MB), which is not optimal for web applications. This project aims to find out how to reduce the bundle size by choosing the VTK dependencies carefully.
3D Slicer is built with a powerful core to load, transform, store, and manage medical images and derived datasets. Slicer has a catalog of loadable extensions that assist with or automate task-specific workflows. Slicer's web API provides remote access to invoke many of the processing steps which are available through its very complete user interface.
The recent explosion of generative LLMs (large language models) from the AI community has demonstrated that these language models can, in some cases, translate task or problem descriptions into sequences of operations. Wouldn't it be powerful if 3D Slicer could be verbally instructed to perform operations or process datasets as requested? Theoretically, an embedded LLM could be trained on Slicer's modules, including under what circumstances the modules could be applied to transform a Slicer scene as needed to solve a problem presented by the user.
As one example, in Operating Theaters during surgical procedures, the Slicer user interface is hard or impossible to access due to sterility restrictions and other factors. It would be helpful if clinicians could control Slicer's functions through an alternative method than the interactive user interface. For example, "Let me see the lung lesions more clearly" could be translated into increased transparency of the lung segmentation and an orientation repositioning to make a lesion segmentation visible in-situ.
A goal of this project proposal would be to schedule a meeting during Project Week to discuss this idea, assess the level of interest in the Slicer community, discuss early technical approaches, and decide who might be interested in working together to seek funding to pursue this together. Both clinicians with a problem to solve and AI technicians would be invited to participate.
VR/AR and Rendering
We are working on developing a general-purpose model registration tool in Slicer. So far, I have developed a simple test module (https://github.com/chz31/registration_test) using rigid registration functions (RANSAC + ICP) from Open3D and the new ITK-based ALPACA module. This allows people to test registration for purposes such as ALPACA automated landmarking.
We are thinking about expanding this module into its own system for other purposes related to model registration. One purpose is to register and align models that represent different parts of an object with overlapping areas, and fuse them together. For example, it would allow aligning and fusing models acquired from different angles, such as different parts of an object acquired by photogrammetry techniques. It would also enable virtual fossil reconstruction, which is usually done using commercial software such as Geomagic Studio.
These are models acquired by photogrammetry from two angles. The yellow one has no top, and the red one has no bottom. Rigid registration from Open3D can align them quite well, though not perfectly.
A sample virtual reconstruction in Geomagic Studio. The skull is missing a part on the right side. The yellow part is the mirror image of the counterpart on the left side.
Current testing version is here: https://github.com/chz31/registration_test. It uses rigid registration functions (RANSAC + ICP) from Open3D and new ITK-based ALPACA module.
ALPACA module (including the ITK version) repository: https://github.com/SlicerMorph/SlicerMorph/tree/master/ALPACA
ALPACA tutorial: https://github.com/SlicerMorph/Tutorials/tree/main/ALPACA
Open3D rigid registration utilized in ALPACA: http://www.open3d.org/docs/release/tutorial/pipelines/global_registration.html
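For intuition about the rigid-alignment step that RANSAC and ICP rely on: given point correspondences, the optimal rotation and translation have a closed-form solution (the Kabsch algorithm). The numpy sketch below shows that core step under the assumption of known correspondence; Open3D's pipelines estimate the correspondences themselves.

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch algorithm: least-squares rotation and translation mapping
    `source` (N x 3) onto `target`, assuming known point correspondence
    (the step that ICP iterates after re-estimating correspondences)."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))             # guard against reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return rot, tgt_c - rot @ src_c
```

Applying `rot @ p + t` to every source point yields the aligned model, e.g. snapping the "no top" scan onto the "no bottom" scan over their overlapping region.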
Segmentation / Classification / Landmarking
Imaging Data Commons (IDC) provides publicly available cancer imaging data.
Previous work (IDC prostate segmentation, NLST body part regression) demonstrated inference and analysis of AI algorithms on IDC data through several use cases.
Downloading IDC data, conversion between imaging file standards, cloud environment setup, and image pre-processing steps were studied through these inference and analysis use cases.
During this project week, our goal is to develop use cases for training AI algorithms on IDC data. We welcome any Project Week participants interested in leveraging IDC data for training (or evaluating) AI algorithms to collaborate with us!
No response
Infrastructure
The goal of this project is to empower the biomedical research community in Latin America by localizing 3D Slicer to Spanish and Portuguese and improving tutorial localization infrastructure.
Slicer Internationalization Breakout session:
Monday, June 12, 2-4 pm EST
Daily Slicer internationalization sessions with members of the Slicer community
Tuesday, June 13, 9-11 am EST
Wednesday, June 14, 10-11 am EST
Thursday, June 15, 10-11 am EST
No response
No response
No response
The new venue info is helpful, but could someone add a link with the actual location populated so that people can easily get directions? @drouin-simon ?
VR/AR and Rendering
Stuff
No response
No response
IGT and Training
NousNav is an ongoing project led by Dr. Alex Golby at Brigham and Women's Hospital to build and disseminate a low-cost neuronavigation system. Built as a 3D Slicer Custom App, NousNav uses low cost optical tracking (Optitrack Duo) in combination with custom optically-tracked tools and reference arrays to facilitate patient registration, procedure planning, and navigation.
The system is being continually updated based on user feedback. An important next step in development is the integration of tracked ultrasound data.
No response
No response
No response
Quantification and Computation
Put together code that 1) performs facial expression feature extraction for video interviews stored on a data aggregation server, and 2) transfers them to a local directory. It would be based on existing scripts for facial expression feature extraction and an existing data management tool.
No response
Facial expression code: https://github.com/ecastrow/face-feats
Data Management tool: https://github.com/AMP-SCZ/lochness
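The transfer step in part 2) could be sketched as below, assuming the extracted features land as CSV files on a mounted server path. The file-name pattern and function name are illustrative assumptions, not part of face-feats or lochness.

```python
from pathlib import Path
import shutil

def transfer_feature_files(server_dir, local_dir, pattern="*_face_feats.csv"):
    """Copy facial-expression feature files from a mounted server directory
    to a local directory, skipping files that were already transferred.
    Returns the names of newly copied files."""
    server_dir, local_dir = Path(server_dir), Path(local_dir)
    local_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in sorted(server_dir.glob(pattern)):
        dst = local_dir / src.name
        if not dst.exists():
            shutil.copy2(src, dst)  # preserve timestamps for provenance
            copied.append(src.name)
    return copied
```

Running it repeatedly is idempotent, so it could be scheduled as a cron job after each extraction batch.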
IGT and Training
The goal of SlicerROS2 is to provide an open-source software platform for medical robotics research. Specifically, the project focuses on architectures to seamlessly integrate a robot system with medical image computing software using two popular open-source software packages: Robot Operating System (ROS) and 3D Slicer.
No response
The National Institute of Biomedical Imaging and Bioengineering of the U.S. National Institutes of Health (NIH) under award numbers R01EB020667 and 3R01EB020667-05S1 (MPI: Tokuda, Krieger, Leonard, and Fuge). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
The National Sciences and Engineering Research Council of Canada and the Canadian Institutes of Health Research, the Walter C. Sumner Memorial Award, the Mitacs Globalink Award and the Michael Smith Foreign Study Supplement.
IGT and Training
Our past code for training and deploying ultrasound segmentation in real time was based on TensorFlow. Example project:
https://youtu.be/WyscpAee3vw
The goal for this project week is to provide a new open-source implementation using PyTorch and modern AI tools like MONAI and wandb. A Slicer module will also be provided to deploy trained AI on recorded or live ultrasound streams.
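A minimal sketch of what the PyTorch training step could look like. The tiny network and hand-rolled soft Dice loss below are stand-ins for illustration only; in the actual project a MONAI UNet and MONAI's DiceLoss would take their place, and all names here are assumptions.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for a MONAI UNet: 1-channel ultrasound in, 1-channel mask out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common choice for segmentation (MONAI ships one)."""
    pred = torch.sigmoid(pred)
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def train_step(model, optimizer, images, masks):
    """One optimization step; returns the scalar loss for logging (e.g. to wandb)."""
    model.train()
    optimizer.zero_grad()
    loss = dice_loss(model(images), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same `train_step` signature would work unchanged once the toy network is swapped for a MONAI model and the loop is wrapped with wandb logging.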
No response
No response
Hi project-week community,
We have discussed a bit with Andres Diaz-Pinto (@diazandr3s) on possible projects for the upcoming Project Week that fall under the general title of 3D Medical Image Registration and Segmentation using Elastix and MONAI Label.
Here are our tentative proposals for the moment:
Train a single-modality MONAI Label model on Elastix-aligned brain images (T1, T2, FLAIR, etc.) using SynthSeg (https://github.com/BBillot/SynthSeg) as the source of annotated datasets - for normal brains
SynthSeg is a TensorFlow-based deep learning segmentation tool for brain MRIs. It consists of a generative network that produces synthetic images and a 3D U-Net trained to perform the segmentation. The only input (training data) is the training labels, so no real images are used.
We will use SynthSeg to produce annotations as “ground truth” on a publicly available dataset like BraTS (multimodal, non-healthy brains) or OASIS (temporal/monomodal, healthy brains). Elastix will be used to co-register the different modalities or time points and to achieve segmentation via registration.
Train a MONAI Label model using the raw BraTS dataset (https://www.kaggle.com/competitions/rsna-miccai-brain-tumor-radiogenomic-classification/data). Elastix will be used to co-register the 4 modalities.
This is a classification task. Hypothesis: registration with Elastix might improve classification accuracy, so we can compare classification results with and without pre-alignment.
Extend the whole brain segmentation model available in the Model Zoo (https://github.com/Project-MONAI/model-zoo/tree/dev/models/wholeBrainSeg_Large_UNEST_segmentation)
The data used for training were affinely registered to the MNI305 space. Hence, elastix can also be used to register any data used for inference into the same space. We could also store all the resulting transform parameters so that users could do the resampling directly without registering again (this holds for the training data; unseen data used for inference would still need to be registered).
Compare registration performance between cross-modal registration (CT-MRI) versus intra-modal registration via synthesised MRI (MRI_syn - MRI). MONAI for the synthesis and elastix for the registration. What would a suitable dataset be?
Train a MONAI Label model for automatic landmark identification in e.g. lung images (dataset: https://med.emory.edu/departments/radiation-oncology/research-laboratories/deformable-image-registration/index.html). Landmarks can either be used to assist registration with elastix, or elastix can be used to validate landmark accuracy. 3D Slicer can be used to visualize the landmarks and ease the qualitative evaluation.
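For the co-registration steps in the proposals above, elastix is configured through a plain-text parameter file. A minimal sketch for generating a rigid, mutual-information configuration suitable for cross-modality alignment follows; the parameter names are standard elastix components, but the values are common defaults, not tuned recommendations.

```python
def elastix_rigid_params(resolutions=4, iterations=500):
    """Build a minimal elastix parameter file (as text) for rigid,
    cross-modality registration using mutual information."""
    params = {
        "Registration": '"MultiResolutionRegistration"',
        "Transform": '"EulerTransform"',                      # rigid (rotation + translation)
        "Metric": '"AdvancedMattesMutualInformation"',        # robust across modalities
        "Optimizer": '"AdaptiveStochasticGradientDescent"',
        "Interpolator": '"LinearInterpolator"',
        "ResampleInterpolator": '"FinalBSplineInterpolator"',
        "NumberOfResolutions": str(resolutions),
        "MaximumNumberOfIterations": str(iterations),
        "AutomaticTransformInitialization": '"true"',
    }
    # elastix expects one "(Name value)" entry per line
    return "\n".join(f"({k} {v})" for k, v in params.items())
```

With itk-elastix, text like this can be loaded into a ParameterObject and reused for every modality pair, keeping the pipeline reproducible.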
Looking forward to your feedback, declaration of interest, ideas or proposals. Projects can be adjusted, added or removed. Thanks! 🙏
VR/AR and Rendering
In-person
Stuff!
No response
No response
I would like to work on a Slicer extension that allows users to browse and download content from NCI Imaging Data Commons. It would work similarly to the TCIABrowser extension. To eliminate the need to log in and query IDC via BigQuery SQL, I was planning to use the IDC API. Once the file URLs are identified, I would download them using s5cmd, which can be installed by downloading a single executable precompiled for all platforms.
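The download step could be sketched as follows, assuming the IDC API has returned a list of s3:// file URLs. s5cmd's `run` subcommand executes a manifest of copy commands, and `--no-sign-request` works for IDC's public buckets; the function names and manifest layout are illustrative assumptions.

```python
from pathlib import Path

def write_s5cmd_manifest(file_urls, dest_dir, manifest_path="manifest.s5cmd"):
    """Write an s5cmd 'run' manifest: one 'cp' command per file URL.
    file_urls are assumed to be s3:// URLs identified via the IDC API."""
    lines = [f'cp "{url}" "{Path(dest_dir)}/"' for url in file_urls]
    Path(manifest_path).write_text("\n".join(lines) + "\n")
    return manifest_path

def s5cmd_command(manifest_path):
    """Command line to execute the manifest, e.g. via subprocess.run()."""
    return ["s5cmd", "--no-sign-request", "run", str(manifest_path)]
```

A manifest lets s5cmd parallelize the transfers itself, which is the main reason it outperforms per-file download loops.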
IGT and Training
Slicer-Liver is an advanced 3D Slicer extension developed for liver therapy planning. The extension currently offers essential features for liver resection planning and accurate computation of vascular territories. As part of an ongoing project, our aim is to further enhance the existing functionalities and introduce new tools for volumetry computation. Our objective is to provide a comprehensive and user-friendly solution for liver therapy planning within the Slicer platform.
No response
Infrastructure
In-person
3D Slicer is built with a powerful core to load, transform, store, and manage medical images and derived datasets. Slicer has a catalog of loadable extensions that assist with or automate task-specific workflows. Slicer's web API provides remote access to invoke many of the processing steps that are available through its comprehensive user interface.
The recent explosion of generative LLMs (large language models) from the AI community has demonstrated that these language models can, in some cases, translate task or problem descriptions into sequences of operations. Wouldn't it be powerful if 3D Slicer could be verbally instructed to perform operations or process datasets as requested? Theoretically, an embedded LLM could be trained on Slicer's modules, including under what circumstances the modules could be applied to transform a Slicer scene as needed to solve a problem presented by the user.
As one example, in Operating Theaters during surgical procedures, the Slicer user interface is hard or impossible to access due to sterility restrictions and other factors. It would be helpful if clinicians could control Slicer's functions through an alternative method than the interactive user interface. For example, "Let me see the lung lesions more clearly" could be translated into increased transparency of the lung segmentation and an orientation repositioning to make a lesion segmentation visible in-situ.
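As a toy illustration of that translation step (not a proposed implementation), a spoken request could be routed to a module operation as below. The module and action names are invented for this sketch; a real system would have an LLM choose among Slicer module descriptions rather than keyword rules.

```python
# Each intent: a keyword set and the (module, action, args) it maps to.
# All operation names here are hypothetical placeholders.
INTENTS = [
    ({"see", "clearly"},
     ("SegmentationDisplay", "set_opacity", {"segment": "lung", "opacity": 0.3})),
    ({"rotate", "view"},
     ("ViewControllers", "rotate_camera", {"degrees": 30})),
]

def route_request(text):
    """Return the best-matching operation for a free-text request,
    or None if nothing matches."""
    words = set(text.lower().replace(",", " ").split())
    best, best_overlap = None, 0
    for keywords, op in INTENTS:
        overlap = len(keywords & words)
        if overlap > best_overlap:
            best, best_overlap = op, overlap
    return best
```

Even this trivial router shows the shape of the problem: the hard part is not parsing, but describing each module's preconditions so a model knows when an operation applies to the current MRML scene.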
A goal of this project proposal is to schedule a meeting during Project Week to discuss this idea, assess the level of interest in the Slicer community, discuss early technical approaches, and decide who might be interested in working together to seek funding to pursue this together. Both clinicians with a problem to solve and AI technicians are invited to participate.
No response
Hugging Face has a new API called "Agents" that is designed to use tools according to their descriptions of the I/O they handle. The Agent API puts together a workflow of tools to accomplish the user's request. This is not exactly what I was thinking, as there are issues related to how to identify and return a changed MRML scene, but it inspired my thinking somewhat: https://huggingface.co/docs/transformers/transformers_agents.
Work that seems more directly towards a way to invoke Slicer modules via API is Gorilla, a LLAMA model fine-tuned to invoke external APIs to accomplish a requested task: https://github.com/ShishirPatil/gorilla. I just started reading the paper referenced on the repository site.
Here's a related post: https://nickarner.com/notes/llm-powered-assistants-for-complex-interfaces-february-26-2023/
Somewhat related development applied to selection of data in IDC using LLM: https://discourse.canceridc.dev/t/text2cohort-a-new-llm-toolkit-to-query-idc-database-using-natural-language-queries/.
The GuardRails repository provides validation of LLM output. This might help enforce Slicer API structure: https://github.com/ShreyaR/guardrails
Nvidia NeMo is also a potentially useful tool in this domain.
I plan to create tutorials on how to work with DICOM annotations in pathology whole-slide images (WSIs). Two DICOM annotation object formats, Microscopy Bulk Simple Annotations and Segmentations, both currently under development, will be used, and I will try to eventually summarize the advantages and disadvantages of each format for different use cases.
Most likely, @CPBridge and @dclunie, who develop the DICOM annotations, will support the project with their expertise, and there might be some synergies with the projects of @maxfscher and @curtislisle.
Infrastructure
In-person
ChatIDC is a natural language interface tool for exploring the rich ecosystem of DICOM tags and IDC. It is intended to filter and download highly specific cohorts of imaging data and discover relevant information pertaining to the DICOM standard, IDC documentation, and data that consists of DICOM tags.
The goal of this project is to reduce some technical barriers for clinical researchers to filter and download highly specific cohorts of imaging data. As a result, the project is poised to make the retrieval of data more efficient and encourage the widespread adoption of the platforms in which it is integrated.
For IDC, you can currently filter cohorts by some of the most common tags with sliders and buttons, but this approach reaches its limit when a researcher has to gather data that is highly tailored to their use case, which may be highly compositional and rely on more esoteric DICOM tags. When the number of filter parameters is too large, manual selection and query construction may become infeasible for anyone who is not an expert in both DICOM and SQL.
We will prepare a list of queries to motivate and test the development of the project. The list will contain “free text request” and the matching SQL query. We will work with IDC/SQL domain “experts” to confirm that SQL queries on this list are both syntactically and semantically correct. This list will be shared at the end of the project week.
We will implement semantic searching for DICOM tags based on the user's input that is then used for the pretext in the language model. We will work with IDC/DICOM experts to confirm that this curated list is meaningful and comprehensive. This list will be shared at the end of the project week.
We plan to document our current experience and recommendations to what prompts users should use to improve the quality of the responses generated by the existing LLM interfaces.
We will document our experience observing the syntactic accuracy of generated queries to motivate future development (i.e., what worked, what didn't work, what can be fixed with refinements to the prompt, and what can be improved with the approach used in the text2cohort project).
We would like to conduct interviews with the AI developers attending project week to gather the list of requests/ideas for queries that the users would like to see addressed.
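The semantic DICOM tag search mentioned above could be prototyped as a simple word-overlap lookup over tag descriptions, as sketched below; a real implementation would use embeddings, and the tag list here is a tiny illustrative subset of the standard.

```python
# Minimal subset of real DICOM tags, keyed by (group,element).
DICOM_TAGS = {
    "(0008,0060)": "Modality",
    "(0018,0050)": "Slice Thickness",
    "(0010,1010)": "Patient Age",
    "(0008,103E)": "Series Description",
}

def search_tags(query, top_k=2):
    """Return up to top_k (tag, name) pairs whose descriptions share
    words with the user's query; these would seed the LLM prompt."""
    q = set(query.lower().split())
    scored = []
    for tag, name in DICOM_TAGS.items():
        overlap = len(q & set(name.lower().split()))
        if overlap:
            scored.append((overlap, tag, name))
    scored.sort(reverse=True)
    return [(tag, name) for _, tag, name in scored[:top_k]]
```

Restricting the prompt to the retrieved tags keeps the context window small and steers the model away from hallucinating nonexistent tag names.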
No response
No response
No response
IGT and Training
The main objective of the project is to integrate the 3D Systems Touch haptic device into 3D Slicer through an OpenIGTLink connection with the Unity platform. Slicer To Touch is the 3D Slicer module that contains the scene with the 3D models of the spine and the needle. This module has an interface where the user can configure the number, position, and value of the resistances to be exerted by the haptic device. These values will be included in a .json file that will later be transferred to Unity, which will process this data and configure the forces of the haptic device within the Unity environment. Finally, through the OpenIGTLink connection bridge, a real-time connection will be created in which the transformations and the resistances of the haptic device are shared with the 3D Slicer scene.

This idea comes from a project for a lumbar puncture training system that uses this device, but with generic body tissues, locations, and thicknesses. With this module, one can segment a real patient's back with its own characteristics and practice the lumbar puncture before performing it on the patient. Given the way it works, it could also be used for other procedures.
3D Slicer Module in which you enter the resistances (left) and the .json file with the information of these resistances (right)(Picture1.png)
Unity interface after reading the information from the .json file with the resistances created in the positions and the needle as a visual mesh of the haptic device (left) and script that makes it work (right) (Picture2.png)
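The .json exchange format described above might look like the following sketch; the field names are assumptions based on the description (number, position, and value of each resistance), not the module's actual schema.

```python
import json

def make_resistance_config(resistances):
    """Serialize the resistance list written by the Slicer module and read
    by Unity. Each entry is assumed to hold a 'position' (mm, in the scene
    coordinate system) and a 'force' value (N) for the haptic device."""
    return json.dumps(
        {"count": len(resistances), "resistances": resistances},
        indent=2,
    )

# Hypothetical example: two tissue layers along the needle path.
example = make_resistance_config([
    {"position": [0.0, -12.5, 40.0], "force": 2.5},
    {"position": [0.0, -20.0, 55.0], "force": 4.0},
])
```

Keeping the schema this small means the Unity side only has to iterate over `resistances` and assign each force to the corresponding haptic anchor point.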
Real-Time integration between Microsoft HoloLens 2 and 3D Slicer. (Alicia Pose Diez de la Lastra)
https://github.com/BSEL-UC3M/HoloLens2and3DSlicer-PedicleScrewPlacementPlanning
OpenIGTLink-Unity.
https://github.com/franklinwk/OpenIGTLink-Unity
VR/AR and Rendering
The goal of this project is to extend the Volume Rendering interface to improve the convenience of multiple volume comparisons. We aim to create and test prototypes of features that will be added to the SlicerMorph extension in the short term and discuss appropriateness of integration into Slicer core.
Features to support multiple volume comparisons:
No response
No response
Quantification and Computation
This project is part of the AMP SCZ program, an initiative for early detection of risk for schizophrenia (https://www.ampscz.org).
A key goal in AMP SCZ is to predict which patients who initially present mild or sub-threshold symptoms will eventually develop psychosis. Most predictive models are based on data acquired at the first medical visit (the baseline visit). An important question is how much is gained by following patients over time (longitudinal data). In this project we will implement predictive models that make use of this longitudinal information for psychosis prediction. We will focus on implementing a class of models called "joint models", which incorporate time-varying predictors into well-known survival analyses.
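A standard way to feed time-varying predictors into a survival model is to expand each patient's follow-up into person-period records, one row per visit, which a discrete-time hazard model can then consume. The sketch below illustrates that expansion; the field names are illustrative, not the AMP SCZ schema.

```python
def person_period_rows(patient_id, visits, conversion_visit=None):
    """Expand one patient's follow-up into person-period rows.
    visits: list of (visit_index, predictor_dict) pairs in temporal order.
    The event indicator is 1 only at the visit of conversion to psychosis
    (if any); rows after the event are dropped, as in survival analysis."""
    rows = []
    for t, predictors in visits:
        if conversion_visit is not None and t > conversion_visit:
            break  # no at-risk time after the event
        rows.append({
            "id": patient_id,
            "period": t,
            "event": int(t == conversion_visit),
            **predictors,  # time-varying covariates measured at this visit
        })
    return rows
```

A logistic regression fit on such rows estimates the discrete-time hazard per visit, which is one simple baseline against which the joint models can be compared.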
No response
Cloud / Web
The 3D Slicer module mpReview (part of the SlicerProstate extension) was previously developed to assist with manual annotation of the prostate and other related anatomical regions. In previous project weeks, we streamlined the extension, updated the module to use the latest Segment Editor, and incorporated the use of Google Cloud Platform.
However, there are improvements that can be made in terms of functionality. For instance, we would like to allow the user to access multiple types of servers, and perform annotation of body parts other than the prostate.
In this project week we'll focus on using a JSON file as input, which will allow users to customize the module to their annotation needs. Our goal will be to streamline the user's interaction with the module, allowing them to annotate large datasets efficiently and quickly using either the cloud (e.g. GCP or Kaapana) or a local DICOM database.
Discuss the current multiple_server branch of the module.
Brainstorm the JSON format specification necessary for streamlining the annotation workflow. Generate examples of JSON files for different use cases: local Slicer DICOM database, Google Cloud Platform, Kaapana, etc.
Define the steps that are needed to accomplish the changes.
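As a starting point for the brainstorming, a candidate JSON configuration might look like the sketch below; every key name here is an illustrative assumption, not a finalized specification.

```python
import json

# Hypothetical mpReview configuration covering the two axes discussed
# above: which server to talk to, and what to annotate.
EXAMPLE_CONFIG = """
{
  "server": {
    "type": "gcp",
    "project": "my-project",
    "dicomStore": "projects/my-project/locations/us/datasets/d1/dicomStores/s1"
  },
  "annotation": {
    "bodyPart": "prostate",
    "segmentNames": ["WholeGland", "PeripheralZone"]
  }
}
"""

def load_config(text):
    """Parse and minimally validate a configuration; the supported server
    types mirror the targets named in the proposal."""
    cfg = json.loads(text)
    if cfg["server"]["type"] not in {"gcp", "kaapana", "local"}:
        raise ValueError("unsupported server type")
    return cfg
```

Validating the file up front lets the module fail with a clear message before any network or DICOM access is attempted.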
Current screenshot of the module
We have worked on this during multiple project weeks, PW35 and PW37. The code from PW37 is available here.
Infrastructure
Slicer Flatpak is a project focused on packaging the 3D Slicer software as a Flatpak. This initiative aims to offer an easy and universal way to install and run 3D Slicer on any Linux distribution that supports Flatpak. By doing this, it seeks to reduce installation complexities and improve compatibility across different systems. The distribution of 3D Slicer as a Flatpak has several potential benefits.
The convenience of having a 3D Slicer Flatpak has long been discussed on the 3D Slicer Discourse platform (https://discourse.slicer.org/t/interest-to-create-flatpak-for-3d-slicer-have-issue-with-guisupportqtopengl-not-found/16532). Soon after PW38, we started a renewed discussion on the topic and a new initiative to make a 3D Slicer Flatpak happen. So far, our efforts have focused on getting a first feasible 3D Slicer Flatpak (https://github.com/RafaelPalomar/Slicer-Flatpak/tree/feature/slicer-flatpak-generator and https://github.com/RafaelPalomar/org.slicer.Slicer/tree/development). With this project we want to consolidate this effort and discuss the potential distribution of the 3D Slicer Flatpak.
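For discussion, a flatpak-builder manifest for Slicer might be structured as below; the runtime choice, version, and build step are illustrative assumptions for this sketch, not the actual manifest from the repositories above.

```yaml
# Sketch of a flatpak-builder manifest for 3D Slicer (illustrative only).
app-id: org.slicer.Slicer
runtime: org.kde.Platform        # Qt-based runtime, a natural fit for Slicer
runtime-version: '5.15-22.08'
sdk: org.kde.Sdk
command: Slicer
finish-args:
  - --share=network              # extension catalog, sample data
  - --socket=x11
  - --device=dri                 # GPU access for volume rendering
  - --filesystem=home            # load/save local datasets
modules:
  - name: Slicer
    buildsystem: simple
    build-commands:
      - ./install-slicer.sh      # hypothetical install step
```

The `finish-args` sandbox permissions are where most of the discussion would likely land, since Slicer needs GPU, network, and broad filesystem access by default.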
No response
No response
It seems that you want to use light-the-torch after @fepegar submitted a patch, and in other projects as well. Is everything now working as intended, or is there something else that needs to be fixed on my end?
If not, I would push a release so that you can simply pip install light-the-torch.
Segmentation / Classification / Landmarking
- Pape Mady Thiao (École militaire de santé de Dakar, Senegal)
The objective is to create an extension capable of identifying lung lesions of different ages following repeated infections.
The extension would be used in pulmonology to correlate recent symptomatology with an X-ray image that also shows findings related to old infections.
1. Collect radiographs from 2 groups of patients (A: with an ongoing infection; B: recovered from an infection but with sequelae visible on imaging).
2. Compare the Hounsfield units of the different lesions and define a threshold separating the 2 groups.
3. Build an extension to automate the procedure.
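Step 2 could be prototyped as a naive threshold between the two groups' mean attenuations, as sketched below; this assumes the two HU distributions are roughly symmetric and separable, and is a starting point rather than a validated method.

```python
from statistics import mean

def hu_threshold(group_a, group_b):
    """Midpoint between the mean Hounsfield units of acute lesions
    (group A) and post-infectious sequelae (group B)."""
    return (mean(group_a) + mean(group_b)) / 2

def classify(hu, threshold, acute_above=True):
    """Label a lesion by its mean HU relative to the threshold; whether
    acute lesions sit above or below it is an empirical question."""
    return "acute" if (hu > threshold) == acute_above else "sequela"
```

With more data, the midpoint could be replaced by an ROC-derived cutoff, but the module interface (threshold in, label out) would stay the same.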
I am looking for CT images of patients meeting my criteria
Not yet
No response