This site is made with Hugo (https://gohugo.io/), using the Northendlab theme (https://docs.gethugothemes.com/northendlab/).
Brainchop: In-browser 3D MRI rendering and segmentation
Home Page: https://neuroneural.github.io/brainchop/
License: MIT License
The Cortical Atlas 50 (failsafe) model fails as follows in commit b12fec8:
This is on an M2 laptop with 64 GB of RAM.
The info popup that comes up when you click on the button next to the segmentation model selection dropdown has a couple of issues:
ref: JOSS Review
I would like to use MeshNet for knee MRI bone segmentation. The training results of MeshNet using Catalyst are excellent, but I am facing challenges in implementing an efficient model for use in Brainchop. Additionally, I plan to deploy Brainchop not only on machines dedicated to deep learning but also on regular computers with integrated GPUs, 16 GB of RAM, etc. An example trained for 3 hours is shown below.
To address this, I trained the model on subvolumes. However, the PyTorch-to-TensorFlow.js conversion overloads 30 GB of memory at the ONNX stage. As a workaround, I divided the conversion into chunks, but this led to memory overloads in Brainchop, causing crashes. Consequently, I opted to process each subvolume separately and accumulate the results. Despite these efforts, I am still unable to achieve results comparable to the example models in Brainchop. Given my relative newness to this topic, I believe I may be overlooking something important. Any guidance would be appreciated.
import os

import onnx
import tensorflowjs as tfjs
import torch
from onnx2keras import onnx_to_keras

# MeshNet, fuse_bn_recursively, and fixjson_file come from the
# training code and are assumed to be importable here.


def process_subvolumes(model, subvolume, device):
    with torch.no_grad():
        subvolume = subvolume.to(device)
        output = model(subvolume)
    return output


volume_shape = [256, 256, 256]
subvolume_shape = [128, 128, 128]
n_subvolumes = 1024
n_classes = 3
atlas_classes = 104
scube = 64
model_path = '/kaggle/working/logs/best_full.pth'

device_name = "cuda" if torch.cuda.is_available() else "cpu"
device = torch.device(device_name)

meshnet_model = MeshNet(n_channels=1, n_classes=n_classes)
meshnet_model.load_state_dict(torch.load(model_path, map_location=device)['model_state_dict'])
meshnet_model.to(device)
mnm = fuse_bn_recursively(meshnet_model)
mnm.model.eval()

# Generate a random input volume
x = torch.randn(1, 1, volume_shape[0], volume_shape[1], volume_shape[2], requires_grad=True)

# Split the input volume into non-overlapping subvolumes
subvolumes = x.unfold(2, subvolume_shape[0], subvolume_shape[0]) \
              .unfold(3, subvolume_shape[1], subvolume_shape[1]) \
              .unfold(4, subvolume_shape[2], subvolume_shape[2])

# Initialize the result tensor
result = torch.zeros(1, n_classes, subvolume_shape[0], subvolume_shape[1], subvolume_shape[2])

# Process each subvolume and accumulate the results
for i in range(subvolumes.size(2)):
    for j in range(subvolumes.size(3)):
        for k in range(subvolumes.size(4)):
            print(f"processing subvolume ({i}, {j}, {k})")
            subvolume = subvolumes[:, :, i, j, k, :, :, :]
            output = process_subvolumes(mnm, subvolume, device)
            result = result + output[:, :, :subvolume_shape[0], :subvolume_shape[1], :subvolume_shape[2]]

# Export the model to ONNX in smaller chunks
chunk_size = 32  # 32 -> ~6/14 GB RAM, 64 -> ~12 GB RAM; adjust to available memory
num_chunks = volume_shape[0] // chunk_size
onnx_file_paths = []
for i in range(num_chunks):
    start_index = i * chunk_size
    end_index = min((i + 1) * chunk_size, volume_shape[0])
    subvolume = x[:, :, start_index:end_index, :, :]
    onnx_file_path = f'/kaggle/working/logs/tmp/mnm_model_chunk_{i}.onnx'
    onnx_file_paths.append(onnx_file_path)
    torch.onnx.export(mnm, subvolume.to(device), onnx_file_path, export_params=True,
                      opset_version=13, do_constant_folding=True,
                      input_names=['input'], output_names=['output'],
                      dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}})

# Merge the exported chunks into a single ONNX file
merged_onnx_file_path = '/kaggle/working/logs/tmp/mnm_model_merged.onnx'
onnx_models = [onnx.load(onnx_file_path) for onnx_file_path in onnx_file_paths]
onnx_model = onnx_models[0]
for i in range(1, num_chunks):
    onnx_model.graph.node.extend(onnx_models[i].graph.node)
    onnx_model.graph.output.extend(onnx_models[i].graph.output)
onnx.save(onnx_model, merged_onnx_file_path)

# Load the merged ONNX model and convert to Keras
loaded_onnx_model = onnx.load(merged_onnx_file_path)
k_model = onnx_to_keras(loaded_onnx_model, ['input'], name_policy='renumerate')

# Save the Keras model for TensorFlow.js
tfjs.converters.save_keras_model(k_model, '/kaggle/working')
fixjson_file('/kaggle/working/model.json', scube=scube)

# Remove temporary chunk files
for onnx_file_path in onnx_file_paths:
    os.remove(onnx_file_path)
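One thing worth checking in the loop above: it sums every subvolume's output into a single 128³ tensor rather than placing each output at its spatial location, so the accumulated result cannot reconstruct the full 256³ prediction. A minimal sketch of per-location stitching (assuming the model maps a (1, 1, s, s, s) input to a (1, n_classes, s, s, s) output; this is a suggestion, not the original code):

```python
import torch

def stitch_subvolumes(model, x, sub=128, n_classes=3, device="cpu"):
    """Run a model over non-overlapping subvolumes and write each
    output back at its original spatial location."""
    _, _, D, H, W = x.shape
    out = torch.zeros(1, n_classes, D, H, W)
    with torch.no_grad():
        for i in range(0, D, sub):
            for j in range(0, H, sub):
                for k in range(0, W, sub):
                    chunk = x[:, :, i:i+sub, j:j+sub, k:k+sub].to(device)
                    out[:, :, i:i+sub, j:j+sub, k:k+sub] = model(chunk).cpu()
    return out
```

Note that border effects at subvolume seams may still differ from running the model on the full volume, since a convolutional model sees no context across chunk boundaries.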
If you open this MRI from OpenNeuro
sub-01_ses-01_acq-MEMPRvNav_rec-RMS_T1w.nii.gz
it is not normalized to values between 0 and 255 in the Papaya viewer, as the screenshot below, taken after preprocessing, demonstrates (note that the value under the cursor is 2060):
The model still runs fine and produces reasonable results, but the composite display on the left is broken because of the unnormalized values:
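For reference, the kind of min-max intensity scaling the composite view appears to expect can be sketched as follows (the function name and approach are assumptions for illustration, not brainchop's actual preprocessing code):

```python
import numpy as np

def normalize_to_uint8(volume):
    """Min-max scale an MRI volume to the 0-255 uint8 range."""
    vmin, vmax = volume.min(), volume.max()
    if vmax == vmin:
        # Flat volume: avoid division by zero
        return np.zeros_like(volume, dtype=np.uint8)
    scaled = (volume - vmin) / (vmax - vmin) * 255.0
    return scaled.astype(np.uint8)
```

Applied to the volume above, the raw intensity of 2060 (the volume maximum in the screenshot) would map to 255.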
In paper.md, could you please add an author contributions statement detailing each author's respective role in the software? For example, you could categorize author contributions according to the CRediT taxonomy by including a section
## Author contributions
We describe contributions to this paper using
the CRediT taxonomy [@credit].
Writing – Original Draft: <<insert appropriate author initials here>>;
Writing – Review & Editing: <<insert appropriate author initials here>>;
Conceptualization and methodology: <<insert appropriate author initials here>>;
Software and data curation: <<insert appropriate author initials here>>;
Validation: <<insert appropriate author initials here>>;
Resources: <<insert appropriate author initials here>>;
Visualization: <<insert appropriate author initials here>>;
Supervision: <<insert appropriate author initials here>>;
Project Administration: <<insert appropriate author initials here>>;
Funding Acquisition: <<insert appropriate author initials here>>;
and then adding a credit citation to paper.bib:
@article{credit,
author = {Brand, Amy and Allen, Liz and Altman, Micah and Hlava, Marjorie and Scott, Jo},
title = {Beyond authorship: attribution, contribution, collaboration, and credit},
journal = {Learned Publishing},
volume = {28},
number = {2},
pages = {151-155},
doi = {10.1087/20150211},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1087/20150211},
year = {2015}
}
ref: JOSS Review
On the home page of brainchop.org, the sidebar (the one with options "Open Brain T1 MRI," "Segmentation Options," etc.) does not scroll and is cut off at the bottom on my MacBook. Would it be possible to make the sidebar scrollable?
Also, perhaps you could make each of the sidebar cards collapsible, so the user can hide the ones they don't need.
ref: JOSS Review
The current list of models looks very messy and does not convey what is unique and important about brainchop.
Minimally, I suggest the following modification, which leaves the content as is but makes it clear, through ordering, which models are unique and powerful:
Leave only the above under models: just 4 models and 2 ways to run them. Eventually we may choose which way to run a model automatically, based on the user's browser capabilities.
If we still want to let the user perform the two operations that currently clutter the menu without adding clarity or much functionality, let's add a divider line labeled "Operations" inside the "Models" menu, followed by:
Ideally, this menu also contains only 2 operations: extract and compute.
In the "About" modal on brainchop.org, you mention:
Special thanks to Kevin Wang and Alex Fedorov for discussions and pre-trained Meshnet models.
Would it be appropriate to include them in the acknowledgments section of paper.md as well?
ref: JOSS Review
If you could kindly share which dataset was used for training, especially for segmentation, that would be helpful, as we are working on a CNN-based segmentation and are looking for brain MRI segmentations.
Will version 3 support the same features as version 4, such as 3D input and output volumes?