christophkirst / ClearMap2
ClearMap 2.0 with WobblyStitcher, TubeMap and CellMap
Home Page: https://christophkirst.github.io/ClearMap2Documentation
License: GNU General Public License v3.0
Commit 65b9b34 breaks the import of ClearMap.IO.IO due to a cyclic dependency.
lisergey@barry:~/org$ python3 -c 'import ClearMap.IO.IO'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/lisergey/org/ClearMap/IO/IO.py", line 33, in <module>
import ClearMap.IO.MHD as mhd
File "/home/lisergey/org/ClearMap/IO/MHD.py", line 20, in <module>
import ClearMap.IO.IO as io
AttributeError: module 'ClearMap.IO' has no attribute 'IO'
Hi Christoph,
Could you share a copy of the main script running through the whole pipeline?
I receive the following error when setting h_max to a float: TypeError: Cannot cast array data from dtype('O') to dtype('float64') according to the rule 'safe'
I have attached the full error log.
The error does not show up when setting h_max to 0, but when setting h_max to 1 or higher the seed intensity is too high.
I've been looking through the code and it seems to me the error is in the find_center_of_maxima function, specifically where it defines centers:
centers = np.array(ndm.center_of_mass(source, label, index=np.arange(1, n_label)));
The center_of_mass documentation specifies the first input should be a ndarray: "Data from which to calculate center-of-mass. The masses can either be positive or negative."
However, the input is 'source', which in my case is just the location of the stitched image. Shouldn't this be 'maxima', as that is an array of True/False values?
I've tried this fix myself, and although it no longer gives me errors, changing h_max doesn't seem to do anything.
I'm not a Python expert at all and I'm not sure if I'm on the right path here, any help would be much appreciated.
S.L. Lesuis, PhD
Josselyn lab
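A minimal scipy sketch of the proposed fix above, passing the boolean maxima array itself as the "mass" input to center_of_mass (the toy data here is invented for illustration):

```python
import numpy as np
import scipy.ndimage as ndm

# Toy boolean maxima array: two isolated maxima in a small 2D image.
maxima = np.zeros((5, 5), dtype=bool)
maxima[1, 1] = True
maxima[3, 3] = True

# Label the connected maxima, then compute each label's center of mass,
# using the boolean array as the mass source (as suggested above).
label, n_label = ndm.label(maxima)
centers = np.array(ndm.center_of_mass(maxima, label,
                                      index=np.arange(1, n_label + 1)))
# Each center coincides with its single True pixel: [[1, 1], [3, 3]].
```

Note that scipy's ndm.label returns the number of labels, so the index range here runs up to n_label + 1 to include the last label.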
I used our own dataset for the first time after running the CellMap pipeline on all the datasets you provided. Our dataset is slightly twisted (about -5 degrees on the x-axis). The resample_to_auto file looks great, as do both resampled files. But the auto_to_resample file has a "zoomed in" look.
Where could this problem come from? Could it have something to do with the twist? Shouldn't the affine alignment correct for something like that?
Update: what I tried:
I found out that the auto_to_resample file is nearly static, not just zoomed in (i.e. every slice is very similar). The number of transformation parameters is low (normally I get about 11340, here only 5300).
The last 3 affine parameters are calculated incorrectly by elastix. In the first alignment step they are:
(TransformParameters 1.005806 -0.000168 0.007140 -0.000562 1.002287 -0.006515 0.000080 -0.000762 1.019741
-0.058954 -0.004397 -0.143601)
in the Second:
(TransformParameters 1.018400 0.017734 -0.008881 0.001208 0.894661 -0.038549 0.003045 0.209495 -0.026565
81.829106 48.647958 127.684375)
So it's obvious the last 3 parameters are way too large. Why? I don't know.
PS: yes, I rotated the reference atlas file so it matches the orientation of my dataset.
Line 32 in d2ecc5c
conda env create -f ClearMap.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- torch
Hello,
Has anyone run ClearMap in an Ubuntu VirtualBox on Windows 10? I allocated 128 GB of memory, 6 cores, and 1 TB of space. For some reason, the code runs really slowly.
Before this, I ran it in an Ubuntu VirtualBox on a Mac with far less memory and space, and it was still faster than this. Any insight would be appreciated!
Hello,
Is there a test dataset for CellMap that requires no stitching? Thank you!
First of all, great package! I tried to run the Tubemap.py script on your test brain samples and encountered a small error in line 272 ("processing_parameter = vasc.default_processing_parameter.copy()").
"default_processing_parameter" cannot be found and throws an error.
I replaced the line with "processing_parameter = vasc.default_binarization_processing_parameter.copy()". Is this still correct, or am I missing something?
Goal: to annotate and segment ClearMap1-generated .npy files in ClearMap2, due to a ClearMap1 bug that alters the full regional hierarchy in the annotated cell count .csv output file.
Problem: attempting to load ClearMap1 .npy files "cells.npy" and "cells_transformed_to_atlas.npy" as the ClearMap2 variables "cells_filtered" and "coordinates_transformed", respectively, results in the following error:
IndexError: tuple index out of range
Therefore, existing ClearMap2 code does not provide compatibility with ClearMap1-generated .npy files due to differences in file structure.
Recommended solution: provide code in ClearMap2 that reformats ClearMap1 .npy files into a structure compatible with the Data Generation and CSV Export sections of the CellMap.py script.
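A minimal sketch of such a conversion, assuming ClearMap1 stored cells as a plain (N, 3) coordinate array while ClearMap2 indexes cells by the field names 'x', 'y', 'z' (the dtype and stand-in data below are my assumptions for illustration):

```python
import numpy as np

# Stand-in for np.load('cells.npy') from ClearMap1: a plain (N, 3) array.
cells_cm1 = np.array([[10.0, 20.0, 30.0],
                      [40.0, 50.0, 60.0]])

# ClearMap2-style structured array with named coordinate fields
# (assumed field layout; real ClearMap2 cell records carry more fields).
dtype = [('x', float), ('y', float), ('z', float)]
cells_cm2 = np.zeros(len(cells_cm1), dtype=dtype)
for i, name in enumerate('xyz'):
    cells_cm2[name] = cells_cm1[:, i]

# Round-trip back to an (N, 3) array, as ClearMap2 code does elsewhere.
coordinates = np.array([cells_cm2[c] for c in 'xyz']).T
```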
Dear Christoph,
Related to issue #13 , we also wondered about specific hardware requirements (for TubeMap). Currently, we are using regular hard disks for processing.
However, due to the memmap objects, we imagine that there are a lot of read-and-write processes.
Our question: for the total processing time, is it crucial to have high-speed SSDs, or is the time spent reading and writing negligible compared to the pure processing time (e.g., during binarization or vessel filling)? Have you compared SSD-based and HDD-based processing?
Best wishes,
Peter
As I cannot (and need not) run a whole brain on my current machine, I wanted to try the parts I need, where I expect your code to be faster than the one I previously used. I could confirm this for the skeletonization, which works really well. But I am stuck at the segmentation/binarization. This is my code using your function:
import time
import numpy as np
import ClearMap.IO.IO as io
import ClearMap.ImageProcessing.Experts.Vasculature as vasc
binarization_parameter = vasc.default_binarization_parameter.copy()
processing_parameter = vasc.default_binarization_processing_parameter.copy()
processing_parameter.update(as_memory=True)
file = "14-16-41_tricocktail_UltraII[03 x 06]_C00_UltraII Filter0000.ome.tif"
source = io.read(file)
sink = "seg_" + "".join(filter(str.isalnum, file.split('.')[0]))
print(sink)
start_time = time.time()
im_seg = vasc.binarize(source, sink + ".npy",
                       binarization_parameter=binarization_parameter,
                       processing_parameter=processing_parameter)
elapsed_time = time.time() - start_time
print(elapsed_time)
im_seg.array = im_seg.array.astype(np.uint8) * 255
io.write(sink + ".tif", im_seg)
And this is the error message:
Traceback (most recent call last):
File "/home/saskra/anaconda3/envs/ClearMap_II/lib/python3.6/multiprocessing/queues.py", line 240, in _feed
send_bytes(obj)
File "/home/saskra/anaconda3/envs/ClearMap_II/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/saskra/anaconda3/envs/ClearMap_II/lib/python3.6/multiprocessing/connection.py", line 393, in _send_bytes
header = struct.pack("!i", n)
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
Any idea how to fix that, some parameter settings maybe?
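For what it's worth, that struct.error is the 2 GiB limit of the pipe protocol used by Python's multiprocessing before Python 3.8, so each pickled block must stay under that size. A hedged sketch of two possible mitigations; a plain dict stands in for the ClearMap processing parameters, and the key names (mirroring snippets elsewhere in these issues) are assumptions:

```python
# Stand-in for vasc.default_binarization_processing_parameter; the key
# names mirror other snippets in these issues and are assumptions here.
processing_parameter = {'processes': None, 'size_max': 100, 'overlap': 32}

# Mitigation sketch: run blocks serially (no inter-process transfer), or
# shrink the blocks so each pickled chunk stays below 2 GiB.
processing_parameter.update(
    processes='serial',
    size_max=20,
)
```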
Hi, when I do the cell alignment, the following error occurs. It seems the file could not be found. Is it a naming error?
FileNotFoundError Traceback (most recent call last)
in
24 coordinates = np.array([source[c] for c in 'xyz']).T;
25
---> 26 coordinates_transformed = transformation(coordinates);
in transformation(coordinates)
12 coordinates, sink=None,
13 transform_directory=ws.filename('resampled_to_auto'),
---> 14 binary=True, indices=False);
15
16 coordinates = elx.transform_points(
~/ClearMap2/ClearMap/Alignment/Elastix.py in transform_points(source, sink, transform_parameter_file, transform_directory, indices, result_directory, temp_file, binary)
958 else:
959 if binary:
--> 960 transpoints = read_points(os.path.join(outdirname, 'outputpoints.bin'), indices = indices, binary = True);
961 else:
962 transpoints = read_points(os.path.join(outdirname, 'outputpoints.txt'), indices = indices, binary = False);
~/ClearMap2/ClearMap/Alignment/Elastix.py in read_points(filename, indices, binary)
811
812 if binary:
--> 813 with open(filename) as f:
814 index = np.fromfile(f, dtype=np.int64, count = 1)[0];
815 #print(index)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/elastix_output/outputpoints.bin'
Hello! Sorry to bother you.
Can ClearMap2 be installed on Windows? The graph-tool library apparently cannot be installed on Windows. Can I still use ClearMap2?
The pipeline uses this code to "align the autofluorescence to the reference":
#%% Alignment - autoflourescence to reference
# align autofluorescence to reference
align_reference_parameter = {
#moving and reference images
"moving_image" : reference_file,
"fixed_image" : ws.filename('resampled', postfix='autofluorescence'),
#elastix parameter files for alignment
"affine_parameter_file" : align_reference_affine_file,
"bspline_parameter_file" : align_reference_bspline_file,
#directory of the alignment result
"result_directory" : ws.filename('auto_to_reference')
};
elx.align(**align_reference_parameter);
But the result isn't a re-aligned autofluorescence image, it is a re-aligned reference file.
It seems unusual, but I think I got the point: we're looking for 2 collections of transformation parameters:
In the end we do a 3-step transformation for every coordinate the algorithm selected as a cell:
So this is a one-way track, because there's information loss due to the resampling.
I hope I got it right; maybe this is helpful for others. I'll close the issue after someone confirms my assumptions.
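To make the assumed three-step chain concrete, here is a toy sketch of the composition (the stand-in functions below are hypothetical; the real pipeline uses res.resample_points and two elx.transform_points calls with the resampled_to_auto and auto_to_reference result directories):

```python
# Toy stand-ins for the three transformation steps applied to each cell
# coordinate: raw -> resampled -> auto -> reference.
def resample_points(coords, factor):
    # Downsampling discards sub-voxel detail -- the "information loss"
    # that makes the chain a one-way track.
    return [(x / factor, y / factor, z / factor) for x, y, z in coords]

def transform_points(coords, offset):
    # Stand-in for an elastix transform parameterized by a result directory.
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in coords]

cells = [(4000.0, 7000.0, 1000.0)]
step1 = resample_points(cells, factor=10.0)               # raw -> resampled
step2 = transform_points(step1, offset=(1.0, 0.0, 0.0))   # resampled -> auto
step3 = transform_points(step2, offset=(0.0, 2.0, 0.0))   # auto -> reference
```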
We want to run ClearMap "headlessly": on a computational node without a display or keyboard. It does not work because many modules depend on GUI components.
For example,
$ DISPLAY= python3 -c 'import ClearMap.IO.Workspace'
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl, xcb.
Aborted (core dumped)
The offending line is this one
https://github.com/ChristophKirst/ClearMap2/blob/master/ClearMap/IO/Workspace.py#L35
A list of modules which fail to import:
ClearMap.Alignment.Stitching.StitchingRigid
ClearMap.Alignment.Stitching.StitchingWobbly
ClearMap.Environment
ClearMap.ImageProcessing.Tracing.Connect
ClearMap.IO.Workspace
ClearMap.Visualization.Plot3d
ClearMap.Visualization.Qt.DataViewer
ClearMap.Visualization.Qt.Plot3d
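One possible headless workaround (an untested assumption on my part, not a ClearMap feature): the Qt error above lists "offscreen" among the available platform plugins, so selecting it before any ClearMap import may avoid the display connection entirely:

```python
import os

# Select Qt's offscreen platform plugin before ClearMap pulls in Qt;
# equivalent to running: QT_QPA_PLATFORM=offscreen python3 script.py
os.environ['QT_QPA_PLATFORM'] = 'offscreen'

# import ClearMap.IO.Workspace  # would now use the offscreen plugin
```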
Many files which are used in tests are missing. For example,
ClearMap/Tests/Data/Vasculature/vasculature_pre.npy
Hi,
When I run cell detection, the following error occurs. I tried to change the permissions to read and write, but it doesn't let me. Has anybody come across this issue? Thank you for your time!
OSError: [Errno 30] Read-only file system: '/home/riera/Documents/Brain4/stitched.npy'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/riera/ClearMap2/CellMap.py", line 198, in <module>
processing_parameter=processing_parameter)
File "/home/riera/ClearMap2/ClearMap/ImageProcessing/Experts/Cells.py", line 317, in detect_cells
results, blocks = bp.process(detect_cells_block, source, sink=None, function_type='block', return_result=True, return_blocks=True, parameter=cell_detection_parameter, **processing_parameter)
File "/home/riera/ClearMap2/ClearMap/ParallelProcessing/BlockProcessing.py", line 249, in process
result = [f.result() for f in futures];
File "/home/riera/ClearMap2/ClearMap/ParallelProcessing/BlockProcessing.py", line 249, in
result = [f.result() for f in futures];
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
OSError: [Errno 30] Read-only file system: '/home/riera/Documents/Brain4/stitched.npy'
Original Traceback (most recent call last):
File "/home/riera/ClearMap2/ClearMap/ParallelProcessing/ParallelTraceback.py", line 32, in wrapper
return func(*args, **kwargs)
File "/home/riera/ClearMap2/ClearMap/ParallelProcessing/BlockProcessing.py", line 338, in process_block_block
result = function(*sources_and_sinks, **kwargs);
File "/home/riera/ClearMap2/ClearMap/ImageProcessing/Experts/Cells.py", line 493, in detect_cells_block
maxima = md.find_maxima(source.array, **parameter_maxima, verbose=verbose);
File "/home/riera/ClearMap2/ClearMap/IO/Slice.py", line 331, in array
return self.source.getitem(self.slicing);
File "/home/riera/ClearMap2/ClearMap/IO/Source.py", line 442, in getitem
return self.as_real().getitem(*args);
File "/home/riera/ClearMap2/ClearMap/IO/MMP.py", line 152, in as_real
return Source(location=self.location, shape=self.shape, dtype=self.dtype, order=self.order, name=self.name);
File "/home/riera/ClearMap2/ClearMap/IO/MMP.py", line 42, in __init__
memmap = _memmap(location=location, shape=shape, dtype=dtype, order=order, mode=mode, array=array);
File "/home/riera/ClearMap2/ClearMap/IO/MMP.py", line 343, in _memmap
array = np.lib.format.open_memmap(location);
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/site-packages/numpy/lib/format.py", line 872, in open_memmap
mode=mode, offset=offset)
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/site-packages/numpy/core/memmap.py", line 225, in __new__
f_ctx = open(os_fspath(filename), ('r' if mode == 'c' else mode)+'b')
OSError: [Errno 30] Read-only file system: '/home/riera/Documents/Brain4/stitched.npy'
align_channels_parameter = {
#moving and reference images
"moving_image" : ws.filename('resampled', postfix='autofluorescence'),
"fixed_image" : ws.filename('resampled'),
#elastix parameter files for alignment
"affine_parameter_file" : align_channels_affine_file,
"bspline_parameter_file" : None,
#directory of the alignment result
"result_directory" : ws.filename('elastix_resampled_to_auto')
};
elx.align(**align_channels_parameter);
However, I ran into this issue:
Running elastix with parameter file 0: "/home/yidan/ClearMap2/ClearMap/Resources/Alignment/align_affine.txt".
Current time: Fri Jul 24 13:03:12 2020.
Reading the elastix parameters from file ...
ERROR: problem defining fixed image dimension.
The parameter file says: 3
The fixed image header says: 2
Note that from elastix 4.6 the parameter file definition "FixedImageDimension" is not needed anymore.
Please remove this entry from your parameter file.
Errors occurred!
Thank you for your time!
Hello,
During the cell alignment step, we can set binary = True or False, corresponding to using outputpoints.bin or outputpoints.txt. What is the difference between the two files? Would the results differ if I set binary = False and used the outputpoints.txt file? When I set binary = True, no outputpoints.bin file is produced for me, while outputpoints.txt works fine. Thank you for your time!
I used the 2020 example dataset, and every alignment step works fine. I also ran the cell detection step twice, using two different thresholds, 1800 and 2000. When I plot the coordinates of the cFos cells it looks pretty good:
(my own point plotter, not part of CellMap).
The coordinates also look good. But when I check the transformed-to-reference coordinates, they end up outside the tissue (some clusters are in slices beyond the largest slice in the reference file, max(z) = 246):
... I don't know what's happening here. Something is wrong with the alignment, but I don't know what.
I also checked the cells.csv directly:
print("Issue Size: ", ogSize)
print("max(x):", max(df['x']), " max(y):", max(df['y']), "max(z):", max(df['z']))
print("Reference Size", refSize)
print("max(xt):", max(df['xt']), " max(yt):", max(df['yt']), "max(zt):", max(df['zt']))
Issue Size: [4555, 7218, 1030]
max(x): 4553.0 max(y): 7130.0 max(z): 1029.0
Reference Size [320, 528, 246]
max(xt): 300.9986608482074 max(yt): 526.9999750841395 max(zt): 282.5654691645599
PS: I cropped the "universe" area.
Two files are missing from https://osf.io/sa3x8
Brain-39L/Raw Data/14-16-41_tricocktail_UltraII[00 x 06]_C00_UltraII Filter0000.ome.tif
Brain-39L/Raw Data/14-16-41_tricocktail_UltraII[00 x 05]_C00_UltraII Filter0001.ome.tif
Hi Christoph,
Would it be possible to run the pipeline on the Windows Subsystem for Linux (https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux)? Will there be any support for this?
Hello Christoph,
I am really interested in the project and really appreciate the efforts you took to create such a big software suite.
I am currently trying to get the CellMap Jupyter notebook running. I used some dummy data for test purposes. Everything seems to work fine. There were some minor bugs in naming conventions, which have already been posted in the issues.
Unfortunately, the DataViewer does not seem to work for me. Regardless of the files and arguments I pass to the p3d module, I always get a black, frozen window. Afterwards the Jupyter session crashes and I have to restart the kernel. If I script the process in a normal Python file, I have the same problems. Are there any known issues regarding the DataViewer? Do you have any suggestions on how to tackle this problem?
I am currently working on Ubuntu 20.04, and I installed the stable release via the stable .yml (Anaconda) file provided in the repo.
I also tried it with the newest packages, but that doesn't change anything for me.
System:
AMD Ryzen 3950X
Nvidia RTX Super 2060
64 GB RAM DDR4 RAM
Hello Christoph,
I really appreciate the effort you took to make such a big project publicly available. Since I would really like to use your software package, I wanted to ask whether there will be any support. Currently the project looks quite abandoned, and a lot of people seem to have issues with the package.
So I just wanted to ask: Will you still work on the project and commit updates?
Hello, what do the numbers after the # mean, and how can we apply them?
processing_parameter = cells.default_cell_detection_processing_parameter.copy();
processing_parameter.update(
processes = None, # 'serial',
size_max = 20, #100, #35,
size_min = 11,# 30, #30,
overlap = 10, #32, #10,
verbose = True
)
I started working with our own dataset. I managed to create resampled_autofluorescence.tif, but it is completely black. I resliced the source images so they're all very similar and of good brightness. But the result file is still completely black in all pixels.
Update: a quote from the SimpleITK tutorial: "A common issue with resampling resulting in an all black image is due to (a) incorrect specification of the desired output image's spatial domain (its meta-data); or (b) the use of the inverse of the transformation mapping from the output spatial domain to the resampled image."
What I tried:
Hello, a simple p3d.plot works for me, but when it uses list_plot_3d, I get this GLError :(
Really appreciate your time!
#%% visualization
coordinates = np.hstack([ws.source('cells', postfix='raw')[c][:,None] for c in 'xyz']);
p = p3d.list_plot_3d(coordinates)
p3d.plot_3d(ws.filename('stitched'), view=p, cmap=p3d.grays_alpha(alpha=1))
File "/home/yidan/anaconda3/envs/ClearMap/lib/python3.7/site-packages/OpenGL/error.py", line 234, in glCheckError
baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
err = 1281,
description = b'invalid value',
baseOperation = glTexImage3D,
pyArgs = (
GL_TEXTURE_3D,
0,
GL_LUMINANCE,
2007,
2379,
68,
0,
GL_LUMINANCE,
GL_BYTE,
None,
),
cArgs = (
GL_TEXTURE_3D,
0,
GL_LUMINANCE,
2007,
2379,
68,
0,
GL_LUMINANCE,
GL_BYTE,
None,
),
cArguments = (
GL_TEXTURE_3D,
0,
GL_LUMINANCE,
2007,
2379,
68,
0,
GL_LUMINANCE,
GL_BYTE,
None,
)
)
ERROR: Invoking <bound method SceneCanvas.on_draw of <SceneCanvas (PyQt5) at 0x7fb6810fb610>> for DrawEvent
WARNING: Error drawing visual <Volume at 0x7fb6810b7a90>
ERROR: Invoking <bound method SceneCanvas.on_draw of <SceneCanvas (PyQt5) at 0x7fb6810fb610>> repeat 2
WARNING: Error drawing visual <Volume at 0x7fb6810b7a90>
Since my server can't handle a whole brain, I wanted to use only a small part of your data to test the TubeMap script. I took the autofluorescence from the first brain, 39L, completely, and from the raw data only four consecutive files from the middle (Y02, X05-06, Filter0000-0001).
#directories and files
directory = '/home/saskra/PycharmProjects/ClearMap2/ClearMap/Tests/Data/TubeMap_Example'
expression_raw = 'Raw/14-16-41_tricocktail_UltraII[<Y,2> x <X,2>]_C00_UltraII Filter0001.ome.tif'
expression_arteries = 'Raw/14-16-41_tricocktail_UltraII[<Y,2> x <X,2>]_C00_UltraII Filter0000.ome.tif'
expression_auto = 'Autofluorescence/14-02-13_auto_UltraII_C00_xyz-Table Z<Z,4>.ome.tif'
But I get this error message:
Graph reduction: initialized.
Graph reduction: Found 20393 branching and 592581 non-branching nodes: elapsed time: 0:00:00.591
Graph reduction: Scanned 20393/20393 branching nodes, found 1018 branches: elapsed time: 0:00:01.420
Graph reduction: Scanned 250000/592581 non-branching nodes found 11667 branches: elapsed time: 0:00:09.205
Graph reduction: Scanned 500000/592581 non-branching nodes found 22705 branches: elapsed time: 0:00:16.715
Graph reduction: Scanned 592581/592581 non-branching nodes found 26871 branches: elapsed time: 0:00:19.219
Graph reduction: Graph reduced from 612974 to 20393 nodes and 619452 to 26871 edges: elapsed time: 0:00:20.528
Transforming vertex property: coordinates -> coordinates_atlas
Traceback (most recent call last):
File "/snap/pycharm-professional/209/plugins/python/helpers/pydev/pydevd.py", line 1438, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/snap/pycharm-professional/209/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts/TubeMap.py", line 482, in <module>
verbose=True);
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Analysis/Graphs/GraphGt.py", line 1366, in transform_properties
values = transformation(values);
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts/TubeMap.py", line 465, in transformation
sink_shape=io.shape(ws.filename('resampled')));
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/IO/IO.py", line 258, in shape
source = as_source(source);
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/IO/IO.py", line 206, in as_source
source = mod.Source(source, *args, **kwargs);
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/IO/TIF.py", line 36, in __init__
self._tif = tif.TiffFile(location, multifile = multi_file);
File "/home/saskra/anaconda3/envs/ClearMap/lib/python3.7/site-packages/tifffile/tifffile.py", line 2183, in __init__
fh = FileHandle(arg, mode='rb', name=name, offset=offset, size=size)
File "/home/saskra/anaconda3/envs/ClearMap/lib/python3.7/site-packages/tifffile/tifffile.py", line 6408, in __init__
self.open()
File "/home/saskra/anaconda3/envs/ClearMap/lib/python3.7/site-packages/tifffile/tifffile.py", line 6421, in open
self._fh = open(self._file, self._mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/saskra/PycharmProjects/ClearMap2/ClearMap/Tests/Data/TubeMap_Example/debug_resampled.tif'
It is true that there is no file called "debug_resampled.tif", but I do not know where it should have come from or how to get it.
I have to slice and rotate the reference because of some quality issues with the dataset I'm using right now. But the max parameter of the slice function for the reference file does not seem to work in relation to a min parameter:
annotation_file, reference_file, distance_file=ano.prepare_annotation_files(
slicing=(slice(None),slice(None),slice(13,456)), orientation=(3,-2,1),
overwrite=False, verbose=True);
In the tutorial pipeline they also used the max parameter in the z-direction, and it works, so I think it could have something to do with the rotation. What I have found out so far is that the function does the slicing first and the rotation of the axes afterward. Could this also lead to other problems, like my distortion issue, when I try to align to the reference file? Could it be necessary to always rotate the dataset in the same direction as in the tutorial pipeline?
PS: if you look at the top pictures, something happened with the brightness. It's actually in the pictures.
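The slice-first, rotate-after behaviour described above can be illustrated with plain numpy; the shapes and permutation are toy analogues of orientation=(3,-2,1), and the axis flip implied by the minus sign is omitted:

```python
import numpy as np

# Toy volume standing in for the reference atlas.
vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# Slice first (as prepare_annotation_files appears to do), then permute axes.
sliced_then_rotated = vol[:, :, 1:3].transpose(2, 1, 0)

# Permute first, then apply the "same" slice on the new third axis.
rotated_then_sliced = vol.transpose(2, 1, 0)[:, :, 1:3]

# The shapes differ, so slice ranges must refer to the ORIGINAL atlas axes.
```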
Hi,
First off, thanks so much for building this incredible resource.
When a tuple of length 3 is supplied to the orientation argument of resample_points (e.g. orientation = (3,2,1)), I receive the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-125-cfc4e5c87456> in <module>
5 centroids, sink=None, orientation=(3,2,1),
6 source_shape = io.shape(ws.filename('raw')),
----> 7 sink_shape = io.shape(ws.filename('resampled')));
8
9 coordinates = elx.transform_points(
~/clearmap/ClearMap2/ClearMap/Alignment/Resampling.py in resample_points(source, sink, resample_source, resample_sink, orientation, source_shape, sink_shape, source_resolution, sink_resolution, **args)
897 #permute
898 per = orientation_to_permuation(orientation);
--> 899 resampled = resampled.transpose(per);
900
901 #reverse axes
ValueError: axes don't match array
I was able to bypass this by manually re-orienting the source/sink shapes and leaving orientation=None, but thought you all might want to know.
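A minimal reproduction of the failure mode suggested by the traceback (my reading, not confirmed): the point array is 2-D with shape (N, 3), but the orientation yields a length-3 axis permutation, and numpy's transpose requires exactly one axis per array dimension:

```python
import numpy as np

# An (N, 3) point array, i.e. a 2-D array.
points = np.zeros((5, 3))

# Applying a length-3 axis permutation to a 2-D array fails,
# matching the "axes don't match array" error in the traceback.
try:
    points.transpose((2, 1, 0))
    message = ''
except ValueError as e:
    message = str(e)
```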
Hi,
after alignment, I opened the result.0.mhd file, which appears as follows. I also tried dragging it into ImageJ, which does the same. What is the problem here? Thank you!
ObjectType = Image
NDims = 3
BinaryData = True
BinaryDataByteOrderMSB = False
CompressedData = True
CompressedDataSize = 787716
TransformMatrix = 1 0 0 0 1 0 0 0 1
Offset = 0 0 0
CenterOfRotation = 0 0 0
AnatomicalOrientation = RAI
ElementSpacing = 1 1 1
DimSize = 351 416 7
ElementType = MET_SHORT
ElementDataFile = result.0.zraw
Hello all,
I've been trying to run cellmap now but keep running into problems at the rigid z alignment step.
layout = stw.WobblyLayout(expression=ws.filename('raw'), tile_axes=['X','Y'], overlaps=(400, 400));
st.align_layout_rigid_mip(layout, depth=[400, 400, None], max_shifts=[(-50,50),(-50,50),(-30,30)],
ranges = [None,None,None], background=(400, 100), clip=25000,
processes=None, verbose=True)
st.place_layout(layout, method='optimization', min_quality=-np.inf, lower_to_origin=True, verbose=True)
st.save_layout(ws.filename('layout', postfix='aligned_axis'), layout)
Traceback (most recent call last):
File "", line 1, in
layout = stw.WobblyLayout(expression=ws.filename('raw'), tile_axes=['X','Y'], overlaps=(400, 400));
File "G:\user\ClearMap2-master\ClearMap2-master\ClearMap\Alignment\Stitching\StitchingWobbly.py", line 721, in __init__
strg.TiledLayout.__init__(self, sources = sources, expression = expression, tile_axes = tile_axes, tile_shape = tile_shape, tile_positions = tile_positions, positions = positions, overlaps = overlaps, alignments = alignments, position = position, shape = shape, dtype = dtype, order = order);
File "G:\user\ClearMap2-master\ClearMap2-master\ClearMap\Alignment\Stitching\StitchingRigid.py", line 1906, in __init__
sources, alignments, tile_positions = _initialize_tiles_from_expression(expression, tile_axes=tile_axes, tile_shape=tile_shape, tile_positions=tile_positions, overlaps=overlaps, positions=positions, alignments=alignments);
File "G:\user\ClearMap2-master\ClearMap2-master\ClearMap\Alignment\Stitching\StitchingRigid.py", line 2243, in _initialize_tiles_from_expression
raise ValueError('The expression does not have the named pattern %s' % n);
ValueError: The expression does not have the named pattern X
I feel this may be an issue with how the data is structured, such that ClearMap can't recognize the x and y axes of the images. I acquired my original light-sheet images on a Zeiss Z1 and later converted the czi to a single tif for use in ClearMap. From the ClearMap documentation, I know it was first designed around a different microscope, though I don't know whether the different systems format the images differently enough to cause an issue. Any help or insight in fixing this issue would be greatly appreciated.
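For comparison, the workspace defaults quoted elsewhere in these issues use tile expressions such as 'Raw/raw_<X,2>_<Y,2>.npy', where each <Axis,width> placeholder names a zero-padded tile index; the ValueError means the raw-file expression contains no <X,...> placeholder. The regex below is only my illustration of what the placeholders encode, not ClearMap's actual parser:

```python
import re

# Expression with named tile placeholders, as in the Workspace defaults.
expression = 'Raw/raw_<X,2>_<Y,2>.npy'

# Illustrative parser: turn each <Axis,width> into a named digit group.
regex = re.escape(expression)
for axis, width in re.findall(r'<(\w),(\d)>', expression):
    placeholder = re.escape('<%s,%s>' % (axis, width))
    regex = regex.replace(placeholder, r'(?P<%s>\d{%s})' % (axis, width))

# A concrete tile file name now yields its X/Y grid position.
match = re.match(regex, 'Raw/raw_03_06.npy')
```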
Running Elastix.py in the ClearMap environment returns a "ModuleNotFoundError: No module named 'ClearMap'" even after following the installation process and activating the environment.
I am pretty new to using Anaconda, so any help is appreciated.
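A hedged workaround sketch: as far as I can tell, the conda environment provides the dependencies but does not install ClearMap itself as a package, so 'import ClearMap' only resolves when the ClearMap2 checkout is on sys.path. The path below is hypothetical:

```python
import sys

# Hypothetical location of the ClearMap2 checkout -- adjust to your clone.
sys.path.insert(0, '/home/user/ClearMap2')

# import ClearMap.Alignment.Elastix  # should now resolve
```

Running scripts from inside the ClearMap2 directory achieves the same thing, since Python puts the working script's directory on sys.path.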
Hi,
I can't find the dataset you use for the CellMap tutorial. Is it provided?
Hi Christoph,
This is more of a feature/documentation request than an issue. My apologies if I have missed it somewhere.
My team would like to know whether this can run CPU-only. We have noticed some functions have CPU-only options, but it is not clear how much of the pipeline we can access this way. We are specifically interested in CellMap for now.
Likewise (and this is a separate but related request), do you have any performance metrics on CellMap in ClearMap2 (with GPU; CPU-only would also be interesting, though) vs. ClearMap1? My team is excited about the opportunity to accelerate our pipeline, but to justify the cost of a GPU we would like some more information. In the absence of metrics, a gut feeling about the acceleration would also be helpful.
Let me know if you would like more information or clarification.
-Pieter
In the paper [1] an artificial dataset is mentioned. Is the dataset or a generator available somewhere?
Tested on an artificial dataset made from a real graph of vessels with
radii ranging from 2 to 25 pixels (Figures S5C and S5D), [...] We
applied this CNN to both the vessels and arterial binary masks to
obtain filled tubes throughout the datasets (Figures 2H, S5E, and
S5F).
[1] Kirst, C., Skriabine, S., Vieites-Prado, A., Topilko, T., Bertin, P., Gerschenfeld, G., ... & Renier, N. (2020). Mapping the Fine-Scale Organization and Plasticity of the Brain Vasculature. Cell, 180(4), 780-795.
ClearMap2/ClearMap/Scripts/TubeMap.py
Line 34 in eca2934
ClearMap2/ClearMap/Scripts/TubeMap.py
Line 86 in eca2934
Just a comment and only one example; I already figured it out myself. I think this can be highly confusing when you consider these search results (some of them even from the same file):
import ClearMap.Alignment.Stitching.StitchingRigid as st
import ClearMap.Alignment.Stitching as st
import ClearMap.Analysis.Statistics.StatisticalTests as st
import ClearMap.Analysis.Statistics.GroupStatistics as st
import scipy.stats as st
Question and solution in one (I should try out the pull-request feature).
The function:
io.mhd.write_header_from_source(ws.filename('stitched'))
creates an MHD file with the parameter DimSize = 1892 2671 1802
out of a stitched .npy with shape = (1802, 2671, 1892), so the ImageJ view is broken.
I changed the order of the DimSize parameter by hand and it works out just fine.
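The observed mismatch is consistent with MHD listing DimSize fastest-varying axis first (x y z) while the numpy shape is slowest-first; a sketch of the by-hand fix described above, treating that axis convention as an assumption:

```python
# Shape of the stitched .npy (slowest-varying axis first).
shape = (1802, 2671, 1892)

# MHD's DimSize lists the fastest-varying axis first, i.e. the reverse.
dim_size = ' '.join(str(s) for s in reversed(shape))
# dim_size is now suitable for a "DimSize = ..." header line.
```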
"result_directory" : ws.filename('elastix_resampled_to_auto')
default_file_type_to_name = odict(
raw = "/Raw/raw_<X,2>_<Y,2>.npy",
autofluorescence = "/Autofluorescence/auto_<X,2>_<Y,2>.npy",
stitched = "stitched.npy",
layout = "layout.lyt",
background = "background.npy",
resampled = "resampled.tif",
resampled_to_auto = 'elastix_resampled_to_auto',
auto_to_reference = 'elastix_auto_to_reference',
);
Since the identifier can't be found, the code won't run. I changed Workspace.py to match the script.
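The change can be illustrated like this (key names taken from the snippets above; treat this as a sketch rather than the exact diff):

```python
from collections import OrderedDict as odict

# Alignment-related entries as they appear in Workspace.py (abridged).
default_file_type_to_name = odict(
    resampled_to_auto='elastix_resampled_to_auto',
    auto_to_reference='elastix_auto_to_reference',
)

# Option 1: add the identifier the script actually asks for as an alias,
# so ws.filename('elastix_resampled_to_auto') resolves.
default_file_type_to_name['elastix_resampled_to_auto'] = 'elastix_resampled_to_auto'

# Option 2 (alternative): leave Workspace.py alone and change the script to
# use the existing key, i.e. ws.filename('resampled_to_auto').
```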
elx.align(**align_reference_parameter);
ends up in: ERROR: the file /home/user/anaconda3/clearmap/ClearMap2/ClearMap/Resources/Alignment/align_bspline.txt does not exist.
So both errors are solved.
ClearMap2/ClearMap/Scripts/TubeMap.py
Line 742 in eca2934
Is the "gr" here intentional or accidental? The other lines say "grt". And where does this "grt" variable even come from?
Since my computer can't process a whole brain, I wanted to use only a small part of the published data to test the TubeMap script. (In the end, I might use the pipeline for many medium-sized regions of interest.) I took the autofluorescence of the first brain, 39L, completely, and from the raw data only four consecutive tiles from the middle (Y02, X05-06, Filter0000-0001).
#directories and files
directory = '/home/saskra/PycharmProjects/ClearMap2/ClearMap/Tests/Data/TubeMap_Example'
expression_raw = 'Raw/14-16-41_tricocktail_UltraII[<Y,2> x <X,2>]_C00_UltraII Filter0001.ome.tif'
expression_arteries = 'Raw/14-16-41_tricocktail_UltraII[<Y,2> x <X,2>]_C00_UltraII Filter0000.ome.tif'
expression_auto = 'Autofluorescence/14-02-13_auto_UltraII_C00_xyz-Table Z<Z,4>.ome.tif'
But I get this error message:
transformix has finished at Tue Aug 11 15:32:15 2020.
Total time elapsed: 12.1s.
/home/saskra/PycharmProjects/ClearMap2/ClearMap/Alignment/Annotation.py:459: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
label[valid] = atlas[indices];
Traceback (most recent call last):
File "/home/saskra/anaconda3/envs/ClearMap/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-788a7e8ab86d>", line 1, in <module>
runfile('/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts/TubeMap.py', wdir='/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts')
File "/snap/pycharm-professional/211/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/snap/pycharm-professional/211/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts/TubeMap.py", line 527, in <module>
edge_geometry_properties = {'coordinates_atlas' : 'distance_to_surface'});
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Analysis/Graphs/GraphGt.py", line 1366, in transform_properties
values = transformation(values);
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/Scripts/TubeMap.py", line 522, in distance
d = distance_atlas[x,y,z];
File "/home/saskra/PycharmProjects/ClearMap2/ClearMap/IO/TIF.py", line 109, in __getitem__
return array[slicing_xy];
MemoryError: Unable to allocate 662. GiB for an array with shape (298039, 298039) and data type float64
It is true that I do not have that much memory available, but I think I should not need it for this ROI. Does anyone have advice on where to change the script so that it uses only the actual size of my selected region?
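One workaround I can think of (a sketch, not tested against the real pipeline): judging by the traceback, the lazy TIF source translates the integer-array index [x, y, z] into slices, which builds the huge (298039, 298039) array. Materializing the atlas as a plain ndarray first makes the same index an element-wise lookup:

```python
import numpy as np

# Stand-in for the distance atlas; in the pipeline this would be something
# like np.asarray(io.read(distance_file)) (call name assumed), loading the
# TIF fully into memory before indexing.
distance_atlas = np.zeros((50, 60, 70))
distance_atlas[1, 4, 7] = 3.5

# Vertex coordinates as integer arrays.
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = np.array([7, 8, 9])

# Fancy indexing on an ndarray returns one distance per vertex, shape (3,),
# instead of a slice of shape (len(x), len(x)) as the lazy source does.
d = distance_atlas[x, y, z]
```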
When I run the csv-export
source = ws.source('cells');
header = ', '.join([h[0] for h in source.dtype.names]);
np.savetxt(ws.filename('cells', extension='csv'), source[:], header=header, delimiter=',')
I get the error:
Traceback (most recent call last):
File "<ipython-input-15-d80deed88a15>", line 3, in <module>
np.savetxt(ws.filename('cells', extension='csv'), source[:], header=header, delimiter=',')
File "<__array_function__ internals>", line 6, in savetxt
File "/home/user/anaconda3/envs/ClearMapStable/lib/python3.6/site-packages/numpy/lib/npyio.py", line 1451, in savetxt
% (str(X.dtype), format))
TypeError: Mismatch between array dtype ('[('x', '<i8'), ('y', '<i8'), ('z', '<i8'), ('size', '<i8'), ('source', '<f8'), ('xt', '<f8'), ('yt', '<f8'), ('zt', '<f8'), ('order', '<i8'), ('name', 'S256')]') and format specifier ('%.18e,%.18e,%.18e,%.18e,%.18e,%.18e,%.18e,%.18e,%.18e,%.18e')
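For what it's worth, here is a sketch of an export that copes with the structured dtype shown in the traceback (field list shortened): np.savetxt's default %.18e format cannot render the integer and string fields, and h[0] in the header line takes only the first character of each field name.

```python
import numpy as np

# Small structured array mimicking the cells dtype from the traceback.
cells = np.zeros(2, dtype=[('x', '<i8'), ('y', '<i8'), ('z', '<i8'),
                           ('size', '<i8'), ('source', '<f8'),
                           ('name', 'S256')])
cells[0] = (1, 2, 3, 10, 0.5, b'Isocortex')

with open('cells.csv', 'w') as f:
    f.write(','.join(cells.dtype.names) + '\n')   # full field names
    for row in cells:
        # decode bytes fields, stringify the numeric ones
        f.write(','.join(v.decode() if isinstance(v, bytes) else str(v)
                         for v in row) + '\n')
```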
The file ClearMap2.yml (conda build recipe) is mentioned in the handbook but is missing from the repository.
Hi there,
I am trying to get started running your example TubeMap scripts and am running into errors with gcc. I am on macOS 10.12.6 using conda 4.7.12. Does ClearMap run on macOS? Any pointers on how to troubleshoot would be appreciated.
First, the install:
git clone https://github.com/ChristophKirst/ClearMap2.git
cd ClearMap2
conda env create -f ClearMap.yml
conda activate ClearMap
Then in an ipython prompt:
from ClearMap.Environment import *
Results in gcc errors
clang: error: unsupported option '-fopenmp'
...
DistutilsExecError: command 'gcc' failed with exit status 1
...
~/anaconda3/envs/ClearMap/lib/python3.7/distutils/unixccompiler.py in _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts)
119 except DistutilsExecError as msg:
--> 120 raise CompileError(msg)
121
CompileError: command 'gcc' failed with exit status 1
When I am in the conda ClearMap environment, gcc is still /usr/bin/gcc:
which gcc
gives
/usr/bin/gcc
I tried installing gcc inside conda with conda install gcc, but I get PackageNotFound. I am using conda 4.7.12, and as far as I understand, conda before version 5 was using the system-level gcc, while later versions use their own gcc installers. Would that explain why conda install gcc is failing?
Finally, /usr/bin/gcc -v gives me:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Do you know of any env paths I can set so the ClearMap compile steps can find gcc?
Hi,
Really more of a suggestion. In the CellMap tutorial, when ano.label_points is called, the key is "order". This is also the underlying default in the Annotation.py file. It took me quite a while to track down that the label returned was not the label in the atlas, but the index of what I believe is an ordered dictionary, with no apparent link to the ABA nomenclature. I totally get that there might be some reason under the hood for doing it like this, but it seems more intuitive as a user to have the following in the tutorial:
label = ano.label_points(coordinates_transformed, key='id');
names = ano.convert_label(label, key='id', value='name');
Because the user might want to map results back to the ABA, having the default be 'id' seems more natural, at least to me. If the defaults or the tutorial aren't changed, perhaps the documentation could be a bit more informative on this issue.
Thanks so much!
Zach
Hi Christoph,
I met an issue when running CellMap, in this section:
#%% Alignment - resampled to autofluorescence
align_channels_parameter = {
    #moving and reference images
    "moving_image" : ws.filename('resampled', postfix='autofluorescence'),
    "fixed_image"  : ws.filename('resampled'),
    #elastix parameter files for alignment
    "affine_parameter_file"  : align_channels_affine_file,
    "bspline_parameter_file" : None,
    #directory of the alignment result
    "result_directory" : ws.filename('elastix_resampled_to_auto')
};
The error message is when accessing Workspace.py:
cannot find name for type 'elastix_resampled_to_auto'!
Do you have any idea why this may happen?
Hello,
While I was running Cell Detection, there were many occurrences of this BrokenProcessPool error. I understand this is the most intensive step of ClearMap. Does this happen when memory runs out? Any help would be appreciated!
File "/home/riera/anaconda3/ClearMap2-master/ClearMap/Scripts/CellMap.py", line 196, in
processing_parameter=processing_parameter)
File "/home/riera/anaconda3/ClearMap2-master/ClearMap/ImageProcessing/Experts/Cells.py", line 317, in detect_cells
results, blocks = bp.process(detect_cells_block, source, sink=None, function_type='block', return_result=True, return_blocks=True, parameter=cell_detection_parameter, **processing_parameter)
File "/home/riera/anaconda3/ClearMap2-master/ClearMap/ParallelProcessing/BlockProcessing.py", line 249, in process
result = [f.result() for f in futures];
File "/home/riera/anaconda3/ClearMap2-master/ClearMap/ParallelProcessing/BlockProcessing.py", line 249, in
result = [f.result() for f in futures];
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/home/riera/anaconda3/envs/ClearMap/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
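A BrokenProcessPool usually means a worker process died, often killed by the OS when memory runs out. One thing worth trying (parameter names assumed from the bp.process(..., **processing_parameter) call in the traceback; the exact keys ClearMap accepts may differ) is lowering the worker count and block size so peak memory stays within RAM:

```python
# Sketch only: key names are assumptions based on the traceback, not a
# confirmed ClearMap API. Fewer workers and smaller blocks reduce the peak
# memory that cell detection needs at any one time.
processing_parameter = {
    'processes': 4,    # fewer simultaneous worker processes than CPU cores
    'size_max': 20,    # smaller blocks -> smaller per-worker footprint
    'overlap': 10,     # keep enough overlap for cells on block borders
}

# cells.detect_cells(..., processing_parameter=processing_parameter)
```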
Just a short question: The gamma parameter in your Frangi filter in the code is slightly different from the one in the documentation and paper - do you expect differences in the results, or is it negligible?
Another small issue: during the smoothing operation, a pre-calculated look-up table is used, which is saved in ClearMap2/ClearMap/ImageProcessing/Binary/Smoothing.npy, or in the zip file. I'm not sure whether the file in the zip file is intact, because it did not work in my case. I therefore recomputed the LUT from scratch, but this takes quite long. Maybe you can check whether the zip file contains an intact npy file?