pblab / python-pysight
Create images and volumes from photon lists generated by a multiscaler.
License: Other
Hello,
The following file seemingly contains 4 frames, based on visual inspection; 2904 lines were resolved, yet only a single frame was generated:
/data/Lior/Multiscaler data/TAG lens in FITC dish/masking tape edge-1024x1024 - 2x zoom - TAG OFF - 002.lst
The following files were taken with TAG lens off in START channel, and possibly with a deactivated START channel:
/data/Lior/Multiscaler data/TAG lens in FITC dish/masking tape edge-1024x1024 - 2x zoom - TAG OFF - 003.lst
/data/Lior/Multiscaler data/TAG lens in FITC dish/masking tape edge-1024x1024 - 2x zoom - TAG OFF - 004.lst
When processed with an empty START channel it fails with the following warning:
UserWarning: Channels that were inserted in GUI don't match actual data channels recorded.
Recorded channels are {'010', '001', '110'}.
When processed with the TAG lens at the START channel it fails with the following warning:
UserWarning: Wrong number of user inputs (3) compared to number of actual inputs (2).
Should it work some other way?
Thanks,
Lior
While processing one particular list file [1], 3.5 GB in size, the following bug was encountered:
Interpolating TAG lens data...
The mean frequency of TAG events is 185669.76 Hz.
Traceback (most recent call last):
(...)
File "/home/liorg/anaconda3/lib/python3.6/site-packages/pysight/tag_tools_v2.py", line 34, in run
phaser.allocate_phase()
File "/home/liorg/anaconda3/lib/python3.6/site-packages/pysight/tag_tools_v2.py", line 166, in allocate_phase
raw_tag=self.tag.values)
ZeroDivisionError: division by zero
Please help (:
Thanks!
Lior
[1] /14 November 2017/GCaMP6sMouse_StartPMT1_850mV_Stop1TAG_188kHz62p_Stop2Lines_100umDeepFOV3_3xZoom1024px100pPower021-LONG-ACQUISITION.lst
The file is start_pmt_stop2_lines_512px007.lst, recorded on 21-12-17 during the Drosophila experiments.
The resulting DF when TAG interpolation fails should contain the TAG data, for debugging purposes.
Modernize the GUI, possibly with remi, allowing for cross-platform control over the script.
Also consider adding a .toml file as the basic requirement for the script. This will allow users to save their settings and change workstations without issues.
Summarizing the questions I have:
NA, not 0.0.
Why can't the smallest bin be after slot 65?
You're using many Python built-in functions here; it would be better to use their NumPy equivalents.
Start with a NumPy array full of NaNs instead of zeros.
Was this copied from scipy?
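The NaN pre-allocation suggestion above might look like this minimal sketch (shapes and values are made up):

```python
import numpy as np

# Pre-allocate with NaNs so untouched bins stay distinguishable from
# bins that legitimately recorded zero events.
arr = np.full((4, 4), np.nan)
arr[0, 0] = 7.0  # only one bin written

remaining = np.isnan(arr).sum()  # 15 bins still untouched
```

With zeros as the fill value, "never written" and "wrote a zero" would be indistinguishable downstream.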
When scanning unidirectionally, the flyback of the fast axis should be blank. Under normal imaging conditions, with an EOM darkening the returning phase of the mirror, no real gain can be achieved from the stray photons arriving at the detector.
On special occasions, such as the breakage of an EOM, PySight should allow one to recover the photons from the returning phase.
Instead of changing the start-of-frame time in the linspace of create_frame_array, try adding an offset to the time_rel_frame column.
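A minimal sketch of that offset approach, assuming a DataFrame with a time_rel_frame column (the values and offset are made up for illustration):

```python
import pandas as pd

# Hypothetical photon table -- shifting the timestamps themselves
# instead of moving the frame-array start time.
df = pd.DataFrame({"time_rel_frame": [10, 250, 480]})

offset = 100  # assumed correction, in the same time units as the column
df["time_rel_frame"] += offset
```

This keeps the frame edges produced by linspace untouched, so only one column changes.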
Current issues with the new SI software:
The self.bidir variable seems to have little effect on the generated images. I might have switched the frame-flyback and line-bidir variables.
searchsorted for frames should run on the line input instead of the photon list, speeding up execution considerably.
The edges of the image are always spatially distorted, due to the nature of the resonant scanning element we're using. A fill fraction (that can also be read from ScanImage) should cut off the photons at the edges of the image.
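A sketch of such a fill-fraction crop, assuming photon positions normalized to [0, 1] along the fast axis (the fill-fraction value and names are assumptions; in practice the value would be read from ScanImage):

```python
import numpy as np

fill_fraction = 0.9  # assumed usable portion of the resonant sweep
x = np.array([0.01, 0.30, 0.50, 0.70, 0.99])  # normalized fast-axis positions

# Keep only photons inside the central, linear part of the sweep.
keep = np.abs(x - 0.5) <= fill_fraction / 2
x_cropped = x[keep]
```

Photons near 0 or 1 fall in the sinusoidally distorted turnaround region and are discarded.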
attrs provides new tools, especially validators, that can be very useful. With the new class-based structure of fileIO and lst, this can come in handy.
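As a rough sketch of how attrs validators could guard these classes (the class and field names below are hypothetical, not PySight's actual attributes):

```python
import attr

@attr.s
class LstFile:
    # Hypothetical fields for illustration only.
    path = attr.ib(validator=attr.validators.instance_of(str))
    timepatch = attr.ib(validator=attr.validators.in_({"0", "5", "5b", "f3"}))
```

Invalid inputs then fail loudly at construction time (a non-string path raises TypeError, an unknown timepatch raises ValueError), instead of surfacing deep inside the parsing pipeline.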
Hi,
I've called main.mp_batch(foldername=fld, glob_str='*.lst'). It correctly identified 27 matching list files, but then exited after failing to parse the first one. See below.
Calling main.run_batch_lst(foldername=fld, glob_str='*.lst') yielded the expected behaviour of moving on to the next list file after failing to parse the first.
Thanks,
Lior
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/site-packages/pysight/main.py", line 57, in main_data_readout
cur_file.run()
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/site-packages/pysight/ascii_list_file_parser/file_io.py", line 52, in run
self.timepatch: str = self.get_timepatch(metadata)
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/site-packages/pysight/ascii_list_file_parser/file_io.py", line 173, in get_timepatch
f"The timepatch used ({timepatch}) isn't supported "
NotImplementedError: The timepatch used (2) isn't supported for binary files since it uses a 6-byte word representation. Please disallow this option in the MPANT software.
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "", line 1, in
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/site-packages/pysight/main.py", line 423, in mp_batch
pool.map(main_data_readout, all_guis)
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/multiprocessing/pool.py", line 266, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/export/home/pblab/.conda/envs/py36pysight/lib/python3.6/multiprocessing/pool.py", line 644, in get
raise self._value
NotImplementedError: The timepatch used (2) isn't supported for binary files since it uses a 6-byte word representation. Please disallow this option in the MPANT software.
When TAG lens interpolation fails, the data matrix is collapsed along the axial dimension. While this solution is helpful for debugging, it can complicate downstream analysis pipelines that expect the data matrix to include the axial dimension. Please allocate a distinct matrix in the resulting HDF5 file for the collapsed matrix.
This request is relevant, of course, only for version 1.0 (#13).
Thanks!
Movie() - by grabbing the raw signals from dict_of_data and using the row indices of the DataFrame as the data points, not the abs_time column.
SignalValidator - remove the CORRUPT option and clarify the API to the mscan and scanimage modules.
When asking PySight to parse a binary list file, I get the following error:
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/main.py", line 194, in run
return main_data_readout(gui_as_object)
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/main.py", line 43, in main_data_readout
cur_file.run()
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/ascii_list_file_parser/fileIO_tools.py", line 65, in run
self.data: np.ndarray = self.read_lst(num_of_items=num_of_items)
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/ascii_list_file_parser/fileIO_tools.py", line 337, in read_lst
count=num_of_lines_to_read).astype('{}U'.format(data_length))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xfc in position 11: ordinal not in range(128)
The exact hexadecimal value varies of course from one list file to another, but the bug has been consistent across several imaging sessions:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 16: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 10: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xb8 in position 15: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 8: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 12: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 0: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xda in position 4: ordinal not in range(128)
UnicodeDecodeError: 'ascii' codec can't decode byte 0x82 in position 2: ordinal not in range(128)
And so forth.
Oddly enough, the error takes longer (a minute versus a few seconds) to appear for a specific list file [1]. That delay is gone once running PySight in Debug mode, i.e. once checking the debug checkbox in PySight's advanced GUI window.
With a few other list files [2] a different error was reported:
...
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/ascii_list_file_parser/fileIO_tools.py", line 65, in run
self.data: np.ndarray = self.read_lst(num_of_items=num_of_items)
File "/state/partition1/home/pblab/data/Hagai/python-pysight/src/pysight/ascii_list_file_parser/fileIO_tools.py", line 330, in read_lst
raise NotImplementedError('Binary files are still not supported.')
NotImplementedError: Binary files are still not supported.
All these errors were observed using pydev debugger (build 172.3757.67), as the GUI is still pixelated when run ordinarily.
Thanks!
Lior
[1] 2018-04-24/940 nm wo OPO bypass/LPT (...) - 010.lst
[2] 2018-05-23/Setup/LPT (...) - 024.lst
2018-05-23/Setup/LPT (...) - 023.lst
2018-05-23/Setup/LPT (...) - 022.lst
Generate a distribution of the number of photons per laser pulse. To that end, simulate a laser signal and distribute all events inside it. This shouldn't take more than an hour using searchsorted.
Temporal structure: with the simulated laser pulses, obtain a histogram of the photon arrival times with 16 bins. Most "binary words" should be sparse - a single 1 bit among the 16 bins, representing a lone photon detected shortly after the laser pulse hit the sample. Two hours.
Using the edges of the generated histogram, sum all relevant laser pulses - i.e. the pulses that belong to this specific pixel. This should generate a histogram expressing, e.g., that 50 of the laser pulses generated no photons, 25 generated one, and so on.
Now comes the look-up table for our cumulative binary words. The histogram generated in step 3 will be compared against that look-up table and replaced by the most suitable entry in it. To generate this table we need a scikit-learn framework capable of being fed 16-bit words as input and outputting an integer corresponding to the number of photons.
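The first two steps above could be sketched with NumPy along these lines; the repetition rate, event counts, and variable names are all assumptions for illustration:

```python
import numpy as np

rep_rate = 80e6        # assumed laser repetition rate [Hz]
n_pulses = 1_000
rng = np.random.default_rng(0)

# Simulated pulse start times and random photon arrival times.
pulses = np.arange(n_pulses) / rep_rate
photons = rng.uniform(0, n_pulses / rep_rate, size=5_000)

# searchsorted assigns each photon to the pulse preceding it.
idx = np.searchsorted(pulses, photons, side="right") - 1

# Step 1: distribution of photons per laser pulse.
photons_per_pulse = np.bincount(idx, minlength=n_pulses)

# Step 2: 16-bin histogram of arrival times relative to "their" pulse.
rel_times = photons - pulses[idx]
hist, edges = np.histogram(rel_times, bins=16, range=(0, 1 / rep_rate))
```

Since the pulse times are sorted, the searchsorted assignment is a single vectorized pass over the photon list, which is what keeps this step cheap.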
A comprehensive user guide needs to be written. mdBook is a good option.
We also need a user forum. A gitter channel is an option, but a discord server is a better one.
Pickle is not safe. An alternative implementation may use PyArrow instead:
import pyarrow as pa
import pandas as pd
pa.serialize_to(dict_of_data, r'/path/to/file.a')
data = pa.deserialize_from(r'/path/to/file.a', pd.DataFrame)
type(data) # dict
Two topics are currently untested: the lst_tools module, and the channel-selection logic seen in the validations at the beginning of the script.
Following the massive refactoring of version 0.4.6, all tests are broken.