sbozzolo / kuibit
Python 3 post-processing tools for simulations performed with the Einstein Toolkit
License: GNU General Public License v3.0
carpet-grid.asc doesn't follow the same conventions as the output in CarpetIOASCII, but the structure of the filename is the same. This confuses kuibit and results in errors. As a possible workaround for the moment, it is possible to simply delete the carpet-grid.asc files.
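For instance, the workaround can be sketched as follows (the directory layout here is a throwaway stand-in, not taken from the report):

```python
from pathlib import Path
import tempfile

# Stand-in simulation directory (illustrative); in practice this would be
# the directory you pass to SimDir.
sim_dir = Path(tempfile.mkdtemp())
(sim_dir / "output-0000").mkdir()
(sim_dir / "output-0000" / "carpet-grid.asc").touch()

# Delete every carpet-grid.asc so kuibit does not try to parse them.
for f in sim_dir.rglob("carpet-grid.asc"):
    f.unlink()

leftover = list(sim_dir.rglob("carpet-grid.asc"))
print(leftover)  # -> []
```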
The full log of errors is as follows:
psi4 = sim.gf.xyz["qlm_psi4[0]"]
psi4.get_iteration(0)
KeyError: "Can't open attribute (can't locate attribute: 'origin')"
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/tmp/ipykernel_24443/2044816109.py in <module>
1 psi4 = sim.gf.xyz["qlm_psi4[0]"]
----> 2 psi4.get_iteration(0)
/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in get_iteration(self, iteration, default)
493 if iteration not in self.available_iterations:
494 return default
--> 495 return self[iteration]
496
497 @lru_cache(128)
/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in __getitem__(self, iteration)
518 raise KeyError(f"Iteration {iteration} not present")
519
--> 520 return self._read_iteration_as_HierarchicalGridData(iteration)
521
522 def read_on_grid(self, iteration, grid, resample=False):
/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _read_iteration_as_HierarchicalGridData(self, iteration)
464 ):
465 uniform_grid_data_components.append(
--> 466 self._read_component_as_uniform_grid_data(
467 path, iteration, ref_level, comp
468 )
/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _read_component_as_uniform_grid_data(self, path, iteration, ref_level, component)
1145 path, iteration, ref_level, component
1146 ) as dataset:
-> 1147 grid = self._grid_from_dataset(
1148 dataset, iteration, ref_level, component
1149 )
/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _grid_from_dataset(self, dataset, iteration, ref_level, component)
1067 shape = np.array(dataset.shape[::-1])
1068
-> 1069 x0 = dataset.attrs["origin"]
1070 dx = dataset.attrs["delta"]
1071 time = dataset.attrs["time"]
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
~/.local/lib/python3.8/site-packages/h5py/_hl/attrs.py in __getitem__(self, name)
54 """ Read the value of an attribute.
55 """
---> 56 attr = h5a.open(self._id, self._e(name))
57
58 if is_empty_dataspace(attr):
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5a.pyx in h5py.h5a.open()
KeyError: "Can't open attribute (can't locate attribute: 'origin')"
Currently the absmax() method is supported for grid quantities. It would be useful to also get the index of the maximum. Moreover, a function that reports the grid position (i.e., x, y, z), the indices, and the value of max() would be useful.
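A minimal numpy sketch of what such a method could compute (the function name and signature are hypothetical, not kuibit API), assuming a uniform grid with origin x0 and spacing dx:

```python
import numpy as np

def index_and_coordinates_of_max(data, x0, dx):
    """Return the grid index and physical coordinates of the maximum of data.

    Hypothetical helper: assumes a uniform grid with origin x0 and spacing dx.
    """
    # Index of the maximum in the N-dimensional array
    index = np.unravel_index(np.argmax(data), data.shape)
    # Physical coordinates of that grid point
    coordinates = np.asarray(x0) + np.asarray(index) * np.asarray(dx)
    return index, coordinates

data = np.zeros((4, 4))
data[1, 2] = 5.0
index, coords = index_and_coordinates_of_max(data, x0=[0.0, 0.0], dx=[0.5, 0.5])
# index is (1, 2); coords is [0.5, 1.0]
```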
In HierarchicalGridData, we often check if a point belongs to a grid. In practice, the check consists in:
if np.any(point < (self.lowest_vertex)) or np.any(point >= (self.highest_vertex)):
    return False
return True
where the two vertices of the grid are computed as self.x0 - 0.5 * self.dx and self.x1 + 0.5 * self.dx. Floating-point arithmetic can lead to false positives for values on the edge of a cell. This can manifest itself in an error in the method merge_refinement_levels.
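One possible mitigation, sketched here with a spacing-proportional tolerance (this is an assumption, not necessarily the fix kuibit adopted): shrink the bounding box by a tiny epsilon so that points landing exactly on the upper edge are classified consistently.

```python
import numpy as np

def contains(point, lowest_vertex, highest_vertex, dx, tol=1e-12):
    # Tolerance proportional to the grid spacing: points within eps of the
    # upper edge are treated as outside, so round-off on edge values cannot
    # flip the result.
    eps = tol * np.asarray(dx)
    point = np.asarray(point)
    if np.any(point < np.asarray(lowest_vertex) - eps) or np.any(
        point >= np.asarray(highest_vertex) - eps
    ):
        return False
    return True

lowest, highest, dx = np.array([0.0]), np.array([1.0]), np.array([0.1])
inside = contains([0.5], lowest, highest, dx)           # True
on_edge = contains([1.0 - 1e-16], lowest, highest, dx)  # False
```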
EDIT:
Here is a file that (probably) triggers the problem: https://pastebin.com/raw/A536eJmA
Tests test_antenna_responses and test_ra_dec_to_theta_phi are failing for me at the moment, with errors:
AssertionError: 0.1676658308656924 != 0.173418558 within 7 places (0.005752727134307606 difference)
and
AssertionError: 2.458839344228527 != 2.3697174 within 7 places (0.08912194422852693 difference)
respectively.
Could this be the effect of any of the changes implemented as a result of the review in openjournals/joss-reviews#3099?
mayavi, the Python package that we've used so far for 3D visualization, has been an endless source of pain (from a packaging point of view). Some of the problems come from VTK itself (e.g., support for Python 3.10 arrived one year after its release), but others come from mayavi itself. In addition, mayavi doesn't seem actively developed, and we don't have much code that relies on it. For these reasons, I want to drop mayavi and use vedo or pyvista instead.
We don't need very sophisticated features. Both packages seem to be heavily focused on meshes, which is not something we strongly need (we only need them for horizons). We primarily need volumetric rendering and field lines.
3D visualization has not been my focus for a while, but kuibit has all the capabilities to enable cool animations. I am opening this issue to collect ideas and experiences. If anyone wants to help, I am happy to work with them to figure out the best solution.
sim = sd.SimDir(r"D:\simulations\sim_1c")
betaz = sim.gf.z["betaz"]
betaz[0]
These lines of code worked fine in my previous simulations, but when I ran them in my new simulation I encountered the following error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
c:\Users\auror\Dropbox\SnelliusData\Work\test.ipynb Cell 6' in <cell line: 3>()
<a href='vscode-notebook-cell:/c%3A/Users/auror/Dropbox/SnelliusData/Work/test.ipynb#ch0000015?line=0'>1</a> sim = sd.SimDir(r"D:\simulations\sim_1b")
<a href='vscode-notebook-cell:/c%3A/Users/auror/Dropbox/SnelliusData/Work/test.ipynb#ch0000015?line=1'>2</a> hc = sim.gf.z["hc"]
----> <a href='vscode-notebook-cell:/c%3A/Users/auror/Dropbox/SnelliusData/Work/test.ipynb#ch0000015?line=2'>3</a> hc[0]
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\cactus_grid_functions.py:518, in BaseOneGridFunction.__getitem__(self, iteration)
515 if iteration not in self.available_iterations:
516 raise KeyError(f"Iteration {iteration} not present")
--> 518 return self._read_iteration_as_HierarchicalGridData(iteration)
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\cactus_grid_functions.py:472, in BaseOneGridFunction._read_iteration_as_HierarchicalGridData(self, iteration)
462 for comp in self._components_in_file(
463 path, iteration, ref_level
464 ):
465 uniform_grid_data_components.append(
466 self._read_component_as_uniform_grid_data(
467 path, iteration, ref_level, comp
468 )
469 )
471 return (
--> 472 grid_data.HierarchicalGridData(uniform_grid_data_components)
473 if uniform_grid_data_components
474 else None
475 )
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:1782, in HierarchicalGridData.__init__(self, uniform_grid_data)
1779 for comp in uniform_grid_data_sorted:
1780 components.setdefault(comp.ref_level, []).append(comp.copy())
-> 1782 self.grid_data_dict = {
1783 ref_level: self._try_merge_components(comps)
1784 for ref_level, comps in components.items()
1785 }
1787 # Map between coordinates and which component to use when computing
1788 # values. self._component_mapping is a function that takes a point and
1789 # returns the associated component. The reason this is an attribute is
1790 # to save it and avoid re-computing the mapping all the time
1791 self._component_mapping = None
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:1783, in <dictcomp>(.0)
1779 for comp in uniform_grid_data_sorted:
1780 components.setdefault(comp.ref_level, []).append(comp.copy())
1782 self.grid_data_dict = {
-> 1783 ref_level: self._try_merge_components(comps)
1784 for ref_level, comps in components.items()
1785 }
1787 # Map between coordinates and which component to use when computing
1788 # values. self._component_mapping is a function that takes a point and
1789 # returns the associated component. The reason this is an attribute is
1790 # to save it and avoid re-computing the mapping all the time
1791 self._component_mapping = None
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:2089, in HierarchicalGridData._try_merge_components(self, components)
2086 components_no_ghosts.sort(key=lambda x: tuple(x.x0))
2088 # Next, we prepare the global grid
-> 2089 grid = gdu.merge_uniform_grids(
2090 [comp.grid for comp in components_no_ghosts]
2091 )
2093 merged_grid_data, indices_used = self._fill_grid_with_components(
2094 grid, components_no_ghosts
2095 )
2097 if np.amin(indices_used) == 1:
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data_utils.py:144, in merge_uniform_grids(grids, component)
141 return property_
143 ref_level = _extract_property("ref_level")
--> 144 time = _extract_property("time")
145 iteration = _extract_property("iteration")
146 num_ghost = _extract_property("num_ghost")
File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data_utils.py:135, in merge_uniform_grids.<locals>._extract_property(property_)
132 all_properties = {tuple(getattr(g, property_)) for g in grids}
134 if len(all_properties) != 1:
--> 135 raise ValueError(
136 f"Can only merge grids with the same {property_}."
137 )
139 # Extract the only element from the set (using tuple unpacking)
140 (property_,) = all_properties
ValueError: Can only merge grids with the same time.
I copied the following code from kuibit/kuibit/cactus_grid_functions.py, lines 460 to 469 at commit 0a3dcb5, and printed the components:
for x in uniform_grid_data_components:
    print((x.iteration, x.time))
which gives me the following result (e.g., iteration 0 and 128).
Here is the download link for the data file admbase-shift.z.asc:
https://www.dropbox.com/t/RCkGcwDj9YM1pbYh
In several cases, we want to be able to ignore the part of the data that satisfies a specific condition. For example, we want to ignore everything with density smaller than 100 * rho_atmosphere, or everything inside a horizon. It would be useful to add support for masks in grid data, so that we can easily block out regions we are not interested in.
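As a sketch of what this could look like with numpy masked arrays (an assumed approach for illustration, not existing kuibit API):

```python
import numpy as np

# Toy density data; in practice this would be the data array of a grid
# function. rho_atmosphere here is an illustrative value.
rho_atmosphere = 1e-10
density = np.array([1e-11, 5e-8, 3e-7, 2e-12])

# Mask everything with density smaller than 100 * rho_atmosphere.
masked = np.ma.masked_where(density < 100 * rho_atmosphere, density)

# Downstream reductions then ignore the blocked-out region.
kept = masked.compressed()  # unmasked values only: 5e-8 and 3e-7
```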
Your contributing document already contains lots of great info. But there are a couple of things that I think might be missing. What is your procedure for merge requests? Do tests have to pass? How many people have to review a piece of code? Do you prefer merge requests from forks or branches? etc.
It seems to me that kuibit uses the ET-B sensitivity curve for the Einstein Telescope. I was informed by Tim Dietrich that this sensitivity curve is outdated; people should use ET-D. Here is the relevant information on the new sensitivity curve for ET:
Keeping the poetry.lock file updated is important to ensure that kuibit keeps working when new software is released (e.g., tikzplotlib and matplotlib). At the moment, I update the dependencies every now and then, but it would be best to automate this process. There are services like pyup to do this, but compatibility with poetry seems lacking at the moment (pyupio/pyup#332).
I currently have a simulation using EinsteinAnalysis/Outflow, which generates asc files at each detector. I can look at SD.allfiles and see that SimDir recognizes these files are located in the directory, but it does not create timeseries for the various columns. Is there an additional step that needs to be taken for these to be loaded? Or are asc files ingested with very strict format requirements?
Thanks,
Samuel
The tutorial on SimDir is supposed to introduce the user to the various attributes in SimDir. At the moment, only timeseries is discussed.
I know poetry isn't yours---it's essentially a dependency. But from skimming Poetry's documentation, and yours, one thing isn't clear to me, which might be worth highlighting in the documentation.
When I develop a Python package---e.g., yt---with a traditional setup.py build system, I can call
python setup.py develop --user
to tell it to create symlinks to the Python code, so that if I modify the code in the repo, the changes are reflected when I import the module from anywhere. Is the same true here? I.e., do I have to do anything special in poetry to enable this kind of behaviour?
It strikes me that it would be extremely easy to use kuibit as the backend for loading data into yt to enable, e.g., volume rendering. In fact, volume rendering a uniform grid, after resampling in kuibit, should work essentially out of the box.
This isn't a feature request, just a suggestion for a straightforward enhancement, should you decide you want volume rendering.
Overall the paper reads quite well. One thing I noticed is that there is not much comparison to other visualization and analysis tools. For example, how does kuibit compare to VisIt, yt, or even PyCactus? It may also be enough to mention that many users of the Einstein Toolkit write their own visualization pipelines by hand.
Why are the methods for differentiating a time series called derived? Would differentiate not be more appropriate?
Calling mask_removed() on a masked series removes the masked points, but calling is_masked() on the resulting series still returns True. The problem is that the x and y arrays are still MaskedArrays; to fix this, they should be converted to standard ndarrays.
My fork of kuibit is both ahead and behind by dozens of commits, and making a special branch with just the relevant commit is too much of a hassle for this minor issue, so rather than creating a pull request I'm just going to explain the edit that should be made:
To fix, change line 673 of series.py to:
return type(self)(np.ma.compressed(self.x[mask]), np.ma.compressed(self.y[mask]), True)
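A minimal standalone demonstration of why the conversion matters (plain numpy, independent of kuibit internals): indexing a MaskedArray with a boolean mask still yields a MaskedArray, while np.ma.compressed returns a plain ndarray.

```python
import numpy as np

# A series' y values with one masked entry (5.0 is masked out).
y = np.ma.masked_greater(np.array([1.0, 5.0, 2.0]), 4.0)
mask = ~np.ma.getmaskarray(y)

sliced = y[mask]                   # still a MaskedArray, even with no masked entries
fixed = np.ma.compressed(y[mask])  # plain ndarray

still_masked = isinstance(sliced, np.ma.MaskedArray)  # True
converted = isinstance(fixed, np.ma.MaskedArray)      # False
```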
I encountered an issue when kuibit read a corrupted HDF5 file (corrupted because the program ended before Cactus finished writing to it). When kuibit reads a corrupted HDF5 file, the reader fails directly, and the exception is thrown by the h5py package. Currently, the error message is:
RuntimeError: Can't determine # of objects (addr overflow, addr = 1231693139, size = 328, eoa = 1231364739)
This doesn't provide useful debug information about which file caused the issue. Because there are a lot of files in each directory, users might need considerable effort to track down the corrupted file. Hence, it would be beneficial to catch the exception from h5py and tell the user which file in which directory caused the error.
Could you add additional exception handling after this line to check the integrity of the file, https://github.com/Sbozzolo/kuibit/blob/master/kuibit/cactus_grid_functions.py#L1410, and re-throw the error with additional information about the file name, its path, etc.? That would give users more useful debugging information. Thanks.
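A sketch of what such handling could look like (the helper name and message format are made up for illustration, and the h5py failure is simulated rather than produced from a real corrupted file):

```python
from contextlib import contextmanager

@contextmanager
def reraise_with_path(path):
    # Wrap a low-level HDF5 read; if the reader blows up, re-raise with the
    # offending file's path prepended so the user knows where to look.
    try:
        yield
    except RuntimeError as exc:
        raise RuntimeError(
            f"Error reading {path} (file may be corrupted): {exc}"
        ) from exc

# Simulate h5py failing on a corrupted file:
def read_corrupted():
    raise RuntimeError("Can't determine # of objects")

try:
    with reraise_with_path("simulations/out/data.h5"):
        read_corrupted()
except RuntimeError as err:
    message = str(err)  # contains both the file path and the original error
```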
When trying to run the example plot_1d_slice.py with matplotlib 3.8.0, I get the following error:
ImportError: cannot import name 'common_texification' from 'matplotlib.backends.backend_pgf'
I do not get this error when executing the same example with, say, matplotlib 3.7.3.
Hi Gabriele,
Out of curiosity, is there a specific reason to continue to restrict the version of mopi to <0.3, even in the current development version? I usually run the latest version, which is technically in conflict with kuibit's version requirements, and haven't had any issues so far, but I wanted to ask in case there are edge cases I might need to be concerned with.
Thanks!
Samuel
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
kuibit 1.4.0.dev2 requires motionpicture<0.3.0,>=0.2.0, but you have motionpicture 0.3.1 which is incompatible.
kuibit depends on argcomplete and tikzplotlib. These were not installed when running "pip3 install kuibit".