sbozzolo / kuibit

Python 3 post-processing tools for simulations performed with the Einstein Toolkit

License: GNU General Public License v3.0

Python 100.00%

kuibit's People

Contributors: chcheng3, deepsource-autofix[bot], eloisabentivegna, jonahm-lanl, sbozzolo

kuibit's Issues

Bug in Reading QuasiLocalMeasures::qlm_weyl_scalars

The full log of errors is as follows.

psi4 = sim.gf.xyz["qlm_psi4[0]"]
psi4.get_iteration(0)
KeyError: "Can't open attribute (can't locate attribute: 'origin')"
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_24443/2044816109.py in <module>
      1 psi4 = sim.gf.xyz["qlm_psi4[0]"]
----> 2 psi4.get_iteration(0)

/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in get_iteration(self, iteration, default)
    493         if iteration not in self.available_iterations:
    494             return default
--> 495         return self[iteration]
    496 
    497     @lru_cache(128)

/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in __getitem__(self, iteration)
    518             raise KeyError(f"Iteration {iteration} not present")
    519 
--> 520         return self._read_iteration_as_HierarchicalGridData(iteration)
    521 
    522     def read_on_grid(self, iteration, grid, resample=False):

/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _read_iteration_as_HierarchicalGridData(self, iteration)
    464                 ):
    465                     uniform_grid_data_components.append(
--> 466                         self._read_component_as_uniform_grid_data(
    467                             path, iteration, ref_level, comp
    468                         )

/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _read_component_as_uniform_grid_data(self, path, iteration, ref_level, component)
   1145                 path, iteration, ref_level, component
   1146             ) as dataset:
-> 1147                 grid = self._grid_from_dataset(
   1148                     dataset, iteration, ref_level, component
   1149                 )

/mnt/beegfs/zhong/spack/opt/spack/linux-ubuntu16.04-baltasar/gcc-10.3.0/python-3.8.11-55gzuixjsxhm4rjhdbiqoybjah7tjj24/lib/python3.8/site-packages/kuibit/cactus_grid_functions.py in _grid_from_dataset(self, dataset, iteration, ref_level, component)
   1067         shape = np.array(dataset.shape[::-1])
   1068 
-> 1069         x0 = dataset.attrs["origin"]
   1070         dx = dataset.attrs["delta"]
   1071         time = dataset.attrs["time"]

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

~/.local/lib/python3.8/site-packages/h5py/_hl/attrs.py in __getitem__(self, name)
     54         """ Read the value of an attribute.
     55         """
---> 56         attr = h5a.open(self._id, self._e(name))
     57 
     58         if is_empty_dataspace(attr):

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5a.pyx in h5py.h5a.open()

KeyError: "Can't open attribute (can't locate attribute: 'origin')"

Addition of argmax() method

Currently, the absmax() method is supported for grid quantities. It would be useful to also get the index of the maximum.

Moreover, a function that reports the grid position (i.e., x, y, z), the indices, and the value of max() would be useful.
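As an illustration, here is a minimal sketch of such a helper, assuming a UniformGridData-like object that exposes the numpy array data, the origin x0, and the spacing dx (the names kuibit uses; the helper itself is hypothetical):

import numpy as np

def position_of_absmax(grid_data):
    # Index of the largest |value| in the flattened array ...
    flat_index = np.argmax(np.abs(grid_data.data))
    # ... converted back to a multi-dimensional index,
    indices = np.unravel_index(flat_index, grid_data.data.shape)
    # and mapped to physical coordinates (x, y, z).
    coordinates = grid_data.x0 + np.array(indices) * grid_data.dx
    return indices, coordinates, grid_data.data[indices]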

`grid_data` can fail due to floating point approximations

In HierarchicalGridData, we often check whether a point belongs to a grid. In practice, the check consists of:

if np.any(point < (self.lowest_vertex)) or np.any(point >= (self.highest_vertex)):
    return False
return True

where we have the two vertices of the grid computed as self.x0 - 0.5 * self.dx and self.x1 + 0.5 * self.dx. Floating-point arithmetic can lead to false positives for values on the edge of a cell. This can manifest itself in an error in the method merge_refinement_levels.
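A possible fix is to make the check tolerant to floating-point error, for example by allowing points within a small fraction of the grid spacing of the boundary (a sketch, not the actual kuibit implementation; the value of rel_tol is an arbitrary choice):

import numpy as np

def contains(self, point, rel_tol=1e-12):
    # Tolerance proportional to the grid spacing, so that points sitting
    # on a cell edge up to floating-point error are still counted as inside.
    tol = rel_tol * self.dx
    if np.any(point < (self.lowest_vertex - tol)) or np.any(
        point >= (self.highest_vertex + tol)
    ):
        return False
    return True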

EDIT:
Here is a file that (probably) triggers the problem: https://pastebin.com/raw/A536eJmA

Two failing tests.

Tests test_antenna_responses and test_ra_dec_to_theta_phi are failing for me at the moment, with errors:

AssertionError: 0.1676658308656924 != 0.173418558 within 7 places (0.005752727134307606 difference)

and

AssertionError: 2.458839344228527 != 2.3697174 within 7 places (0.08912194422852693 difference)

respectively.

Could this be the effect of any of the changes implemented as a result of the review in openjournals/joss-reviews#3099?

Drop `mayavi` in favor of `vedo` or `pyvista`

mayavi, the Python package that we've used so far for 3D visualization, has been an endless source of pain (from a packaging point of view). Some of the problems come from VTK itself (e.g., support for Python 3.10 was added one year after its release), but others come from mayavi itself. In addition, mayavi doesn't seem to be actively developed, and we do not have much code that relies on it. For these reasons, I want to drop mayavi and use vedo or pyvista instead.

We don't need very sophisticated features. Both packages seem to be heavily focused on meshes, which is not something we strongly need (we only use meshes for horizons). What we primarily need are volumetric rendering and field lines.

3D visualization has not been my focus for a while, but kuibit has all the capabilities to enable cool animations. I am opening this issue to collect ideas and experiences. If anyone wants to help, I am happy to work with them to figure out the best solution.

Can only merge grids with the same time

sim = sd.SimDir(r"D:\simulations\sim_1c")
betaz = sim.gf.z["betaz"]
betaz[0]

These lines of code worked fine in my previous simulations, but when I ran them in my new simulation I encountered the following error:

Stack trace:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
c:\Users\auror\Dropbox\SnelliusData\Work\test.ipynb Cell 6' in <cell line: 3>()
      1 sim = sd.SimDir(r"D:\simulations\sim_1b")
      2 hc = sim.gf.z["hc"]
----> 3 hc[0]

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\cactus_grid_functions.py:518, in BaseOneGridFunction.__getitem__(self, iteration)
    515 if iteration not in self.available_iterations:
    516     raise KeyError(f"Iteration {iteration} not present")
--> 518 return self._read_iteration_as_HierarchicalGridData(iteration)

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\cactus_grid_functions.py:472, in BaseOneGridFunction._read_iteration_as_HierarchicalGridData(self, iteration)
    462         for comp in self._components_in_file(
    463             path, iteration, ref_level
    464         ):
    465             uniform_grid_data_components.append(
    466                 self._read_component_as_uniform_grid_data(
    467                     path, iteration, ref_level, comp
    468                 )
    469             )
    471 return (
--> 472     grid_data.HierarchicalGridData(uniform_grid_data_components)
    473     if uniform_grid_data_components
    474     else None
    475 )

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:1782, in HierarchicalGridData.__init__(self, uniform_grid_data)
   1779 for comp in uniform_grid_data_sorted:
   1780     components.setdefault(comp.ref_level, []).append(comp.copy())
-> 1782 self.grid_data_dict = {
   1783     ref_level: self._try_merge_components(comps)
   1784     for ref_level, comps in components.items()
   1785 }
   1787 # Map between coordinates and which component to use when computing
   1788 # values. self._component_mapping is a function that takes a point and
   1789 # returns the associated component. The reason this is an attribute is
   1790 # to save it and avoid re-computing the mapping all the time
   1791 self._component_mapping = None

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:1783, in <dictcomp>(.0)
   1779 for comp in uniform_grid_data_sorted:
   1780     components.setdefault(comp.ref_level, []).append(comp.copy())
   1782 self.grid_data_dict = {
-> 1783     ref_level: self._try_merge_components(comps)
   1784     for ref_level, comps in components.items()
   1785 }
   1787 # Map between coordinates and which component to use when computing
   1788 # values. self._component_mapping is a function that takes a point and
   1789 # returns the associated component. The reason this is an attribute is
   1790 # to save it and avoid re-computing the mapping all the time
   1791 self._component_mapping = None

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data.py:2089, in HierarchicalGridData._try_merge_components(self, components)
   2086 components_no_ghosts.sort(key=lambda x: tuple(x.x0))
   2088 # Next, we prepare the global grid
-> 2089 grid = gdu.merge_uniform_grids(
   2090     [comp.grid for comp in components_no_ghosts]
   2091 )
   2093 merged_grid_data, indices_used = self._fill_grid_with_components(
   2094     grid, components_no_ghosts
   2095 )
   2097 if np.amin(indices_used) == 1:

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data_utils.py:144, in merge_uniform_grids(grids, component)
    141     return property_
    143 ref_level = _extract_property("ref_level")
--> 144 time = _extract_property("time")
    145 iteration = _extract_property("iteration")
    146 num_ghost = _extract_property("num_ghost")

File c:\Users\auror\anaconda3\envs\et\lib\site-packages\kuibit\grid_data_utils.py:135, in merge_uniform_grids.<locals>._extract_property(property_)
    132     all_properties = {tuple(getattr(g, property_)) for g in grids}
    134 if len(all_properties) != 1:
--> 135     raise ValueError(
    136         f"Can only merge grids with the same {property_}."
    137     )
    139 # Extract the only element from the set (using tuple unpacking)
    140 (property_,) = all_properties

ValueError: Can only merge grids with the same time.

I copied the following code from _read_iteration_as_HierarchicalGridData in cactus_grid_functions.py:

for path in self.allfiles:
    for ref_level in self._ref_levels_in_file(path, iteration):
        for comp in self._components_in_file(
            path, iteration, ref_level
        ):
            uniform_grid_data_components.append(
                self._read_component_as_uniform_grid_data(
                    path, iteration, ref_level, comp
                )
            )

and added:

for x in uniform_grid_data_components:
  print((x.iteration,x.time))

It gives me the following result (here for iterations 0 and 128):

iteration 0 (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 0.0) (0, 1.6)
iteration 128 (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 1.6) (128, 3.2)

Here is the download link for the data file admbase-shift.z.asc:
https://www.dropbox.com/t/RCkGcwDj9YM1pbYh
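As a temporary workaround while the reader is being fixed, one could imagine dropping the components whose time disagrees with the majority before the merge; a sketch, using the variable names from the traceback above (this is not part of kuibit's API):

from collections import Counter

# uniform_grid_data_components as read in
# _read_iteration_as_HierarchicalGridData (see the loop above).
times = [comp.time for comp in uniform_grid_data_components]
most_common_time, _ = Counter(times).most_common(1)[0]
# Keep only the components at the most common time for this iteration.
consistent_components = [
    comp for comp in uniform_grid_data_components
    if comp.time == most_common_time
]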

Implement masks for grid data

In several cases, we want to be able to ignore the part of the data that satisfies a specific condition. For example, we might want to ignore everything with density smaller than 100 * rho_atmosphere, or everything inside a horizon. It would be useful to add support for masks in grid data, so that we can easily block out the regions we are not interested in.
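For reference, numpy's masked arrays already provide the basic building block; a minimal sketch on a raw array (rho and rho_atmosphere are placeholders):

import numpy as np

rho = np.random.rand(32, 32, 32)  # placeholder density data
rho_atmosphere = 1e-10            # placeholder atmosphere density

# Mask every point with density below 100 * rho_atmosphere; masked points
# are then ignored by reductions such as .mean(), .max(), or .sum().
masked_rho = np.ma.masked_where(rho < 100 * rho_atmosphere, rho)
print(masked_rho.max())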

[JOSS-paper] questions about contributor guidelines?

Your contributing document already contains lots of great info. But there are a couple of things that I think might be missing. What is your procedure for merge requests? Do tests have to pass? How many people have to review a piece of code? Do you prefer merge requests from forks or branches? etc.

ET sensitivity curve is outdated

It seems to me that kuibit uses the ET-B sensitivity curve for the Einstein Telescope. I was informed by Tim Dietrich that this sensitivity curve is outdated: people should use ET-D. Here is the relevant information on the new sensitivity curve for ET:

Update dependencies automatically

Keeping the poetry.lock file updated is important to ensure that kuibit keeps working when new software is released (e.g., tikzplotlib and matplotlib). At the moment, I update the dependencies every now and then, but it would be best to automate this process. Services like pyup exist for this, but their compatibility with poetry seems lacking at the moment (pyupio/pyup#332).

Regarding ingestion of arbitrary asc files

I currently have a simulation using EinsteinAnalysis/Outflow, which generates asc files at each detector. I can look at SD.allfiles and see that SimDir recognizes these files in the directory, but it does not create timeseries for the various columns. Is there an additional step that needs to be taken for these to be loaded? Or are asc files only ingested when they meet very strict format requirements?

Thanks,
Samuel

The `simdir` tutorial is incomplete

The tutorial on SimDir is supposed to introduce the user to the various attributes in SimDir. At the moment, only timeseries is discussed.

A question on developing with poetry?

I know poetry isn't yours; it's essentially a dependency. But from skimming poetry's documentation, and yours, one thing isn't clear to me, and it might be worth highlighting in the documentation.

When I develop a Python package (e.g., yt) with a traditional setup.py build system, I can call

python setup.py develop --user

to tell it to create symlinks to the Python code, so that when I import a module from anywhere, changes to the code in the repo are reflected in the import. Is the same true here? That is, do I have to do anything special in poetry to enable this kind of behaviour?
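For what it's worth, my understanding (an assumption based on poetry's documentation, not anything kuibit-specific) is that running

poetry install

inside the repository installs the project itself in editable/development mode, so changes to the code in the repo are reflected in imports, analogously to setup.py develop.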

Integration with `yt`

It strikes me that it would be extremely easy to use kuibit as the backend for loading data into yt to enable, e.g., volume rendering. In fact, volume rendering a uniform grid, after resampling in kuibit, should work essentially out of the box.

This isn't a feature request, just a suggestion for a straightforward enhancement, should you decide you want volume rendering.
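A minimal sketch of what this bridge could look like, assuming the data has already been resampled onto a uniform grid so that rho behaves like a kuibit UniformGridData with data, x0, and x1 attributes (the resampling step itself is omitted):

import numpy as np
import yt

# bbox has shape (3, 2): (min, max) along each axis.
bbox = np.array([rho.x0, rho.x1]).T
ds = yt.load_uniform_grid(
    {"density": rho.data},  # field name -> ndarray
    rho.data.shape,         # domain dimensions
    bbox=bbox,
)
# Default volume rendering of the density field.
sc = yt.create_scene(ds, field="density")
sc.save("volume_rendering.png")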

[JOSS-Paper] comparison to the state of the field

Overall, the paper reads quite well. One thing I noticed is that there is not much comparison to other visualization and analysis tools. For example, how does kuibit compare to VisIt, yt, or even pycactus? It may also be enough to mention that many users of the Einstein Toolkit write their own visualization pipelines by hand.

One-line change to fix bug in mask_removed()

Calling mask_removed() on a masked series removes the masked points, but calling is_masked() on the resulting series still returns True. The problem is that the x and y arrays are still MaskedArrays; to fix this, they should be converted to standard ndarrays.

My fork of kuibit is both ahead and behind by dozens of commits, and making a special branch with just the relevant commit is too much of a hassle for this minor issue, so rather than creating a pull request I'm just going to explain the edit that should be done:

To fix, change line 673 of series.py to:

return type(self)(np.ma.compressed(self.x[mask]), np.ma.compressed(self.y[mask]), True)
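The point is that np.ma.compressed returns a plain ndarray, dropping both the mask and the MaskedArray type; a quick demonstration:

import numpy as np

masked = np.ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])
plain = np.ma.compressed(masked)
print(type(plain), plain)  # <class 'numpy.ndarray'> [1. 3.]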

Improvement on handling corrupted files

I encountered an issue when kuibit read a corrupted HDF5 file (corrupted because the program ended before Cactus finished writing it). When kuibit reads a corrupted HDF5 file, the reader fails immediately and the exception is thrown by the h5py package. Currently, the error message is:

RuntimeError: Can't determine # of objects (addr overflow, addr = 1231693139, size = 328, eoa = 1231364739)

This doesn't provide useful debugging information about which file caused the issue. Because there are a lot of files in each directory, users may need considerable effort to track down the corrupted file. Hence, it would be beneficial to catch the exception from h5py and tell the user which file in which directory caused the error.

Could you add additional exception handling after this line to check the integrity of the file, https://github.com/Sbozzolo/kuibit/blob/master/kuibit/cactus_grid_functions.py#L1410, and throw the error again with additional information about the file name, its path, etc.? That would give users more useful debugging information. Thanks.
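Something along these lines, as a sketch of the kind of wrapping I have in mind (the helper is hypothetical, not kuibit's actual code):

import h5py

def open_hdf5_checked(path):
    # Re-raise low-level h5py errors with the offending path attached,
    # so the user knows which file to inspect or regenerate.
    try:
        return h5py.File(path, "r")
    except (OSError, RuntimeError) as err:
        raise RuntimeError(
            f"Could not read {path}; the file may be corrupted "
            f"(e.g., the run ended while Cactus was writing it): {err}"
        ) from err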

Incompatibility with matplotlib 3.8.0

When trying to run the example plot_1d_slice.py with matplotlib 3.8.0, I get the following error:

ImportError: cannot import name 'common_texification' from 'matplotlib.backends.backend_pgf'

I do not get this error when executing the same example with, say, matplotlib 3.7.3.
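If the import originates in tikzplotlib (which, to my knowledge, imports common_texification from matplotlib's pgf backend and has not yet been updated for matplotlib 3.8), a stopgap is to pin matplotlib below 3.8:

pip install "matplotlib<3.8"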

MOPI version constraint inquiry

Hi Gabriele,
Out of curiosity, is there a specific reason to keep restricting the version of mopi to <0.3, even in the current development version? I usually run the latest version, which technically conflicts with kuibit's version requirements, and I haven't had any issues so far, but I wanted to ask in case there are edge cases I should be concerned about.

Thanks!
Samuel

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
kuibit 1.4.0.dev2 requires motionpicture<0.3.0,>=0.2.0, but you have motionpicture 0.3.1 which is incompatible.

Dependencies...

kuibit depends on argcomplete and tikzplotlib. These were not installed when running "pip3 install kuibit".
