Comments (7)
Is it possible to "officially" add a variable to configure the maximum int for element type/real/section, and to correct the type of the Cython pointer?
These are great catches! I'm implementing these fixes in a patch release of the reader. Thanks for your work finding these issues.
from pymapdl-reader.
One note:
If we implement the following:

```python
NUM_MAX = 1000000  # <- new maximum
nodelm = np.empty(NUM_MAX, np.int32)  # n nodes for this element type
...
```

we're going to run into an issue where the user either needs to dynamically change NUM_MAX (annoying), or we set it to some large value (memory inefficient). I think the best approach is to dynamically create just the right sized array at runtime:
```python
def dict_to_arr(index_dict):
    """Convert an index dictionary to an array.

    For example:
    {1: 20, 2: 30} --> [UNDEF, 20, 30]
    """
    arr = np.empty(max(index_dict.keys()) + 1, np.int32)
    for key, value in index_dict.items():
        arr[key] = value
    return arr
```
Then employ it with:
```python
nodfor = {}
...
nodfor = dict_to_arr(nodfor)
```
Ideally `ansys-mapdl-reader` would simply use the dictionary or some sort of hash table throughout the entire module, but I'll leave that for another day. For now, this is a more efficient and adaptive approach.
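As a quick sanity check, the helper above can be exercised on its docstring example. This is a self-contained sketch; note that only the keys present in the dictionary are meaningful afterward, since `np.empty` leaves the other slots uninitialized:

```python
import numpy as np

def dict_to_arr(index_dict):
    """Convert an index dictionary to an array indexed by key."""
    arr = np.empty(max(index_dict.keys()) + 1, np.int32)
    for key, value in index_dict.items():
        arr[key] = value
    return arr

# {1: 20, 2: 30} -> array of length 3; index 0 stays undefined
arr = dict_to_arr({1: 20, 2: 30})
```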
@Tyrin451, please check out https://github.com/pyansys/PyMAPDL-reader/tree/fix/typeref_int and let me know if it works for you.
If you're unfamiliar with git, your commands will be:
```
git clone https://github.com/pyansys/PyMAPDL-reader/
cd PyMAPDL-reader
git checkout fix/typeref_int
pip install -e .
```
You can forgo the `-e` flag, but I like to install in that mode when testing things out so I can quickly revert to master and reinstall.
Thanks! This patch works fine.
However, I think an optimization is needed in `binary_reader.pyx`.
I loop over my result sets (~4000) and the time per call is not consistent:
tqdm reports 60 it/s at the beginning, then it falls to 8 it/s.
For the first 100 sets, cProfile reports:

```
 ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
    100    0.983    0.010    0.983    0.010  {method 'read_element_data' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}
174/157    0.458    0.003    0.466    0.003  {built-in method _imp.create_dynamic}
    100    0.441    0.004    0.565    0.006  runPostPython.py:32(<listcomp>)
   2740    0.352    0.000    0.352    0.000  {built-in method nt.stat}
    100    0.124    0.001    0.140    0.001  rst.py:2321(<listcomp>)
    516    0.067    0.000    0.067    0.000  {built-in method io.open_code}
```
and for the whole dataset:

```
 ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   4231  206.065    0.049  206.065    0.049  {method 'read_element_data' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}
   4231   36.187    0.009   44.205    0.010  runPostPython.py:32(<listcomp>)
   4231    7.058    0.002    7.932    0.002  rst.py:2309(<listcomp>)
 579647    4.292    0.000    6.807    0.000  runPostPython.py:18(mises1D)
      1    2.857    2.857  270.347  270.347  runPostPython.py:27(getMaxOverTime)
 588134    1.795    0.000    1.795    0.000  {method 'reduce' of 'numpy.ufunc' objects}
   8728    1.782    0.000    1.782    0.000  {method 'read_record' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}
```
Could appending each record to a Python list be the problem (`_binary_reader.pyx`, line 283)?
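To probe that hypothesis in isolation, one could compare appending records to a Python list against writing into a preallocated NumPy array. This is a rough, self-contained sketch with dummy data; the real code in `_binary_reader.pyx` is Cython, and its cost may well lie elsewhere (e.g., in the file reads themselves):

```python
import numpy as np

n_records, record_len = 4000, 100
records = [np.arange(record_len, dtype=np.float64) for _ in range(n_records)]

# Approach 1: append each record to a Python list, then stack at the end
out_list = []
for rec in records:
    out_list.append(rec)
stacked = np.vstack(out_list)

# Approach 2: write each record into a preallocated 2D array
out_arr = np.empty((n_records, record_len), np.float64)
for i, rec in enumerate(records):
    out_arr[i] = rec
```

Timing both loops (e.g. with `timeit`) would show whether list appends are a meaningful fraction of the per-call cost; both approaches produce identical output.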
Never mind: sometimes the iteration rate drops and sometimes it doesn't, and I can't clearly identify why.
I think this issue can be closed. Thank you!
`read_element_data` is remarkably inefficient, and I haven't put much effort into optimizing it, since I've assumed that most users want the nodal results (a single contiguous array of data).
200 seconds is pretty terrible; let me look into this and see if there's a way to optimize it.
Interestingly, I'm not seeing a huge variation between the array approach and the list approach. For example, reading the same data ('ENS') with the list and then the array methods on a 6.6 GB file:

```python
>>> enum, edata, enode = rst.element_solution_data(0, 'ENS')
>>> enum, edata, enode = rst.element_stress(0)
```

Respective timings in seconds:

```
26.808578729629517
25.371294498443604
```
I'm also seeing some variation in the run times. I think it's due to the random read operations when working with the result file; the way these results are organized is not conducive to efficient buffered reading.
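For anyone wanting to reproduce such timings, a tiny wrapper around `time.perf_counter` is enough. This is a generic sketch (the workload below is a stand-in); in practice you would apply it to calls like `rst.element_solution_data(0, 'ENS')` on your own result file:

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn and return (result, elapsed wall-clock seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; substitute e.g. timed(rst.element_stress, 0)
result, seconds = timed(sum, range(1000))
```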
Closing this with the merge of #32