Comments (7)

akaszynski commented on July 17, 2024

Is it possible to "officially" add a variable to configure the maximum int for element type/real/section, and to correct the type of the Cython pointer?

These are great catches! I'm implementing these fixes in a patch release of the reader. Thanks for your work finding these issues.

akaszynski commented on July 17, 2024

One note:
If we implement the following:

NUM_MAX = 1000000 # <- new maximum
nodelm = np.empty(NUM_MAX, np.int32)  # n nodes for this element type
...

We're going to run into an issue where either the user needs to change NUM_MAX dynamically (annoying), or we set it to some large value (memory inefficient). I think the best approach is to dynamically create a right-sized array at runtime:

import numpy as np

def dict_to_arr(index_dict):
    """Convert an index dictionary to an array.

    For example:
    {1: 20, 2: 30} --> [UNDEF, 20, 30]

    Indices that never appear as a key are left uninitialized
    (the UNDEF above).
    """
    # size the array to the largest index, then fill in the known entries
    arr = np.empty(max(index_dict.keys()) + 1, np.int32)
    for key, value in index_dict.items():
        arr[key] = value
    return arr

Then employ it with:

nodfor = {}
...  # populate the dict while parsing the file
nodfor = dict_to_arr(nodfor)

Ideally ansys-mapdl-reader would simply use the dictionary or some sort of hashtable throughout the entire module, but I'll leave that for another day. For now, this is a more efficient and adaptive approach.
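
A rough sketch of what that dictionary-based approach could look like (the names nodelm and nodes_for_etype here are illustrative, not the module's actual internals):

nodelm = {}  # element type number -> number of nodes for that type

def nodes_for_etype(etype):
    """Look up the node count for an element type directly in the dict."""
    try:
        return nodelm[etype]
    except KeyError:
        raise KeyError(f"unknown element type {etype}") from None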

akaszynski commented on July 17, 2024

@Tyrin451, please check out https://github.com/pyansys/PyMAPDL-reader/tree/fix/typeref_int and let me know if it works for you.

If you're unfamiliar with git, your commands will be:

git clone https://github.com/pyansys/PyMAPDL-reader/
cd PyMAPDL-reader
git checkout fix/typeref_int
pip install -e .

You can forgo the -e flag, but I like to install in that mode when testing things out so I can quickly revert to master and reinstall.
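
With an editable install, reverting is mostly a branch switch; the only wrinkle is that the compiled Cython extension has to be rebuilt. A sketch, assuming the clone from above:

git checkout master
pip install -e .  # re-run so the compiled extension matches the branch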

Tyrin451 commented on July 17, 2024

Thanks! The patch works fine.

However, I think binary_reader.pyx needs some optimization.

I loop over my result sets (~4000), and the time per call is not consistent: tqdm reports 60 it/s at the beginning, then it drops to 8 it/s.
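
The loop is essentially the following (a simplified sketch; the file name is illustrative):

from tqdm import tqdm
from ansys.mapdl.reader import read_binary

rst = read_binary('model.rst')
for i in tqdm(range(rst.nsets)):
    # read the element results for set i
    enum, edata, enode = rst.element_solution_data(i, 'ENS')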

For the first 100 sets, cProfile reports:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
      100    0.983    0.010    0.983    0.010 {method 'read_element_data' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}
  174/157    0.458    0.003    0.466    0.003 {built-in method _imp.create_dynamic}
      100    0.441    0.004    0.565    0.006 runPostPython.py:32(<listcomp>)
     2740    0.352    0.000    0.352    0.000 {built-in method nt.stat}
      100    0.124    0.001    0.140    0.001 rst.py:2321(<listcomp>)
      516    0.067    0.000    0.067    0.000 {built-in method io.open_code}

and for the full dataset:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     4231  206.065    0.049  206.065    0.049 {method 'read_element_data' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}
     4231   36.187    0.009   44.205    0.010 runPostPython.py:32(<listcomp>)
     4231    7.058    0.002    7.932    0.002 rst.py:2309(<listcomp>)
   579647    4.292    0.000    6.807    0.000 runPostPython.py:18(mises1D)
        1    2.857    2.857  270.347  270.347 runPostPython.py:27(getMaxOverTime)
   588134    1.795    0.000    1.795    0.000 {method 'reduce' of 'numpy.ufunc' objects}
     8728    1.782    0.000    1.782    0.000 {method 'read_record' of 'ansys.mapdl.reader._binary_reader.AnsysFile' objects}

Could appending each record to a Python list be the problem (_binary_reader.pyx, line 283)?
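
As a quick, generic sanity check of that hypothesis (a stand-alone micro-benchmark, not the reader's actual code path):

import timeit
import numpy as np

N = 600_000  # roughly the record count seen in the profile above

def with_list():
    # append one small array per record, as a list-based reader might
    out = []
    for _ in range(N):
        out.append(np.zeros(8))
    return out

def with_prealloc():
    # write each record into a preallocated 2D array instead
    out = np.empty((N, 8))
    for i in range(N):
        out[i] = 0.0
    return out

print('list append :', timeit.timeit(with_list, number=1))
print('preallocated:', timeit.timeit(with_prealloc, number=1))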

Tyrin451 commented on July 17, 2024

Never mind; sometimes the iteration rate drops and sometimes it doesn't, and I can't clearly identify why.

I think this issue can be closed. Thank you!

akaszynski commented on July 17, 2024

Reading element data is remarkably inefficient, and I haven't put much effort into optimizing it since I've assumed most users will work with the nodal results (which are one contiguous array of data).

200 seconds is pretty terrible. Let me look into this and see if there's a way to optimize it.

akaszynski commented on July 17, 2024

Interestingly, I'm not seeing a huge variation between the array approach and the list approach. For example, reading the same data ('ENS') with the list method and then the array method on a 6.6 GB file:

>>> enum, edata, enode = rst.element_solution_data(0, 'ENS')
>>> enum, edata, enode = rst.element_stress(0)

Respective timings in seconds:

26.808578729629517
25.371294498443604
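
Timings like these can be reproduced with a simple wall-clock measurement (a sketch; the file name is illustrative):

import time
from ansys.mapdl.reader import read_binary

rst = read_binary('big_model.rst')

start = time.time()
enum, edata, enode = rst.element_solution_data(0, 'ENS')
print(time.time() - start)  # list-based path

start = time.time()
enum, edata, enode = rst.element_stress(0)
print(time.time() - start)  # array-based path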

I'm also seeing some variation in the run times. I think it has to do with the random read operations when working with the result file; the way these results are organized is not conducive to efficient buffered reading.

Closing this with the merge of #32.
