
Comments (7)

Stoops-ML commented on August 15, 2024

> That’s a great tutorial and I’ve read it before, but that’s pretty specific for PyTorch models/assuming each solution is a new set of model weights, which doesn’t apply at all to what I’m using it for. Would be cool to see that kind of behaviour implemented in the base PyGAD class so it’s a little more extensible?

The tutorial is not PyTorch-specific and can be applied to PyGAD with a single set of model weights. In the tutorial the author overrides the cal_pop_fitness() method so that all solutions within a generation are evaluated in parallel using multiprocessing.Pool.map().
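A minimal sketch of that approach (not taken verbatim from the thread): PooledGA and fitness_wrapper are illustrative names, the constructor arguments are placeholders, and the two-argument fitness_func signature matches the PyGAD version discussed here (newer releases also pass the GA instance as the first argument).

```python
import multiprocessing

import numpy
import pygad

# Illustrative fitness function; in this thread's PyGAD version the signature
# is (solution, solution_idx). Newer releases prepend the GA instance.
def fitness_func(solution, solution_idx):
    return float(numpy.sum(solution))

# Module-level wrapper so Pool.map() can pickle it (required on Windows).
def fitness_wrapper(solution):
    return fitness_func(solution, 0)

class PooledGA(pygad.GA):
    # Evaluate every solution of the current population in a worker process.
    def cal_pop_fitness(self):
        with multiprocessing.Pool(processes=4) as pool:
            pop_fitness = pool.map(fitness_wrapper, self.population)
        return numpy.array(pop_fitness)

if __name__ == "__main__":  # required on Windows (spawn start method)
    ga_instance = PooledGA(num_generations=10,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=5)
    ga_instance.run()
```

The pool only covers the fitness evaluation; selection, crossover and mutation still run serially in the parent process.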


Stoops-ML commented on August 15, 2024

You can parallelise the solutions in each generation as documented in PyGAD's documentation here.


windowshopr commented on August 15, 2024

That’s a great tutorial and I’ve read it before, but that’s pretty specific for PyTorch models/assuming each solution is a new set of model weights, which doesn’t apply at all to what I’m using it for. Would be cool to see that kind of behaviour implemented in the base PyGAD class so it’s a little more extensible?


windowshopr commented on August 15, 2024

I took a stab at creating what I needed. It's untested as of now, but I'll be checking on it in the next week or so. If it's working, I'll create a PR.


windowshopr commented on August 15, 2024

See #80


windowshopr commented on August 15, 2024

So I've read over the article again and I see what you're saying; however, it isn't working on my Windows machine.

I get the freeze_support() error message because the code isn't wrapped in if __name__ == "__main__":, so I add that, but then I get this error:

multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "C:\Users\chalu\OneDrive\Desktop\Python_Scripts\Stock_RL_2021\stablebaselines_pygad.py", line 350, in fitness_wrapper
    return fitness_func(solution, 0)
  File "C:\Users\chalu\OneDrive\Desktop\Python_Scripts\Stock_RL_2021\stablebaselines_pygad.py", line 293, in fitness_func
    env = SubprocVecEnv([make_env(env, i) for i in range(num_cpu)])
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\site-packages\stable_baselines3\common\vec_env\subproc_vec_env.py", line 106, in __init__
    process.start()
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\multiprocessing\process.py", line 110, in start
    'daemonic processes are not allowed to have children'
AssertionError: daemonic processes are not allowed to have children
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "stablebaselines_pygad.py", line 403, in <module>
    ga_instance.run()
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\site-packages\pygad\pygad.py", line 1251, in run
    self.last_generation_fitness = self.cal_pop_fitness()
  File "stablebaselines_pygad.py", line 358, in cal_pop_fitness
    pop_fitness = pool.map(fitness_wrapper, self.population)
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\multiprocessing\pool.py", line 268, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\chalu\AppData\Roaming\Python\Python37\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
AssertionError: daemonic processes are not allowed to have children

I'm assuming this is because I'm using stable-baselines3's SubprocVecEnv to create a subprocessed environment, even though I'm only setting the number of CPUs to 1 in that section anyway. But I'll keep tweaking/removing that part of stable baselines and see how I make out. Thanks!
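Not a fix proposed in the thread, but one common way around this particular restriction is to parallelise the fitness calls with threads instead of processes: thread workers live in the (non-daemonic) main process, so code such as SubprocVecEnv can still spawn its own child processes from inside them. A rough sketch, with fitness_wrapper standing in for the real per-solution evaluation:

```python
from multiprocessing.pool import ThreadPool

import numpy
import pygad

def fitness_wrapper(solution):
    # Placeholder; in the real script this would build the env and run the
    # stable-baselines3 training/evaluation for one solution.
    return float(numpy.sum(solution))

class ThreadPooledGA(pygad.GA):
    def cal_pop_fitness(self):
        # Thread workers are not daemonic processes, so code running here may
        # create child processes (e.g. SubprocVecEnv) without hitting the
        # "daemonic processes are not allowed to have children" error.
        with ThreadPool(processes=4) as pool:
            pop_fitness = pool.map(fitness_wrapper, self.population)
        return numpy.array(pop_fitness)
```

The GIL matters little here because the heavy lifting is delegated to the child processes each thread launches.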


ahmedfgad commented on August 15, 2024

@windowshopr, supporting parallel processing internally in PyGAD would indeed be a very good feature!

As @Stoops-ML said, the tutorial might be helpful.

Because the bottleneck is usually in the fitness function (mutation is not worth parallelizing), this could be supported internally.

Thanks for your suggestions!
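For later readers: recent PyGAD releases do expose built-in parallelism through a parallel_processing argument on the pygad.GA constructor (check the installed version's docs for the exact options and the expected fitness_func signature). A sketch under that assumption:

```python
import numpy
import pygad

# Newer PyGAD releases pass the GA instance as the first argument; older
# releases expect just (solution, solution_idx).
def fitness_func(ga_instance, solution, solution_idx):
    return float(numpy.sum(solution))

if __name__ == "__main__":
    ga_instance = pygad.GA(num_generations=10,
                           num_parents_mating=4,
                           fitness_func=fitness_func,
                           sol_per_pop=8,
                           num_genes=5,
                           # e.g. 4 worker processes; ["thread", 4] uses threads instead
                           parallel_processing=["process", 4])
    ga_instance.run()
```

Only the fitness evaluation is parallelised, which matches the observation above that the fitness function is where the time goes.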

