
Resume a training (slm-lab) · 4 comments · CLOSED

kengz commented on July 23, 2024
Resume a training


Comments (4)

kengz commented on July 23, 2024

Hi @ingambe thanks for looking at this, and the fantastic showcase above!

We haven't implemented a resume function yet, but it should be relatively simple and clean, since the enjoy mode already does most of what's needed for resuming: it loads nearly everything necessary, with only a few pieces missing.
Here are a few requirements for doing so to ensure overall consistency:

  1. the command could work like train@{prename}, the same way enjoy works. With this we don't introduce an extra command or modify the core logic in code. The prename is the saved folder for a trial, and that folder contains its sessions.
  2. what to save and load: saving already stores everything required for resuming. Loading is the main thing to take care of here (a rough sketch follows this list):
    • like the normal construction of a trial, it should take the prename of a trial folder and load all of its sessions. This can reuse the logic in retro_analysis.
    • PyTorch model loading: already done in the resume function
    • load the body dataframes like body.train_df and body.eval_df
    • set the environment's clock using the max frame in body.train_df. Once the clock is set, everything falls into place because the lab follows the clock consistently: graph plotting, learning rate decay, reporting, etc. will resume as if there had been no interruption.
    • one thing that's currently not saved but required is the agent's memory, which can easily take up over 7 GB per session.
  3. pass the CI build
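
For concreteness, here is a rough sketch of the restoration step in point 2. The standalone helper, the CSV paths, and the 'frame' column name are assumptions for illustration only; the actual implementation would hook into the existing trial/session construction, reuse retro_analysis, and fast-forward the clock through whatever API the lab's Clock exposes.

```python
import pandas as pd
import torch

def restore_for_resume(net, model_path, train_df_path, eval_df_path):
    """Hypothetical sketch: reload what a previous run saved so training can resume.
    File paths and column names are assumptions, not SLM Lab's actual layout."""
    # PyTorch model loading (already handled by the existing load logic)
    net.load_state_dict(torch.load(model_path))
    # reload the body dataframes saved by the previous session
    train_df = pd.read_csv(train_df_path)
    eval_df = pd.read_csv(eval_df_path)
    # fast-forward the clock to the last recorded frame so graph plotting,
    # learning rate decay and reporting continue as if there was no interruption
    resume_frame = int(train_df['frame'].max())
    return train_df, eval_df, resume_frame
```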

If you open a PR I can work with you to get the steps above implemented, or if you prefer to wait a bit, I can get to it sometime this week or next.


ingambe commented on July 23, 2024

I've already started to work on this here.
This is just a PoC for the moment and it of course still needs to be unit tested.
But before I go any further, I would like to know whether my approach is correct and makes sense.

Approach

I've defined a new meta parameter called load_net,
which lets me load a network in the same spirit as model_prepath in the enjoy spec (in the future it might make more sense to use a relative path to the .pt model).
Then, when SLM Lab loads the algorithm, it checks the agent spec for the load_net key and, if present, loads the network from it.
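
As a rough illustration of that check (the key placement and the path below are my assumptions, not the PR's actual code), the loading step could look roughly like this:

```python
import torch

# illustrative spec fragment; load_net is the proposed key, the path is hypothetical
agent_spec = {
    "name": "DQN",
    "load_net": "data/previous_trial/model_net.pt",
}

def maybe_load_net(net, agent_spec):
    """If the agent spec carries the proposed load_net path, load its weights."""
    load_net_path = agent_spec.get('load_net')
    if load_net_path is not None:
        net.load_state_dict(torch.load(load_net_path))
    return net
```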

"Test"

To check whether this approach works, I modified the demo spec (DQN on CartPole) to split it into two parts: one part that trains for 3000 frames, and another that loads the previous network and trains for a further 7000 frames.
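
A hedged sketch of the kind of spec changes this implies, written as Python dict fragments rather than the actual demo.json (the env key names are from memory of the demo spec and the path is hypothetical; treat both as illustrative):

```python
# first part: train from scratch for 3000 frames
first_part = {
    "env": [{"name": "CartPole-v0", "max_frame": 3000}],
}

# second part: load the network saved by the first part and train a further 7000 frames
second_part = {
    "agent": [{"load_net": "data/first_part_trial/model_net.pt"}],  # hypothetical path
    "env": [{"name": "CartPole-v0", "max_frame": 7000}],
}
```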

One-shot training

[Figure: dqn_cartpole trial graph, mean returns (moving average) vs. frames]
Using the "demo.json" for 10 000 frames, we end up reaching approximatly 130 as the mean return.
Here is the result of the experiment:
one_shot.zip

Two-shot training

To show the improvement more clearly, and since the number of frames is reduced, the evaluation and log frequency have been dropped from 500 to 100.

First part

[Figure: dqn_cartpole trial graph, mean returns (moving average) vs. frames]
As we can see, we start from a low mean return (20) and reach 70 after 3000 frames.
Here is the result of the experiment:
two_shot_first_part.zip

Second part

[Figure: dqn_cartpole trial graph, mean returns (moving average) vs. frames]
As we can see, we start at 70 (the previous mean return) and reach 140 after another 7000 frames.
Here is the result of the experiment:
two_shot_second_part.zip


ingambe commented on July 23, 2024

Hi @kengz

I've created a draft pull request #445 in order to work together on this.
As you suggested, I've created the train@{previous_experience} command, but I've combined it with the meta argument defined previously (because it may be useful to be able to load a network from outside the lab for transfer learning? What do you think?), and I'm struggling with the dataframe loading and the agent memory.
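
For what it's worth, here is a hypothetical illustration of how the two mechanisms could combine, with the transfer-learning case pointing load_net at an external checkpoint (the command form and paths are assumptions based on the discussion above, not the final PR):

```python
# resuming a lab-saved trial (proposed syntax from the discussion above):
#   python run_lab.py slm_lab/spec/demo.json dqn_cartpole train@{prename}
#
# transfer learning from a network trained outside the lab (hypothetical):
# point the proposed load_net key at an external .pt file instead of a lab prepath
external_spec_fragment = {
    "load_net": "/path/to/externally_trained_model.pt",  # hypothetical external checkpoint
}
```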

Thank you for your help


kengz commented on July 23, 2024

Implemented resume mode, see the linked PR above.

