
This project forked from sherpa-ai/sherpa


Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly.

Home Page: http://parameter-sherpa.readthedocs.io/

License: GNU General Public License v3.0



SHERPA: A Python Hyperparameter Optimization Library


SHERPA is a Python library for hyperparameter tuning of machine learning models. It provides:

  • hyperparameter optimization for machine learning researchers
  • compatibility with any Python machine learning library, such as Keras, TensorFlow, PyTorch, or scikit-learn
  • a choice of hyperparameter optimization algorithms, such as Bayesian optimization via GPyOpt (example notebook), Asynchronous Successive Halving (a variant of Hyperband) (example notebook), and Population Based Training (example notebook)
  • parallel computation that can be fitted to the user's needs
  • a live dashboard for exploratory analysis of results

Clone from GitHub to get the latest version or install via pip install parameter-sherpa. The documentation at http://parameter-sherpa.readthedocs.io/ provides tutorials on the different optimization algorithms and installation instructions for parallel hyperparameter optimization. Take a look at the demo video or read on to find out more.

We would love to hear what you think of SHERPA! Tell us how we can improve via our Feedback-Form.

If you use SHERPA in your research please cite:

@article{hertel2020sherpa,
  title={Sherpa: Robust Hyperparameter Optimization for Machine Learning},
  author={Lars Hertel and Julian Collado and Peter Sadowski and Jordan Ott and Pierre Baldi},
  journal={SoftwareX},
  year={2020},
  note={In press. Also arXiv:2005.04048. Software available at: https://github.com/sherpa-ai/sherpa}
}

From Keras to Sherpa in 30 seconds

This example shows how to adapt a minimal Keras script so that it can be used with SHERPA. As a starting point, we use the "getting started in 30 seconds" tutorial from the Keras webpage.

We start out with this piece of Keras code:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

We want to tune the number of hidden units. To do that, we define one parameter of type Discrete and use the BayesianOptimization algorithm with a maximum of 50 trials.

import sherpa
parameters = [sherpa.Discrete('num_units', [50, 200])]
alg = sherpa.algorithms.BayesianOptimization(max_num_trials=50)
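A Discrete parameter takes integer values from the given range. As a rough plain-Python illustration (this is not SHERPA's internal code; the helper name is hypothetical), sampling a value for num_units from that range behaves like:

```python
import random

def sample_num_units(low=50, high=200):
    # Illustrative stand-in for drawing a value of the
    # Discrete('num_units', [50, 200]) parameter:
    # an integer in [low, high], inclusive.
    return random.randint(low, high)

num_units = sample_num_units()
```

Continuous and Choice parameters work analogously for float ranges and fixed lists of options.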

We use these objects to create a SHERPA Study:

study = sherpa.Study(parameters=parameters,
                     algorithm=alg,
                     lower_is_better=True)
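The lower_is_better flag tells SHERPA how to rank objective values; since we will report a validation loss, lower values are better. A minimal sketch of this ranking logic (a hypothetical helper, not the SHERPA API):

```python
def best_result(results, lower_is_better=True):
    # Pick the result with the best objective value:
    # the smallest if lower_is_better, the largest otherwise.
    pick = min if lower_is_better else max
    return pick(results, key=lambda r: r['objective'])

results = [{'trial': 1, 'objective': 0.31},
           {'trial': 2, 'objective': 0.27}]
best = best_result(results)  # with lower_is_better, trial 2 wins
```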

We obtain trials by iterating over the study. Each trial has a parameters attribute that contains the num_units parameter value. We use that value to create our model.

for trial in study:
    model = Sequential()
    model.add(Dense(units=trial.parameters['num_units'],
                    activation='relu', input_dim=100))
    model.add(Dense(units=10, activation='softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='sgd',
                  metrics=['accuracy'])

    # x_train and y_train are the training data from the Keras tutorial.
    # A validation split is needed so that 'val_loss' is reported.
    model.fit(x_train, y_train, epochs=5, batch_size=32,
              validation_split=0.2,
              callbacks=[study.keras_callback(trial, objective_name='val_loss')])
    study.finalize(trial)

During training, objective values are added to the SHERPA study via the callback. At the end of training, study.finalize completes the trial, meaning that no more observations will be added to it.
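To make the callback's role concrete, here is a plain-Python mock of the trial lifecycle (not SHERPA code; class and method names are illustrative): per-epoch objective values are recorded as observations, and finalizing freezes the trial.

```python
class MockStudy:
    # Illustrative stand-in for sherpa.Study's observation bookkeeping.
    def __init__(self):
        self.observations = {}   # trial id -> list of objective values
        self.finalized = set()

    def add_observation(self, trial_id, objective):
        # What the Keras callback conceptually does once per epoch.
        if trial_id in self.finalized:
            raise RuntimeError("trial already finalized")
        self.observations.setdefault(trial_id, []).append(objective)

    def finalize(self, trial_id):
        # After this, the trial accepts no further observations.
        self.finalized.add(trial_id)

study = MockStudy()
for epoch_val_loss in [0.9, 0.5, 0.4]:
    study.add_observation(trial_id=1, objective=epoch_val_loss)
study.finalize(trial_id=1)
```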

When the Study is created, SHERPA prints the dashboard address. Open that address in your browser to view the dashboard. As a next step, take a look at the example of optimizing a Random Forest in sherpa/examples/randomforest.py.

Installation from PyPi

pip install parameter-sherpa

Installation from GitHub

Clone from GitHub:

git clone https://github.com/LarsHH/sherpa.git
export PYTHONPATH=$PYTHONPATH:`pwd`/sherpa

Install dependencies:

pip install pandas
pip install numpy
pip install scipy
pip install scikit-learn
pip install flask
pip install enum34  # if on < Python 3.4

You can run an example to verify SHERPA is working:

cd sherpa/examples/
python simple.py

Note that running hyperparameter optimizations in parallel with SHERPA requires an installation of MongoDB. Further instructions can be found in the Parallel Installation section of the documentation.

