A scikit-learn compatible neural network library that wraps PyTorch.
To see more elaborate examples, look at the examples in the skorch documentation.
```python
import numpy as np
from sklearn.datasets import make_classification
from torch import nn

from skorch import NeuralNetClassifier

X, y = make_classification(1000, 20, n_informative=10, random_state=0)
X = X.astype(np.float32)
y = y.astype(np.int64)

class MyModule(nn.Module):
    def __init__(self, num_units=10, nonlin=nn.ReLU()):
        super().__init__()

        self.dense0 = nn.Linear(20, num_units)
        self.nonlin = nonlin
        self.dropout = nn.Dropout(0.5)
        self.dense1 = nn.Linear(num_units, num_units)
        self.output = nn.Linear(num_units, 2)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = self.nonlin(self.dense1(X))
        X = self.softmax(self.output(X))
        return X

net = NeuralNetClassifier(
    MyModule,
    max_epochs=10,
    lr=0.1,
    # Shuffle training data on each epoch
    iterator_train__shuffle=True,
)

net.fit(X, y)
y_proba = net.predict_proba(X)
```
In an sklearn Pipeline:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('net', net),
])

pipe.fit(X, y)
y_proba = pipe.predict_proba(X)
```
With grid search:

```python
from sklearn.model_selection import GridSearchCV

# deactivate skorch-internal train-valid split and verbose logging
net.set_params(train_split=False, verbose=0)
params = {
    'lr': [0.01, 0.02],
    'max_epochs': [10, 20],
    'module__num_units': [10, 20],
}
gs = GridSearchCV(net, params, refit=False, cv=3, scoring='accuracy', verbose=2)

gs.fit(X, y)
print("best score: {:.3f}, best params: {}".format(gs.best_score_, gs.best_params_))
```
skorch also provides many convenient features, among others (a configuration sketch follows the list):
- Learning rate schedulers (Warm restarts, cyclic LR and many more)
- Scoring using sklearn (and custom) scoring functions
- Early stopping
- Checkpointing
- Parameter freezing/unfreezing
- Progress bar (for CLI as well as jupyter)
- Automatic inference of CLI parameters
- Integration with GPyTorch for Gaussian Processes
- Integration with Hugging Face 🤗
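Several of these features are wired up as callbacks. A minimal sketch using callbacks from `skorch.callbacks` (the specific hyperparameter values here are illustrative, not recommendations):

```python
from skorch.callbacks import Checkpoint, EarlyStopping, EpochScoring, LRScheduler

net = NeuralNetClassifier(
    MyModule,
    max_epochs=50,
    lr=0.1,
    callbacks=[
        # compute a validation score at the end of each epoch
        EpochScoring(scoring='roc_auc', lower_is_better=False),
        # stop training once the validation loss stops improving
        EarlyStopping(patience=5),
        # persist the best model parameters
        Checkpoint(),
        # decay the learning rate on a fixed schedule
        LRScheduler(policy='StepLR', step_size=10, gamma=0.5),
    ],
)
```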
skorch requires Python 3.9 or higher.

You need a working conda installation; get the correct miniconda for your system from the miniconda download page. To install skorch, you need to use the conda-forge channel:

```
conda install -c conda-forge skorch
```

We recommend using a conda virtual environment.

Note: The conda channel is not managed by the skorch maintainers. More information is available from conda-forge.

To install with pip, run:

```
python -m pip install -U skorch
```

Again, we recommend using a virtual environment for this.
If you would like to use the most recent additions to skorch or to help with development, you should install skorch from source.

To install skorch from source using conda, proceed as follows:

```
git clone https://github.com/skorch-dev/skorch.git
cd skorch
conda create -n skorch-env python=3.12
conda activate skorch-env
python -m pip install torch
python -m pip install -r requirements.txt
python -m pip install .
```

If you want to help with development, run:

```
git clone https://github.com/skorch-dev/skorch.git
cd skorch
conda create -n skorch-env python=3.12
conda activate skorch-env
python -m pip install torch
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
python -m pip install -e .

py.test  # unit tests
pylint skorch  # static code checks
```

You may adjust the Python version to any of the supported Python versions.
For pip, follow these instructions instead:

```
git clone https://github.com/skorch-dev/skorch.git
cd skorch
# create and activate a virtual environment
python -m pip install -r requirements.txt
# install the pytorch version for your system (see below)
python -m pip install .
```

If you want to help with development, run:

```
git clone https://github.com/skorch-dev/skorch.git
cd skorch
# create and activate a virtual environment
python -m pip install -r requirements.txt
# install the pytorch version for your system (see below)
python -m pip install -r requirements-dev.txt
python -m pip install -e .

py.test  # unit tests
pylint skorch  # static code checks
```
PyTorch is not covered by the dependencies, since the PyTorch version you need is dependent on your OS and device. For installation instructions for PyTorch, visit the PyTorch website. skorch officially supports the last four minor PyTorch versions, which currently are:
- 2.3.1
- 2.4.1
- 2.5.1
- 2.6.0
However, that doesn't mean that older versions don't work, just that they aren't tested. Since skorch mostly relies on the stable part of the PyTorch API, older PyTorch versions should work fine.
In general, running this to install PyTorch should work:

```
python -m pip install torch
```
- @jakubczakon: blog post "8 Creators and Core Contributors Talk About Their Model Training Libraries From PyTorch Ecosystem" (2020)
- @BenjaminBossan: talk "skorch: A scikit-learn compatible neural network library" at PyCon/PyData 2019
- @githubnemo: poster for the PyTorch developer conference 2019
- @thomasjpfan: talk "Skorch: A Union of Scikit-learn and PyTorch" at SciPy 2019
- @thomasjpfan: talk "Skorch - A Union of Scikit-learn and PyTorch" at PyData 2018
- @BenjaminBossan: talk "Extend your scikit-learn workflow with Hugging Face and skorch" at PyData Amsterdam 2023 (slides available)
- GitHub discussions: user questions, thoughts, install issues, general discussions.
- GitHub issues: bug reports, feature requests, RFCs, etc.
- Slack: We run the #skorch channel on the PyTorch Slack server, for which you can request access here.
skorch's Issues
Add the most important docstrings.
CI service
There should be a CI service that checks new pull requests for errors.
*Examples* Add an example for using a vanilla classifier and regressor
Improve attributes and methods section in docs
- Currently there are no line breaks, which results in very long lines
- Methods are not linked
[Bug] Binary classification on cuda fails
This is a similar issue as with regression and 1-dimensional target data, namely that default_collate unpacks the contents of the array (int64) and the subsequent .cuda() call then fails on int64. It does not happen with 2-dimensional arrays, but for binary classification we can't use an n x 1 array, since that conflicts with StratifiedKFold.
*Callbacks* A callback that saves the model periodically (a sketch follows below).
Requirements:
- the callback should not only save the weights but also the training parameters (i.e. pickle the learner and use torch.save to save the model data)
- option to disable 'best only' checking; we might want to do check-pointing at every epoch

Open questions:
- how much customization do we allow for selecting the 'best' run? Just make the key configurable (e.g. key='valid_loss_best')?
- do we want to provide formatting options for the data path (e.g. {epoch} or {unique_run_id})? This might be useful when doing grid search, where runs would otherwise override each other's checkpoints.
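A minimal sketch of such a callback in terms of today's skorch callback API (PeriodicCheckpoint, path_template, and every are hypothetical names, not part of the library):

```python
import torch
from skorch.callbacks import Callback

class PeriodicCheckpoint(Callback):
    """Sketch: save the module's parameters every `every` epochs."""
    def __init__(self, path_template='model_epoch_{epoch}.pt', every=1):
        self.path_template = path_template
        self.every = every

    def on_epoch_end(self, net, **kwargs):
        epoch = len(net.history)
        if epoch % self.every == 0:
            # saves the weights only; pickling the whole net would also
            # capture the training parameters, as requested above
            torch.save(net.module_.state_dict(),
                       self.path_template.format(epoch=epoch))
```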
Optional y in fit in general
NeuralNet.fit should use y=None by default to support arbitrary data loaders. NeuralNetClassifier.fit should require y.
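For illustration, the intended usage would look roughly like this (a sketch; my_dataset stands for a torch Dataset that already yields (X, y) pairs, so no explicit y is needed):

```python
from torch import nn
from skorch import NeuralNet

net = NeuralNet(MyModule, criterion=nn.CrossEntropyLoss)
# the dataset carries its own targets
net.fit(my_dataset, y=None)
```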
Add CI. Enforce code reviews?
Use terminal coloring library
Currently, color highlighting is home-grown. We could use a package instead, e.g. https://github.com/tartley/colorama.
Advantages:
- potentially more features
- claims to work on Windows
- no more custom code (Enum doesn't work in Python 2.7)
Disadvantages:
- one more dependency
Consolidate names
Callbacks: Generalize `yield_callbacks` into processing and output callbacks
Currently, net._yield_callbacks discerns between PrintLog and other callbacks, with the effect that PrintLog is added last to the list of callbacks, so it has access to all processed values added by other callbacks. Maybe we should generalize this by classifying callbacks into two groups: processing callbacks and output callbacks. Output callbacks (identified by inheriting from an abstract subclass of Callback) are by default appended to the end of the callback list. This would pave the way for other output callbacks besides PrintLog, such as TensorBoard logging callbacks.
*Callbacks* A general purpose callback that sets a specific parameter on a given epoch.
Find a solution that works with the most common data types out of the box (numpy array, pytorch tensor, dict, pandas DataFrame?, list?, sparse matrix?).
*Examples* Add an example for using grid search, using a net in an sklearn pipeline, and other cases of interoperability with sklearn (possibly in a jupyter notebook).
Add an example for using an RNN, possibly in a jupyter notebook.
Regressor: An estimator that reduces mean squared error by default.
Make initialization scheme consistent
Current state of different things (both = class and object supported, class = only class supported):
- callbacks (both)
- criterion (class)
- module (both)
- iterator_{train,test} (class)
- optimizer (class)

We should find a consistent scheme for this (either only initialized objects, always both, only classes, ...). The two variants are illustrated for module in the sketch below.
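A small illustration of the two variants for module (MyModule stands for any nn.Module subclass; criterion is required by NeuralNet):

```python
from torch import nn
from skorch import NeuralNet

# pass the class; skorch instantiates it, so module parameters
# can be set via the module__ prefix
net = NeuralNet(MyModule, module__num_units=20, criterion=nn.MSELoss)

# pass an already initialized instance directly
net = NeuralNet(MyModule(num_units=20), criterion=nn.MSELoss)
```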
`fit_params` is ignored
Investigate which options are stored here and what we should do with this parameter.
Prepare release 0.1.0
- VERSION
- CHANGES.txt
- LICENSE
- requirements.txt
- pip package
- git tag
- logo
- all issues tagged r0.1.0
get_len has issues with nested lists
For example:

```python
inferno.dataset.get_len([[(1, 2), (2, 3)], [(4, 5)], [(7, 8, 9)]])
```

expected: 3
actual: ValueError: Dataset does not have consistent lengths.

Another example:

```python
inferno.dataset.get_len([[(1, 2), (2, 3)], [(4, 5)], [(7, 8)]])
```

expected: 3
actual: 2 (the length of the tuples)

A workaround is to convert the list into a numpy array (see the sketch below).
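The workaround could look like this (a sketch; with dtype=object, numpy keeps the ragged outer list as a length-3 object array):

```python
import numpy as np

data = np.array([[(1, 2), (2, 3)], [(4, 5)], [(7, 8, 9)]], dtype=object)
len(data)  # 3, as expected
```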
Add and use a *verbosity* parameter.
Remove ugly solution to 1-dim target data problem
We have some ugly pieces of code that are necessary to make our code work with default_collate. They relate to the problem that default_collate picks out values one at a time, which makes it hard to work with 1-dim arrays (e.g. to cast them to cuda). The corresponding pieces of code are:
- _prepare_target_for_loss
- the y dimensionality check in NeuralNetRegressor
Cache pytorch installation in travis.yml
300 MiB is a lot to download every time.
Scoring callback not on batch but on epoch
Currently, the Scoring callback calculates the score on each batch and averages over all batches for the epoch score. For some scores, however, this leads to inaccurate results (e.g. AUC). It would be better to score on the whole validation set at once.
To achieve this, the callback could store all predictions from the batches and score on_epoch_finished. It might be better, though, if the NeuralNet did it, so that if more than one score uses the predictions, the predictions don't need to be made twice.
Ensure that `to_var` receives `use_cuda` in all cases
Possibly by making use_cuda a positional parameter instead of a keyword parameter.
Disable `use_cuda` after loading a model on a non-CUDA machine
Currently we warn that CUDA is not supported, but the model still has use_cuda=True.
*Callbacks* General purpose scoring callback.
Investigate whether a model trained on GPU can be loaded on CPU and vice versa, with pickle and state_dict. If that does not work, make it work.
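For the state_dict route, cross-device loading can be handled with PyTorch's standard map_location argument; a minimal sketch:

```python
import torch

# on the GPU machine
torch.save(net.module_.state_dict(), 'params.pt')

# on a CPU-only machine
state = torch.load('params.pt', map_location=torch.device('cpu'))
net.module_.load_state_dict(state)
```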
*Callbacks* PrintLog: Prints progress of model training.
Callbacks don't work when passed uninitialized
For example:

```python
class Foo(inferno.callbacks.Callback):
    def on_epoch_end(self, net, **kwargs):
        pass

net = NeuralNet(..., callbacks=[Foo])
```

Error:

```
Traceback (most recent call last):
  File "train.py", line 189, in <module>
    pl.fit(corpus.train[:1000], corpus.train[:1000])
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 945, in fit
    return self._fit(X, y, groups, ParameterGrid(self.param_grid))
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 550, in _fit
    base_estimator = clone(self.estimator)
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 69, in clone
    new_object_params[name] = clone(param, safe=False)
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 57, in clone
    return estimator_type([clone(e, safe=safe) for e in estimator])
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 57, in <listcomp>
    return estimator_type([clone(e, safe=safe) for e in estimator])
  File "/home/ottonemo/anaconda3/lib/python3.6/site-packages/sklearn/base.py", line 67, in clone
    new_object_params = estimator.get_params(deep=False)
TypeError: get_params() missing 1 required positional argument: 'self'
```

Probable cause: get_params recursively inspects all attributes of the wrapper instance, including self.callbacks, which still contains the uninitialized callbacks. It then calls get_params on them, which does not work since it is not a static method.
Allow `Dataset` to take additional parameters
NeuralNet currently only initializes Dataset with X, y, use_cuda, but we may have more parameters. The user should be able to pass them the same way as for criterion etc. (i.e. via the prefixes_).
`predict` in `NeuralNet` class
predict will currently take the argmax of dimension 1. This is very specific, despite the NeuralNet class being intended for generic use cases. I see 2 solutions (the second is sketched below):
- take the argmax of the last dimension (thus assuming that outputs are probabilities, but not assuming a specific dimensionality)
- return the result from forward (thus making no assumption about what that is)
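A sketch of the second option as an override, in terms of today's skorch API, where predict_proba already returns the raw forward output as a numpy array (RawPredictNet is an illustrative name):

```python
from skorch import NeuralNet

class RawPredictNet(NeuralNet):
    def predict(self, X):
        # return whatever forward produces, with no argmax applied
        return self.predict_proba(X)
```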
Possibility to *save* and *load* the module parameters only, using pytorch's `state_dict`. Should probably take a file name or a file handler.
*Examples* Add an example of how to handle data streams with Dataloader
The example should include:
- how data streams such as generators can be used in conjunction with the Dataloader
- how to achieve a train/test split with such a setup
Blank AssertionError when creating wrapper instance
When the wrapper is initialized with unknown keys, the following AssertionError is raised:

```
Code/skorch/skorch/net.py in __init__(self, module, criterion, optimizer, lr, gradient_clip_value, gradient_clip_norm_type, max_epochs, batch_size, iterator_train, iterator_valid, dataset, train_split, callbacks, cold_start, verbose, use_cuda, **kwargs)
    234             assert not hasattr(self, key)
    235             key_has_prefix = any(key.startswith(p) for p in self.prefixes_)
--> 236             assert key.endswith('_') or key_has_prefix
    237         vars(self).update(kwargs)
    238
AssertionError:
```

To reproduce this, initialize a wrapper with iterator_test__batch_size=32 as a parameter. Since the correct key would be iterator_valid, this code fails with the aforementioned error. There should be at least a detailed, helpful error message.
Impossible to use Scoring callback for existing values
I already computed scores in my loss function and now I want to score them so that I can print them per epoch. For example:

```python
class MyNet(NeuralNetwork):
    def get_loss(...):
        self.history.record_batch('foo', 42)

net = MyNet(callbacks=[
    inferno.callbacks.Scoring('foo'),
])
```

However, the Scoring callback calls its score method on each batch end and overwrites the value "foo", and there is no way to properly disable this behavior. There is a workaround, though (an ugly one):

```python
def ignore_scorer(*_):
    raise KeyError()

net = MyNet(callbacks=[
    inferno.callbacks.Scoring('foo', scoring=ignore_scorer),
])
```

We should cover this case.
Complete unfinished docstrings.
Basic documentation
- link to tutorials (notebooks)
- document how to extend skorch
- use correct rst markup in docstrings
- render using sphinx on github pages
Find a solution for L1, L2 loss (et al.)
There should be an easy way to add Lx regularization (one possible approach is sketched below).
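A sketch in terms of today's skorch API, overriding get_loss to add an L1 penalty (lambda1 is an illustrative name):

```python
from skorch import NeuralNet

class RegularizedNet(NeuralNet):
    def __init__(self, *args, lambda1=0.01, **kwargs):
        super().__init__(*args, **kwargs)
        self.lambda1 = lambda1

    def get_loss(self, y_pred, y_true, X=None, training=False):
        loss = super().get_loss(y_pred, y_true, X=X, training=training)
        # add an L1 penalty over all module parameters
        loss += self.lambda1 * sum(
            w.abs().sum() for w in self.module_.parameters())
        return loss
```

For L2 regularization specifically, the optimizer's weight_decay parameter is often sufficient.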
Explicitly support dispatching to multiple GPUs
It should be possible to explicitly dispatch a model on multiple GPUs. This probably affects data operations and .cuda() calls.
Support for recursive parameter assignment on module
It would be helpful to have the ability to set parameters beyond module level (for sub-components of the module, for example):

```python
class Seq2Seq:
    def __init__(self, encoder, decoder, **kwargs):
        self.encoder = encoder
        self.decoder = decoder

class Encoder:
    def __init__(self, num_hidden=100):
        self.num_hidden = num_hidden
        self.lin = nn.Linear(1, num_hidden)

ef = NeuralNet(
    module=Seq2Seq(encoder=AttentionEncoderRNN, decoder=DecoderRNN),
    module__encoder__num_hidden=23,
)
```

I would expect module.encoder.num_hidden to be set to 23. This should be robust with respect to the initialization of the sub-module; for example, if the encoder has elements that depend on the initialized value, those elements should be updated as well. In the given example, I would expect not only module.encoder.num_hidden to be updated to 23 but also module.encoder.lin.out_features to be updated (e.g. by re-initializing the whole module).
Suddenly lists
get_iterator blows up at https://github.com/dnouri/inferno/blob/169e1a0/inferno/net.py#L310 in case an sklearn CV split is used and a 1-dimensional torch tensor is fed for X and y. For example:

```python
pl = GridSearchCV(trainer, params)
pl.fit(corpus.train, corpus.train)
```
Method to set random state for all components
We need a method (possibly on the wrapper class) to initialize the random state for all components that are concerned with sampling (a manual workaround is sketched after the list). These include:
- the model (e.g. weight init, dropout)
- DataLoader (batch shuffling)
- GridSearchCV split
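Until such a method exists, seeding manually could look like this (a sketch; it covers the Python, numpy, and torch RNGs, but not CUDA-specific determinism settings):

```python
import random

import numpy as np
import torch

def set_seed(seed):
    """Seed the RNGs that the components above draw from."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

set_seed(0)
```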
Find a solution for train/valid splitting.
No way to use custom Sampler
There's currently no way to use a custom Sampler.
Call .cuda() on module if self.use_cuda=True
Currently, the module_ is not automatically moved to cuda even if use_cuda=True. This is unexpected and should change.
This is what @ottonemo has to say about this:
I suppose so, yes. I was worried that it might interfere with settings that add parameters to the module after the point where we automatically apply .cuda() to the model, which would result in these parameters being excluded from the type conversion. One solution would be to do this conversion every time training starts (as is the case here) and mention in the documentation that there might be cases where the user has to call .cuda() on the model themselves.
In short, my suggestion is: implement self.module_.cuda() in on_train_begin of the base class and leave a comment somewhere (where?) in a docstring.
My suggestion: when a parameter is set on module_, the module needs to be re-initialized using the initialize_module method. We could move the .cuda() call to the end of this method.
I have another fear, though. What if the user wants part of the module and data to be on cuda and part on cpu? I guess we need to make sure that as long as use_cuda=False, we don't move anything to cuda automatically.
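For reference, current skorch resolves this with a device argument that moves the module (and batches) for you; a minimal illustration:

```python
# module and data are moved to the given device automatically
net = NeuralNetClassifier(MyModule, device='cuda')
```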
Support grid search in parallel on GPUs
It would be nice if, with n_jobs=2 on a system with 2 GPUs, each job were dispatched to one of the GPUs.
Make it possible to pass an *instantiated module* to the wrapper; the module should not be re-instantiated unless a module parameter is set (this would allow to pass pre-trained modules).
`check_history_slice` is useless
No longer needed.
Add a useful `__repr__` to base net class.
Other candidates for this are:
- CVSplit
- History?
- Dataset