Dataset

Dataset helps you conveniently work with random or sequential batches of your data and define data processing and machine learning workflows even for datasets that do not fit into memory.

For more details see the documentation and tutorials.

Main features:

  • flexible batch generation
  • deterministic and stochastic pipelines
  • joins and merges of datasets and pipelines
  • data processing actions
  • flexible model configuration
  • within-batch parallelism
  • batch prefetching
  • ready-to-use ML models and proven NN architectures
  • convenient layers and helper functions to build custom models.

Basic usage

my_workflow = (my_dataset.pipeline()
               .load('/some/path')
               .do_something()
               .do_something_else()
               .some_additional_action()
               .save('/to/other/path'))

The trick here is that all the processing actions are lazy. They are not executed until their results are needed, e.g. when you request a preprocessed batch:

my_workflow.run(BATCH_SIZE, shuffle=True, n_epochs=5)

or

for batch in my_workflow.gen_batch(BATCH_SIZE, shuffle=True, n_epochs=5):
    # only now are the actions fired and the data changed by the workflow defined earlier;
    # the actions are executed one by one, and each iteration yields a fully processed batch
    pass  # replace with your per-batch processing

or

NUM_ITERS = 1000
for i in range(NUM_ITERS):
    processed_batch = my_workflow.next_batch(BATCH_SIZE, shuffle=True, n_epochs=None)
    # only now are the actions fired and the data changed by the workflow defined earlier;
    # the actions are executed one by one, and you get a fully processed batch
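
The within-batch parallelism and batch prefetching from the feature list are enabled through these same methods. A minimal sketch, assuming the prefetch argument works as described in the documentation (the number of batches prepared in advance while the current one is being processed):

processed_batch = my_workflow.next_batch(BATCH_SIZE, shuffle=True, n_epochs=None, prefetch=2)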

Train a neural network

Dataset includes ready-to-use, proven architectures such as VGG, Inception, ResNet and many others. To apply them to your data, just choose a model, specify the inputs (such as the number of classes or the image shape) and call train_model. Of course, you can also choose a loss function, an optimizer and many other parameters if you want.

from dataset import B  # B() is a named expression that refers to a batch component at run time
from dataset.models.tf import ResNet34

my_workflow = (my_dataset.pipeline()
               .init_model('dynamic', ResNet34, config={
                           'inputs': {'images': {'shape': B('image_shape')},
                                      'labels': {'classes': 10, 'transform': 'ohe', 'name': 'targets'}},
                           'input_block/inputs': 'images'})
               .load('/some/path')
               .some_transform()
               .another_transform()
               .train_model('ResNet34', feed_dict={'images': B('images'), 'labels': B('labels')})
               .run(BATCH_SIZE, shuffle=True))
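
The loss function and the optimizer mentioned above go into the same config. A minimal sketch, assuming the 'loss' and 'optimizer' config keys work as described in the documentation ('ce' being a shorthand for cross-entropy):

config = {
    'inputs': {'images': {'shape': B('image_shape')},
               'labels': {'classes': 10, 'transform': 'ohe', 'name': 'targets'}},
    'input_block/inputs': 'images',
    'loss': 'ce',                                           # cross-entropy loss
    'optimizer': {'name': 'Adam', 'learning_rate': 0.001}   # optimizer and its parameters
}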

For more advanced cases and detailed API see the documentation.

Best practice magic

To improve model quality and accelerate training, import the best_practice module:

from dataset import best_practice

This changes some model defaults (for instance, batch-norm momentum and ResNet block layouts) to values we find more useful and that consistently bring better results: faster training and more accurate predictions.

Installation

The Dataset module is in the beta stage. Your suggestions and improvements are very welcome.

Dataset supports Python 3.5 or higher.

Python package

With modern pipenv

pipenv install git+https://github.com/analysiscenter/dataset.git#egg=dataset

With old-fashioned pip

pip3 install git+https://github.com/analysiscenter/dataset.git

After that just import dataset:

import dataset as ds
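
A quick smoke test; a minimal sketch, assuming the DatasetIndex, Dataset and Batch classes work as described in the documentation:

import dataset as ds

# build an index of 100 items and tie it to the default Batch class
index = ds.DatasetIndex(list(range(100)))
data = ds.Dataset(index, batch_class=ds.Batch)

# generate a random batch of 10 items and inspect it
batch = data.next_batch(10, shuffle=True)
print(batch.indices)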

Git submodule

In many cases it might be more convenient to install dataset as a submodule in your project repository rather than as a Python package.

git submodule add https://github.com/analysiscenter/dataset.git
git submodule init
git submodule update

If your python file is located in another directory, you might need to add a path to dataset:

import sys
sys.path.insert(0, "/path/to/dataset")
import dataset as ds

What is great about using a submodule is that every commit in your project can be linked to its own commit of the submodule. This is extremely convenient in a fast-paced research environment.
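
For example, to pin the submodule to a specific commit and record it in your project (replace <commit-sha> with the commit you need):

cd dataset
git checkout <commit-sha>
cd ..
git add dataset
git commit -m "Pin dataset submodule"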

Relative import is also possible:

from .dataset import Dataset

Citing Dataset

Please cite Dataset in your publications if it helps your research.


Roman Kh et al. Dataset library for fast ML workflows. 2017. doi:10.5281/zenodo.1041203

@misc{roman_kh_2017_1041203,
  author       = {Roman Kh and et al},
  title        = {Dataset library for fast ML workflows},
  year         = 2017,
  doi          = {10.5281/zenodo.1041203},
  url          = {https://doi.org/10.5281/zenodo.1041203}
}
