cesarsouza / keras-sharp

Keras# began as an effort to port the Keras deep learning library to C#, supporting both TensorFlow and CNTK

License: Other

C# 99.99% Batchfile 0.01%
machine-learning deep-learning neural-networks tensorflow keras

keras-sharp's Introduction

Keras Sharp

Note: This project is about to be archived. For a better replacement, please take a look at TensorFlow.NET or Keras.NET.

Cheers and happy coding!
Cesar


An ongoing effort to port most of the Keras deep learning library to C#. Join the chat at https://gitter.im/keras-sharp/Lobby.

Welcome to the Keras# project! We aim to bring an experience-compatible Keras-like API to C#, meaning that, if you already know Keras, you should not have to learn any new concepts to get up and running with Keras#. This is a direct, line-by-line port of the Keras project, meaning that updates and fixes sent to the main Keras project should be simple and straightforward to apply to this port. As in the original project, we aim to support both TensorFlow and CNTK - but not Theano, as its development was discontinued in 2017.

Example

Consider the following Keras Python example, originally written by Jason Brownlee:

from keras.models import Sequential
from keras.layers import Dense
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)

# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]

# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit the model
model.fit(X, Y, epochs=150, batch_size=10)

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

The same can be achieved with Keras# as follows:

// Load the Pima Indians Data Set
var pima = new Accord.DataSets.PimaIndiansDiabetes();
float[,] x = pima.Instances.ToMatrix().ToSingle();
float[] y = pima.ClassLabels.ToSingle();

// Create the model
var model = new Sequential();
model.Add(new Dense(12, input_dim: 8, activation: new ReLU()));
model.Add(new Dense(8, activation: new ReLU()));
model.Add(new Dense(1, activation: new Sigmoid()));

// Compile the model (for the moment, only the mean square 
// error loss is supported, but this should be solved soon)
model.Compile(loss: new MeanSquareError(), 
    optimizer: new Adam(), 
    metrics: new[] { new Accuracy() });

// Fit the model for 150 epochs
model.fit(x, y, epochs: 150, batch_size: 10);

// Use the model to make predictions
float[] pred = model.predict(x)[0].To<float[]>();

// Evaluate the model
double[] scores = model.evaluate(x, y);
Console.WriteLine($"{model.metrics_names[1]}: {scores[1] * 100}");

Upon execution, you should see the same familiar Keras behavior as shown below:

[Screenshot: Keras Sharp during training]

This is possible because Keras# is a direct, line-by-line port of the Keras project into C#. A goal of this project is to make sure that porting existing code from its Python counterpart into C# can be done with minimal effort, if any at all.

Backends

Keras# currently supports TensorFlow and CNTK backends. To switch between them, use:

KerasSharp.Backends.Current.Switch("KerasSharp.Backends.TensorFlowBackend");

or,

KerasSharp.Backends.Current.Switch("KerasSharp.Backends.CNTKBackend");

or,

If you would like to implement your own backend for your preferred library, such as DiffSharp, just provide your own implementation of the IBackend interface and specify it using:

KerasSharp.Backends.Current.Switch("YourNamespace.YourOwnBackend");

Work-in-progress

Please note, however, that this is still a work in progress - not only Keras#, but also TensorFlowSharp and CNTK. If you would like to contribute to the development of this project, please consider submitting new issues to any of those projects, including ours.

Contributing in development

If you would like to contribute to the project, please see: How to contribute to Keras#.

License & Copyright

The Keras-Sharp project is brought to you under the as-permissive-as-possible MIT license. This is the same license used by the original Keras project. This project also keeps track of all code contributions through the project's issue tracker, and pledges to update all licensing information once user contributions are accepted. Contributors are asked to grant explicit copyright licenses for their contributions, which guarantees this project can be used in production without any licensing-related worries.

This project is brought to you by the same creators of the Accord.NET Framework.

keras-sharp's People

Contributors

cesarsouza, gitter-badger, lobrien


keras-sharp's Issues

Failure using Conv2D with TensorFlow (CPU)

Hi @cesarsouza

Finally, the wait is over :) Thanks for your great work in bringing us Keras# (like the great and successful Accord.NET). The Keras Sharp solution is hard to build on VS 2017 with Windows 10, but I managed it; now I'm trying to use Conv2D and I get this:

6) Error : Tests.SequentialTest.sequential_guide_convnet
TensorFlow.TFException : Conv2D on data format NHWC requires the stride attribute to contain 4 values, but got: 2 for 'conv2d_1/Conv2D0' (op: 'Conv2D') with input shapes: [?,100,100,3], [3,3,3,32].

Some tests in the Unit Tests project also fail:
[screenshot of failing tests]

Best wishes,
HZ :)

How do I open this solution?

I was unable to load this solution.
First I tried VS 2015, then realized from the documentation that it is built with VS 2017, so I installed VS 2017 Community. When I tried to open the solution with VS 2017, it could not load any of the projects. Only after I upgraded the initial installation to the latest version was I able to open it. But the docs say any version of VS 2017 should work.

Fails silently on TensorFlowSharp 1.7

    public TensorFlowBackend()
    {
        this.tf = new TFGraph();

This constructor call just dies if I try to target TensorFlowSharp 1.7. Are there any logs I can look at?

Thanks
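
As a hedged diagnostic (not part of the original report), constructing the TFGraph directly and catching the exception yourself may surface whatever is otherwise being swallowed, for example:

using System;
using TensorFlow;   // TensorFlowSharp

class TFGraphProbe
{
    static void Main()
    {
        try
        {
            // If native library loading or graph initialization fails, the
            // exception should appear here instead of dying silently inside
            // the backend constructor.
            using (var graph = new TFGraph())
                Console.WriteLine("TFGraph created successfully.");
        }
        catch (Exception ex)
        {
            Console.WriteLine("TFGraph construction failed: " + ex);
        }
    }
}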

Setup procedure (NuGet)

Hi,

I cannot find KerasSharp on https://www.nuget.org/, and I cannot find any setup procedure.
How can I install KerasSharp (without compiling it) to use it in my test C# project?

thank you,
Igor.

Contributing to Keras-Sharp

@cesarsouza Great work starting keras-sharp. I will have a more in-depth look at the code as soon as time permits and start contributing.

You have already mentioned a lot of to-dos and a plan of action in the keras-sharp thread in TensorFlowSharp (migueldeicaza/TensorFlowSharp#75).

What is the procedure for contributing, so that several people don't start working on the same issue? Create an issue based on the to-do list and open a corresponding pull request?

Loading models

Hi, I created a model (an HDF5 file, via h5py) in Python. Can I load it in C#, or at least the weights?

Fail to run tests

Following the How to contribute in development wiki, I could not get the unit tests to run; I would get this message:

No test is available in C:\path\to\keras-sharp\Test.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.

See: https://stackoverflow.com/q/34790339/251019

Also, the suggested Ctrl+R, Ctrl+T did not work.

Installing the NuGet package NUnit3TestAdapter fixed this for me.

  • Open the Package Manager Console
  • Run: Install-Package NUnit3TestAdapter
  • Set test Processor Architecture to X64: Test -> Test Settings -> Default Processor Architecture -> X64
  • Rebuild the tests: Build -> Rebuild Unit Test
  • Run all tests: Test -> Run -> All Tests, or press Ctrl+R, Ctrl+T

Can the wiki be updated with this information?

Run on .NET 4.5

Is there a version ported from .NET Core to .NET 4.5 available?

Implement the Flatten layer

Note: Implementing the Flatten layer does not actually involve implementing it from scratch.

What needs to be done is to navigate to https://github.com/fchollet/keras/tree/f65a56fb65062c8d14d215c9f4b1015b97cc5bf3/keras, find where the Flatten layer is implemented (in this case, core.py, line 452), and port its arguably simple code:

class Flatten(Layer):
    """Flattens the input. Does not affect the batch size.
    # Example
    ```python
        model = Sequential()
        model.add(Convolution2D(64, 3, 3,
                                border_mode='same',
                                input_shape=(3, 32, 32)))
        # now: model.output_shape == (None, 64, 32, 32)
        model.add(Flatten())
        # now: model.output_shape == (None, 65536)
    ```
    """

    def __init__(self, **kwargs):
        super(Flatten, self).__init__(**kwargs)
        self.input_spec = InputSpec(min_ndim=3)

    def compute_output_shape(self, input_shape):
        if not all(input_shape[1:]):
            raise ValueError('The shape of the input to "Flatten" '
                             'is not fully defined '
                             '(got ' + str(input_shape[1:]) + '. '
                             'Make sure to pass a complete "input_shape" '
                             'or "batch_input_shape" argument to the first '
                             'layer in your model.')
        return (input_shape[0], np.prod(input_shape[1:]))

    def call(self, inputs):
        return K.batch_flatten(inputs)

into C# using the same K backend already present in Keras Sharp.
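
For orientation only, a hedged sketch of what such a port could look like is shown below. The Layer base class, the InputSpec type, the shape representation (int?[]), the method names, and the K.batch_flatten backend call are assumptions modeled on the Python code above and on how other Keras Sharp layers appear to be structured; the actual signatures in the repository may differ.

using System;
using System.Collections.Generic;

// Hedged sketch: base-class members, shape types and the backend call are
// assumed, not verified against the Keras Sharp sources.
public class Flatten : Layer
{
    public Flatten()
    {
        // Mirrors InputSpec(min_ndim=3) in the Python implementation.
        this.input_spec = new List<InputSpec> { new InputSpec(min_ndim: 3) };
    }

    public override int?[] compute_output_shape(int?[] input_shape)
    {
        // All non-batch dimensions must be known before they can be flattened.
        for (int i = 1; i < input_shape.Length; i++)
            if (input_shape[i] == null)
                throw new InvalidOperationException(
                    "The shape of the input to 'Flatten' is not fully defined. " +
                    "Make sure to pass a complete 'input_shape' or " +
                    "'batch_input_shape' argument to the first layer in your model.");

        int flattened = 1;
        for (int i = 1; i < input_shape.Length; i++)
            flattened *= input_shape[i].Value;

        return new int?[] { input_shape[0], flattened };
    }

    protected override Tensor call(Tensor inputs)
    {
        // Delegates to the backend, exactly as the Python version does.
        return K.batch_flatten(inputs);
    }
}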

Implement the Conv2D layer

Note: Implementing the Conv2D layer does not actually involve implementing it from scratch.

What needs to be done is to navigate to https://github.com/fchollet/keras/tree/f65a56fb65062c8d14d215c9f4b1015b97cc5bf3/keras, find where the Conv2D layer is implemented (in this case, convolutional.py, line 343), and port its code:

class Conv2D(_Conv):
    """2D convolution layer (e.g. spatial convolution over images).
    This layer creates a convolution kernel that is convolved
    with the layer input to produce a tensor of
    outputs. If `use_bias` is True,
    a bias vector is created and added to the outputs. Finally, if
    `activation` is not `None`, it is applied to the outputs as well.
    When using this layer as the first layer in a model,
    provide the keyword argument `input_shape`
    (tuple of integers, does not include the sample axis),
    e.g. `input_shape=(128, 128, 3)` for 128x128 RGB pictures
    in `data_format="channels_last"`.
    # Arguments
        filters: Integer, the dimensionality of the output space
            (i.e. the number output of filters in the convolution).
        kernel_size: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
            Can be a single integer to specify the same value for
            all spatial dimensions.
        strides: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
            Specifying any stride value != 1 is incompatible with specifying
            any `dilation_rate` value != 1.
        padding: one of `"valid"` or `"same"` (case-insensitive).
        data_format: A string,
            one of `channels_last` (default) or `channels_first`.
            The ordering of the dimensions in the inputs.
            `channels_last` corresponds to inputs with shape
            `(batch, height, width, channels)` while `channels_first`
            corresponds to inputs with shape
            `(batch, channels, height, width)`.
            It defaults to the `image_data_format` value found in your
            Keras config file at `~/.keras/keras.json`.
            If you never set it, then it will be "channels_last".
        dilation_rate: an integer or tuple/list of 2 integers, specifying
            the dilation rate to use for dilated convolution.
            Can be a single integer to specify the same value for
            all spatial dimensions.
            Currently, specifying any `dilation_rate` value != 1 is
            incompatible with specifying any stride value != 1.
        activation: Activation function to use
            (see [activations](../activations.md)).
            If you don't specify anything, no activation is applied
            (ie. "linear" activation: `a(x) = x`).
        use_bias: Boolean, whether the layer uses a bias vector.
        kernel_initializer: Initializer for the `kernel` weights matrix
            (see [initializers](../initializers.md)).
        bias_initializer: Initializer for the bias vector
            (see [initializers](../initializers.md)).
        kernel_regularizer: Regularizer function applied to
            the `kernel` weights matrix
            (see [regularizer](../regularizers.md)).
        bias_regularizer: Regularizer function applied to the bias vector
            (see [regularizer](../regularizers.md)).
        activity_regularizer: Regularizer function applied to
            the output of the layer (its "activation").
            (see [regularizer](../regularizers.md)).
        kernel_constraint: Constraint function applied to the kernel matrix
            (see [constraints](../constraints.md)).
        bias_constraint: Constraint function applied to the bias vector
            (see [constraints](../constraints.md)).
    # Input shape
        4D tensor with shape:
        `(samples, channels, rows, cols)` if data_format='channels_first'
        or 4D tensor with shape:
        `(samples, rows, cols, channels)` if data_format='channels_last'.
    # Output shape
        4D tensor with shape:
        `(samples, filters, new_rows, new_cols)` if data_format='channels_first'
        or 4D tensor with shape:
        `(samples, new_rows, new_cols, filters)` if data_format='channels_last'.
        `rows` and `cols` values might have changed due to padding.
    """

    @interfaces.legacy_conv2d_support
    def __init__(self, filters,
                 kernel_size,
                 strides=(1, 1),
                 padding='valid',
                 data_format=None,
                 dilation_rate=(1, 1),
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        super(Conv2D, self).__init__(
            rank=2,
            filters=filters,
            kernel_size=kernel_size,
            strides=strides,
            padding=padding,
            data_format=data_format,
            dilation_rate=dilation_rate,
            activation=activation,
            use_bias=use_bias,
            kernel_initializer=kernel_initializer,
            bias_initializer=bias_initializer,
            kernel_regularizer=kernel_regularizer,
            bias_regularizer=bias_regularizer,
            activity_regularizer=activity_regularizer,
            kernel_constraint=kernel_constraint,
            bias_constraint=bias_constraint,
            **kwargs)
        self.input_spec = InputSpec(ndim=4)

    def get_config(self):
        config = super(Conv2D, self).get_config()
        config.pop('rank')
        return config

into C# using the same K backend already present in Keras Sharp. Note: It would also be necessary to port the base _Conv class using the same process.
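
For orientation only, once a _Conv base class has been ported, the Conv2D subclass itself is mostly a constructor that forwards its arguments with rank: 2 and restricts the input to 4 dimensions. The sketch below is hedged: the _Conv base class, the parameter types (activation, initializer, regularizer and constraint interfaces), and the InputSpec usage are assumptions modeled on the Python code above, not the project's verified API.

using System.Collections.Generic;

// Hedged sketch: _Conv and the argument types are assumed to exist once the
// base class has been ported; names mirror the Python keyword arguments.
public class Conv2D : _Conv
{
    public Conv2D(int filters, int[] kernel_size,
                  int[] strides = null,
                  string padding = "valid",
                  string data_format = null,
                  int[] dilation_rate = null,
                  IActivationFunction activation = null,
                  bool use_bias = true,
                  IWeightInitializer kernel_initializer = null,
                  IWeightInitializer bias_initializer = null,
                  IWeightRegularizer kernel_regularizer = null,
                  IWeightRegularizer bias_regularizer = null,
                  IWeightRegularizer activity_regularizer = null,
                  IWeightConstraint kernel_constraint = null,
                  IWeightConstraint bias_constraint = null)
        : base(rank: 2,
               filters: filters,
               kernel_size: kernel_size,
               strides: strides ?? new[] { 1, 1 },
               padding: padding,
               data_format: data_format,
               dilation_rate: dilation_rate ?? new[] { 1, 1 },
               activation: activation,
               use_bias: use_bias,
               kernel_initializer: kernel_initializer,
               bias_initializer: bias_initializer,
               kernel_regularizer: kernel_regularizer,
               bias_regularizer: bias_regularizer,
               activity_regularizer: activity_regularizer,
               kernel_constraint: kernel_constraint,
               bias_constraint: bias_constraint)
    {
        // As in the Python version, this layer expects 4D input tensors.
        this.input_spec = new List<InputSpec> { new InputSpec(ndim: 4) };
    }
}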

Still active?

If this project is no longer active, what is the reason?

Was there not enough community support?
Did a new project supersede it?

Thank you

CNTK GPU use?

How to use CNTK GPU in SampleApp ?

It seems that KerasSharp.Backends.Current.Switch does not have the correct GPU backend type for this to work.

By the way, great work 👍

Make the Compilation example in the Sequential guide pass

Keras' guide to the Sequential model lists the following example under the "Compilation" section:

# For a multi-class classification problem
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# For a binary classification problem
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# For a mean squared error regression problem
model.compile(optimizer='rmsprop',
              loss='mse')

# For custom metrics
import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])

Trying to run equivalent code snippets using the current code results in exceptions in multiple areas.
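
For reference, the Keras# equivalents would look roughly like the hedged sketch below. Only Adam, MeanSquareError and Accuracy appear in the README example above, so the RMSProp, CategoricalCrossEntropy and BinaryCrossEntropy class names are assumptions that follow the same naming pattern, not verified types from the project.

// Hedged sketch of the equivalent Keras# calls; RMSProp,
// CategoricalCrossEntropy and BinaryCrossEntropy are assumed class names.

// For a multi-class classification problem
model.Compile(optimizer: new RMSProp(),
    loss: new CategoricalCrossEntropy(),
    metrics: new[] { new Accuracy() });

// For a binary classification problem
model.Compile(optimizer: new RMSProp(),
    loss: new BinaryCrossEntropy(),
    metrics: new[] { new Accuracy() });

// For a mean squared error regression problem
model.Compile(optimizer: new RMSProp(),
    loss: new MeanSquareError());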

Make the first example for the Sequential model pass

Make the first example for a Sequential model given in https://keras.io/models/sequential/ pass as a unit test in the framework. The example is:

 model = Sequential()
 model.add(Dense(32, input_shape=(500,)))
 model.add(Dense(10, activation='softmax'))
 model.compile(optimizer='rmsprop',
       loss='categorical_crossentropy',
       metrics=['accuracy'])

and the unit test is located at https://github.com/cesarsouza/keras-sharp/blob/master/Tests/SequentialTest.cs#L28
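
A hedged sketch of what the ported test body might contain is shown below; the Softmax, RMSProp and CategoricalCrossEntropy class names and the input_shape parameter follow the naming pattern of the README example and are not verified against SequentialTest.cs.

// Hedged sketch of the C# equivalent to be exercised by the unit test.
var model = new Sequential();
model.Add(new Dense(32, input_shape: new int?[] { 500 }));
model.Add(new Dense(10, activation: new Softmax()));
model.Compile(optimizer: new RMSProp(),
    loss: new CategoricalCrossEntropy(),
    metrics: new[] { new Accuracy() });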
