migueldeicaza / tensorflowsharp

TensorFlow API for .NET languages

License: MIT License

c-sharp dot-net f-sharp machine-learning mono tensorflow xamarin

tensorflowsharp's Introduction


When to use TensorFlowSharp

TensorFlowSharp is a good runtime for running your existing models, and is mostly a straight binding to the underlying TensorFlow runtime. Most people will want to use a higher-level library for interfacing with TensorFlow.

The library was designed to blend into the .NET ecosystem and follow .NET naming conventions.

I strongly recommend that you use TensorFlow.NET, which takes a different approach than TensorFlowSharp: it uses the Python naming conventions, has much broader support for the higher-level operations you are likely to need, and is also actively maintained.

TensorFlowSharp

TensorFlowSharp provides .NET bindings to the TensorFlow library published at:

https://github.com/tensorflow/tensorflow

This surfaces the C API as a strongly-typed .NET API for use from C# and F#.

The API surfaces the entire low-level TensorFlow API and is on par with the other language bindings. However, it currently does not include a high-level API like the Python binding does, so it is more cumbersome to use for those high-level operations.

You can prototype using TensorFlow or Keras in Python, save your graphs or trained models, and then load the result in .NET with TensorFlowSharp and feed it your own data for training or inference.

The current API documentation is here.

Using TensorFlowSharp

Installation

The easiest way to get started is to use the NuGet package for TensorFlowSharp which contains both the .NET API as well as the native libraries for 64-bit Linux, Mac and Windows using the CPU backend.

You can install using NuGet like this:

nuget install TensorFlowSharp

Or select it from the NuGet packages UI on Visual Studio.

In Visual Studio, make sure that you are targeting .NET Framework 4.6.1 or later, as this package uses features from newer versions of .NET; otherwise, the package will not be added. Once you do this, you can just use the TensorFlowSharp NuGet package.

Alternatively, you can download it directly.
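Once the package is installed, a tiny console program is a quick way to verify that both the managed assembly and the native library load correctly. This is a minimal sketch; it assumes the TFCore.Version property (a thin wrapper over the native TF_Version call) is available:

using System;
using TensorFlow;

class SmokeTest
{
    static void Main ()
    {
        // If the native libtensorflow library cannot be found or loaded, this
        // call is typically where a DllNotFoundException or
        // BadImageFormatException will surface.
        Console.WriteLine ("TensorFlow version: " + TFCore.Version);
    }
}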

Using TensorFlowSharp

Your best sources of information right now are the SampleTest project, which exercises various APIs of TensorFlowSharp, and the stand-alone samples located in "Examples".

This API binding is closer design-wise to the Java and Go bindings, which use explicit TensorFlow graphs and sessions. Your application will typically create a graph (TFGraph) and set up the operations there, then create a session from it (TFSession), then use the session runner to set up inputs and outputs and execute the pipeline.

Something like this:

using (var graph = new TFGraph ())
{
    // Load the model
    graph.Import (File.ReadAllBytes ("MySavedModel"));
    using (var session = new TFSession (graph))
    {
        // Setup the runner
        var runner = session.GetRunner ();
        runner.AddInput (graph ["input"] [0], tensor);
        runner.Fetch (graph ["output"] [0]);

        // Run the model
        var output = runner.Run ();

        // Fetch the results from output:
        TFTensor result = output [0];
    }
}

If your application is sensitive to GC cycles, you can run your model as follows. The Run method will then allocate managed memory only at the first call and reuse it later on. Note that this requires you to reuse the Runner instance and not to change the shape of the input data:

// Some input matrices
var inputs = new float[][,] {
    new float[,] { { 1, 2 }, { 3, 4 } },
    new float[,] { { 2, 4 }, { 6, 8 } }
};

// Assumes all input matrices have identical shape
var shape = new long[] { inputs[0].GetLongLength(0), inputs[0].GetLongLength(1) };
var size = inputs[0].Length * sizeof(float);

// Empty input and output tensors
var input = new TFTensor(TFDataType.Float, shape, size);
var output = new TFTensor[1];

// Result array for a single run
var result = new float[1, 1];

using (var graph = new TFGraph())
{
    // Load the model
    graph.Import(File.ReadAllBytes("MySavedModel"));
    using (var session = new TFSession(graph))
    {
        // Setup the runner
        var runner = session.GetRunner();
        runner.AddInput(graph["input"][0], input);
        runner.Fetch(graph["output"][0]);

        // Run the model on each input matrix
        for (int i = 0; i < inputs.Length; i++)
        {
            // Mutate the input tensor
            input.SetValue(inputs[i]);

            // Run the model
            runner.Run(output);

            // Fetch the result from output into `result`
            output[0].GetValue(result);
        }
    }
}

In scenarios where you do not need to setup the graph independently, the session will create one for you. The following example shows how to abuse TensorFlow to compute the addition of two numbers:

using (var session = new TFSession())
{
    var graph = session.Graph;

    var a = graph.Const(2);
    var b = graph.Const(3);
    Console.WriteLine("a=2 b=3");

    // Add two constants
    var addingResults = session.GetRunner().Run(graph.Add(a, b));
    var addingResultValue = addingResults.GetValue();
    Console.WriteLine("a+b={0}", addingResultValue);

    // Multiply two constants
    var multiplyResults = session.GetRunner().Run(graph.Mul(a, b));
    var multiplyResultValue = multiplyResults.GetValue();
    Console.WriteLine("a*b={0}", multiplyResultValue);
}

Here is an F# scripting version of the same example; you can use it in F# Interactive:

#r @"packages\TensorFlowSharp.1.4.0\lib\net471\TensorFlowSharp.dll"

open System
open System.IO
open TensorFlow

// set the path to find the native DLL
Environment.SetEnvironmentVariable("Path", 
    Environment.GetEnvironmentVariable("Path") + ";" + __SOURCE_DIRECTORY__ + @"/packages/TensorFlowSharp.1.2.2/native")

module AddTwoNumbers = 
    let session = new TFSession()
    let graph = session.Graph

    let a = graph.Const(new TFTensor(2))
    let b = graph.Const(new TFTensor(3))
    Console.WriteLine("a=2 b=3")

    // Add two constants
    let addingResults = session.GetRunner().Run(graph.Add(a, b))
    let addingResultValue = addingResults.GetValue()
    Console.WriteLine("a+b={0}", addingResultValue)

    // Multiply two constants
    let multiplyResults = session.GetRunner().Run(graph.Mul(a, b))
    let multiplyResultValue = multiplyResults.GetValue()
    Console.WriteLine("a*b={0}", multiplyResultValue)

Working on TensorFlowSharp

If you want to work on extending TensorFlowSharp or contribute to its development read the CONTRIBUTING.md file.

Please keep in mind that this requires a modern version of C#, as the code uses some of its newer capabilities, so you will want to use Visual Studio 2017.

Possible Contributions

Build More Tests

We would love to have more tests to ensure the proper operation of the framework.

Samples

The binding is pretty much complete, and at this point I want to improve the API to be easier and more pleasant to use from both C# and F#. Creating samples that use TensorFlow is a good way of finding easy wins in the usability of the API; there are some here:

https://github.com/tensorflow/models

Packaging

Mobile: we need to package the library for consumption on Android and iOS.

Documentation Styling

The API documentation has not been styled; I am using the barebones template for documentation, and it could use some work.

Issues

I have logged some usability problems and bugs in Issues, feel free to take on one of those tasks.

Documentation

Much of the online documentation comes from TensorFlow and is licensed under the terms of the Apache 2.0 License, in particular all the documentation for the various operations that is generated using the TensorFlow reflection APIs.

Last API update: Release 1.9

tensorflowsharp's People

Contributors

alexpantyukhin, andy-wilkinson, andykernahan, asimshankar, captainst, cesarsouza, csteegz, dieron, dorokhov, dsherret, dsyme, enricomi, ericstj, falahati, hackingsma, kevmal, lobrien, mattleibow, migueldeicaza, mlusiak, movgp0, resnikb, saul, sergey-tihon, skotz, syn-mcj, uselesstoucan, vladimiroster, zeahmed


tensorflowsharp's Issues

Should allow for jagged arrays as inputs to TFTensor.

Just had a taste of how common they are.

Currently our implicit conversion:

unsafe public static implicit operator TFTensor (Array array)

only handles multi-dimensional arrays; it does not handle arrays of arrays.
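Until jagged arrays are supported directly, one workaround is to copy the jagged array into a rectangular array before handing it to the existing conversion. A minimal sketch (not part of the library), assuming all inner arrays have the same length:

static float [,] ToRectangular (float [][] jagged)
{
    int rows = jagged.Length, cols = jagged [0].Length;
    var rect = new float [rows, cols];
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            rect [r, c] = jagged [r][c];

    // The existing implicit operator accepts this multi-dimensional array.
    return rect;
}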

Replace long [] dims in API with TFShape where applicable.

Various places surface long [] dims as a way of specifying a tensor shape; now that we have a high-level type for it (TFShape), we should replace those uses with it and perhaps enforce that the shape is fully specified in those scenarios.

Possibly unnecessary or buggy branching when fetching jagged arrays?

Specifically here:

if (top < Int32.MaxValue) {
    int itop = (int)top;
    for (int i = 0; i < itop; i++) {
        var childArray = FetchJaggedArray (t, dt, ref data, shape, level + 1);
        if (target == null)
            target = Array.CreateInstance (childArray.GetType (), shape [level]);
        target.SetValue (childArray, i);
    }
} else {
    for (int l = 0; l < top; l++) {
        var chidArray = FetchJaggedArray (t, dt, ref data, shape, level + 1);
        if (target == null)
            target = Array.CreateInstance (chidArray.GetType (), shape [level]);
        target.SetValue (chidArray, l);
    }
}

As far as I can tell, the only difference between these two branches is the casting of top (a long) to int and its subsequent assignment to itop. However, these variables are only used in a for loop with an int counter.

This leaves me with two questions:

  1. Is this branching necessary? It seems like we could just always use int for both "top" and the loop counter. I don't think .NET supports arrays with a dimension longer than Int32.MaxValue. That said, throwing a "too long array" exception might also be acceptable, since simply ignoring a chunk of the data also seems bad. I don't know TensorFlow well enough to comment on the appropriate API behavior here.

  2. Is this code currently buggy? If top is, in fact, greater than the max value of an int32, won't "l" (the integer counter for the for loop in that branch) wrap and never reach it?
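To illustrate the second question, here is a small stand-alone sketch (not TensorFlowSharp code) of what happens when an int counter is compared against a long bound larger than Int32.MaxValue; in the default unchecked context the counter wraps to Int32.MinValue instead of ever reaching the bound:

long top = (long)Int32.MaxValue + 1;
for (int l = Int32.MaxValue - 1; l < top; l++) {
    // 'l' is promoted to long for the comparison, so the condition stays true;
    // after l == Int32.MaxValue, l++ wraps around to Int32.MinValue, and the
    // negative indices that follow would make target.SetValue throw.
    if (l < 0) {
        Console.WriteLine ("counter wrapped to {0}", l);
        break;
    }
}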

mobile compatibility (Similar to issue #37)

Hi Miguel,

I'm trying to get a reference to TensorFlowSharp in my Xamarin mobile development but I'm experiencing some issues.

The problem is that TensorFlowSharp doesn't build when targeting MONO / .NET 4.5, which is necessary to reference it in the mobile project.

The build error arises at line 675, since MemoryCopy was first introduced in .NET 4.6:
Buffer.MemoryCopy ((void*)src, target, size, size);

Do you know any workaround to this?

Best regards,
Simon
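One possible workaround, sketched here purely as an illustration (this is not how the library addresses it), is to replace the Buffer.MemoryCopy call with a manual byte copy that also works on .NET 4.5; the src, target and size names mirror the variables in the quoted call:

unsafe static void CopyMemory (IntPtr src, void* target, long size)
{
    byte* s = (byte*)src;
    byte* d = (byte*)target;

    // Copy 'size' bytes from the source pointer to the destination pointer,
    // which is what the Buffer.MemoryCopy call above does when both sizes are equal.
    for (long i = 0; i < size; i++)
        d [i] = s [i];
}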

TFTensor constructors

I should probably remove all the low-level constructors, turn them into properly named factory methods, and keep the constructors only for the most common operations.

Xamarin mobile compatibility

Hi Miguel,

I know this project is still a work in progress and you have plans for it to be compatible with mobile. However, I was wondering when you would estimate this will be ready (at least for testing) in the mobile world on Xamarin.Android and Xamarin.iOS? And what would need to happen to get it to this stage?

TFTensor.GetValue ()

Add an overload that returns not a multi-dimensional array, but jagged arrays, as they are easier to manipulate.
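A minimal sketch (not part of the library) of the conversion such an overload could perform for the 2-D float case; a real implementation would have to handle arbitrary ranks and element types:

static float [][] ToJagged (float [,] rect)
{
    int rows = rect.GetLength (0), cols = rect.GetLength (1);
    var jagged = new float [rows][];
    for (int r = 0; r < rows; r++) {
        jagged [r] = new float [cols];
        for (int c = 0; c < cols; c++)
            jagged [r][c] = rect [r, c];
    }
    return jagged;
}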

Handle "ref" parameters

There are some 112 operations in TensorFlow that are not surfaced as high-level operations because the input or output parameters have the is_ref property set to true, for example:

BarrierInsertMany

Where the input_arg "handle" is a string/ref. See the file tensorflow/core/ops/ops.pbtxt for a textual dump of the API.

It is not clear what to do with these data types in the strong binding; neither Go nor Java deals with them.

SkipInREF: AccumulatorApplyGradient parameters with is_ref: DT_STRING handle
SkipInREF: AccumulatorNumAccumulated parameters with is_ref: DT_STRING handle
SkipInREF: AccumulatorSetGlobalStep parameters with is_ref: DT_STRING handle
SkipInREF: AccumulatorTakeGradient parameters with is_ref: DT_STRING handle
SkipInREF: ApplyAdadelta parameters with is_ref: DT_INVALID var, DT_INVALID accum, DT_INVALID accum_update
SkipInREF: ApplyAdagrad parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: ApplyAdagradDA parameters with is_ref: DT_INVALID var, DT_INVALID gradient_accumulator, DT_INVALID gradient_squared_accumulator
SkipInREF: ApplyAdam parameters with is_ref: DT_INVALID var, DT_INVALID m, DT_INVALID v
SkipInREF: ApplyCenteredRMSProp parameters with is_ref: DT_INVALID var, DT_INVALID mg, DT_INVALID ms, DT_INVALID mom
SkipInREF: ApplyFtrl parameters with is_ref: DT_INVALID var, DT_INVALID accum, DT_INVALID linear
SkipInREF: ApplyGradientDescent parameters with is_ref: DT_INVALID var
SkipInREF: ApplyMomentum parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: ApplyProximalAdagrad parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: ApplyProximalGradientDescent parameters with is_ref: DT_INVALID var
SkipInREF: ApplyRMSProp parameters with is_ref: DT_INVALID var, DT_INVALID ms, DT_INVALID mom
SkipInREF: Assign parameters with is_ref: DT_INVALID ref
SkipInREF: AssignAdd parameters with is_ref: DT_INVALID ref
SkipInREF: AssignSub parameters with is_ref: DT_INVALID ref
SkipInREF: BarrierClose parameters with is_ref: DT_STRING handle
SkipInREF: BarrierIncompleteSize parameters with is_ref: DT_STRING handle
SkipInREF: BarrierInsertMany parameters with is_ref: DT_STRING handle
SkipInREF: BarrierReadySize parameters with is_ref: DT_STRING handle
SkipInREF: BarrierTakeMany parameters with is_ref: DT_STRING handle
SkipInREF: CountUpTo parameters with is_ref: DT_INVALID ref
SkipInREF: DestroyTemporaryVariable parameters with is_ref: DT_INVALID ref
SkipInREF: InitializeTable parameters with is_ref: DT_STRING table_handle
SkipInREF: InitializeTableFromTextFile parameters with is_ref: DT_STRING table_handle
SkipInREF: IsVariableInitialized parameters with is_ref: DT_INVALID ref
SkipInREF: LookupTableExport parameters with is_ref: DT_STRING table_handle
SkipInREF: LookupTableFind parameters with is_ref: DT_STRING table_handle
SkipInREF: LookupTableImport parameters with is_ref: DT_STRING table_handle
SkipInREF: LookupTableInsert parameters with is_ref: DT_STRING table_handle
SkipInREF: LookupTableSize parameters with is_ref: DT_STRING table_handle
SkipInREF: NegTrain parameters with is_ref: DT_FLOAT w_in, DT_FLOAT w_out
SkipInREF: QueueClose parameters with is_ref: DT_STRING handle
SkipInREF: QueueDequeue parameters with is_ref: DT_STRING handle
SkipInREF: QueueDequeueMany parameters with is_ref: DT_STRING handle
SkipInREF: QueueDequeueUpTo parameters with is_ref: DT_STRING handle
SkipInREF: QueueEnqueue parameters with is_ref: DT_STRING handle
SkipInREF: QueueEnqueueMany parameters with is_ref: DT_STRING handle
SkipInREF: QueueSize parameters with is_ref: DT_STRING handle
SkipInREF: ReaderNumRecordsProduced parameters with is_ref: DT_STRING reader_handle
SkipInREF: ReaderNumWorkUnitsCompleted parameters with is_ref: DT_STRING reader_handle
SkipInREF: ReaderRead parameters with is_ref: DT_STRING reader_handle, DT_STRING queue_handle
SkipInREF: ReaderReadUpTo parameters with is_ref: DT_STRING reader_handle, DT_STRING queue_handle
SkipInREF: ReaderReset parameters with is_ref: DT_STRING reader_handle
SkipInREF: ReaderRestoreState parameters with is_ref: DT_STRING reader_handle
SkipInREF: ReaderSerializeState parameters with is_ref: DT_STRING reader_handle
SkipInREF: RefEnter parameters with is_ref: DT_INVALID data
SkipInREF: RefExit parameters with is_ref: DT_INVALID data
SkipInREF: RefIdentity parameters with is_ref: DT_INVALID input
SkipInREF: RefMerge parameters with is_ref: DT_INVALID inputs
SkipInREF: RefNextIteration parameters with is_ref: DT_INVALID data
SkipInREF: RefSelect parameters with is_ref: DT_INVALID inputs
SkipInREF: RefSwitch parameters with is_ref: DT_INVALID data
SkipInREF: ScatterAdd parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterDiv parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterMul parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterNdAdd parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterNdSub parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterNdUpdate parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterSub parameters with is_ref: DT_INVALID ref
SkipInREF: ScatterUpdate parameters with is_ref: DT_INVALID ref
SkipInREF: SdcaShrinkL1 parameters with is_ref: DT_FLOAT weights
SkipInREF: SparseAccumulatorApplyGradient parameters with is_ref: DT_STRING handle
SkipInREF: SparseAccumulatorTakeGradient parameters with is_ref: DT_STRING handle
SkipInREF: SparseApplyAdadelta parameters with is_ref: DT_INVALID var, DT_INVALID accum, DT_INVALID accum_update
SkipInREF: SparseApplyAdagrad parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: SparseApplyAdagradDA parameters with is_ref: DT_INVALID var, DT_INVALID gradient_accumulator, DT_INVALID gradient_squared_accumulator
SkipInREF: SparseApplyCenteredRMSProp parameters with is_ref: DT_INVALID var, DT_INVALID mg, DT_INVALID ms, DT_INVALID mom
SkipInREF: SparseApplyFtrl parameters with is_ref: DT_INVALID var, DT_INVALID accum, DT_INVALID linear
SkipInREF: SparseApplyMomentum parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: SparseApplyProximalAdagrad parameters with is_ref: DT_INVALID var, DT_INVALID accum
SkipInREF: SparseApplyProximalGradientDescent parameters with is_ref: DT_INVALID var
SkipInREF: SparseApplyRMSProp parameters with is_ref: DT_INVALID var, DT_INVALID ms, DT_INVALID mom
SkipInREF: StackClose parameters with is_ref: DT_STRING handle
SkipInREF: StackPop parameters with is_ref: DT_STRING handle
SkipInREF: StackPush parameters with is_ref: DT_STRING handle
SkipInREF: StridedSliceAssign parameters with is_ref: DT_INVALID ref
SkipInREF: TensorArrayClose parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayConcat parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayGather parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayPack parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayRead parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayScatter parameters with is_ref: DT_STRING handle
SkipInREF: TensorArraySize parameters with is_ref: DT_STRING handle
SkipInREF: TensorArraySplit parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayUnpack parameters with is_ref: DT_STRING handle
SkipInREF: TensorArrayWrite parameters with is_ref: DT_STRING handle
SkipOutREF: Barrier parameters with is_ref: 
SkipOutREF: ConditionalAccumulator parameters with is_ref: 
SkipOutREF: FIFOQueue parameters with is_ref: 
SkipOutREF: FakeQueue parameters with is_ref: 
SkipOutREF: FixedLengthRecordReader parameters with is_ref: 
SkipOutREF: HashTable parameters with is_ref: 
SkipOutREF: IdentityReader parameters with is_ref: 
SkipOutREF: MutableDenseHashTable parameters with is_ref: 
SkipOutREF: MutableHashTable parameters with is_ref: 
SkipOutREF: MutableHashTableOfTensors parameters with is_ref: 
SkipOutREF: PaddingFIFOQueue parameters with is_ref: 
SkipOutREF: PriorityQueue parameters with is_ref: 
SkipOutREF: RandomShuffleQueue parameters with is_ref: 
SkipOutREF: SparseConditionalAccumulator parameters with is_ref: 
SkipOutREF: Stack parameters with is_ref: 
SkipOutREF: TFRecordReader parameters with is_ref: 
SkipOutREF: TemporaryVariable parameters with is_ref: 
SkipOutREF: TensorArray parameters with is_ref: 
SkipOutREF: TensorArrayGrad parameters with is_ref: 
SkipOutREF: TextLineReader parameters with is_ref: 
SkipOutREF: Variable parameters with is_ref: 
SkipOutREF: VariableV2 parameters with is_ref: 
SkipOutREF: WholeFileReader parameters with is_ref: 
SkipTYPE: SymbolicGradient due to attribute (func f) lacking a mapping to C#

Sample Test Status for Windows

This issue aims to track errors in SampleTest on Windows.

Using this Windows DLL version:

VS2015, 64-bit
added: Microsoft.NET.Compilers to enable it to work with System.ValueTuple
t.AttributesTest(); //Fail

public void AttributesTest ()
{
    using (var x = new AttributeTest ()) {
        var shape1 = new TFShape (new long [] { 1, 3 });
        var shape2 = new TFShape (2, 4, 6);
        var desc = x.Init ("list(shape)");
        desc.SetAttrShape ("v", new TFShape [] { shape1, shape2 });
        var op = desc.FinishOperation ();  // <=== Fails here!
        //ExpectMeta (op, "v", 2, TFAttributeType.Shape, 5);
    }
}

==> For SampleTest: it seems the only test that fails is t.AttributesTest();

Add documentation on how to contribute

Hello there,

I know it's super early to add such a request, as the repository was created ~10h ago. However, I am deeply interested in contributing to this project. I have authored a general machine learning framework for .NET and I am also a researcher in computer vision currently working with Keras, TensorFlow and Theano. So I might know a thing or two about both the .NET and TensorFlow camps, and I am offering to help with whatever is needed (in my free time, though).

Cesar

TFTensor.GetValue appears to be transposing multidimensional arrays.

Hello,

TFTensor.GetValue appears to be transposing multidimensional arrays.

Specifically, I believe that FetchMultiDimensionalArray is updating the target array in column major order, whereas data is documented to be in row major order.

Versions used:

  • libtensorflow.dll: r1 (local Windows cmake build)
  • TensorFlowSharp.dll: 39eb920

Reproduction:

The following program effectively roundtrips a 2-D array:

public static void Main(string[] args)
{
    var graph = new TFGraph();
    var input = graph.Placeholder(TFDataType.Int32);
    var output = graph.Identity(input);
    using (var session = new TFSession(graph))
    {
        var expected = new int[,] {{0, 1}, {2, 3}};

        var runner = session.GetRunner();
        runner.AddInput(input, expected).Fetch(output);

        var actual = (int[,])runner.Run()[0].GetValue();

        Console.WriteLine($"Expected: {RowOrderJoin(expected)}"); // 0, 1, 2, 3
        Console.WriteLine($"Actual:   {RowOrderJoin(actual)}");   // 0, 2, 1, 3
    }
}
private static string RowOrderJoin(Array array) => String.Join(", ", array.Cast<int>());

Expected Behaviour:

The expected and actual arrays are equivalent.

Actual Behaviour:

The actual array appears to be transposed.

Cheers.

TFGraph.ImportGraphDef

Currently we take a TFImportGraphDefOptions, which has only one possible option, a string prefix; perhaps we should just take that string prefix for now.

TFTensor.{Zeros, Ones, Fill}

Add convenient methods to create TFTensor instances with zeros, ones, or specific values given a shape.

Additionally, the shape could be provided with a params int [] or a TFShape.

Note: while I implemented some skeleton methods in KerasSharp for this, I just realized they are wrong. The current binding for TFGraph.Const is the raw underlying binding, and we need to do more work to create these constants; the Python code in constants.py seems to use protocol buffers to set them up.
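A hypothetical sketch of what such factory methods could look like; the names, the params long [] overload, and the reuse of the existing TFTensor (long [] dims, float [] data) constructor are illustrative only, and the constant-creation concerns noted above are not addressed here:

public static class TFTensorFactory
{
    public static TFTensor Fill (float value, params long [] dims)
    {
        long count = 1;
        foreach (var d in dims)
            count *= d;

        // Build a flat buffer filled with the requested value.
        var data = new float [count];
        for (long i = 0; i < count; i++)
            data [i] = value;

        // Reuses the existing (long [] dims, float [] data) constructor.
        return new TFTensor (dims, data);
    }

    public static TFTensor Zeros (params long [] dims) => Fill (0f, dims);
    public static TFTensor Ones (params long [] dims) => Fill (1f, dims);
}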

incorrect win64 precompiled binary link

Your uploaded precompiled binaries

  • libtensorflow-cpu-darwin-x86_64-1.0.0-rc0.tar.gz
  • libtensorflow-gpu-darwin-x86_64-1.0.0-rc0.tar.gz

are the same as the Linux ones. There are .so files inside instead of .dll files.

Figure out an idiom for TFOperation lifetime

The object is only valid as long as the TFGraph object exists; when the TFGraph goes away, the TFOperation goes away as well. So we need a way to either track all TFOperations in a TFGraph [1], or have TFOperations keep a pointer to the TFGraph and check whether the object is still valid on each call, and if not, zero out their handle and flag the object as useless.

[1] This is easy, provided that we never need to re-surface a TFGraph back into managed code from a native pointer, we just keep a list of all the objects created.

Windows

This is a great wrapper for C# and has worked well in my Ubuntu 16.04 partition when using the tensorflow shared object.

However, I was wondering if you know of any way to use Bazel to compile a libtensorflow.dll so that this could work on Windows too? I saw that you said to rename the generated shared object to '.dll'; however, this seems to produce a 'BadImageFormatException' in VS2015 on Windows.

Thanks in advance

PS: I am targeting x64 and .NET Framework 4.6.1 when building this project.

Import InceptionV3 throws "Invalid GraphDef" error

Hi,
I've tried to import the InceptionV3 using the following code:
var model = File.ReadAllBytes(modelFile);
var graph = new TFGraph();
graph.Import(model, "");

-- Unhandled exception of type TFException "Invalid GraphDef"

It looks like this exception is thrown for all graphs bigger than approximately 70 MB.

Thx

Should TFSession expose a higher-level API?

The current binding exposes a couple of Run methods with far-from-ideal APIs: inputs and input values are specified separately, and the result is far from pleasant to use.

C

The CSession type in the C API exercise class instead opts for SetInputs and SetOutputs methods to remove the ugliness. Should this higher-level construct be part of the default C# API?

Given that TFSession is not resurfaced from unmanaged pointers, there are no drawbacks to keeping managed state to assist.

Java

Similarly, the Java API exposes a "Runner()" method that surfaces a wrapper class that keeps track of the various inputs, outputs and targets. Instead of calling session.Run, you must call session.Runner() to get the runner object, then configure the properties, and then invoke Run() on the result.

Go

Go has a more limited API that does not surface the TFBuffer options, and additionally takes a dictionary for the outputs/output values.

An unhandled exception of type 'System.BadImageFormatException' occurred in TensorFlowSharp.dll

Dear Sir,
Thank you for your wonderful work on TensorFlowSharp. I tried to run the ExampleInceptionInference project following your guides: I successfully obtained libtensorflow.dll under Linux and copied it to the Debug folder under x64 in Visual Studio 2015. After compiling and building successfully, when running "Tensorflow.cs" execution reached
public TFGraph () : base (TF_NewGraph ())
{
}
and the following exception message occurred:
"An unhandled exception of type 'System.BadImageFormatException' occurred in TensorFlowSharp.dll
Additional information: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)"
Could you please kindly fix it, or help point out where I went wrong?
thank you very much indeed!

Spice up Generated API docs

We need to do some work to transform the quasi-markdown custom markup language used in the embedded protocol buffer docs into something that can be rendered properly into ECMA docs to make them nicer.

There is also some ASCII artwork; we should detect it and generate a blockquote around it.

Introduce IRefCount

The complex graphs and nodes are a good chance to experiment with an IRefCount on TFGraph.

Add MonoDoc APIs

Currently the API documentation is inline in the source code; this is just temporary until I figure out whether I like the API names.

Once that happens, I will do a one-time import into Monodoc and drop the API docs from the source (except operations, which we can reimport easily as I do not care about overwriting).

TFTensor constructor

They currently take a single-dimension array of values along with a dims [] array; the dims [] array should probably be hardcoded, so we can get rid of this extra nuisance.

That is, I believe this:

		public TFTensor (long [] dims, float [] data) : base (SetupTensor (TFDataType.Float, dims, data, size: 4)) { }

Should become:

		public TFTensor (float [] data) : base (SetupTensor (TFDataType.Float, dims, data, size: 4)) { }

And we should just compute the long [] dims inside SetupTensor.

Compile error on Ubuntu and Monodevelop.

System.Buffer does not contain a definition for MemoryCopy in Tensor.cs line 678. My framework target is Mono/.NET 4.5; it looks like this method is new to the .NET Framework. Should I target Mono/.NET 4.5 on Ubuntu for this project?

TensorFlow runtime precompiled binary for Windows

NetStandard

Can we target .NET Standard? I believe it is possible. 👍

Defensive TFOutput

Handle a few structs that can be invalid, like TFOutput, to prevent them from calling unmanaged code with broken data and crashing.

bazel no package error

Hello,

I want to build the TensorFlow DLL, but this command does not work for me:
bazel-0.4.4-windows-x86_64.exe build -c opt //tensorflow:libtensorflow.so

it gives the error:
ERROR: no such package 'tensorflow': BUILD file not found on package path.

What does that mean and how do I fix it?

Thank you

Generated operation return types

Currently the generated operation return types have signatures that return the outputs as ref parameters, like this:

ref TFOutput one, ref TFOutput two,...

Or if there is a single output, instead of a TFOperation, we return a TFOutput.

Perhaps, for consistency, we should have two sets of overloads: one that always returns the TFOperation, with all ref parameters, and one that returns a TFOutput or a tuple of those (this would need C# 7 to be nice enough).
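A hypothetical sketch of the two styles side by side; the operation name and signatures are purely illustrative and not the actual generated API:

public interface IGeneratedOpStyles
{
    // Style 1: always return the TFOperation and surface outputs as ref parameters.
    TFOperation Unpack (TFOutput value, ref TFOutput one, ref TFOutput two);

    // Style 2: return the outputs directly as a C# 7 value tuple.
    (TFOutput one, TFOutput two) Unpack (TFOutput value);
}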

Add Run(Expression)

In Python, this code is possible:

session.run(a+b)

This both produces the Add operation from the two parameters and automatically fetches the result.

Generally, this requires a "context" to work properly, since in the more common scenario you would not only be doing Add operations but likely invoking other methods, and those methods currently live in the TFGraph class.

So either we do what the Python bindings do, which is to have a global variable for a default context and surface an API that operates on this global context, or we end up with an API that is not as pretty as it could be.
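For comparison, here is a sketch of the closest equivalent possible with the current API, together with the hypothetical expression form (commented out) that would require the context discussed above; the '+' operator on TFOutput shown there is illustrative and does not exist today:

using (var session = new TFSession ())
{
    var graph = session.Graph;
    var a = graph.Const (2);
    var b = graph.Const (3);

    // Today: the Add node must be created explicitly through the graph, and
    // the resulting TFOutput is what gets fetched by Run.
    var result = session.GetRunner ().Run (graph.Add (a, b));
    Console.WriteLine (result.GetValue ());

    // Hypothetical: session.GetRunner ().Run (a + b);
    // This would need TFOutput operators that know which TFGraph to target.
}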

High-level definition parameters

Should the high-level definition parameters be C#-ified?

Currently they have names like "orig_input"; should I rename them to "origInput"?

Fix discrepancy between TF_Operation and TF_OperationDescription

The latter evolves into the former when you call "FinishOperation()", and this is not currently taken into account.

What needs to happen when you call FinishOperation is that we should transfer the handle ownership (investigate if we need to delete the original handle or not).
