
Gradient


Gradient is a fully-fledged, mostly typed binding to TensorFlow for .NET

He shall speak not reverbering injurance.
— Gradient-based Char-RNN trained on Shakespeare

Getting started

Installation

Install Python + TensorFlow

Before installing Gradient, you should ensure that you have TensorFlow installed and working:

  1. Install Python 3.6 64-bit. If you have Visual Studio 2017+, it is possible to install it as a component. Otherwise, get it from https://www.python.org/downloads/ or your system’s package manager.

  2. Install TensorFlow 1.10 using pip:

    1. Find the python executable in the installation directory (VS installs to: C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\python.exe)

    2. Open a command line in the directory containing python

    3. Execute .\python -m pip install "tensorflow==1.10.*", or .\python -m pip install "tensorflow-gpu==1.10.*" if you want GPU acceleration via CUDA (NVidia only)

  3. Check the installation by launching python and running import tensorflow. It should succeed.

Add the NuGet package to your project

Gradient packages are published on NuGet, but will not be "listed" until about Tech Preview 5, which means you won’t be able to find them in the NuGet Package Manager. The NuGet page lists the commands necessary to install the package into your project. As of preview4, with the new .csproj format, the command is

dotnet add package Gradient --version 0.1.10-tech-preview4

If using the new package management features of .csproj, this could also be achieved by adding the following line to it:

<PackageReference Include="Gradient" Version="0.1.10-tech-preview4"  />

See the example project file here.

Note
If you are targeting .NET Framework 4.x, you must specify '--runtime win' for 'dotnet publish' and '--runtime win7-x64' for 'dotnet run'. It won’t launch from Visual Studio.

First steps

Namespaces

In most cases, you will need to add using tensorflow; at the beginning of your file. In many cases you will also need using Gradient; and using numpy;.

using numpy;
using tensorflow;
using Gradient;

Logging

TensorFlow logging is separate from Gradient logging. This section discusses the latter.

To enable Gradient logs, set the appropriate properties of the GradientLog static class, e.g.:

GradientLog.OutputWriter = Console.Out;
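
For example, a minimal sketch that sends Gradient’s diagnostic output to a file instead of the console, assuming OutputWriter accepts any TextWriter (Console.Out above is one):

using System.IO;
using Gradient;

// send Gradient's own log messages to a file; AutoFlush makes sure
// each line is written out immediately
var logWriter = new StreamWriter("gradient.log") { AutoFlush = true };
GradientLog.OutputWriter = logWriter;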

Old style TF

Prior to the recent changes, the main way to use TensorFlow was to construct a computation graph, and then run it in a session. Most of the existing examples use this mode.

Constructing a compute graph

Graph creation methods are located in the tf class from the tensorflow namespace. For example:

var a = new dynamic[] { tf.constant(5.0, name: "a") };
var b = new dynamic[] { tf.constant(10.0, name: "b") };

var sum = new dynamic[] { tf.add(a, b, name: "sum") };
var div = new dynamic[] { tf.div(a, b, name: "div") };

In this case we wrap the resulting Tensor objects in arrays, because many graph construction functions in Gradient, such as tf.add, expect arrays (or rather IEnumerable) instead of individual Tensor values. (This might change in the future.)

Running computation

Next, you need to create a Session to run your graph one or multiple times. Sessions allocate CPU, GPU and memory resources, and hold the states of variables.

Note
In GPU mode, TensorFlow will attempt to allocate all of the GPU memory to itself at this stage, so ensure no other programs are using it extensively, or turn down TensorFlow’s memory allocation.

Since TensorFlow sessions hold unmanaged resources, they have to be used similarly to (but not identically to) IDisposable:

new Session().UseSelf(session => {
    // ...do something with the session...
});

Now that you have a Session to work with, you can actually compute the values in the graph:

new Session().UseSelf(session => {
    Console.WriteLine($"a = {session.run(a)}");
    Console.WriteLine($"b = {session.run(b)}");
    Console.WriteLine($"a + b = {session.run(sum)}");
    Console.WriteLine($"a / b = {session.run(div)}");
});

Note that Session.run also takes a sequence of Tensor-like objects.

The full code for this example is available in our samples repository.

Porting Python code to Gradient + C#

In most cases, converting Python code that uses TensorFlow is as simple as using C# syntax instead of Python syntax (a small before/after sketch follows the list below):

  • add new to class constructor calls: Class() → new Class().

It’s easy to tell class construction apart from plain function calls in Python: by convention, function names start with a lowercase letter (e.g. min), while class names start with a capital letter (e.g. Session).

  • to pass named parameters, use : instead of =: make_layer(kernel_bias=2.0) → make_layer(kernel_bias: 2.0)

  • to get a subrange of a Tensor, use C# 8 range syntax (if available): tensor[1..-2] → tensor[1..^2]. A single element can be addressed as usual: tensor[1]
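
Putting these rules together, here is a small before/after sketch based on the graph example from the earlier section (the exact overloads may differ between Gradient versions):

// Python original:
//   a = tf.constant(5.0, name="a")
//   b = tf.constant(10.0, name="b")
//   total = tf.add(a, b, name="sum")
//   with tf.Session() as sess:
//       print(sess.run(total))

// C# port, following the rules above (note the array wrapping described earlier):
using System;
using tensorflow;

var a = new dynamic[] { tf.constant(5.0, name: "a") };
var b = new dynamic[] { tf.constant(10.0, name: "b") };
var sum = new dynamic[] { tf.add(a, b, name: "sum") };

new Session().UseSelf(session => {
    Console.WriteLine($"a + b = {session.run(sum)}");
});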

Names of classes and functions

Generally, Gradient follows the TensorFlow Python API naming. There are, however, language-driven differences:

  • in Python, modules (roughly equivalent to namespaces) can directly contain functions. In .NET, every function must be part of some type. For that reason, Gradient exposes static classes named after the innermost module to contain that module’s functions and properties (but not classes). For example, Python’s tensorflow.contrib.data module has a corresponding C# class tensorflow.contrib.data.data, so the equivalent of Python’s tensorflow.contrib.data.group_by_window is tensorflow.contrib.data.data.group_by_window. This mostly applies to the unofficial APIs.

  • most of the official API’s functions and properties (but not classes) are exposed via a special class tensorflow.tf. Combined with using tensorflow; this enables invoking TensorFlow functions as neatly as tf.placeholder(…​), tf.keras.activations.relu(…​), etc.

there is also a similar class numpy.np for NumPy functions

  • class names and namespaces are mostly the same as in Python API. E.g. tf.Session is in tensorflow namespace, and can be instantiated via new tensorflow.Session(…​) or simply new Session(…​) with using tensorflow;

  • some APIs have multiple aliases, like tf.add. At the time of writing, only one of the aliases (usually the first one) is exposed by Gradient.

  • in case of name conflicts (e.g. C# does not allow both a shape property and a set_shape method in the same class), one of the conflicting names is exposed with a _ suffix. For example: set_shape_, which should be easy to find in the IDE autocomplete list.

  • (very rare) due to the way Gradient works, non-official classes, functions and properties might be exposed via unexpected namespaces. Your IDE should be able to help find classes (by suggesting an appropriate using namespace;). For functions and properties, try to find the class corresponding to their containing module (in the tensorflow.contrib.data example above, you would search for the data class). A less convenient alternative is Visual Studio’s Object Browser.

  • (rare) some classes and functions exposed by TensorFlow might only be available as function-typed properties. For example, ConfigProto, which is used to configure tf.Session, does not have a corresponding class in Gradient. To create an instance of ConfigProto, you must call its constructor via the ConfigProto property of the config_pb2 class: config_pb2.ConfigProto() (see the sketch below).
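
For instance, a hedged sketch of the ConfigProto case from the last bullet. config_pb2.ConfigProto() comes straight from the text; the namespace containing config_pb2 and the config parameter on the Session constructor are assumptions to verify in your IDE:

using tensorflow;
// a using for the namespace containing config_pb2 may also be needed (let the IDE suggest it)

// the constructor is exposed as a function-typed property, so this is a call, not "new"
dynamic config = config_pb2.ConfigProto();
config.gpu_options.allow_growth = true;   // members are set dynamically, as in Python

// assumption: the Session constructor takes the configuration via a "config" argument
new Session(config: config).UseSelf(session => {
    // ...run the graph with this configuration...
});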

Parameter and return types

Gradient tries hard to expose a statically-typed API, but the underlying TensorFlow code is inherently dynamic. In many cases, Gradient over-generalizes or under-generalizes the underlying parameter and return types.

When a parameter type is over-generalized, you simply lose a hint as to what can actually be passed. Gradient’s parameter may be IEnumerable&lt;object&gt;, while the function rejects anything except a Set&lt;int&gt;. In these cases you can either refer to the official documentation, or quickly try it and see whether the error explains what the function actually expects.

Dynamic overloads

TL;DR: when you can’t pass something, replace tf.func_name(…​) → tf.func_name_dyn(…​), and new Class(…​) → Class.NewDyn(…​).

When a parameter or return type is under-generalized, you will not be able to use Gradient’s statically-typed API. A function parameter may claim it only accepts int and bool, while you know from the documentation or a sample that you have to pass a Tensor. Another common example is when Gradient thinks a parameter must be of a derived class, when a base class would actually also be fine. For example, the parameter cell might be typed as LSTMCell, while you should actually be able to pass any RNNCell (where class LSTMCell: RNNCell). Do not try converting the value you want to pass to the expected type; it will not work. For these cases Gradient provides a dynamic API alongside the statically-typed one.

Every function from the original API has an untyped overload whose name ends with _dyn. All of its parameters intentionally allow anything to be passed (type object), and it returns dynamic.

The same applies to properties: for each SomeType property { get; set; } there is a dynamic property_dyn { get; set; }.

Every class with constructors also has an untyped static factory method named NewDyn, which lets you call the class constructor in the same untyped way as the function overloads in the previous paragraph.

Please report it to this issue tracker if you have to call dynamic overloads a lot to get your model running. We will try to fix that in the next version.

In some cases even that is not enough. If you need to call a method or access a property of an instance of some class, and that method/property is not exposed by Gradient, cast the instance to dynamic and try to call it that way. See https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/types/using-type-dynamic
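
A minimal sketch of these fallbacks, reusing tf.add and Session from the earlier example. Per the rules above, the untyped variants should be named tf.add_dyn and Session.NewDyn; the exact arguments they accept are not checked here:

using System;
using tensorflow;

dynamic a = tf.constant(5.0, name: "a");
dynamic b = tf.constant(10.0, name: "b");

// untyped overload: every parameter is object, the result is dynamic
dynamic sum = tf.add_dyn(a, b, name: "sum");

// untyped factory method instead of the constructor
dynamic session = Session.NewDyn();
Console.WriteLine(session.run(sum));   // argument checking happens at run time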

Passing functions

Many TensorFlow (and hence Gradient) APIs accept functions as parameters. If the parameter type is known to be a function, Gradient will show it as Gradient.PythonFunctionContainer.

There are two ways to get an instance of it: pass a Gradient function back, or pass a .NET function.

Passing TensorFlow functions back to TensorFlow

TL;DR: suffix the function name with _fn.

Most NN layers expect an activation argument, which specifies the neuron activation function. TensorFlow defines many activation functions one would want to use, in both the modern and the old-style APIs. The "original" one is called sigmoid and is available as tf.sigmoid. Modern networks often use some variant of ReLU (tf.nn.relu). You can call both directly from Gradient, e.g. tf.sigmoid(tensor), but in most cases you need to pass them to the activation parameter as a PythonFunctionContainer.

To do that, you can simply get a pre-wrapped instance by adding the _fn suffix to the function name.

For example: tf.layers.dense(activation: tf.sigmoid_fn).

Passing C# functions to Gradient

To get an instance of PythonFunctionContainer from a C# function, use the static method PythonFunctionContainer.Of&lt;T1, …​, TResult&gt;(func or lambda). You will have to specify the function argument types in place of &lt;T1, …​, TResult&gt;.
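
For example, a hedged sketch of wrapping a lambda as an activation function. The Tensor-to-Tensor generic arguments and the cast are assumptions; adjust them to your function’s actual signature:

using tensorflow;
using Gradient;

// wrap a C# lambda so TensorFlow can call it back
PythonFunctionContainer myActivation = PythonFunctionContainer.Of<Tensor, Tensor>(
    (Tensor input) => (Tensor)tf.nn.relu(input));

// it can now be passed wherever a function is expected, e.g.:
// tf.layers.dense(input, hiddenSize, activation: myActivation);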

Python with blocks, C#'s using

TL;DR: replace with new Session(…​) as sess: sess.do_stuff() → new Session(…​).UseSelf(sess => sess.do_stuff())

The TensorFlow API, being built on Python, uses the special __enter__ and __exit__ methods for the same purpose .NET uses IDisposable. The problem is that, in general, they do not map directly onto each other. For that reason, every Gradient class that declares those special methods in TensorFlow also exposes .Use and .UseSelf methods. In most cases it is easiest to use .UseSelf(self => do_something(self)) as shown in the sample above. However, there are rare cases when .Use(context => do_something(context)) has to be used instead. The difference is that obj.UseSelf always passes obj back to the lambda, while obj.Use might produce a new object of a potentially completely different type.

Think of .Use and .UseSelf as Gradient’s best attempt at reproducing the using(var session = new Session(…​)) {} statement.
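
A short illustration of the difference. tf.variable_scope is used here only as a plausible example of a context that yields a different object on entry; check the actual API surface in your version:

using tensorflow;

// UseSelf: the lambda receives the very object you called it on
new Session().UseSelf(session => {
    // ...session is the Session created above...
});

// Use: the lambda may receive a different object produced by entering the context
tf.variable_scope("layer1").Use(scope => {
    // ...scope is whatever the underlying context manager yields...
});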

A full example of how to use .UseSelf can be found in the samples.

Exceptions

This feature is still in development.

NumPy

Since most TensorFlow samples use NumPy, Gradient includes a limited subset of it under the numpy namespace.

Recommendations

  • import both tensorflow and numpy namespaces:

using tensorflow;
using numpy;

tf.placeholder(...);
np.array(...);
  • if you extensively use some API set under tf., use using static tf.API_HERE;

using static tf.keras;
...
var model = models.load_model(...);
new Dense(kernel_regularizer: regularizers.l2(...));
  • many Gradient functions return dynamic. Whenever possible, immediately cast the result to its concrete type; it will make the code easier to maintain. The concrete type is always known at runtime: it can be seen in the debugger or obtained via the object.GetType() method. Most methods in tf. actually return Tensor.

Tensor hidden = tf.layers.dense(input, hiddenSize, activation: tf.sigmoid_fn);
  • avoid directly using classes in Gradient, SharPy.Runtime, and Python.Runtime. They are Gradient’s implementation details, which might change in future versions.

Limitations of the current tech preview

This section may be outdated

Can’t inherit Gradient classes

While nothing will stop you from inheriting Gradient classes in .NET, any new or overridden members will not be visible to TensorFlow. You may implement corresponding interfaces in .NET, but do not inherit from any classes in the Gradient, tensorflow, or numpy namespaces.

Tips and Tricks

C# 8

Gradient supports the neat indexing feature of C# 8: if you are using Visual Studio 2019 Preview+ or the .NET Core SDK 3 Preview+, you can set the appropriate language level in the project file like this: &lt;LangVersion&gt;8.0&lt;/LangVersion&gt;

Then you can access numpy arrays with the new syntax, for example arr[3..^4], which means "take the range starting at the element at index 3 and excluding the last 4 elements".
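
For instance, a small sketch, assuming np.array accepts a .NET array here (as in the Recommendations section):

using numpy;

var arr = np.array(new[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 });
var slice = arr[3..^4];   // indices 3, 4 and 5: starts at 3 and drops the last 4 elements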

Blogs, Blog Posts & 3rd-party Samples

What’s new

Preview 5:

  • support for indexing Tensor objects via dynamic

  • allow using specific Python environment via GradientSetup.UsePythonEnvironment

  • numerous fixes in the interop layer

  • GPT-2 sample

Preview 4:

  • MacOS and Ubuntu support (with others possibly working too) on .NET Core

  • documentation included for function and parameter tooltips

  • fixed inability to call static class methods

Preview 3:

  • fixed inability to reenter TensorFlow from a callback

Preview 2:

  • dynamically typed overloads that enable a fallback for tricky signatures

  • a common interface for tf.Variable and tf.Tensor

  • enabled enumeration over TensorFlow collection types

FAQ

Why not TensorFlowSharp?

                                          TensorFlowSharp    Gradient

Load TensorFlow models
Train existing models
Create new models with low-level API
Create new models with high-level API
Dependencies                              TF                 TF + Python
TensorBoard integration
Estimators
Dataset manipulation via tf.data
tf.contrib
Commercial support

Why not TensorFlow.NET?

TensorFlow.NET’s goal is a full reimplementation of TensorFlow in C#. However, as of April 2019, only a very small set of APIs actually has implementations. Many functions and classes are defined without bodies and do nothing. The state of specific APIs is not tracked anywhere, which can create a lot of confusion. For example, there is an AdamOptimizer class, but it has no implementation apart from the constructor, meaning it won’t actually use Adam, or work at all.

About

This repository serves as a public issue tracker and documentation host for Gradient, a full TensorFlow binding for .NET.
