Layers Library Reference

CNTK predefines a number of common "layers," which make it very easy to write simple networks that consist of standard layers stacked on top of each other. Layers are function objects that can be used like a regular Function but hold learnable parameters and take an additional pair of () to pass construction parameters or attributes.

For example, this is the network description for a simple 1-hidden layer model using the Dense() layer:

h = Dense(1024, activation=relu) (features)
p = Dense(9000, activation=softmax) (h)

which can then, e.g., be used for training against a cross-entropy criterion:

ce = cross_entropy(p, labels)

If your network is a straight concatenation of operations (many are), you can use the alternative Sequential() notation:

from layers import *
my_model = Sequential ([
    Dense(1024, activation=relu),
    Dense(9000, activation=softmax)
])

and invoke it like this:

p = my_model (features)

Example models

The following shows a slot tagger that embeds a word sequence, processes it with a recurrent LSTM, and then classifies each word:

from layers import *
from models import *
tagging_model = Sequential ([
    Embedding(150),         # embed into a 150-dimensional vector
    Recurrence(LSTM(300)),  # forward LSTM
    Dense(label_dim)        # word-wise classification
])

And the following is a simple convolutional network for image recognition:

conv_net = Sequential ([
    # 3 layers of convolution and dimension reduction by pooling
    Convolution((5,5), 32, pad=True, activation=relu),
    MaxPooling((3,3), strides=(2,2)),
    Convolution((5,5), 32, pad=True, activation=relu),
    MaxPooling((3,3), strides=(2,2)),
    Convolution((5,5), 64, pad=True, activation=relu),
    MaxPooling((3,3), strides=(2,2)),
    # 2 dense layers for classification
    Dense(64, activation=relu),
    Dense(10)
])

Parameter sharing

If you assign a layer to a variable and use it in multiple places, the parameters will be shared. If you say

lay = Dense(1024, activation=sigmoid)
h1 = lay(x)
h2 = lay(h1)  # same weights as `h1`

h1 and h2 will share the same parameters, as lay() is the same function in both cases. In the above case this is probably not what was desired, so be aware. If both invocations of lay() above are meant to have different parameters, remember to define two separate instances, for example lay1 = Dense(...) and lay2 = Dense(...).
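
For example:

lay1 = Dense(1024, activation=sigmoid)
lay2 = Dense(1024, activation=sigmoid)
h1 = lay1(x)
h2 = lay2(h1)  # uses parameters distinct from those of lay1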

So why this behavior? Layers make it possible to share parameters across sections of a model. Consider a DSSM model which processes two input images, say doc and query, identically with the same processing chain, and compares the resulting hidden vectors:

image_to_vec = Sequential ([
    Convolution((5,5), 32, pad=True, activation=relu),
    MaxPooling((3,3), strides=(2,2)),
    Convolution((5,5), 64, pad=True, activation=relu),
    MaxPooling((3,3), strides=(2,2)),
    Dense(64, activation=relu),
    Dense(10)
])
z_doc   = image_to_vec (doc)
z_query = image_to_vec (query)  # same model as for z_doc
sim = CosDistance(z_doc, z_query)

where image_to_vec is the part of the model that converts images into a flat vector. image_to_vec is a function object that in turn contains several function objects (e.g. three instances of Convolution()). image_to_vec is instantiated once, and this instance holds the learnable parameters of all the included function objects. Both invocations of image_to_vec() will share these parameters in application, and their gradients will be the sum of both invocations.

Lastly, note that in the above example, query and doc must have the same dimensions, since they are processed through the same function object, and that function object's first layer has its input dimension inferred to match that of both query and doc. If their dimensions differ, then this network is malformed, and dimension inference/validation will fail with an error message.

Implementation note

Many layers are wrappers around underlying CNTK primitives, along with the respective required learnable parameters. For example, Convolution() wraps the convolution() primitive. The benefits of using layers are:

  • layers contain learnable parameters of the correct dimension
  • layers are composable (cf. Sequential())

Dense()

Factory function to create a fully-connected layer. Dense() takes an optional non-linearity.

Dense(shape, init=init_default_or_glorot_uniform, activation=activation_default_or_None,
      input_rank=None, map_rank=None,
      bias=bias_default_or_True, init_bias=init_bias_default_or_0)

Parameters

  • shape: output dimension of this layer
  • activation (default None): pass a function here to be used as the activation function, such as activation=relu
  • input_rank: if given, number of inferred axes to add to weight (map_rank must not be given)
  • map_rank: if given, expand weight matrix to leave exactly map_rank axes (input_rank must not be given)
  • init: initializer descriptor for the weights, e.g. glorot_uniform(). See here for a full list of random-initialization options.
  • bias: if False, do not include a bias parameter
  • init_bias: initializer for the bias

Return Value

A function that implements the desired fully-connected layer. See description.

Description

Use this factory function to create a fully-connected layer. If you would like an activation function to be included, pass it via the activation parameter; otherwise the layer is purely linear.

This factory function creates a function object that contains a learnable weight matrix and, unless bias=False, a learnable bias. The function object can be used like a function, which implements one of these formulas (with and without an activation function, respectively):

Dense(...) (v) = activation (v @ W + b)   # if activation is given
Dense(...) (v) = v @ W + b                # if activation=None

where W is a weight matrix of dimension ((dimension of v), shape), b is the bias of dimension (shape,), and the resulting value has the dimension (or tensor dimensions) given by shape.
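
As a sanity check, here is the computation sketched in numpy (v, W, and b here are stand-ins for the input and the layer's learned parameters, not CNTK API):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

v = np.random.randn(784)         # input vector
W = np.random.randn(784, 1024)   # weight matrix: ((dimension of v), shape)
b = np.zeros(1024)               # bias: (shape,)

h = sigmoid(v @ W + b)           # same as Dense(1024, activation=sigmoid)(v)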

Tensor support

If the returned function is applied to an input of tensor rank > 1, e.g. a 2D image, W will have the dimension ((first dimension of input), (second dimension of input), ..., shape).

On the other hand, shape can be a vector that specifies tensor dimensions, for example (10,10). In that case, W will have the dimension ((dimension of input), shape[0], shape[1]), and b will have the tensor dimensions (shape[0], shape[1]).

CNTK's matrix product will interpret these extra output or input dimensions as if they were flattened into one long vector. For more details on this, see the documentation of Times().
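
For illustration, this flattening can be sketched in numpy (it mimics the semantics only; this is not how CNTK implements it):

import numpy as np

x = np.random.randn(3, 32, 32)        # a tensor-shaped input
W = np.random.randn(3, 32, 32, 512)   # weights: ((input dims), shape)
b = np.zeros(512)

# the matrix product treats the extra input axes as one flattened vector:
y = x.reshape(-1) @ W.reshape(-1, 512) + b   # result has shape (512,)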

Example:

h = Dense(1024, activation=sigmoid) (v)

or alternatively:

layer = Dense(1024, activation=sigmoid)
h = layer(v)

Convolution()

Creates a convolution layer with optional non-linearity.

Convolution(rf_shape, num_filters=None,
            activation=activation_default_or_None,
            init=init_default_or_glorot_uniform,
            pad=pad_default_or_False,
            strides=1,
            bias=bias_default_or_True,
            init_bias=init_bias_default_or_0)

Parameters

  • rf_shape: shape of receptive field of filter, e.g. (5,5) for a 2D filter (not including the input feature-map depth)
  • num_filters: number of output channels (number of filters)
  • activation: optional non-linearity, e.g. activation=relu
  • init: initializer descriptor for the weights, e.g. glorot_uniform(). See here for a full list of random-initialization options.
  • pad: if False (default), then the filter will be shifted over the "valid" area of input, that is, no value outside the area is used. If pad is True on the other hand, the filter will be applied to all input positions, and values outside the valid region will be considered zero.
  • strides: increment when sliding the filter over the input. E.g. (2,2) to reduce the dimensions by 2
  • bias: if False, do not include a bias parameter
  • init_bias: initializer for the bias

Return Value

A function that implements the desired convolution operation.

Description

Use this factory function to create a convolution layer.

The resulting layer applies a convolution operation on an N-dimensional tensor. The caller specifies the spatial extent of the filter. A set of filters for a given receptive field (e.g. (5,5)) is correlated with every location of the input (e.g. a (480, 640)-sized image). Assuming padding is enabled (pad=True) and strides are 1, this will generate an output region of the same dimension ((480, 640)).

Typically, many filters are applied at the same time. num_filters specifies their number, so for every input location, an entire vector of num_filters values is produced. For our example above, setting num_filters to 64 would result in a (64, 480, 640)-sized tensor. That first axis is also called the channel dimension or the number of feature maps.

When convolution is applied to an input with a channel dimension, each filter will also consist of vectors of the input's channel dimension. E.g. when applying convolution with a specified spatial filter extent of (5,5) to a (3, 480, 640)-sized color image, each filter will be a (3, 5, 5) tensor.

All num_filters filters stacked together are called the kernel. In our example, the kernel shape will be (64, 3, 5, 5).

The following summarizes the relationship between the various dimensions and shapes:

input shape     : (              (#input channels), (spatial dims) )
receptive field : (                                 (rf_shape)     )
output shape    : ( num_filters,                    (spatial dims) )
kernel shape    : ( num_filters, (#input channels), (rf_shape)     )

which in our example are:

input shape     : (              3, 480, 640 )
receptive field : (                   5, 5   )
output shape    : ( num_filters,    480, 640 )
kernel shape    : ( num_filters, 3,   5, 5   )

Padding

If padding is not enabled, then the output region will be reduced by the boundary locations to which the full filter extent cannot be applied. E.g. applying a (5,5)-extent filter to an image without padding, the outermost 2 rows and columns of pixels would cause the filter to be applied out of bounds. Hence, Convolution() will reduce the dimensions accordingly.

A (480, 640) image convolved with a (5,5) filter without padding will leave a (476, 636)-sized output region.

Strides

The strides parameter specifies the step by which the filter is shifted. Stride values greater than one will lead to a sub-sampling of the output region. E.g. filtering a (480, 640) image with a stride of (2,2) will result in a (240, 320)-sized region with padding, and a (238, 318)-sized one without padding.
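
These output dimensions follow a simple rule, sketched below (the function name is illustrative, not part of the library):

import math

def conv_output_dim(input_dim, rf, stride, pad):
    # spatial output dimension along one axis of Convolution() or pooling
    if pad:
        return math.ceil(input_dim / stride)     # filter applied at all input positions
    else:
        return (input_dim - rf) // stride + 1    # filter applied at "valid" positions only

assert conv_output_dim(480, rf=5, stride=1, pad=False) == 476
assert conv_output_dim(480, rf=5, stride=2, pad=True)  == 240
assert conv_output_dim(480, rf=5, stride=2, pad=False) == 238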

Notes

This layer is a wrapper around the convolution() primitive.

The name of the filter kernel parameter, as shown in the log's validation section, will end in .W.

Example:

c = Convolution((3,3), 64, pad=True, strides=(1,1), bias=False) (x)

MaxPooling(), AveragePooling()

Factory functions to create a max- or average-pooling layer.

MaxPooling(rf_shape, strides=1, pad=False)
AveragePooling(rf_shape, strides=1, pad=False)

Parameters

  • rf_shape: receptive field (region) to pool over, e.g. (2,2) (not including the input feature-map depth)
  • strides: increment when sliding the pool over the input. E.g. (2,2) to reduce the dimensions by 2
  • pad: if False (default), then the pool will be shifted over the "valid" area of input, that is, no value outside the area is used. If pad is True on the other hand, the pool will be applied to all input positions, and values outside the valid region will be considered zero. For average pooling, count for average does not include padded values.

Return Value

A function that implements the desired pooling layer.

Description

Use these factory functions to create a pooling operation. Use MaxPooling() to compute the maximum over the values in the pool area, and AveragePooling() to take their average.

The pooling operation slides a receptive field, or pool window, over the input, and computes either the maximum or the average of the values in the respective window.

This operation is structurally very similar to convolution, except that the operation applied to the sliding window is of a different nature.

All considerations regarding input dimensions, padding, and strides apply identically, so please see Convolution() for more detail.

Example:

p = MaxPooling((3,3), strides=(2,2)) (c)

Embedding()

Embedding(shape=None, init=None, weights=None)

Parameters

  • shape: the dimension of the desired embedding vector
  • init: if given, initializer descriptor for the weights to be learned. See here for a full list of initialization options.
  • weights (numpy array): if given, embeddings are not learned but specified by this array (which could be, e.g., loaded from a file) and not updated further during training

Return Value

A function that implements the embedding layer. See description.

Description

"Embedding" refers to representing words or other discrete items by dense continuous vectors. This layer assumes that the input is in one-hot form. E.g., for a vocabulary size of 10,000, each input vector is expected to have dimension 10,000 and consist of zeroes except for one position that contains a 1. The index of that location is the index of the word or item it represents.

In CNTK, the corresponding embedding vectors are stored as columns of a matrix. Hence, mapping an input word to its embedding is implemented as a matrix product. For this to be very efficient, it is important that the input vectors are stored in sparse format.
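
The following numpy sketch shows why this matrix product amounts to a simple lookup (illustrative only, not CNTK code):

import numpy as np

vocab_size, emb_dim = 10000, 300
E = np.random.randn(emb_dim, vocab_size)  # one embedding vector per column

word = np.zeros(vocab_size)
word[1234] = 1.0                          # one-hot vector for word #1234

vec = E @ word                            # the matrix product...
assert np.allclose(vec, E[:, 1234])       # ...just selects column #1234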

Fun fact: The gradient of an embedding matrix has the form of gradient vectors that are only non-zero for words seen in a minibatch. Since for realistic vocabularies of tens or hundreds of thousands of words, the vast majority of columns would be zero, CNTK implements a specific optimization to represent the gradient in "column-sparse" form.

Known issue: The above-mentioned column-sparse gradient form is currently not supported by our 1-bit SGD parallelization technique. Please use the block-momentum technique instead.

Example

A learned embedding that represents words from a vocabulary of 87636 as a 300-dimensional vector:

input = Input(87636, is_sparse=True)  # word sequence, as one-hot vector, sparse format
embEn = Embedding(300) (input)        # embed word as a 300-dimensional continuous vector

In addition to is_sparse=True, one should also declare an input as sparse in the reader config block. Here is an example of reading sparse text input with the CNTKTextFormatReader:

source = MinibatchSource(CTFDeserializer('en2fr.ctf', StreamDefs(
    input   = StreamDef(field='E', shape=87636, is_sparse=True),
    labels  = StreamDef(field='F', shape=98624, is_sparse=True)
)))

If, instead, the embedding vectors already exist and should be loaded from a file, it would look like this:

input = Input(87636, is_sparse=True)     # word sequence, as one-hot vector, sparse format
embEn = Embedding(300, weights=np.loadtxt('embedding-en.txt')) (input) # embedding from disk

where the file 'embedding-en.txt' would be expected to consist of 87,636 text rows, each consisting of 300 space-separated numbers.

Recurrence()

Factory function to create a single-layer or multi-layer recurrence.

Recurrence(over, go_backwards=False, initial_state=initial_state_default_or_None)

Parameters

  • over: the 'Function' to recur over, for example LSTM()
  • go_backwards (optional): if True, the recurrence is run backwards
  • initial_state (optional, default 0): initial value of the hidden variable that is recurred over. Currently, this cannot have a dynamic axis.

Return Value

A function that implements the desired layer, which applies a recurrent model, such as an LSTM, to its input sequence. This layer maps an input sequence to a sequence of hidden states of the same length.

Description

This implements the recurrence to be applied to a sequence of inputs. This operation automatically handles variable-length input. The initial value of the hidden state and cell are 0 unless specified by initial_state.

Applying this layer to an input sequence will return the sequence of the hidden states of the 'Function' to recur over (in case of an LSTM, the LSTM's memory cell's value is not returned). The returned sequence has the same length as the input. If only the last state is desired, as in sequence-classification or some sequence-to-sequence scenarios, use select_last to extract the last item's hidden state only. (In a backward recurrence, you would use select_first.)

To create a bidirectional model with Recurrence(), use two layers, one with go_backwards=True, and splice() the two outputs together.

Example

A simple text classifier, which runs a word sequence through a recurrence and then passes the last hidden state of the LSTM to a softmax classifier, could have this form:

w = Input(...)                           # word sequence (one-hot vectors)
e = Embedding(150)(w)                    # embed as a 150-dimensional dense vector
h = Recurrence(LSTM(300))(e)             # left-to-right LSTM with hidden and cell dim 300
t = select_last(h)                       # extract last hidden state
z = Dense(10000, activation=softmax)(t)  # softmax classifier

To create a bidirectional one-layer LSTM (e.g. using half the hidden dimension compared to above), use this:

h_fwd = Recurrence(LSTM(150))(e)
h_bwd = Recurrence(LSTM(150), go_backwards=True)(e)
h = splice ([h_fwd, h_bwd])

LSTM()

Factory function to create a stateless LSTM 'Function', typically for use with Recurrence().

LSTM(shape, cell_shape=None, use_peepholes=use_peepholes_default_or_False,
     init=init_default_or_glorot_uniform, init_bias=init_bias_default_or_0,
     enable_self_stabilization=enable_self_stabilization_default_or_False)

Parameters

  • shape: dimension of the output
  • cell_shape (optional): the dimension of the LSTM's cell. Normally this is identical to shape. If a different value is given, an additional linear projection will be inserted to convert from the cell dimension to the output.
  • use_peepholes (optional): if True, then use peephole connections in the LSTM
  • init: initializer descriptor for the weights. See here for a full list of initialization options.
  • init_bias: initializer for the bias
  • enable_self_stabilization (optional): if True, insert a "stabilizer" operation similar to Stabilizer()

Return Value

A 'Function' that implements a stateless long short-term memory (LSTM), typically for use with Recurrence().

Description

This creates a Function object that implements the LSTM.

Example

See Recurrence().

Delay()

Factory function to create a layer that delays its input.

Delay(T=1, initial_state=None)

Parameters

  • T: the number of time steps to delay. To access future values, use a negative value
  • initial_state (optional, default 0): value to use for the delayed frames at the boundaries

Return Value

A function that implements the desired delay operation.

Description

This operation delays an input sequence by T steps (default 1). This is useful, for example, to turn a word sequence into a sequence of overlapping word triples.

Consider an input sequence "a b c b", which shall be encoded as a sequence of 3-dimensional one-hot vectors as follows:

1 0 0
0 1 0
0 0 1
0 1 0

Here, every row is a one-hot vector and corresponds to a word. Applying Delay(T=1) to this input will generate this sequence:

0 0 0
1 0 0
0 1 0
0 0 1

All tokens get delayed by one, and the first position gets filled in as a 0 vector. Likewise, using Delay(T=-1) (negative delay) will give access to the future values, and pad from the right with a zero:

0 1 0
0 0 1
0 1 0
0 0 0
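
This shifting behavior can be sketched in numpy, with time along axis 0 (illustrative only):

import numpy as np

def delay(x, T=1, initial_state=0):
    # shift the sequence by T steps, filling the boundary with initial_state
    y = np.full_like(x, initial_state)
    if T > 0:
        y[T:] = x[:-T]    # like past_value(): pad at the start
    elif T < 0:
        y[:T] = x[-T:]    # like future_value(): pad at the end
    else:
        y[:] = x
    return y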

Notes

This layer is a wrapper around the past_value() and future_value() primitives.

Example

The following shows how to stack three neighbor words into a trigram vector:

x  = ...                   # input value, e.g. a N-dimensional one-hot vector
xp = Delay() (x)           # previous value
xn = Delay(T=-1) (x)       # next value (negative delay)
tg = splice ([xp, x, xn])  # concatenate all into a 3N-dimensional three-hot vector

BatchNormalization(), LayerNormalization(), Stabilizer()

Factory functions to create layers for batch normalization, layer normalization, and self-stabilization.

BatchNormalization(map_rank=None,
                   init_scale=1,
                   normalization_time_constant=5000, blend_time_constant=0,
                   epsilon=0.00001, use_cntk_engine=True)
LayerNormalization(initial_scale=1, initial_bias=0)
Stabilizer(steepness=4)

Parameters

BatchNormalization:

  • map_rank: if given then normalize only over this many dimensions. E.g. 1 to tie all (h,w) in a (C, H, W)-shaped input. Currently allowed values are None (no pooling) and 1 (pooling across all pixel positions of an image)
  • normalization_time_constant (default 5000): time constant in samples of the first-order low-pass filter that is used to compute mean/variance statistics for use in inference
  • init_scale: initial value of scale parameter
  • epsilon: small value that gets added to the variance estimate when computing the inverse
  • use_cntk_engine: if True, use CNTK's native implementation. If False, use cuDNN's implementation (GPU only).

LayerNormalization:

  • initial_scale: initial value of scale parameter
  • initial_bias: initial value of bias parameter

Stabilizer:

  • steepness: sharpness of the knee of the softplus function

Return Value

A function that implements a layer that performs the normalization operation.

Description

BatchNormalization() implements the technique described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Sergey Ioffe, Christian Szegedy). It normalizes its inputs for every minibatch by the minibatch mean/variance, and de-normalizes them with a learned scaling factor and bias.

In inference, instead of using minibatch mean/variance, batch normalization uses a long-term running mean/variance estimate. This estimate is computed during training by low-pass filtering minibatch statistics. The time constant of the low-pass filter can be modified by the normalization_time_constant parameter. We recommend starting with the default of 5000, but experiment with other values, typically on the order of several thousand to tens of thousands.
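
In pseudo-code, the update of the running estimate per minibatch looks roughly like this (a sketch of the filtering idea, not CNTK's actual implementation):

def update_running_stat(running, minibatch_stat, num_samples, time_constant=5000):
    # first-order low-pass filter over minibatch statistics;
    # num_samples is the number of samples in the current minibatch
    alpha = num_samples / time_constant
    return (1 - alpha) * running + alpha * minibatch_stat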

LayerNormalization() implements Layer Normalization (Jimmy Lei Ba, Jamie Ryan Kiros, Geoffrey E. Hinton). It normalizes each input sample by subtracting the mean across all elements of the sample, and then dividing by the standard deviation over all elements of the sample.
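
A numpy sketch of the per-sample computation (illustrative only; the epsilon is added here for numeric safety and is not a parameter of LayerNormalization()):

import numpy as np

def layer_norm(x, scale=1.0, bias=0.0, epsilon=1e-5):
    # normalize one sample over all of its elements, then scale and shift
    return scale * (x - x.mean()) / (x.std() + epsilon) + bias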

Stabilizer() implements a self-stabilizer per the paper Self-stabilized deep neural network (P. Ghahremani, J. Droppo). This simple but effective technique multiplies its input with a learnable scalar (but unlike layer normalization, it does not first normalize the input, nor does it subtract a mean). Note that compared to the original paper, which proposes a linear scalar beta or an exponential one exp(beta), we found it beneficial to use a sharpened softplus operation, per the second author's suggestion, which avoids both negative values and the instability of the exponential.
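
A numpy sketch of this sharpened-softplus scaling (beta stands for the learnable scalar parameter; illustrative only):

import numpy as np

def stabilizer(x, beta, steepness=4):
    # map beta through a sharpened softplus to get a positive scale, then scale the input
    scale = np.log1p(np.exp(steepness * beta)) / steepness
    return scale * x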

Notes

BatchNormalization() is a wrapper around the batch_normalization() primitive. LayerNormalization() and Stabilizer() are expressed directly in terms of other CNTK primitives.

Example

A typical layer in a convolutional network with batch normalization:

def my_layer(x, depth, init):
    c = Convolution((5,5), depth, pad=True, init=init) (x)
    b = BatchNormalization(map_rank=1) (c)
    r = relu (b)
    p = MaxPooling((3,3), strides=(2,2)) (r)
    return p

Sequential()

Composes a list of functions into a new function that calls these functions one after another ("forward function composition").

Sequential(layers)

Parameters

  • layers: a list of functions, e.g. [ Dense(1024), sigmoid ]

Return value

This function returns another Function. That returned function takes one argument, and returns the result of applying all given functions in sequence to the input.

Description

Sequential() is a powerful operation that allows you to compactly express a very common situation in neural networks where an input is processed by propagating it through a progression of layers. You may be familiar with it from other neural-network toolkits.

Sequential() takes an array of functions as its argument, and returns a new function that invokes these functions in order, each time passing the output of one to the next. Consider this example:

FGH = Sequential ([F, G, H])
y = FGH (x)

The FGH function defined above means the same as

y = H(G(F(x))) 

This is known as "function composition", and is especially convenient for expressing neural networks, which often have this form:

     +-------+   +-------+   +-------+
x -->|   F   |-->|   G   |-->|   H   |--> y
     +-------+   +-------+   +-------+

which is perfectly expressed by Sequential ([F, G, H]). (An even shorter alternative way of writing it is (F >> G >> H).)
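
Semantically, Sequential() behaves like this reduction (a sketch, not the actual implementation):

from functools import reduce

def sequential(layers):
    # compose a list of functions, applying them left to right
    return lambda x: reduce(lambda v, f: f(v), layers, x)

# sequential([F, G, H])(x) computes H(G(F(x)))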

Lastly, please be aware that the following expression:

layer1 = Dense(1024)
layer2 = Dense(1024)
z = Sequential([layer1, layer2])(x)

means something different from:

layer = Dense(1024)
z = Sequential([layer, layer])(x)

In the latter form, the same function with the same shared set of parameters is applied twice, while in the former, the two layers have separate sets of parameters.

Example

Standard 4-hidden layer feed-forward network as used in the earlier deep-neural network work on speech recognition:

my_model = Sequential ([
    Dense(2048, activation=sigmoid),  # four hidden layers
    Dense(2048, activation=sigmoid), 
    Dense(2048, activation=sigmoid), 
    Dense(2048, activation=sigmoid), 
    Dense(9000, activation=softmax)   # note: last layer is a softmax 
])
features = Input(40)
p = my_model(features)

LayerStack()

Repeats a layer multiple times.

LayerStack(N, constructor)

Parameters

  • N: number of repetitions
  • constructor: a lambda with 0 or 1 argument that creates the layer

Return value

This function returns another Function. That returned function takes one argument, and returns the result of applying the repeated layers to the input, where each layer is a separate object with a distinct set of model parameters.

Description

LayerStack() creates a sequential model by repeatedly executing a constructor lambda passed to it; that is, you pass a Python function that creates a layer, e.g. using the Python lambda syntax.

For example, creating a stack of 3 Dense layers of identical shape:

     +------------+   +------------+   +------------+
x -->| Dense(128) |-->| Dense(128) |-->| Dense(128) |--> y
     +------------+   +------------+   +------------+

is as easy as:

model = LayerStack(3, lambda: Dense(128))

Note that because you pass in a lambda for creating the layer, each layer will be separately constructed. This is important, because this ensures that all layers have their own distinct set of model parameters.

That constructor lambda can optionally take one parameter, the layer counter. E.g. if the output dimension should double in each layer,

     +------------+   +------------+   +------------+
x -->| Dense(128) |-->| Dense(256) |-->| Dense(512) |--> y
     +------------+   +------------+   +------------+

the one-parameter lambda form allows you to say this (notice the lambda i, which defines a function that takes one parameter named i):

model = LayerStack(3, lambda i: Dense(128 * 2**i))

or this:

dims = [128,256,512]
model = LayerStack(3, lambda i: Dense(dims[i]))
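
Conceptually, LayerStack() behaves like the following sketch (not the actual implementation):

import inspect
from functools import reduce

def layer_stack(N, constructor):
    # call the constructor N times, so each layer gets its own parameters,
    # then chain the layers in sequence like Sequential()
    takes_index = len(inspect.signature(constructor).parameters) == 1
    layers = [constructor(i) if takes_index else constructor() for i in range(N)]
    return lambda x: reduce(lambda v, f: f(v), layers, x)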

Example

The following creates a 9-hidden-layer VGG-style model. VGG is a popular architecture for image recognition:

with default_options(activation=relu):
    model = Sequential([
        LayerStack(3, lambda i: [  # lambda with one parameter
            Convolution((3,3), [64,96,128][i], pad=True),  # depth depends on i
            Convolution((3,3), [64,96,128][i], pad=True),
            MaxPooling((3,3), strides=(2,2))
        ]),
        LayerStack(2, lambda : [   # lambda without parameter
            Dense(1024),
            Dropout(0.5)
        ]),
        Dense(num_classes, activation=None)
    ])

The resulting model will have this structure:

VGG9:

input: image

conv3-64
conv3-64
max3

conv3-96
conv3-96
max3

conv3-128
conv3-128
max3

FC-1024
dropout0.5
FC-1024
dropout0.5
FC-10

output: object