
In Depth LSTM Implementation using CNTK on .NET Platform

22 Aug 2019 · CPOL · 11 min read
Implementation of the LSTM recurrent neural network in CNTK shown in detail

In this blog post, the implementation of the LSTM recurrent neural network in CNTK is shown in detail. The implementation follows the LSTM architecture described in the Hochreiter & Schmidhuber (1997) paper, which can be found here. A great post about LSTM can also be found at colah’s blog, which explains in detail the structure of the LSTM cell as well as some of the most used LSTM variants. In this blog post, the LSTM recurrent network will be implemented using CNTK, a deep learning library, with the C# programming language on the .NET Core platform. Also, in case you want to see how to implement LSTM in pure C# without any additional library, see the great MSDN article: Test Run – Understanding LSTM Cells Using C# by James McCaffrey.

The whole implementation of the LSTM RNN is part of ANNdotNET – a deep learning tool on the .NET platform. More information about the project can be found on the GitHub project page: github.com/bhrnjica/anndotnet.

Introduction to LSTM Recurrent Network

Classic neural networks are built on the assumption that data has no particular order when entering the network, and that the output depends only on the input features. In cases when the output depends on the features and on previous outputs, a classic feed-forward neural network cannot help. The solution for such problems may be a neural network which recursively feeds its previous outputs back in. This kind of network is called a recurrent neural network, RNN; it was introduced by John Hopfield in the 1980s, and later popularized when the back-propagation algorithm was improved in the beginning of the 1990s. A simple concept of the recurrent neural network is shown in the following image.

[Image: concept of the recurrent neural network]

The current output of the recurrent network is defined by the current input xt, and also by states related to the previous network outputs ht-1, ht-2, …

The concept of the recurrent neural network is simple and easy to implement, but a problem arises during the training phase due to unpredictable gradient behavior. The gradient problems of neural network training can be summarized in two categories: the vanishing and the exploding gradient.

The recurrent neural network is trained with a back-propagation algorithm specially developed for recurrent ANNs, called back-propagation through time, BPTT. In the vanishing gradient problem, the parameter updates are proportional to the gradient of the error, which is in most cases negligibly small, with the result that the corresponding weights stay nearly constant and stop the network from further training.

On the other hand, the exploding gradient problem refers to the opposite behavior, where the updates of the weights (the gradient of the cost function) become very large in each back-propagation step. This problem is caused by the explosion of the long-term components in the recurrent neural network.

The solution to the above problems is a specific design of the recurrent network called Long Short-Term Memory, LSTM. One of the main advantages of the LSTM is that it can provide a constant error flow. In order to provide a constant error flow, the LSTM cell contains a set of memory blocks which have the ability to store the temporal state of the network. The LSTM also has special multiplicative units called gates that control the information flow.

The LSTM cell consists of:

  • input gate – controls the flow of the input activations into the memory cell,
  • output gate – controls the output flow of the cell activations,
  • forget gate – filters the information from the input and the previous output, and decides which parts should be remembered, and which forgotten and dropped out.

Besides the three gates, the LSTM cell contains the cell update, which is usually a tanh layer that computes the candidate values for the cell state.

In each LSTM cell, three variables come into the cell:

  • the current input xt,
  • previous output ht-1 and
  • previous cell state ct-1.

On the other hand, two variables come out of each LSTM cell:

  • the current output ht and
  • the current cell state ct.

A graphical representation of the LSTM cell is shown in the following image.

[Image: graphical representation of the LSTM cell]

In order to implement the LSTM recurrent network, the LSTM cell should be implemented first. The LSTM cell has three gates and two internal states, which should be determined in order to calculate the current output and the current cell state.

The LSTM cell can be defined as a neural network where the input vector xt in time t maps to the output vector ht through the calculation of the following layers:

  • the forget gate sigmoid layer for the time t, ft, is calculated from the previous output ht-1, the input vector xt, and the matrix of weights of the forget layer Wf, with the addition of the corresponding bias bf:

f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)

  • the input gate sigmoid layer for the time t, it, is calculated from the previous output ht-1, the input vector xt, and the matrix of weights of the input layer Wi, with the addition of the corresponding bias bi:

i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)

  • the cell state in time t, ct, is calculated from the forget gate ft and the previous cell state ct-1, summed with the product of the input gate it and the cell update state c̃t, which is a tanh layer calculated from the previous output ht-1, the input vector xt, and the weight matrix for the cell Wc, with the addition of the corresponding bias bc:

\widetilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)

c_t = f_t \otimes c_{t-1} + i_t \otimes \widetilde{c}_t

  • the output gate sigmoid layer for the time t, ot, is calculated from the previous output ht-1, the input vector xt, and the matrix of weights of the output layer Wo, with the addition of the corresponding bias bo:

o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)

The final stage of the LSTM cell is the calculation of the current output ht. The current output is calculated with the element-wise multiplication \otimes between the output gate layer and the tanh of the current cell state ct:

h_t = o_t \otimes \tanh(c_t)

The current output ht is passed through the network as the previous output for the next LSTM step, or as the input for the neural network output layer.
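
Before moving to CNTK, it may help to see the above equations in plain code. The following minimal sketch is independent of CNTK and of the ANNdotNET implementation; the method and parameter names, and the use of simple arrays, are illustrative assumptions. The W matrices (m×n) act on the input, the U matrices (m×m) act on the previous output, and the b vectors are biases:

C#
using System;

public static class LstmStepSketch
{
    // One LSTM time step: returns (h_t, c_t) computed from x_t, h_{t-1} and c_{t-1}
    public static (double[] h, double[] c) Step(
        double[] x, double[] hPrev, double[] cPrev,
        double[,] Wf, double[,] Uf, double[] bf,   // forget gate
        double[,] Wi, double[,] Ui, double[] bi,   // input gate
        double[,] Wc, double[,] Uc, double[] bc,   // cell update
        double[,] Wo, double[,] Uo, double[] bo)   // output gate
    {
        int m = hPrev.Length;
        var h = new double[m];
        var c = new double[m];
        for (int j = 0; j < m; j++)
        {
            double ft = Sigmoid(Dot(Wf, x, j) + Dot(Uf, hPrev, j) + bf[j]);
            double it = Sigmoid(Dot(Wi, x, j) + Dot(Ui, hPrev, j) + bi[j]);
            double cu = Math.Tanh(Dot(Wc, x, j) + Dot(Uc, hPrev, j) + bc[j]);
            double ot = Sigmoid(Dot(Wo, x, j) + Dot(Uo, hPrev, j) + bo[j]);

            c[j] = ft * cPrev[j] + it * cu;  // c_t = f_t ⊗ c_{t-1} + i_t ⊗ c̃_t
            h[j] = ot * Math.Tanh(c[j]);     // h_t = o_t ⊗ tanh(c_t)
        }
        return (h, c);
    }

    static double Sigmoid(double z) => 1.0 / (1.0 + Math.Exp(-z));

    // Row 'row' of matrix W multiplied by vector v
    static double Dot(double[,] W, double[] v, int row)
    {
        double s = 0.0;
        for (int k = 0; k < v.Length; k++) s += W[row, k] * v[k];
        return s;
    }
}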

LSTM with Peephole Connection

One of the LSTM variants, implemented in the Python-based CNTK, is the LSTM with peephole connections, which was first introduced by Gers & Schmidhuber (2000). A peephole connection lets each gate (forget, input and output) look at the cell state.

[Image: the LSTM cell with peephole connections]

Now the gates with peephole connections can be expressed so that the standard terms of each gate are extended with an additional peephole weight V applied to the cell state. So, the forget gate with peephole can be expressed as:

f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + V_f \otimes c_{t-1} + b_f)

Similarly, the input gate and the output gate with peephole connections are expressed as:

i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + V_i \otimes c_{t-1} + b_i),

o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + V_o \otimes c_t + b_o).

With peephole connections, the LSTM cell gets an additional weight vector for each gate, and the number of LSTM parameters is increased by an additional 3·m parameters, where m is the output dimension.

Implementation of LSTM Recurrent Network

CNTK is Microsoft's open source library for deep learning, written in C++, but it can be used from various programming languages: Python, C#, R, Java. In order to use the library in C#, the CNTK related NuGet package has to be installed, and the project must be built for the 64-bit architecture.

  1. So, open Visual Studio 2017 and create a simple .NET Core console application.
  2. Then install the CNTK GPU NuGet package into the newly created console application.

[Image: installing the CNTK GPU NuGet package]

Once the startup project is created, the LSTM CNTK implementation can be started.
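
The code snippets in the rest of the post reference a device object, which is not shown in the snippets themselves. As a minimal sketch, it can be created from the default CNTK device (GPU if available, otherwise CPU):

C#
using CNTK;

//select the default computation device
var device = DeviceDescriptor.UseDefaultDevice();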

Implementation of the LSTM Cell

As stated previously, the implementation presented in this blog post originally comes from ANNdotNET – an open source project for deep learning on the .NET platform. It can be found on the official GitHub project page.

The LSTM recurrent network starts with the implementation of the LSTMCell class. The LSTMCell class is derived from the NetworkFoundation class, which implements the basic neural network operations through the following methods:

  • Bias – implementation of the bias parameters
  • Weights – implementation of the weight parameters
  • Layer – implementation of the classic fully connected linear layer
  • AFunction – applies the activation function to the layer

The NetworkFoundation class is shown in the next code snippet:

C#
///////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
// This code is free software under the MIT License
// See license section of https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network
{
public class NetworkFoundation
{

public Variable Layer(Variable x, int outDim, DataType dataType, 
                      DeviceDescriptor device, uint seed = 1 , string name="")
{
    var b = Bias(outDim, dataType, device);         
    var W = Weights(outDim, dataType, device, seed, name);

    var Wx = CNTKLib.Times(W, x, name+"_wx");
    var l = CNTKLib.Plus(b,Wx, name);

    return l;
}

public Parameter Bias(int nDimension, DataType dataType, DeviceDescriptor device)
{
    //initial value
    var initValue = 0.01;
    NDShape shape = new int[] { nDimension };
    var b = new Parameter(shape, dataType, initValue, device, "_b");
    //
    return b;
}

public Parameter Weights(int nDimension, DataType dataType, 
                         DeviceDescriptor device, uint seed = 1, string name = "")
{
    //initializer of parameter
    var glorotI = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);
    //create shape; the second dimension is partially known (inferred from the input)
    NDShape shape = new int[] { nDimension, NDShape.InferredDimension };
    var w = new Parameter(shape, dataType, glorotI, device, name=="" ? "_w" : name);
    //
    return w;
}

public Function AFunction(Variable x, Activation activation, string outputName="")
{
    switch (activation)
    {
        default:
        case Activation.None:
            return x;
        case Activation.ReLU:
            return CNTKLib.ReLU(x, outputName);
        case Activation.Softmax:
            return CNTKLib.Softmax(x, outputName);
        case Activation.Tanh:
            return CNTKLib.Tanh(x, outputName);
    }
}
}}
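
As a quick illustration of how these building blocks compose (a sketch only; the input dimension, layer size and names are assumptions, not part of ANNdotNET), a fully connected tanh layer could be created like this:

C#
var device = DeviceDescriptor.UseDefaultDevice();
var net = new NetworkFoundation();

//3-dimensional input mapped to a 5-dimensional fully connected layer
Variable x = Variable.InputVariable(new int[] { 3 }, DataType.Float, "features");
var dense = net.Layer(x, 5, DataType.Float, device, 1, "dense1");
var denseAct = net.AFunction(dense, Activation.Tanh, "dense1_tanh");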

These methods implement basic neural network building blocks which can be applied to any network type. Once the NetworkFoundation base class is implemented, the implementation of the LSTM cell class starts by defining three properties and a custom constructor, as shown in the following code snippet:

C#
///////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
// This code is free software under the MIT License
// See license section of https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network.Modules
{
public class LSTM : NetworkFoundation
{
    public Variable X { get; set; } //LSTM Cell Input
    public Function H { get; set; } //LSTM Cell Output
    public Function C { get; set; } //LSTM Cell State

public LSTM(Variable input, Variable dh, Variable dc, DataType dataType, 
            Activation actFun, bool usePeephole, bool useStabilizer, uint seed, 
            DeviceDescriptor device)
{
    //create cell state
    var c = CellState(input, dh, dc, dataType, actFun, usePeephole, 
                      useStabilizer, device, ref seed);

    //create output from input and cell state
    var h = CellOutput(input, dh, c, dataType, device, useStabilizer, 
                       usePeephole, actFun, ref seed);

    //initialize properties
    X = input;
    H = h;
    C = c;
}

Properties X, H and C hold the current values of the LSTM cell once the LSTM object is created. The LSTM constructor takes several arguments:

  • the first three are variables for the input, previous output and previous cell state
  • the activation function of the cell update layer

The constructor also contains two arguments for creating different LSTM variants: peephole connections and self-stabilization, as well as a few other self-explanatory arguments. The LSTM constructor creates the cell state and the cell output by calling the CellState and CellOutput methods respectively. The implementation of those methods is shown in the next code snippet:

C#
public Function CellState(Variable x, Variable ht_1, Variable ct_1, DataType dataType, 
    Activation activationFun, bool usePeephole, bool useStabilizer, 
                              DeviceDescriptor device, ref uint seed)
{
    var ft = AGate(x, ht_1, ct_1, dataType, usePeephole, useStabilizer, 
                   device, ref seed, "ForgetGate");
    var it = AGate(x, ht_1, ct_1, dataType, usePeephole, 
                   useStabilizer, device, ref seed, "InputGate");
    var tan = Gate(x, ht_1, ct_1.Shape[0], dataType, device, ref seed);

    //apply tanh (or another activation) to the cell update layer
    var tanH = AFunction(tan, activationFun, "TanHCt_1" );

    //calculate cell state
    var bft = CNTKLib.ElementTimes(ft, ct_1,"ftct_1");
    var bit = CNTKLib.ElementTimes(it, tanH, "ittanH");

    //cell state
    var ct = CNTKLib.Plus(bft, bit, "CellState");
    //
    return ct;
}

public Function CellOutput(Variable input, Variable ht_1, Variable ct, 
                           DataType dataType, DeviceDescriptor device, 
    bool useStabilizer, bool usePeephole, Activation actFun ,ref uint seed)
{
    var ot = AGate(input, ht_1, ct, dataType, usePeephole, useStabilizer, 
                   device, ref seed, "OutputGate");

    //apply activation function to cell state
    var tanHCt = AFunction(ct, actFun, "TanHCt");

    //calculate output
    var ht = CNTKLib.ElementTimes(ot, tanHCt,"Output");

    //create an output layer in case of different dimensions between the cell and the output
    var c = ct;
    Function h = null;
    if (ht.Shape[0] != ct.Shape[0])
    {
        //rectify dimensions by adding a linear layer
        var so = !useStabilizer? ct : Stabilizer(ct, device);
        var wx_b = Weights(ht_1.Shape[0], dataType, device, seed++);
        h = wx_b * so;
    }
    else
        h = ht;

    return h;
}

The above methods are implemented using the previously defined gates and blocks. The AGate method creates an LSTM gate. It is called twice within CellState, in order to create the forget and input gates, and once more within CellOutput, in order to create the output gate. Then the Gate method is called in order to create the linear layer for the cell update. The activation function is provided as a constructor argument. The implementation of the AGate and Gate methods is shown in the following code snippet:

C#
public Variable AGate(Variable x, Variable ht_1, Variable ct_1, 
                      DataType dataType, bool usePeephole,
    bool useStabilizer, DeviceDescriptor device, ref uint seed, string name)
{
    //cell dimension
    int cellDim = ct_1.Shape[0];
    //define previous output, with stabilization if enabled
    var h_prev = !useStabilizer ? ht_1 : Stabilizer(ht_1, device);

    //create linear gate
    var gate = Gate(x, h_prev, cellDim, dataType, device, ref seed);
    if (usePeephole)
    {
        var c_prev = !useStabilizer ? ct_1 : Stabilizer(ct_1, device);
        gate = gate + Peep(c_prev, dataType, device, ref seed);
    }
    //apply the sigmoid activation to create the gate
    var sgate = CNTKLib.Sigmoid(gate, name);
    return sgate;
}

private Variable Gate(Variable x, Variable hPrev, int cellDim,
                            DataType dataType, DeviceDescriptor device, ref uint seed)
{
    //create linear layer
    var xw_b = Layer(x, cellDim, dataType, device, seed++);
    var u = Weights(cellDim, dataType, device, seed++,"_u");
    //
    var gate = xw_b + (u * hPrev);
    return gate;
}

As can be seen, AGate calls the Gate method in order to create the linear layer, and then applies the sigmoid activation function.
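
In equation form, Gate and AGate together compute the following expression, where the peephole term V \otimes c is included only when usePeephole is set (c being the previous cell state for the forget and input gates, and the current cell state for the output gate):

g_t = \sigma(W \cdot x_t + b + U \cdot h_{t-1} + V \otimes c)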

In order to create the LSTM variant with peephole connections, as well as the LSTM with self-stabilization, two additional methods are implemented. The peephole connection was explained previously. The implementation of the Stabilizer method is based on the implementation found in the C# examples on the CNTK GitHub page, with minor modifications and refactoring.

C#
internal Variable Stabilizer(Variable x, DeviceDescriptor device)
{
    //scalar constant f = 4
    var f = Constant.Scalar(4.0f, device);

    //inverse of f: 1/f
    var fInv = Constant.Scalar(f.DataType, 1.0 / 4.0f);

    //initial value 1/f * ln(e^f - 1), so that the stabilizer initially scales by ~1
    double initValue = 0.99537863;

    //create param with initial value
    var param = new Parameter(new NDShape(), f.DataType, initValue, device, "_stabilize");

    //make exp of product scalar and parameter
    var expValue = CNTKLib.Exp(CNTKLib.ElementTimes(f, param));

    //
    var cost = Constant.Scalar(f.DataType, 1.0) + expValue;

    var log = CNTKLib.Log(cost);

    var beta = CNTKLib.ElementTimes(fInv, log);

    //multiplication of the variable layer with constant scalar beta
    var finalValue = CNTKLib.ElementTimes(beta, x);

    return finalValue;
}

internal Function Peep(Variable cstate, DataType dataType, 
                       DeviceDescriptor device, ref uint seed)
{
    //initial value
    var initValue = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);

    //create shape matching the cell state dimension
    NDShape shape = new int[] { cstate.Shape[0] };

    var bf = new Parameter(shape, dataType, initValue, device, "_peep");

    var peep = CNTKLib.ElementTimes(bf, cstate);
    return peep;
}

The Peep method, following the earlier description, simply adds an additional set of parameters which multiplies the cell state, and the result is added into the gates.

Implementation of the LSTM Recurrent Network

Once we have the LSTM cell implementation, it is easy to implement a recurrent network based on LSTM. As defined previously, the LSTM takes three input variables: the input and the two previous-state variables. Those previous states should be defined not as real variables but as placeholders, which are replaced dynamically for each iteration. So, the recurrent network starts by defining placeholders for the previous output and the previous cell state. Then the LSTM cell object is created. Once the LSTM is created, the placeholders are replaced with the past values of the cell's output and state, obtained by calling the CNTK method PastValue. At the end, the method returns a CNTK Function object covering one of two cases, controlled by the returnSequence argument:

  • the first case, where the method returns the full sequence
  • the second case, where the method returns only the last element of the sequence
C#
/////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform                                      
// Copyright 2017-2018 Bahrudin Hrnjica                                                 
//
// This code is free software under the MIT License
// See license section of  https://github.com/bhrnjica/anndotnet/blob/master/LICENSE.md
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//
////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;
using NNetwork.Core.Network.Modules;
using System;
using System.Collections.Generic;

namespace NNetwork.Core.Network
{
public class RNN
{
public static Function RecurrenceLSTM(Variable input, int outputDim, 
     int cellDim, DataType dataType, DeviceDescriptor device, bool returnSequence=false,
    Activation actFun = Activation.Tanh, bool usePeephole = true, 
                        bool useStabilizer = true, uint seed = 1)
{
    if (outputDim <= 0 || cellDim <= 0)
        throw new Exception("Dimension of LSTM cell cannot be zero.");
    //prepare output and cell dimensions 
    NDShape hShape = new int[] { outputDim };
    NDShape cShape = new int[] { cellDim };

    //create placeholders
    //Define previous output and previous cell state as placeholder 
    //which will be replaced with past values later
    var dh = Variable.PlaceholderVariable(hShape, input.DynamicAxes);
    var dc = Variable.PlaceholderVariable(cShape, input.DynamicAxes);

    //create lstm cell
    var lstmCell = new LSTM(input, dh, dc, dataType, actFun, 
                            usePeephole, useStabilizer, seed, device);

    //get actual values of output and cell state
    var actualDh = CNTKLib.PastValue(lstmCell.H);
    var actualDc = CNTKLib.PastValue(lstmCell.C);

    // Form the recurrence loop by replacing the dh and dc placeholders 
    // with the actualDh and actualDc
    lstmCell.H.ReplacePlaceholders(new Dictionary<Variable, 
                          Variable> { { dh, actualDh }, { dc, actualDc } });

    //return value depending of type of LSTM layer
    if (returnSequence)
        return lstmCell.H;
    else
        return CNTKLib.SequenceLast(lstmCell.H); 
}
}}

As can be seen, the RNN class contains only one static method, which returns the CNTK Function object containing the recurrent network with the LSTM cell. The method takes several arguments: the input variable, the dimension of the output of the recurrent network, the dimension of the LSTM cell, and additional arguments for creating different variants of the LSTM cell.
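
As an illustration of calling the method (a sketch; the dimensions and names are assumptions), an LSTM layer over 3-dimensional input sequences, with output and cell dimension of 4, returning only the last element of the output sequence, could be created like this:

C#
var device = DeviceDescriptor.UseDefaultDevice();

//sequence input; InputVariable creates the batch and sequence dynamic axes by default
Variable features = Variable.InputVariable(new int[] { 3 }, DataType.Float, "features");

//LSTM recurrent layer
var lstm = RNN.RecurrenceLSTM(features, 4, 4, DataType.Float, device, returnSequence: false);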

Implementation of Test Application

Now that the full LSTM based recurrent network is implemented, we are going to provide a test application that checks the basic LSTM functionality. The application contains two test methods in order to check:

  • the number of LSTM parameters, and
  • the output and cell state values of the LSTM cell for two iterations

Testing the Correct Number of the Parameters

The first method validates the correct number of LSTM parameters. The LSTM cell has three kinds of parameters, W, U and b, for each of the LSTM components: the forget, input and output gates, and the cell update.
Let's assume the input dimension is n and the output dimension is m. Also, let's assume that the dimension of the cell is equal to the output dimension. We can define the following matrices:

  • W matrix with dimensions of mxn, applied to the input x
  • U matrix with dimensions of mxm, applied to the previous output h
  • b matrix (vector) with dimensions 1xm

In total, the LSTM has P_{LSTM} = 4 \cdot (m^2 + m \cdot n + m) parameters.

In case the LSTM has peephole connections, the number of parameters is increased by an additional peephole vector V of 1xm parameters per gate.

In total, the LSTM with peephole connections has P_{LSTM} = 4 \cdot (m^2 + m \cdot n + m) + 3 \cdot m. The test method is implemented for n=2 and m=3, so the total number of parameters for the default LSTM cell is P = 4·(9+6+3) = 4·18 = 72. With peephole connections, the LSTM cell has P = 4·(9+6+3) + 3·3 = 72 + 9 = 81.
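
The same arithmetic can be expressed as a tiny helper (a sketch; the method name is illustrative):

C#
//expected number of LSTM parameters for input dimension n and output/cell dimension m
static int LstmParamCount(int n, int m, bool usePeephole) =>
    4 * (m * m + m * n + m) + (usePeephole ? 3 * m : 0);

//LstmParamCount(2, 3, false) == 72,  LstmParamCount(2, 3, true) == 81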

In case the LSTM cell is defined with the self-stabilization option, an additional scalar parameter is created for each stabilized input: one for the previous output in each of the three gates, plus one for the previous cell state in each gate when peephole connections are enabled.

Now that we have defined the number of parameters for the pure LSTM, and for the variants with peephole connections and self-stabilization, we can implement the test method based on n=2 and m=3:

C#
[TestMethod]
public void LSTM_Test_Params_Count()
{
    //define values, and variables
    Variable x = Variable.InputVariable(new int[] { 2 }, DataType.Float, "input");
    Variable y = Variable.InputVariable(new int[] { 3 }, DataType.Float, "output");

    //Number of LSTM parameters
    var lstm1 = RNN.RecurrenceLSTM(x, 3, 3, DataType.Float, device, Activation.Tanh, true, true, 1);

    var ft = lstm1.Inputs.Where(l=>l.Uid.StartsWith("Parameter")).ToList();
    var consts = lstm1.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
    var inp = lstm1.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

    //bias params
    var bs = ft.Where(p=>p.Name.Contains("_b")).ToList();
    var totalBs = bs.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalBs,12);
    //weights
    var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
    var totalWs = ws.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalWs, 24);
    //update (recurrent) weights
    var us = ft.Where(p => p.Name.Contains("_u")).ToList();
    var totalUs = us.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalUs, 36);

    //self-stabilization params
    var st = ft.Where(p => p.Name.Contains("_stabilize")).ToList();
    var totalst = st.Sum(v => v.Shape.TotalSize);
    //peephole params
    var ph = ft.Where(p => p.Name.Contains("_peep")).ToList();
    var totalPh = ph.Sum(v => v.Shape.TotalSize);

    var totalOnly = totalBs + totalWs + totalUs;
    var totalWithStabilize = totalOnly + totalst;
    var totalWithPeep = totalOnly + totalPh;

    var totalP = totalOnly + totalst + totalPh;
    var totalParams = ft.Sum(v => v.Shape.TotalSize);
    Assert.AreEqual(totalP, totalParams);
}

Testing the Output and Cell State Values

In this test, the network parameters, the input, and the previous output and cell states are set up. The test checks whether the LSTM cell returns the correct output and cell state values for the first and the second iteration. The implementation of this test is shown in the following code snippet:

C#
[TestMethod]
public void LSTM_Test_WeightsValues()
{
    //define values, and variables
    Variable x = Variable.InputVariable(new int[] { 2 }, DataType.Float, "input");
    Variable y = Variable.InputVariable(new int[] { 3 }, DataType.Float, "output");

    //data 01
    var x1Values = Value.CreateBatch<float>(new NDShape(1, 2), 
                   new float[] { 1f, 2f }, device);
    var ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), 
                     new float[] { 0f, 0f, 0f }, device);
    var ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), 
                     new float[] { 0f, 0f, 0f }, device);

    var y1Values = Value.CreateBatch<float>(new NDShape(1, 3), 
                   new float[] { 0.0629f, 0.0878f, 0.1143f }, device);

    //data 02
    var x2Values = Value.CreateBatch<float>(new NDShape(1, 2), 
                   new float[] { 3f, 4f }, device);
    var y2Values = Value.CreateBatch<float>(new NDShape(1, 3), 
                   new float[] { 0.1282f, 0.2066f, 0.2883f }, device);

    //Create LSTM Cell with predefined previous output and prev cell state
    Variable ht_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevOutput");
    Variable ct_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevCellState");
    var lstmCell = new LSTM(x, ht_1, ct_1, DataType.Float, 
                   Activation.Tanh, false, false, 1, device);            

    var ft = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Parameter")).ToList();
    var pCount = ft.Sum(p => p.Shape.TotalSize);
    var consts = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
    var inp = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

    //bias params
    var bs = ft.Where(p => p.Name.Contains("_b")).ToList();
    var pa = new Parameter(bs[0]);
    pa.SetValue(new NDArrayView(pa.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa1 = new Parameter(bs[1]);
    pa1.SetValue(new NDArrayView(pa1.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa2 = new Parameter(bs[2]);
    pa2.SetValue(new NDArrayView(pa2.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
    var pa3 = new Parameter(bs[3]);
    pa3.SetValue(new NDArrayView(pa3.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
            
    //set value to weights parameters
    var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
    var ws0 = new Parameter(ws[0]);
    var ws1 = new Parameter(ws[1]);
    var ws2 = new Parameter(ws[2]);
    var ws3 = new Parameter(ws[3]);
    (ws0).SetValue(new NDArrayView(ws0.Shape, 
         new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws1).SetValue(new NDArrayView(ws1.Shape, 
         new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws2).SetValue(new NDArrayView(ws2.Shape, 
         new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
    (ws3).SetValue(new NDArrayView(ws3.Shape, 
         new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
            
    //set value to update parameters
    var us = ft.Where(p => p.Name.Contains("_u")).ToList();
    var us0 = new Parameter(us[0]);
    var us1 = new Parameter(us[1]);
    var us2 = new Parameter(us[2]);
    var us3 = new Parameter(us[3]);
    (us0).SetValue(new NDArrayView(us0.Shape, 
      new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us1).SetValue(new NDArrayView(us1.Shape, 
      new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us2).SetValue(new NDArrayView(us2.Shape, 
      new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
    (us3).SetValue(new NDArrayView(us3.Shape, 
      new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));

    //evaluate the model after the weights are set up
    var inV = new Dictionary<Variable, Value>();
    inV.Add(x, x1Values);
    inV.Add(ht_1, ht_1Values);
    inV.Add(ct_1, ct_1Values);

    //evaluate output when previous values are zero
    var outV11 = new Dictionary<Variable, Value>();
    outV11.Add(lstmCell.H, null);
    lstmCell.H.Evaluate(inV, outV11, device);
            
    //test  result values
    var result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
    Assert.AreEqual(result[0][0], 0.06286034f);//
    Assert.AreEqual(result[0][1], 0.0878196657f);//
    Assert.AreEqual(result[0][2], 0.114274308f);//

    //evaluate cell state
    var outV = new Dictionary<Variable, Value>();
    outV.Add(lstmCell.C, null);
    lstmCell.C.Evaluate(inV, outV, device);

    var resultc = outV[lstmCell.C].GetDenseData<float>(lstmCell.C);
    Assert.AreEqual(resultc[0][0], 0.114309229f);//
    Assert.AreEqual(resultc[0][1], 0.15543206f);//
    Assert.AreEqual(resultc[0][2], 0.197323829f);//

    //evaluate second value, with previous values as previous state
    //setup previous state and output
    ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), 
          new float[] { resultc[0][0], resultc[0][1], resultc[0][2] }, device);
    ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), 
          new float[] { result[0][0], result[0][1], result[0][2] }, device);

    //Prepare for the evaluation
    inV = new Dictionary<Variable, Value>();
    inV.Add(x, x2Values);
    inV.Add(ht_1, ht_1Values);
    inV.Add(ct_1, ct_1Values);

    outV11 = new Dictionary<Variable, Value>();
    outV11.Add(lstmCell.H, null);
    lstmCell.H.Evaluate(inV, outV11, device);

    //test  result values
    result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
    Assert.AreEqual(result[0][0], 0.128203377f);//
    Assert.AreEqual(result[0][1], 0.206633776f);//
    Assert.AreEqual(result[0][2], 0.288335562f);//

    //evaluate cell state
    outV = new Dictionary<Variable, Value>();
    outV.Add(lstmCell.C, null);
    lstmCell.C.Evaluate(inV, outV, device);

    //evaluate cell state with previous value
    resultc = outV[lstmCell.C].GetDenseData<float>(lstmCell.C);
    Assert.AreEqual(resultc[0][0], 0.227831185f);//
    Assert.AreEqual(resultc[0][1], 0.3523231f);//
    Assert.AreEqual(resultc[0][2], 0.4789199f);//
}

In this article, the implementation of the LSTM cell was presented in detail, from the theory to the implementation. The article also contains two test methods which prove the correctness of the implementation: the resulting values of the output and the cell states are compared with manually calculated values.

History

  • 22nd August, 2019: Initial version
This article was originally posted at https://bhrnjica.net?p=7598

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


