Long Short-Term Memory Network Tutorials and Examples for Beginners
Long Short-Term Memory Network (LSTM) was first introduced by Hochreiter and Schmidhuber in 1997. It is a variant of the RNN and contains three gates: a forget gate, an input gate and an output gate.
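To make the three gates concrete, here is a minimal NumPy sketch of one classic LSTM cell step. The concatenated weight layout `(input_dim + hidden_dim, 4 * hidden_dim)` is an assumption for illustration (many frameworks store the four gate matrices this way), not a specific library's format.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a classic LSTM cell.

    W concatenates the four gate weight matrices:
    shape (input_dim + hidden_dim, 4 * hidden_dim).
    """
    z = np.concatenate([x, h_prev], axis=-1) @ W + b
    zi, zg, zf, zo = np.split(z, 4, axis=-1)
    i = sigmoid(zi)          # input gate: how much new content to write
    f = sigmoid(zf)          # forget gate: how much old cell state to keep
    o = sigmoid(zo)          # output gate: how much cell state to expose
    g = np.tanh(zg)          # candidate cell content
    c = f * c_prev + i * g   # new cell state
    h = o * np.tanh(c)       # new hidden state
    return h, c
```

The forget gate controls the flow from `c_prev` into `c`, which is what lets LSTMs carry information across long time spans.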
On this page, we collect tutorials and examples on Long Short-Term Memory Networks; you can learn how to use this network by following them.
In order to improve the performance of an LSTM model in deep learning, we can use a stacked LSTM. In this tutorial, we will introduce the stacked LSTM for deep learning beginners.
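The core idea of a stacked LSTM is that each layer's hidden-state sequence becomes the input sequence of the next layer. A minimal NumPy sketch (the per-layer `(W, b)` parameter list is an illustrative assumption, not any framework's API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    zi, zg, zf, zo = np.split(np.concatenate([x, h], -1) @ W + b, 4, -1)
    c = sigmoid(zf) * c + sigmoid(zi) * np.tanh(zg)
    h = sigmoid(zo) * np.tanh(c)
    return h, c

def stacked_lstm(seq, params):
    """Run a stack of LSTM layers over one sequence.

    seq: (time, input_dim); params: list of (W, b), one pair per layer.
    Layer k's hidden-state sequence is layer k+1's input sequence.
    """
    inputs = seq
    for W, b in params:
        hidden = W.shape[1] // 4
        h = np.zeros(hidden)
        c = np.zeros(hidden)
        outputs = []
        for x in inputs:
            h, c = lstm_step(x, h, c, W, b)
            outputs.append(h)
        inputs = np.stack(outputs)   # feed the next layer
    return inputs                    # hidden states of the top layer
```

Deeper stacks let later layers operate on increasingly abstract summaries of the sequence, which is where the performance gain usually comes from.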
To improve LSTM and BiLSTM, you should implement them with your own TensorFlow code. In this tutorial, we will discuss why the performance of your custom LSTM or BiLSTM model is worse than tf.nn.dynamic_rnn() and tf.nn.bidirectional_dynamic_rnn().
In this tutorial, we will introduce how tf.nn.bidirectional_dynamic_rnn() processes variable-length sequences, which is very useful for understanding this function and building your own custom model.
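The key behavior of TensorFlow's `sequence_length` handling is that, for each example, steps at or past its true length emit zero outputs while the cell state stops updating (the last real state is copied through). A NumPy sketch of that behavior for the forward direction, assuming batch-major `(batch, time, features)` input:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    zi, zg, zf, zo = np.split(np.concatenate([x, h], -1) @ W + b, 4, -1)
    c = sigmoid(zf) * c + sigmoid(zi) * np.tanh(zg)
    return sigmoid(zo) * np.tanh(c), c

def dynamic_rnn_like(seq, lengths, W, b):
    """Mimic sequence_length handling: past an example's length,
    outputs are zero and the state is frozen at its last real value."""
    batch, time, _ = seq.shape
    hidden = W.shape[1] // 4
    h = np.zeros((batch, hidden))
    c = np.zeros((batch, hidden))
    outputs = np.zeros((batch, time, hidden))
    for t in range(time):
        h_new, c_new = lstm_step(seq[:, t], h, c, W, b)
        alive = (t < lengths)[:, None]       # which examples still have real data
        outputs[:, t] = np.where(alive, h_new, 0.0)
        h = np.where(alive, h_new, h)        # freeze state after the last real step
        c = np.where(alive, c_new, c)
    return outputs, (h, c)
```

This matters for the backward direction too: the reversed pass must start from each example's own last real step, not from the padded end, or the final states are computed from padding.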
The Nested LSTM network is an improved LSTM model with better performance than the classic LSTM. In this tutorial, we will introduce it for LSTM network beginners.
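As a rough sketch of the idea (my reading of the Nested LSTM formulation, so treat the details as an assumption): the outer cell's additive state update is replaced by an inner LSTM, where `f * c_prev` plays the role of the inner cell's previous hidden state, `i * g` plays the role of its input, and the inner cell's new hidden state becomes the outer cell state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gates(x, h, W, b):
    zi, zg, zf, zo = np.split(np.concatenate([x, h], -1) @ W + b, 4, -1)
    return sigmoid(zi), np.tanh(zg), sigmoid(zf), sigmoid(zo)

def nested_lstm_step(x, h, c_outer, c_inner, W_out, b_out, W_in, b_in):
    """One Nested LSTM step (sketch): the outer cell-state update
    is itself computed by an inner LSTM cell."""
    i, g, f, o = gates(x, h, W_out, b_out)
    inner_h_prev = f * c_outer       # outer "keep" term -> inner hidden state
    inner_x = i * g                  # outer "write" term -> inner input
    ii, gg, ff, oo = gates(inner_x, inner_h_prev, W_in, b_in)
    c_inner = ff * c_inner + ii * gg
    c_outer = oo * np.tanh(c_inner)  # inner hidden state -> outer cell state
    h = o * np.tanh(c_outer)
    return h, c_outer, c_inner
```

The inner cell gives the memory an extra level of gating, which is where the improvement over the classic LSTM is claimed to come from.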
LSTM peephole connections are one of the improvements to the classic LSTM network. In this tutorial, we will introduce the difference between an LSTM with peephole connections and the classic LSTM.
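The difference is that the gates also "peek" at the cell state: the input and forget gates see the previous cell state, and the output gate sees the new one, through element-wise peephole weight vectors. A NumPy sketch of the standard formulation (the parameter names `p_i`, `p_f`, `p_o` are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peephole_lstm_step(x, h, c, W, b, p_i, p_f, p_o):
    """LSTM step with peephole connections: gate pre-activations
    additionally receive the cell state via element-wise weights."""
    zi, zg, zf, zo = np.split(np.concatenate([x, h], -1) @ W + b, 4, -1)
    i = sigmoid(zi + p_i * c)      # peephole into input gate (sees c_{t-1})
    f = sigmoid(zf + p_f * c)      # peephole into forget gate (sees c_{t-1})
    g = np.tanh(zg)
    c = f * c + i * g
    o = sigmoid(zo + p_o * c)      # peephole into output gate (sees c_t)
    h = o * np.tanh(c)
    return h, c
```

Setting `p_i`, `p_f` and `p_o` to zero recovers the classic LSTM, which makes the two variants easy to compare in one implementation.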
In this tutorial, we discuss how LSTM weights and biases are initialized when the initializer is None in TensorFlow. We can modify our custom LSTM to make its performance match TensorFlow's LSTM network.
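As a sketch of what matching that default would look like (assuming, as in TensorFlow 1.x, a Glorot/Xavier-uniform kernel when no initializer is given, a zero bias, and a `forget_bias` of 1.0 added to the forget-gate logits at run time):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    """Glorot/Xavier uniform: U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def init_lstm_params(input_dim, hidden, rng, forget_bias=1.0):
    """Sketch of TF-style default LSTM initialization (an assumption,
    not the library's exact code): Glorot-uniform kernel of shape
    (input_dim + hidden, 4 * hidden) and a zero bias. The forget_bias
    is added to the forget-gate logits during the forward pass rather
    than being baked into the stored bias vector."""
    W = glorot_uniform(input_dim + hidden, 4 * hidden, rng)
    b = np.zeros(4 * hidden)
    return W, b, forget_bias
```

Matching both the initializer and the run-time `forget_bias` is usually what closes the gap between a custom cell and the built-in one at the start of training.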