Sequence models enable various kinds of sequence prediction that are inherently different from more traditional predictive modeling or supervised learning approaches. In contrast to the mathematical sets often used elsewhere, a ‘sequence’ model imposes an explicit order on the input/output data that must be preserved during training and/or inference. These models are driven by application goals and include sequence prediction, sequence classification, sequence generation, and sequence-to-sequence prediction. All of these can be developed using so-called Long Short-Term Memory (LSTM) models. Please refer to our article on LSTM Neural Networks for more information.
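The four task types mentioned above can be distinguished purely by the shape of their inputs and outputs. The following sketch is only an illustration; the toy sequence and labels are made up for this purpose:

```python
# A toy ordered sequence; all data here is hypothetical.
sequence = [1, 2, 3, 4, 5]

# Sequence prediction: ordered input -> the next element(s)
prediction = {"input": sequence, "output": 6}

# Sequence classification: ordered input -> a single class label
classification = {"input": sequence, "output": "increasing"}

# Sequence generation: a seed (or nothing) -> a new ordered sequence
generation = {"input": [1], "output": [1, 2, 3, 4, 5]}

# Sequence-to-sequence: ordered input -> an ordered output,
# possibly of different length (e.g. machine translation)
seq2seq = {"input": sequence, "output": [2, 4, 6, 8, 10]}
```

In every case except classification, the output itself is ordered, which is what sets these tasks apart from standard supervised learning.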
This categorization of models is typically based on the different inputs to, and outputs from, the sequence models. It is best explained by contrasting two practical perspectives. From a ‘standard dataset’ perspective, the order of samples is not important: training/testing datasets and their samples usually have no explicit order, so they behave like typical ‘mathematical sets’. From a ‘sequence dataset’ perspective, in contrast, the order of samples is important, and sequence model learning and inference must preserve this order.
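The difference between the two perspectives becomes concrete when an ordered series is framed for learning. A minimal sketch, assuming a simple sliding-window framing (the function name and toy series are our own illustration, not a fixed API):

```python
def to_supervised(sequence, window):
    """Frame an ordered sequence as (input window, next value) pairs.

    The order inside each window must be preserved -- shuffling the
    values would destroy the temporal structure the model learns from.
    Only the resulting (X, y) pairs may be shuffled between each other.
    """
    X, y = [], []
    for i in range(len(sequence) - window):
        X.append(sequence[i:i + window])   # ordered input window
        y.append(sequence[i + window])     # the value that follows it
    return X, y

series = [10, 20, 30, 40, 50]
X, y = to_supervised(series, window=2)
# X = [[10, 20], [20, 30], [30, 40]]
# y = [30, 40, 50]
```

A standard dataset would treat the five values of `series` as an unordered set; the sequence framing above only makes sense because their order is meaningful.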
Sequence Model Details
Have a look at the following video for more details: