Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
The Informer model uses a ProbSparse self-attention mechanism and a self-attention distilling operation to handle long time-series sequences efficiently. ProbSparse attention lets only the most informative queries attend fully, reducing the per-layer cost from O(L²) to roughly O(L log L), while the distilling operation halves the sequence length between encoder layers, trimming memory use. Together these changes improve both prediction accuracy and computational efficiency, and Informer significantly outperforms traditional models and state-of-the-art deep learning baselines on multiple datasets, especially at longer prediction horizons.
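The query-selection idea behind ProbSparse attention can be illustrated with a small NumPy sketch. This is a hypothetical simplification, not the paper's implementation: the actual method estimates each query's sparsity score from a random sample of keys to stay sub-quadratic, whereas here the full score matrix is computed so the selection logic is easy to follow. The function name `probsparse_attention` and the `factor` parameter are illustrative assumptions.

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Simplified sketch of ProbSparse self-attention (illustrative only).

    Only the top-u "active" queries (ranked by a max-minus-mean sparsity
    score) compute full softmax attention; the remaining "lazy" queries
    fall back to the mean of V.
    """
    L_Q, d = Q.shape
    # Number of active queries, on the order of log L as in the paper.
    u = min(L_Q, int(factor * np.ceil(np.log(L_Q))))

    # Scaled query-key scores; the real method samples keys here
    # instead of forming the full (L_Q, L_K) matrix.
    scores = Q @ K.T / np.sqrt(d)

    # Sparsity measurement: queries whose score distribution deviates
    # most from uniform (large max minus mean) are kept.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(M)[-u:]

    # Lazy queries copy the mean of V; active queries attend normally.
    out = np.tile(V.mean(axis=0), (L_Q, 1))
    active = scores[top]
    weights = np.exp(active - active.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    out[top] = weights @ V
    return out
```

Because only u ≈ O(log L) queries produce full attention rows, the dominant cost shifts from the softmax-weighted aggregation to the one-off score computation, which the paper further reduces by key sampling.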