PAPERZILLA
Crunching Academic Papers into Bite-sized Insights.


Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
Paper Summary
Paperzilla title
Look Ma, No Lag! Predicting the Future (of Time Series) with Informer
The Informer model uses a ProbSparse self-attention mechanism and a self-attention distilling operation to handle long time-series sequences, improving both prediction accuracy and computational efficiency. It significantly outperforms both traditional forecasting methods and state-of-the-art deep learning models on multiple datasets, especially at longer prediction horizons.
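In rough terms, the mechanism scores how "active" each query is using a max-minus-mean measure computed on a small random sample of keys, keeps only the top-scoring queries for full attention, and lets the remaining "lazy" queries fall back to the mean of the values. Below is a minimal single-head sketch in NumPy under those assumptions; the function name probsparse_attention, the factor parameter, and the sampling sizes are illustrative choices, not the authors' implementation.

```python
import numpy as np

def probsparse_attention(Q, K, V, factor=5):
    """Minimal single-head sketch of ProbSparse-style self-attention.

    Q, K, V: arrays of shape (L, d). Only the top-u queries, ranked by a
    max-minus-mean sparsity score on a random sample of keys, attend over
    all keys; the remaining "lazy" queries fall back to the mean of V.
    """
    L, d = Q.shape
    u = min(L, int(factor * np.ceil(np.log(L))))          # active queries to keep
    n_sample = min(L, int(factor * np.ceil(np.log(L))))   # sampled keys per query

    # Sparsity score M(q, K) ~ max_j(q.k_j/sqrt(d)) - mean_j(q.k_j/sqrt(d)),
    # estimated on a random subset of keys to avoid the full L x L product.
    idx = np.random.choice(L, n_sample, replace=False)
    sample_scores = Q @ K[idx].T / np.sqrt(d)              # (L, n_sample)
    sparsity = sample_scores.max(axis=1) - sample_scores.mean(axis=1)
    top_queries = np.argsort(-sparsity)[:u]

    # Lazy queries: mean of V. Active queries: ordinary softmax attention.
    out = np.repeat(V.mean(axis=0, keepdims=True), L, axis=0)
    scores = Q[top_queries] @ K.T / np.sqrt(d)              # (u, L)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    out[top_queries] = weights @ V
    return out

# Toy usage: a 96-step sequence with 64-dimensional features
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((96, 64)) for _ in range(3))
print(probsparse_attention(Q, K, V).shape)  # (96, 64)
```

The distilling operation mentioned above is not shown here; in the paper it halves the encoder's sequence length between attention blocks with a convolution plus max-pooling step, which keeps memory usage in check for very long inputs.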
Possible Conflicts of Interest
The authors acknowledge funding from the CAAI-Huawei MindSpore Open Fund. While this doesn't necessarily imply a conflict of interest, it is worth noting given Huawei's involvement in AI and cloud computing.
Identified Weaknesses
Limited discussion of limitations
The paper lacks a comprehensive discussion of the limitations of the proposed approach. While the authors briefly mention potential biases in predictions and the computational demands of their model, they do not explore these issues in sufficient depth. For instance, the model's sensitivity to noisy data or missing values is not examined. This makes it difficult to assess the robustness and generalizability of Informer in practical applications.
Limited evaluation on diverse time series data
The paper does not adequately address the performance of Informer on different types of time series data. The experiments are primarily focused on electricity and climate data, which exhibit relatively smooth and regular patterns. It is unclear how well Informer would perform on time series with more complex characteristics, such as those with high seasonality or abrupt changes. This limits the generalizability of the findings.
Insufficient comparison with existing sparsity techniques
The proposed ProbSparse self-attention mechanism is not thoroughly compared with existing sparsity techniques for Transformer models. While the authors mention some related work, they do not provide a direct comparison of ProbSparse with other state-of-the-art methods on the same datasets. This makes it difficult to evaluate the novelty and effectiveness of the proposed sparsity mechanism.
Rating Explanation
The paper presents a novel and efficient Transformer-based model for long sequence time-series forecasting. The proposed Informer model addresses the limitations of traditional Transformers in handling long sequences, making it suitable for real-world applications. The experiments demonstrate significant improvements over existing methods on various datasets. However, the paper could benefit from a more thorough discussion of limitations and comparisons with other sparsity techniques.
Topic Hierarchy
Physical Sciences › Computer Science › Signal Processing
File Information
Original Title:
Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting
File Name:
17132.pdf
File Size:
4.65 MB
Uploaded:
July 14, 2025 at 05:19 PM
Privacy:
🌐 Public