


The decline of Transformers in time series forecasting, the rise of time series embedding methods, and further progress in anomaly detection and classification.

The field made progress on several fronts in 2022. This article tries to cover some of the more promising and key papers that emerged over the past year or so, as well as the Flow Forecast (FF) forecasting framework.
Time Series Forecasting
1. Are Transformers Really Effective for Time Series Forecasting?
https://www.php.cn/link/bf4d73f316737b26f1e860da0ea63ec8
Transformer-related research: a comparison of Autoformer, Pyraformer, Fedformer, and related models, their results and their problems.
With the emergence of models such as Autoformer (NeurIPS 2021), Pyraformer (ICLR 2022), Fedformer (ICML 2022), EarthFormer (NeurIPS 2022), and the Non-Stationary Transformer (NeurIPS 2022), the family of Transformer architectures for time series forecasting continues to grow. But the ability of these models to predict data accurately and outperform existing methods remains in question, especially in light of newer research (which we will discuss later).
Autoformer: Extends and improves the performance of the Informer model. Autoformer features an auto-correlation mechanism that allows the model to learn temporal dependencies better than standard attention. It aims to accurately decompose the trend and seasonal components of temporal data (a rough sketch of these ideas appears after this list of models).
Pyraformer: The authors introduce the "pyramidal attention module (PAM), in which the inter-scale tree structure summarizes features at different resolutions, and the intra-scale neighboring connections model the temporal dependencies of different ranges."
Fedformer: This model focuses on capturing the global trend of time series data. The authors propose a seasonal-trend decomposition module designed to capture the global characteristics of the series.
Earthformer: Probably the most unique of these papers, it focuses specifically on forecasting Earth systems such as weather, climate, and agriculture, and introduces a novel cuboid attention architecture. This paper has real potential, because many classic Transformers have failed in research on river and flash-flood forecasting.
Non-Stationary Transformer: This is the latest of these papers using Transformers for forecasting. The authors aim to better adapt the Transformer to non-stationary time series, employing two mechanisms: de-stationary attention and series stationarization. These mechanisms can be plugged into any existing Transformer model, and the authors show that plugging them into Informer, Autoformer, and the vanilla Transformer improves performance (the appendix also shows gains for Fedformer).
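Two ideas recur throughout this family of models: decomposing the series into trend and seasonal components, and (in Autoformer) replacing dot-product attention with an auto-correlation mechanism computed in the frequency domain. Below is a minimal sketch of both ideas in PyTorch; it is a loose reading of the papers rather than the authors' reference code, and details such as multi-head handling, per-head lag selection, and normalization are omitted.

```python
import torch
import torch.nn.functional as F

def moving_avg_decompose(x, kernel_size=25):
    """Split a series into a trend (moving average) and a seasonal (residual) part.
    x: tensor of shape (batch, length, channels)."""
    pad_front = (kernel_size - 1) // 2
    pad_back = kernel_size - 1 - pad_front
    # pad by repeating the edge values so the moving average keeps the original length
    padded = torch.cat([x[:, :1, :].repeat(1, pad_front, 1), x,
                        x[:, -1:, :].repeat(1, pad_back, 1)], dim=1)
    trend = F.avg_pool1d(padded.permute(0, 2, 1), kernel_size, stride=1).permute(0, 2, 1)
    seasonal = x - trend
    return seasonal, trend

def autocorrelation_attention(q, k, v, top_k=4):
    """Simplified auto-correlation: estimate the dominant lags via FFT
    (Wiener-Khinchin), then aggregate time-shifted copies of v weighted by
    their correlation. q, k, v: tensors of shape (batch, length, channels)."""
    length = q.shape[1]
    corr = torch.fft.irfft(torch.fft.rfft(q, dim=1) * torch.conj(torch.fft.rfft(k, dim=1)),
                           n=length, dim=1)
    mean_corr = corr.mean(dim=(0, 2))                 # average correlation per lag
    weights, lags = torch.topk(mean_corr, top_k)      # keep only the top-k lags
    weights = torch.softmax(weights, dim=-1)
    out = torch.zeros_like(v)
    for w, lag in zip(weights, lags):
        out = out + w * torch.roll(v, shifts=-int(lag), dims=1)
    return out

x = torch.randn(8, 96, 7)                              # 8 series, 96 steps, 7 variables
seasonal, trend = moving_avg_decompose(x)
mixed = autocorrelation_attention(seasonal, seasonal, seasonal)
print(seasonal.shape, trend.shape, mixed.shape)
```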
Evaluation methodology: as in Informer, all of these models (except Earthformer) are evaluated on electricity, traffic, finance, and weather datasets, mainly using the mean squared error (MSE) and mean absolute error (MAE) metrics.
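For reference, both metrics are simple to compute; a minimal NumPy example (assuming y_true and y_pred are forecast arrays of the same shape) is shown below.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over all forecast steps and variables."""
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    """Mean absolute error over all forecast steps and variables."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([[0.2, 0.5], [0.1, 0.4]])
y_pred = np.array([[0.25, 0.45], [0.05, 0.50]])
print(mse(y_true, y_pred), mae(y_true, y_pred))
```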
These papers are good, but they only compare against other Transformer-related work. In fact, they should also be compared with simpler methods such as simple linear regression, LSTMs/GRUs, or even tree-based models such as XGBoost. Another issue is that they should not be limited to a handful of standard datasets, because I have not seen good performance on other time-series datasets. For example, Informer has huge problems accurately predicting river flow, and its performance is often poor compared to an LSTM or even a vanilla Transformer.
In addition, unlike computer vision, where image dimensions stay at least roughly constant, time series data can vary greatly in length, periodicity, trend, and seasonality, so a much wider range of datasets is required.
In the OpenReview discussion of the Non-Stationary Transformer, one reviewer raised exactly these concerns, but they were dismissed in the final meta-review:
"Since the model belongs Transformer field, and Transformer has previously shown state-of-the-art performance in many tasks, I think there is no need to compare with other 'family' methods."
This is a very problematic argument, and it leads to research that lacks real-world applicability. As we all know, XGBoost's overwhelming advantage on tabular data has not changed; what is the point of Transformers that only compete behind closed doors, surpassing one another every time while being beaten by simpler methods every time?
As someone who values state-of-the-art methods and innovative models in practice, if I spend months trying to get a so-called "good" model to work, only to find in the end that it performs worse than simple linear regression, what was the point of those months? What is the point of that so-called "good" model?
All of these Transformer papers suffer from the same problem of limited evaluation. We should demand more rigorous comparisons and a clear statement of shortcomings from the start. A complex model may not always beat a simple model at first, but this needs to be stated explicitly in the paper rather than glossed over or simply assumed not to be the case.
These papers still have real merit, though. For example, Earthformer was evaluated on the MovingMNIST and N-body MNIST datasets, which the authors used to verify the effectiveness of cuboid attention, and it was also evaluated on precipitation nowcasting and El Niño cycle forecasting. I think it is a good example of integrating physical knowledge into an attention-based model architecture and then designing proper tests.
2. Are Transformers Effective for Time Series Forecasting (2022)?
https://www.php.cn/link/bf4d73f316737b26f1e860da0ea63ec8
This paper examines how well Transformers predict data compared to baseline methods. The results somewhat reaffirm that Transformers often perform worse than simpler models and are difficult to tune. A few interesting points from the paper:
- The authors progressively simplified Informer's attention and found: "the performance of Informer grows with progressive simplification, indicating that self-attention schemes and other complex modules are unnecessary, at least for existing LTSF benchmarks."
- Investigated whether enlarging the look-back window would improve the performance of Transformers and found: "the performance of SOTA Transformers drops slightly, indicating that these models only capture similar temporal information from adjacent time series sequences."
- Explored whether positional embeddings really capture the temporal order of time series well. They did this by randomly shuffling the input sequence before feeding it into the Transformer, and found that on several datasets this shuffling did not affect the results (which is quite troubling).
- Over the past few years, countless time series experiments with Transformer models have produced unsatisfactory results in the vast majority of cases. For a long time we assumed we must be doing something wrong or missing some small implementation detail, and every such idea was treated as the seed of the next SOTA model. This paper instead poses a consistent question: if a simple model outperforms Transformers, should we keep using them? Are all Transformers inherently flawed, or is it just the current mechanisms? Should we go back to architectures like LSTMs, GRUs, or simple feed-forward models? I don't know the answers, and the overall impact of this paper remains to be seen. So far, I think the answer may be to take a step back and focus on learning efficient time series representations. After all, BERT initially succeeded by forming good representations in the NLP context.
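To make the comparison concrete, the simple baseline this paper champions is essentially a single linear map from the look-back window to the forecast horizon, applied to each variable. A minimal sketch (my own simplification, ignoring the paper's decomposition and normalization variants) might look like this:

```python
import torch
import torch.nn as nn

class LinearForecaster(nn.Module):
    """One linear layer mapping a look-back window to a forecast horizon,
    applied independently to each variable (channel)."""
    def __init__(self, lookback: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)

    def forward(self, x):
        # x: (batch, lookback, channels) -> (batch, horizon, channels)
        return self.proj(x.transpose(1, 2)).transpose(1, 2)

model = LinearForecaster(lookback=96, horizon=24)
x = torch.randn(32, 96, 7)
print(model(x).shape)  # torch.Size([32, 24, 7])
```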
3. Anomaly Transformer: Time Series Anomaly Detection with Association Discrepancy (ICLR 2022)
https://www.php.cn/link/ab22e28b58c1e3de6bcef48d3f5d8b4a
The authors propose a unique unsupervised Transformer that performs well on a plethora of anomaly detection datasets. This is one of the most promising Transformer papers for time series in the past few years, because forecasting is more challenging than classification or even anomaly detection: you are trying to predict a huge range of possible values multiple time steps into the future. So much research has focused on forecasting while ignoring classification and anomaly detection; perhaps Transformers should start with the simpler tasks?
The model is evaluated on five real-world datasets, including the Server Machine Dataset, Pooled Server Metrics, Soil Moisture Active Passive, and NeurIPS-TS (which itself consists of five different datasets). One might be skeptical of this model, especially given the second paper above, but this evaluation is quite rigorous. NeurIPS-TS is a recently created dataset designed specifically for more rigorous evaluation of anomaly detection models, and the model does appear to improve performance compared to simpler anomaly detection methods.
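The paper's own association-discrepancy mechanism is more involved, but the general recipe it builds on, unsupervised anomaly detection by learning to reconstruct normal behavior and flagging points that reconstruct poorly, can be illustrated with a deliberately simplified stand-in. The sketch below is a generic reconstruction-based scorer, not the Anomaly Transformer itself.

```python
import numpy as np
import torch
import torch.nn as nn

class TinyReconstructor(nn.Module):
    """A small autoencoder over sliding windows of a multivariate series."""
    def __init__(self, window: int, channels: int, hidden: int = 16):
        super().__init__()
        self.enc = nn.Linear(window * channels, hidden)
        self.dec = nn.Linear(hidden, window * channels)

    def forward(self, x):                      # x: (batch, window, channels)
        flat = x.flatten(1)
        return self.dec(torch.relu(self.enc(flat))).view_as(x)

def anomaly_scores(model, windows):
    """Per-window anomaly score = mean reconstruction error."""
    with torch.no_grad():
        recon = model(windows)
    return ((windows - recon) ** 2).mean(dim=(1, 2))

# toy usage: fit on 'normal' data, then flag windows above a quantile threshold
windows = torch.randn(256, 32, 5)              # 256 windows, 32 steps, 5 variables
model = TinyReconstructor(window=32, channels=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                            # brief training loop
    opt.zero_grad()
    loss = ((model(windows) - windows) ** 2).mean()
    loss.backward()
    opt.step()

scores = anomaly_scores(model, windows)
threshold = np.quantile(scores.numpy(), 0.99)  # flag the top 1% as anomalies
print((scores.numpy() > threshold).sum(), "windows flagged")
```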
4. WaveBound: Dynamic Error Bounds for Stable Time Series Forecasting (Neurips 2022)
https://www.php.cn/link/ae95296e27d7f695f891cd26b4f37078
The paper introduces a new form of regularization that can improve the training of deep time series forecasting models (especially the Transformers mentioned above). The authors evaluate it by plugging it into existing Transformer models as well as LSTNet, and find that it significantly improves performance in most cases, although they only test it with Autoformer and not newer models such as Fedformer. New forms of regularization or new loss functions are always useful because they can usually be plugged into any existing time series model. If you combined Fedformer, the non-stationary mechanisms, and WaveBound, you might just beat simple linear regression :).
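The exact formulation in the paper differs in its details, but my loose reading of the core idea, bounding the per-point training error with a slowly updated (EMA) copy of the model in the spirit of the "flooding" regularizer, can be sketched roughly as follows. Treat this as an illustration, not the authors' algorithm.

```python
import copy
import torch
import torch.nn as nn

def wavebound_style_loss(pred_s, pred_t, target, eps=1e-2):
    """Element-wise 'flooding'-style bound: the source network's error is not
    pushed below the (detached) target network's error minus eps at any point."""
    err_s = (pred_s - target) ** 2                      # source network error
    err_t = ((pred_t - target) ** 2).detach()           # target network error (no gradient)
    bound = err_t - eps
    return ((err_s - bound).abs() + bound).mean()

@torch.no_grad()
def ema_update(target_net, source_net, decay=0.99):
    """The target network trails the source network as an exponential moving average."""
    for p_t, p_s in zip(target_net.parameters(), source_net.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1 - decay)

# toy usage with a placeholder forecaster (a single linear layer)
source = nn.Linear(96, 24)
target_net = copy.deepcopy(source)
opt = torch.optim.Adam(source.parameters(), lr=1e-3)

x, y = torch.randn(32, 96), torch.randn(32, 24)
opt.zero_grad()
loss = wavebound_style_loss(source(x), target_net(x), y)
loss.backward()
opt.step()
ema_update(target_net, source)
```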
Time Series Representation

Although Transformers have not performed well on forecasting, they have made a lot of progress in creating useful time series representations. I think this is an impressive new area of deep learning for time series that deserves deeper exploration.

5. TS2Vec: Towards Universal Representation of Time Series (AAAI 2022)
https://www.php.cn/link/7690dd4db7a92524c684e3191919eb6b

TS2Vec is a general framework for learning time series representations/embeddings. The paper itself is somewhat dated, but it did start the trend of time series representation learning papers. The representations are evaluated on forecasting and anomaly detection, and the model outperforms many models such as Informer and the Log Transformer.
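The usual way such representations are evaluated for forecasting, including in TS2Vec, is to freeze the learned encoder and fit a simple regressor (e.g. ridge regression) from the embedding at time t to future values. Below is a rough sketch of that protocol with a hand-crafted stand-in encoder, since the real encoder is a learned network.

```python
import numpy as np
from sklearn.linear_model import Ridge

def toy_encoder(window: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder like TS2Vec: maps a window of the series
    to a fixed-length embedding (here, just a few summary statistics)."""
    return np.array([window.mean(), window.std(), window[-1], window[-1] - window[0]])

# build (embedding, future value) pairs from a univariate series
series = np.sin(np.linspace(0, 50, 1000)) + 0.1 * np.random.randn(1000)
lookback, horizon = 48, 12
X, y = [], []
for t in range(lookback, len(series) - horizon):
    X.append(toy_encoder(series[t - lookback:t]))   # embedding of the history up to t
    y.append(series[t + horizon - 1])               # value 'horizon' steps ahead
X, y = np.array(X), np.array(y)

# fit a linear head on the frozen embeddings and evaluate on a held-out split
split = int(0.8 * len(X))
head = Ridge(alpha=1.0).fit(X[:split], y[:split])
pred = head.predict(X[split:])
print("test MSE:", np.mean((pred - y[split:]) ** 2))
```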
6. Learning Latent Seasonal-Trend Representations for Time Series Forecasting (Neurips 2022)
https://www.php.cn/link/0c5534f554a26f7aeb7c780e12bb1525

7. CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting (ICLR 2022)
https://www.php.cn/link/791d3a0048b9c200dceca07f99ddd178
This paper, published at ICLR earlier in 2022, is very similar to LaST in that it also learns seasonal and trend representations. Since LaST has largely superseded its performance, I won't describe it in detail here, but the link is above for those who want to read it.

Other interesting papers
8. Domain Adaptation for Time Series Forecasting via Attention Sharing (ICML 2022)
https://www.php.cn/link/d4ea5dacfff2d8a35c0952291779290d
Forecasting is a challenge for DNNs when training data are scarce. This paper shares an attention layer between data-rich source domains and the target domain, while keeping separate domain-specific modules around it.
The proposed model is evaluated on both synthetic and real datasets. In the synthetic setting, cold-start and few-shot learning were tested, and the model was found to outperform a plain Transformer and DeepAR. In the real-world setting, a Kaggle retail dataset was used, and the model significantly outperformed the baselines in these experiments.
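My loose reading of the approach is that a single attention block is reused across domains while the layers around it stay domain-specific. The hypothetical sketch below illustrates only that weight-sharing pattern, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class SharedAttentionForecaster(nn.Module):
    """Two domains share one multi-head attention block; each domain keeps its
    own input embedding and output head (a simplification of attention sharing)."""
    def __init__(self, channels: int, d_model: int = 64, horizon: int = 24):
        super().__init__()
        self.shared_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.embed = nn.ModuleDict({d: nn.Linear(channels, d_model) for d in ("source", "target")})
        self.head = nn.ModuleDict({d: nn.Linear(d_model, horizon) for d in ("source", "target")})

    def forward(self, x, domain: str):
        # x: (batch, lookback, channels) -> (batch, horizon)
        h = self.embed[domain](x)
        h, _ = self.shared_attn(h, h, h)
        return self.head[domain](h[:, -1, :])   # forecast from the last time step's state

model = SharedAttentionForecaster(channels=5)
src = torch.randn(16, 96, 5)                     # data-rich source domain
tgt = torch.randn(4, 96, 5)                      # data-scarce target domain
print(model(src, "source").shape, model(tgt, "target").shape)
```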
Cold start, few-shot, and limited-data learning are extremely important topics, but few time series papers address them. This model provides an important step toward addressing some of these issues; it would be good to see it evaluated on more diverse, limited real-world datasets and compared against more baseline models. The benefit of approaches like fine-tuning or regularization is that they can be adapted to any architecture.
9. When to Intervene: Learning Optimal Intervention Policies for Critical Events (Neurips 2022)
https://www.php.cn/link/f38fef4c0e4988792723c29a0bd3ca98
Although this is not a "typical" time series paper, I chose to include it in this list because the focus of the paper is on finding the best time to intervene before a machine fails. This is called OTI or Optimal Time to Intervention
One of the problems with evaluating OTI is the accuracy of the underlying survival analysis (if it is incorrect, the evaluation will be incorrect as well). The authors evaluated their model against two static thresholds, found that it performed well, and plotted the expected performance and hit-to-failure ratio for different policies.
This is an interesting problem, and the authors propose a novel solution. One commenter on OpenReview noted: "The experiments might be more convincing if there were a graph showing the trade-off between failure probability and expected intervention time, so that people could intuitively see the shape of this trade-off curve."
Recent Datasets/Benchmarks
The last topic covers recently released datasets and benchmarks for evaluation.
Monash Time Series Forecasting Archive (Neurips 2021): This archive is intended to form a "master list" of different time series datasets and provide a more authoritative benchmark. The repository contains over 20 different datasets spanning multiple industries including health, retail, ridesharing, demographics, and more.
https://www.php.cn/link/5d7009220a974e94404889274d3a9553
Subseasonal Forecasting Microsoft (2021): This is a dataset publicly released by Microsoft to promote the use of machine learning for improving subseasonal forecasting (e.g., two to six weeks ahead). Subseasonal forecasts help government agencies better prepare for weather events and help farmers make decisions. Microsoft includes several benchmark models for the task, and in general deep learning models performed quite poorly compared to other methods: the best DL model was a simple feed-forward model, and Informer performed very poorly.
https://www.php.cn/link/c3cbd51329ff1a0169174e9a78126ee1
Revisiting Time Series Outlier Detection: This paper reviews many existing anomaly/outlier detection datasets and proposes 35 new synthetic datasets and 4 real-world datasets for benchmarking.
https://www.php.cn/link/03793ef7d06ffd63d34ade9d091f1ced
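To give a feel for how synthetic outlier benchmarks are typically constructed, the sketch below injects labeled point anomalies into a clean periodic signal; this is a generic illustration, not the paper's actual generators.

```python
import numpy as np

rng = np.random.default_rng(0)

# a clean periodic signal with mild noise
n = 1000
t = np.arange(n)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(n)

# inject point anomalies: large spikes at random positions, recorded as labels
labels = np.zeros(n, dtype=int)
anomaly_idx = rng.choice(n, size=10, replace=False)
series[anomaly_idx] += rng.choice([-1, 1], size=10) * rng.uniform(3, 5, size=10)
labels[anomaly_idx] = 1

print("anomalous points:", np.sort(anomaly_idx))
```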
The open-source time series forecasting framework FF
Flow Forecast is an open-source time series forecasting framework that includes the following models:
Vanilla LSTM, SimpleTransformer, Multi-Head Attention, Transformer with a linear decoder, DARNN, Transformer XL, Informer, DeepAR, DSANet, SimpleLinearModel, and more.
This is a good source of model code for learning how to use deep learning for time series forecasting. If you are interested, take a look.
https://www.php.cn/link/fea33a31df7d05a276193d32621ecbe4
Summary
Over the past two years, we have seen the rise and possible decline of Transformers in time series forecasting and the rise of time series embedding methods, along with further breakthroughs in anomaly detection and classification.
But for deep learning on time series, interpretability, visualization, and benchmarking methods are still lacking, because knowing where a model works and where it fails is very important. In addition, more forms of regularization, preprocessing, and transfer learning to improve performance may appear in the future.
Maybe Transformers are good for time series forecasting, and maybe they are not. Just as ViT might have been dismissed as useless before the emergence of patches, Transformers for time series may still be waiting for a similar breakthrough, so we will continue to watch their development, or their replacement, in the time series field.