A Review for Pre-Trained Transformer-Based Time Series Forecasting Models
2023 IEEE 64th International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS 2023): Proceedings, 2023
Yunus Emre Midilli, Sergejs Paršutins

Transformer-based models have proven their superiority over recurrent networks in time series forecasting. Enhancing transformer-based forecasting models with pretraining tasks is a novel approach in the literature. In this paper, we review the most recent work on pretraining aspects of time series, as well as the pretraining tasks used in transformer-based architectures.
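To make the pretraining paradigm named in the keywords concrete, the sketch below shows a masked auto-encoding objective for a time-series transformer: random timesteps are hidden and the model is trained to reconstruct only those positions. This is a minimal illustration assuming PyTorch; the model (TinyTSEncoder), the masking ratio, and masked_pretrain_step are hypothetical choices for exposition, not the setup of any specific paper covered by the review.

```python
# Minimal sketch of masked auto-encoding pretraining for a
# time-series transformer (illustrative, not from the reviewed papers).
import torch
import torch.nn as nn

class TinyTSEncoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(1, d_model)  # embed each scalar timestep
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # reconstruct the original values

    def forward(self, x):  # x: (batch, time, 1)
        return self.head(self.encoder(self.proj(x)))

def masked_pretrain_step(model, series, mask_ratio=0.15):
    """One self-supervised step: hide random timesteps, reconstruct them."""
    mask = torch.rand(series.shape[:2]) < mask_ratio  # (batch, time) boolean
    corrupted = series.clone()
    corrupted[mask] = 0.0  # zero out the masked timesteps
    recon = model(corrupted)
    # Loss is computed only on masked positions, as in masked auto-encoding.
    return ((recon - series)[mask] ** 2).mean()

model = TinyTSEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(8, 96, 1)  # synthetic batch: 8 series of length 96
loss = masked_pretrain_step(model, batch)
loss.backward()
opt.step()
```

After pretraining with such an objective, the encoder weights would typically be reused and fine-tuned with a forecasting head; contrastive learning, the other keyword, instead trains the encoder to pull augmented views of the same series together in representation space.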


Keywords
contrastive learning | forecasting | masked auto-encoder | pretraining | transformer
DOI
10.1109/ITMS59786.2023.10317721
Hyperlink
https://ieeexplore.ieee.org/document/10317721

Midilli, Y., Paršutins, S. A Review for Pre-Trained Transformer-Based Time Series Forecasting Models. In: 2023 IEEE 64th International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS 2023): Proceedings, Riga, Latvia, 5-6 October 2023. Piscataway: IEEE, 2023, pp. 1-6. ISBN 979-8-3503-7030-0. e-ISBN 979-8-3503-7029-4. ISSN 2771-6953. e-ISSN 2771-6937. Available from: doi:10.1109/ITMS59786.2023.10317721

Publication language
English (en)