Smooth Perturbations in Time Series Adversarial Attacks: Challenges and Defense Strategies

Christian Sorensen, Mikkel Jensen

Adversarial attacks on time series data have gained increasing attention due to their potential to undermine the robustness of machine learning models. These attacks perturb input data to cause misclassification, misprediction, or a general degradation of model performance. This paper investigates time series adversarial attacks, focusing on smooth perturbations that are difficult to detect. We characterize these smooth perturbations and review defense approaches designed to mitigate their impact. Our analysis highlights the challenges, and potential solutions, in enhancing the robustness of time series models against adversarial threats.
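The abstract does not specify how smooth perturbations are constructed, so the following is only a minimal illustrative sketch of the general idea: take an FGSM-style sign-of-gradient perturbation and low-pass filter it with a Gaussian kernel so it blends into the series instead of appearing as high-frequency noise. The function name, the use of `gaussian_filter1d`, and all parameter values are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_fgsm_perturbation(grad, epsilon=0.1, sigma=5.0):
    """Turn a raw loss gradient on a 1-D series into a smooth perturbation.

    grad    : np.ndarray, gradient of the loss w.r.t. the input series
              (here a stand-in; a real attack would query the model)
    epsilon : maximum perturbation magnitude (L-inf budget)
    sigma   : std. dev. of the Gaussian kernel; larger means smoother
    """
    # Standard FGSM direction: the sign of the gradient.
    raw = np.sign(grad)
    # Gaussian smoothing removes the high-frequency jumps that make
    # sign-based perturbations easy to spot in a time series.
    smooth = gaussian_filter1d(raw, sigma=sigma)
    # Re-normalize so the smoothed perturbation still uses the full budget.
    smooth /= (np.abs(smooth).max() + 1e-12)
    return epsilon * smooth

# Toy usage: perturb a sine wave with a synthetic "gradient".
t = np.linspace(0, 4 * np.pi, 500)
x = np.sin(t)
fake_grad = np.random.randn(500)  # hypothetical stand-in for a model gradient
x_adv = x + smooth_fgsm_perturbation(fake_grad)
```

The smoothing step is what distinguishes this from plain FGSM: the resulting perturbation has low spectral energy at high frequencies, which is precisely what makes such attacks hard to detect by visual inspection or simple noise filters.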
