Adversarial attacks on time series data have attracted increasing attention because they can undermine the robustness of machine learning models. Such attacks manipulate input data to cause misclassification, misprediction, or degraded model performance. This paper investigates adversarial attacks on time series, focusing on smooth perturbations that are difficult to detect. We examine the characteristics of these smooth perturbations and review defense approaches designed to mitigate their impact. Our analysis highlights both the challenges and the potential solutions for improving the robustness of time series models against adversarial threats.
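To make the notion of a smooth perturbation concrete, the sketch below shows one common construction: a raw, sign-valued (FGSM-style) perturbation is convolved with a Gaussian kernel so that it varies slowly along the time axis, then rescaled to an epsilon budget. This is an illustrative example only; the function names, kernel parameters, and epsilon value are assumptions, not the specific method evaluated in the paper.

```python
import numpy as np


def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel used for smoothing."""
    x = np.arange(size) - (size - 1) / 2
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()


def smooth_perturbation(raw: np.ndarray, eps: float,
                        size: int = 15, sigma: float = 3.0) -> np.ndarray:
    """Smooth a raw perturbation and rescale it to the epsilon budget."""
    smoothed = np.convolve(raw, gaussian_kernel(size, sigma), mode="same")
    return eps * smoothed / (np.abs(smoothed).max() + 1e-12)


rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 4 * np.pi, 200))    # clean time series
raw = rng.choice([-1.0, 1.0], size=series.shape)   # jagged sign-valued noise
delta = smooth_perturbation(raw, eps=0.1)          # smooth, budget-bounded
adversarial = series + delta                        # stays visually close to the original
```

Because the smoothed perturbation has small step-to-step changes, it is far harder to flag with simple high-frequency anomaly detectors than the raw sign noise, which is precisely what makes this attack family difficult to detect.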