INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCHERS

ISSN: 3030-332X Impact factor: 8,293

Volume 12, issue 1, June 2025

https://wordlyknowledge.uz/index.php/IJSR

worldly knowledge

Index: Google Scholar, ResearchGate, Research Bib, Zenodo, OpenAIRE.

https://scholar.google.com/scholar?hl=ru&as_sdt=0%2C5&q=wosjournals.com&btnG

https://www.researchgate.net/profile/Worldly-Knowledge

https://journalseeker.researchbib.com/view/issn/3030-332X


CYBER SECURITY IN THE AGE OF ARTIFICIAL INTELLIGENCE

Djumakulova Shaxlo Davlyatovna, Quyliyev Sardorbek Abdixoliq ugli
Computer science teachers of the Academic Lyceum of Termez State University of Engineering and Agrotechnology

Annotation

This article explores the evolving relationship between artificial intelligence and cybersecurity. It explains how AI improves threat detection, response automation, and predictive analysis, while also introducing new risks such as adversarial attacks, data poisoning, deepfakes, and AI-powered phishing. The text emphasizes the need for ethical guidelines, human oversight, and international cooperation to maintain a secure digital environment in the age of intelligent technologies.

Keywords: artificial intelligence, cybersecurity, adversarial attacks, phishing, deepfake, data poisoning, threat detection, machine learning, cyber threats, AI security, automation, ethical AI, digital privacy, international regulation, AI-based attacks.

Artificial Intelligence (AI) has become an important tool for meeting today's digital security challenges. At the same time, AI technologies themselves are creating new types of risk. The relationship between cybersecurity and artificial intelligence is complex: on the one hand, AI strengthens the fight against cyberattacks; on the other, hackers are using AI tools to create more advanced threats. This has fundamentally changed the modern security environment. First of all, AI has expanded the ability to automatically detect threats, analyze them in real time, and prevent them. For example, machine learning algorithms help detect cyberattacks at an early stage by spotting unusual behavior in the network. Unlike traditional security systems, AI is constantly learning, improving itself, and adapting to new types of threats.
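As an illustration, the kind of network anomaly detection described above can be sketched with a simple statistical baseline. The traffic figures, function names, and threshold below are hypothetical; production systems learn far richer behavioral models, but the principle — profile normal behavior, then flag large deviations — is the same:

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a normal-behavior profile (mean and spread) from past traffic."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    return abs(value - mu) / sigma > threshold

# Requests per minute observed during normal operation (invented values).
normal_traffic = [98, 102, 100, 97, 103, 99, 101, 100, 96, 104]
baseline = fit_baseline(normal_traffic)

print(is_anomalous(101, baseline))   # ordinary load → False
print(is_anomalous(500, baseline))   # sudden spike, e.g. a DDoS burst → True
```

A self-learning system would additionally refit the baseline as traffic patterns drift, which is exactly what makes such models both adaptive and, as the next section notes, attackable.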

However, this very self-learning feature also creates new vulnerabilities in AI-based security systems. For example, hackers can deliberately misdirect an AI system through so-called "adversarial attacks." These attacks use specially crafted inputs to confuse the AI model, causing the system to make incorrect decisions. This is a particular threat to systems that provide facial recognition, access control, or financial security. In addition, hackers themselves have begun to use AI tools: automated phishing campaigns and phishing emails tailored to a target's social media activity are becoming more credible with AI, making it more difficult for ordinary users to detect and avoid the threat. Disinformation campaigns, "deepfake" videos, and voice spoofing using AI are also new challenges in cybersecurity. Such tools can be used to commit crimes such as political manipulation, defamation of an individual or organization, and fraud.
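The adversarial attacks mentioned above can be shown with a toy example: a tiny linear "detector" whose verdict is flipped by a small, deliberate perturbation of the input features. All weights and feature values here are invented for illustration, in the spirit of gradient-sign evasion methods; real attacks target deep models with the same idea:

```python
# Toy linear detector: score = w.x + b, input flagged if score > 0.
w = [2.0, -1.0, 0.5]   # hypothetical learned weights
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "malicious" if score(x) > 0 else "benign"

def adversarial(x, eps=0.3):
    """Nudge each feature by eps in the direction that lowers the score
    (the sign of the corresponding weight) — a gradient-sign-style step."""
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

sample = [0.8, 0.2, 0.6]
evasion = adversarial(sample)

print(classify(sample))    # "malicious" — correctly flagged
print(classify(evasion))   # "benign" — small perturbation evades detection
```

The perturbation changes each feature by only 0.3, yet the decision flips, which is why adversarial robustness matters for recognition and access-control systems.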

The dataset that underpins the operation of artificial intelligence can also become a weak point: if this data is manipulated, the AI will make the wrong decisions. Therefore, the type of threat called "data poisoning" requires serious attention in the field of cybersecurity. The excessive autonomy of AI-based security systems also poses a problem. For example, if an AI-based security system automatically blocks access or wrongly marks a legitimate user as "dangerous", an environment arises in which technology controls people. This raises serious questions about ethics and human rights. At the international level, the issue of regulating AI-based cybersecurity operations has not been resolved. Different countries are pursuing their own policies in this regard, and these technologies are becoming an integral part of national security. This indicates the need for global agreements and ethical standards.
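Data poisoning can be illustrated with a deliberately simple model: a one-dimensional classifier whose decision threshold is the midpoint between the class means. The numbers below are invented; the point is that a handful of mislabeled training samples slipped into the "benign" data shifts the learned threshold enough for a real attack to slip past:

```python
def learn_threshold(benign, malicious):
    """Fit a 1-D classifier: threshold halfway between the class means."""
    m_b = sum(benign) / len(benign)
    m_m = sum(malicious) / len(malicious)
    return (m_b + m_m) / 2

clean_benign = [1.0, 1.2, 0.9, 1.1]      # e.g. login-failure rates of normal users
clean_malicious = [5.0, 5.2, 4.8, 5.0]   # rates seen during brute-force attacks

t_clean = learn_threshold(clean_benign, clean_malicious)

# Poisoning: the attacker slips high-rate samples into the "benign" training set.
poisoned_benign = clean_benign + [4.6, 4.7, 4.5]
t_poisoned = learn_threshold(poisoned_benign, clean_malicious)

attack_rate = 3.5
print(attack_rate > t_clean)     # True  — detected by the clean model
print(attack_rate > t_poisoned)  # False — missed by the poisoned model
```

This is why the integrity of training data, not just of the deployed model, is a cybersecurity concern in its own right.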

In the future, defense systems developed with the help of AI will become even more capable: they will be able to predict attacks in advance, assess risk immediately, and take specific countermeasures. But human oversight, ethical constraints, transparent algorithms, and international standards will play a crucial role. Otherwise, AI-based cybersecurity systems could become self-regulating, closed systems that operate without human intervention, putting the information society at risk. The evolution of artificial intelligence in cybersecurity has been not only a technological revolution but also a paradigm shift in security. Today, few digital systems can provide robust security without the help of AI. However, the very transformation of this technology into an effective weapon against cyberattacks also makes it a target.

One of the main principles of cybersecurity, "proactive defense", is now implemented through AI. Previously, security systems responded only after an attack occurred; now AI makes it possible to predict threats, analyze behavior, and take countermeasures before damage occurs. This is especially important for the financial sector, the banking system, healthcare, and government infrastructure. For example, Security Information and Event Management (SIEM) systems built on artificial intelligence analyze millions of log records in the network in real time and identify signals of potential attacks. They detect deviations from normal user behavior and alert the administrator or take automatic action. Likewise, User and Entity Behavior Analytics (UEBA) technologies use AI algorithms to track user behavior and detect malicious activity: AI treats it as a threat if a user logs in at an unusual time, connects from an unknown device, or downloads a large amount of data.
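The three UEBA signals just listed — unusual time, unknown device, abnormal data volume — can be sketched as simple checks against a per-user profile. The profile fields, device names, and thresholds below are hypothetical; real UEBA systems learn these profiles statistically rather than hard-coding them:

```python
# Hypothetical per-user profile, learned from past activity.
profile = {
    "usual_hours": range(8, 19),                      # normally active 08:00-18:59
    "known_devices": {"laptop-0042", "phone-0042"},
    "avg_download_mb": 50,
}

def risk_flags(event, profile):
    """Compare one session event against the user's learned profile."""
    flags = []
    if event["hour"] not in profile["usual_hours"]:
        flags.append("unusual login time")
    if event["device"] not in profile["known_devices"]:
        flags.append("unknown device")
    if event["download_mb"] > 10 * profile["avg_download_mb"]:
        flags.append("abnormal data volume")
    return flags

# A 3 a.m. login from an unrecognized machine pulling 2 GB trips all three checks.
event = {"hour": 3, "device": "vm-unknown", "download_mb": 2000}
print(risk_flags(event, profile))
```

A SIEM would then correlate such flags across users and sources before alerting an administrator or acting automatically.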

However, despite these capabilities, AI technologies have their own vulnerabilities. Under the term "AI supply chain attacks", hackers now target the models used to build AI systems, or the datasets behind them. They can inject malicious data into the model training process and corrupt it, so that AI-based security systems begin to malfunction. Another dangerous development is the proliferation of AI as a Service (AIaaS) platforms. Through these platforms any user can access AI services, and hackers can use them to plan attacks, test them, and hunt for "zero-day" vulnerabilities. This makes artificial intelligence-based attacks widespread and affordable.

References:

1. Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
2. Brundage, M., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
3. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
4. Barreno, M., et al. (2006). Can Machine Learning Be Secure? ACM Symposium on Information, Computer and Communications Security.
