https://www.inlibrary.uz/index.php/tajet/issue/feed The American Journal of Engineering and Technology 2025-08-24T09:56:05+08:00 Current Research Journals editor@theamericanjournals.com Open Journal Systems <p>The Editorial Board, which is augmented by regional editors in approximately twenty countries, warmly welcomes contributions from around the world. It is the express objective of The American Journal of Engineering and Technology<em> (TAJET) </em>to advance Engineering and Technology and its global reach by publishing ground-breaking contributions on all facets of Engineering and Technology endeavor.</p> https://www.inlibrary.uz/index.php/tajet/article/view/134915 Big Data Analytics in Information Systems Research: Current Landscape and Future Prospects Focus: Data science, cloud platforms, real-time analytics in IS 2025-08-19T02:13:21+08:00 Yeasin Arafat yeasin@theamericanjournals.com Dhiraj Kumar Akula dhirajkumarakula14@gmail.com Yaseen Shareef Mohammed yaseen@theamericanjournals.com Gazi Mohammad Moinul Haque gazi@theamericanjournals.com Mahzabin Binte Rahman mahzabin@theamericanjournals.com Asif Syed asif@theamericanjournals.com <p>The convergence of information systems (IS) research and big data analytics (BDA) represents a paradigm shift in organizational decision-making, moving the field toward one dominated by big data and the technologies that enable it. This paper offers an in-depth discussion of the current state and future of BDA in IS, with particular attention to the transformative opportunities of data science, cloud-based computing environments, and real-time analytics. The study employs a mixed-method research design, combining bibliometric analysis with a qualitative synthesis of 1,136 peer-reviewed articles published between 2013 and 2024 in key academic databases. Quantitative patterns indicate a steep increase in IS research involving machine learning, predictive modelling, and cloud-based analytics architectures. Sectoral analysis shows extensive and intensive application across diverse domains such as healthcare, finance, manufacturing, and public administration, where real-time analytics has become crucial for responsiveness and agility. Thematic forecasting highlights areas of future growth such as explainable AI, federated learning, and quantum-enhanced analytics, all closely tied to the continuing development of cloud infrastructure and advanced data science methods. Despite this methodological progress, challenges of algorithmic transparency, cross-sector data interoperability, and governance remain. This paper presents a prospective research agenda for the IS field and its practitioners, emphasizing interdisciplinary cooperation, ethically responsible development, and the strategic incorporation of scalable analytics platforms. 
The originality of the study lies in its empirical basis and its specific, narrow focus on the technology enabling the next generation of data-dependent information systems.</p> 2025-08-18T00:00:00+08:00 Copyright (c) 2025 Yeasin Arafat, Dhiraj Kumar Akula, Yaseen Shareef Mohammed, Gazi Mohammad Moinul Haque, Mahzabin Binte Rahman, Asif Syed https://www.inlibrary.uz/index.php/tajet/article/view/135402 System Analysis of Monitoring and Logging Tools in Web Environments 2025-08-24T09:56:05+08:00 Anastasiia Perih anastasiia@theamericanjournals.com <p>The article is devoted to a systematic analysis of modern monitoring and logging tools widely used in web application environments, such as Prometheus, Grafana, Datadog, AWS CloudWatch, ELK stack (Elasticsearch, Logstash, Kibana), and New Relic. Its relevance lies in the necessity to efficiently manage telemetry data generated by complex and large-scale web systems to ensure reliability and performance. The novelty consists in a detailed comparative evaluation of tools regarding their architectures, capabilities, deployment models, and integration ecosystems. Special attention is paid to practical scenarios in which each solution demonstrates clear advantages or encounters limitations. The primary goal of this research is to provide comprehensive guidance on selecting optimal monitoring tools tailored to specific organizational needs. Methods applied include analysis of recent vendor documentation, industry reports, user surveys, and technical benchmarks. The conclusion summarizes best-fit scenarios for each tool. The article will be particularly useful for DevOps engineers, web architects, and system administrators.</p> 2025-08-23T00:00:00+08:00 Copyright (c) 2025 Anastasiia Perih https://www.inlibrary.uz/index.php/tajet/article/view/135355 Adaptation of Cinematographic Techniques for Mobile and Vertical Video Formats 2025-08-22T16:09:58+08:00 Herasymiuk Heorhii heorhii@theamericanjournals.com <p>The article describes how, in the context of the rapid growth of video consumption on mobile devices and the dominance of portrait screen orientation (9:16), classical cinematographic aspect ratios and techniques must be reconsidered and adapted. While traditional aspect ratios (4:3, 2.39:1) reflected historical technological and aesthetic constraints, the modern user experience dictates new rules of composition, blocking, editing, and technical implementation arising from the characteristics of touch controls and continuous scrolling. The study aims to identify and systematize the principles for applying classical cinematographic techniques in the mobile vertical format, to substantiate their effectiveness, and to develop a new grammar of visual storytelling for the narrow frame. The relevance of this work is driven by the unprecedented growth in mobile video traffic and changes in audience engagement models, which demand scholarly reflection on the aesthetic and technical transformations. Its novelty lies in the comprehensive integration of historical-technical analysis, ergonomic experiments, neurophysiological measurements, and content-analytic data to build a unified model of vertical video language. The methodological foundation combines comparative format analysis, ergonomic tests, a systematic review of recommendations on composition and camera movement, as well as empirical EEG-response studies and statistical analysis of audience behavior on TikTok and Reels. 
Results show that the central vertical axis of the frame serves as the path of least resistance for gaze and interaction; the rule of thirds transforms into vertical dynamics; and the Z-axis and dolly-in/out techniques become crucial for dramaturgy. Vertical split-screen significantly increases retention; gyrostabilized POV reduces fatigue; light gradients and pinpoint color accents guide attention from top to bottom; and binaural and tactile sound expand perception of the narrow frame. The study’s conclusions establish the theoretical and practical basis of a new cinematographic grammar for mobile vertical video: each modality—composition, camera movement, editing, lighting, color, sound, and interface effects—interacts to precisely direct attention within the 9:16 frame. This article will be particularly helpful to video production specialists, mobile app UX designers, and media aesthetics researchers.</p> 2025-08-21T00:00:00+08:00 Copyright (c) 2025 Herasymiuk Heorhii https://www.inlibrary.uz/index.php/tajet/article/view/135354 Data Security in Multi-Tenant Clusters 2025-08-22T16:09:56+08:00 Megha Aggarwal aggarwal@theamericanjournals.com <p>This article presents a comprehensive analysis of the set of threats that are characteristic of heterogeneous Kubernetes deployments. The work aims to systematize and examine these threats, as well as to develop an integrated security model suitable for practical implementation. The methodological foundation consisted of a rigorous literature review encompassing both academic papers and engineering reports from major cloud providers. Special attention was given to publications on container isolation, inter-pod network policies, secrets management, and data encryption protocols. Based on this analysis, a multi-layer threat map is presented, detailing the attack vectors at each layer. The proposed protective measures are integrated into a unified DevSecOps lifecycle framework and can be automated within CI/CD pipelines. The conclusions drawn and the model developed are intended for security engineers, DevOps teams, and cloud platform architects who need to design and maintain multi-tenant Kubernetes clusters with a guaranteed level of data protection.</p> 2025-08-21T00:00:00+08:00 Copyright (c) 2025 Megha Aggarwal https://www.inlibrary.uz/index.php/tajet/article/view/135353 Methods for Data Recovery from Damaged and Inaccessible RAID Arrays 2025-08-22T16:09:49+08:00 Stanislav Yermolov yermolov@theamericanjournals.com <p>This work provides a systematization and critical analysis of existing methodologies for recovering information from damaged or inaccessible Redundant Array of Independent Disks (RAID) arrays. The relevance of the study is determined by the fact that the reliability of corporate storage directly affects the continuity of business processes and the stability of government operations. The objective of the research is to conduct a comprehensive review of algorithmic approaches to data recovery with a focus on automated identification of key array configuration parameters and reconstruction of information at the logical level. In particular, traditional methods based on analysis of metadata and block placement tables are examined, as well as modern techniques employing entropy-based assessment of bit distributions, detection of file system signatures, and application of heuristic machine learning models. 
It is noted that the combination of automatic recognition of RAID parameters (level, striping algorithm, block size) with in-depth analysis of internal file system structure minimizes operator intervention and significantly increases the likelihood of successful data retrieval even in the absence of complete configuration information. This work will be useful for IT data recovery engineers, information security and digital forensics specialists, and researchers addressing reliability and fault tolerance of modern storage systems.</p> 2025-08-21T00:00:00+08:00 Copyright (c) 2025 Stanislav Yermolov https://www.inlibrary.uz/index.php/tajet/article/view/135016 IoT and Wearable Technology in Patient Monitoring: Business Analytics Applications for Real-Time Health Management 2025-08-19T23:23:03+08:00 Maham Saeed mahamsaeed0007@gmail.com Keya Karabi Roy keya@theamericanjournals.com Kami Yangzen Lama kami@theamericanjournals.com Mustafa Abdullah Azzawi mustafa@theamericanjournals.com Yeasin Arafat yeasin@theamericanjournals.com <p>The intersection of the Internet of Things (IoT) and wearable devices is transforming patient monitoring by enabling data-driven, continuous, and remote healthcare services. The paper examines the clinical and operational situations in which these technologies, combined with business analytics frameworks, can improve real-time health management and decision-making. Synthesizing current breakthroughs and large-scale deployments across worldwide health systems, the paper explores the operational synergy of smart medical devices and analytical platforms in optimizing care outcomes, reducing response times, and improving resource utilization. This study employs a data-driven observational design to analyze high-frequency physiological measurements recorded by wearable sensors and connected medical devices in a range of chronic and acute care conditions. Business analytics tools are applied to the collected data in order to isolate actionable insights, spot anomalies, and enable predictive risk modelling. The study methodology focuses on real-time data capture, patient stratification, and cross-sectional assessment of system performance measures. The results indicate considerable improvements in early intervention capability, patient adherence, and operational effectiveness, showing that real-time analytics based on IoT-connected wearables can reduce hospitalizations and fine-tune treatment plans. The study also identifies data privacy, device interoperability, and the digital divide as significant obstacles. The study contributes to the emerging domain of healthcare informatics by presenting a scalable and replicable model for implementing IoT and analytics in patient monitoring. 
It addresses existing gaps in the literature by uniting technological, clinical, and business perspectives and offering practical insights for health IT leaders, policymakers, and clinicians who want to transform care delivery models using smart, data-driven solutions.</p> 2025-08-18T00:00:00+08:00 Copyright (c) 2025 Maham Saeed, Keya Karabi Roy, Kami Yangzen Lama, Mustafa Abdullah Azzawi, Yeasin Arafat https://www.inlibrary.uz/index.php/tajet/article/view/135015 Cybersecurity Strategies in Healthcare IT Infrastructure: Balancing Innovation and Risk Management 2025-08-19T23:22:28+08:00 Kami Yangzen Lama kami@theamericanjournals.com Maham Saeed mahamsaeed0007@gmail.com Keya Karabi Roy keya@theamericanjournals.com MD Abutaher Dewan abutaher@theamericanjournals.com <p>Due to accelerated digitalization in the healthcare industry, clinical operations and the delivery of patient care have changed with the introduction of Electronic Health Records (EHRs), telemedicine platforms, cloud computing, and the Internet of Medical Things (IoMT). This technological adoption has, however, created cybersecurity vulnerabilities that threaten the confidentiality, integrity, and availability of sensitive health information. In this paper, the author explores the twin dilemma of the contemporary healthcare institution: how to drive technology-related innovation while successfully mitigating cyber risks. Adopting a data-driven approach, the research synthesizes empirical evidence from recent cyber incidents, analyzes the effectiveness of global cybersecurity frameworks such as NIST and HIPAA, and evaluates emerging technologies' roles in risk mitigation. The methodology is based on a mixed-methods design consisting of case studies, incident data examination, and expert interviews to provide depth of analysis and practical relevance. Findings indicate that high-tech approaches to protection, including AI-based threat detection and blockchain-based data integrity, require the support of solid governance policies, organizational training, and dynamic risk management models to be effective. The findings highlight that strategic alignment of innovation and security is both possible and necessary for sustainable digital healthcare transformation. The study is novel in that few works have approached cybersecurity strategy holistically, combining technological and organizational perspectives and offering recommendations that can be put into action by CIOs, policymakers, and healthcare administrators. By bridging the gap between innovation and protection, the paper adds to a growing body of literature underlining the urgency of making cybersecurity a built-in aspect of healthcare IT infrastructure.</p> 2025-08-18T00:00:00+08:00 Copyright (c) 2025 Kami Yangzen Lama, Maham Saeed, Keya Karabi Roy, MD Abutaher Dewan https://www.inlibrary.uz/index.php/tajet/article/view/134918 Kubernetes for Data Engineering: Orchestrating Reliable ETL Pipelines in Production 2025-08-19T02:13:30+08:00 supriya gandhari gandharisupriya85@gmail.com <p>In the current data-driven world, organizations are handling larger and more complex datasets to facilitate decision-making, personalization, and real-time insights. 
This process centers on Extract, Transform, Load (ETL) pipelines, which are essential for gathering data from various sources and preparing it for analysis. Although traditional ETL orchestration, typically built with monolithic schedulers or cron-based scripts, has functioned well historically, it often struggles to meet contemporary demands such as dynamic scaling, high availability, cloud-native deployment, and clear observability.<br />Kubernetes, which was initially designed to manage stateless microservices, has evolved into a flexible platform capable of handling complex, stateful workloads, including data pipelines. Its declarative model, fault tolerance, and rich ecosystem of native components such as Jobs, CronJobs, StatefulSets, and ConfigMaps make it a compelling approach for orchestrating ETL pipelines that are both scalable and easy to maintain. By utilizing Kubernetes, data teams can containerize each stage of their pipeline, isolate resource management, and enhance operational clarity, with reported reductions in pipeline execution times of up to 40% and infrastructure cost savings of between 25% and 35% through autoscaling and spot-instance optimization.<br />This paper investigates the effective application of Kubernetes in data engineering for orchestrating production-level ETL workflows. We examine in depth how fundamental Kubernetes constructs support scheduling and fault recovery and how they integrate with orchestration frameworks such as Apache Airflow, Argo Workflows, and Dagster. Through a detailed review of academic research, industry case studies, and practical design patterns, we evaluate the advantages and disadvantages of Kubernetes in real-world data processing scenarios.<br />We also discuss ongoing issues such as the operational burden, challenges in ensuring data quality, and the steep learning curve linked to adopting Kubernetes. Despite these issues, our results indicate that Kubernetes provides a strong and future-ready framework for developing modular, reliable, and cloud-portable data pipelines, marking it as a crucial component in the advancement of modern data engineering infrastructure.</p> 2025-08-16T00:00:00+08:00 Copyright (c) 2025 supriya gandhari https://www.inlibrary.uz/index.php/tajet/article/view/134917 Cybersecurity Risk Management in the Age of Digital Transformation: A Systematic Literature Review 2025-08-19T02:13:28+08:00 Gazi Mohammad Moinul Haque gazi@theamericanjournals.com Dhiraj Kumar Akula dhirajkumarakula14@gmail.com Yaseen Shareef Mohammed yaseen@theamericanjournals.com Asif Syed asif@theamericanjournals.com Yeasin Arafat yeasin@theamericanjournals.com <p>Cloud computing, the IoT, AI, and advances in data analytics have radically changed organizational business models and practices as digital transformation sweeps across industries. The rapid pace of technological change, however, has also multiplied cybersecurity threats to information integrity, privacy, and operational continuity. This paper presents a systematic literature review of existing cybersecurity risk management practices in the context of digital transformation, analyzing evidence published between 2014 and 2024 across major scholarly databases. 
Following PRISMA guidelines, 87 peer-reviewed studies were selected from 2,311 candidate sources to address questions of risk typology, assessment methodology, and mitigation frameworks. Among the trends outlined in the review is a significant transition toward proactive risk management, with a focus on real-time monitoring, AI-assisted threat identification, and cross-sector security cooperation. The review classifies cybersecurity risk into technical, human, procedural, and third-party categories and highlights sector-specific exposures, particularly in healthcare, finance, and energy systems. The findings also reveal limited practical adoption of quantitative risk assessment models despite their demonstrated value in academic research. The review further identifies marked deficiencies in longitudinal risk assessment and a sectoral and geographic bias in the literature, especially with respect to developing economies. By summarizing the existing state of knowledge and identifying urgent challenges, this paper forms a critical basis for future research and strategic policy-making. It provides a well-grounded evidence framework for organizations seeking to develop adaptive, resilient, and data-driven cybersecurity risk management systems suited to the expectations of the digital era.</p> 2025-08-18T00:00:00+08:00 Copyright (c) 2025 Gazi Mohammad Moinul Haque, Dhiraj Kumar Akula, Yaseen Shareef Mohammed, Asif Syed, Yeasin Arafat https://www.inlibrary.uz/index.php/tajet/article/view/134916 The Impact of Artificial Intelligence on Information Systems: Opportunities and Challenges 2025-08-19T02:13:26+08:00 Yaseen Shareef Mohammed yaseen@theamericanjournals.com Dhiraj Kumar Akula dhirajkumarakula14@gmail.com Asif Syed asif@theamericanjournals.com Gazi Mohammad Moinul Haque gazi@theamericanjournals.com Yeasin Arafat yeasin@theamericanjournals.com <p>The accelerating integration of Artificial Intelligence (AI) into Information Systems (IS) represents a paradigm shift in the way organizations operate, manage their data, and make decisions. This paper addresses the multidimensional role of AI in modern IS, which is characterized by both transformative opportunities and pressing challenges. The research follows a data-driven, cross-sectional approach, examining empirical studies, real-life case examples, and statistical evidence from high-impact journals and technology reports worldwide to study how AI technologies such as machine learning, natural language processing, and intelligent automation are redesigning IS architectures across industries. The study documents substantial improvements in operational efficiency, cost savings, data-processing accuracy, and real-time decision-making. It also reveals persistent challenges, including algorithmic bias, data governance issues, ethical concerns, and cybersecurity risks. A thematic review of more than 25 credible studies finds that while AI-driven IS enhances scalability and responsiveness, adoption is often hampered by limited technical readiness, regulatory compliance burdens, and skepticism toward AI-assisted decision-making. The paper's novelty lies in integrating the analysis of opportunities and challenges, providing a balanced picture supported by quantitative evidence. 
The results indicate that strategic alignment between AI innovation and IS governance frameworks is required to realize AI's full potential. Recommendations for designing ethical, resilient, and efficient AI-augmented IS infrastructures are presented for businesses, policymakers, and system designers. The paper serves both the academic community and industry strategy by laying out a step-by-step framework for the sustainable and secure development of AI within corporate IS ecosystems.</p> 2025-08-18T00:00:00+08:00 Copyright (c) 2025 Yaseen Shareef Mohammed, Dhiraj Kumar Akula, Asif Syed, Gazi Mohammad Moinul Haque, Yeasin Arafat https://www.inlibrary.uz/index.php/tajet/article/view/130517 Conceptual and Applied Aspects of Artificial Intelligence – An Analysis of AI Capabilities, Limitations, and Prospects in Modern Technologies 2025-08-02T14:35:06+08:00 Sergiu Metgher sergiu@theamericanjournals.com <p>This article presents a comprehensive analysis of artificial intelligence and its impact on various aspects of sustainable development. AI is actively utilized to enhance decision-making, automate processes, and optimize numerous fields of activity. However, its integration into critical domains such as sustainable development, public administration, and cultural innovation raises concerns regarding accessibility, inclusivity, and resource redistribution. The study examines key aspects of artificial intelligence, its classification—including narrow and general AI—and the technologies employed, such as machine learning, natural language processing, and computer vision. Special attention is given to AI’s relationship with sustainable development goals and its role in advancing innovative solutions in social and environmental spheres. The article also explores the ethical, social, and cultural consequences of AI implementation, emphasizing the necessity of developing responsible, transparent, and sustainable systems that align with international standards and ensure long-term societal well-being. This research may be of interest to a broad audience of specialists and researchers engaged in fields related to AI development, its applications, and its influence on societal processes.</p> 2025-08-01T00:00:00+08:00 Copyright (c) 2025 Sergiu Metgher https://www.inlibrary.uz/index.php/tajet/article/view/133618 Automating Fixed-Income Index Creation: Lessons Learned and Future Opportunities 2025-08-13T18:18:28+08:00 Tarun Chataraju tchataraju@gmail.com <p>Fixed-income index construction faces significant challenges due to reliance on manual processes that struggle to meet the demands of increasingly complex and volatile financial markets. The global fixed-income market encompasses diverse instruments across government, corporate, municipal, and securitized debt sectors, requiring sophisticated processing capabilities that manual approaches cannot efficiently deliver. Contemporary index construction involves extensive data sourcing from multiple terminal feeds, dealer networks, and regulatory sources, followed by complex normalization processes including currency standardization, credit rating harmonization, and maturity calculations. These manual processes introduce substantial vulnerabilities, including high error rates, processing delays, and scalability constraints that impact operational efficiency and index accuracy. 
Modern workflow orchestration technologies, including Apache Airflow, Dagster, and Prefect, offer transformative solutions by automating previously manual processes through sophisticated task management, fault-tolerant execution, and real-time processing capabilities. Automation implementation demonstrates dramatic improvements in processing speed, error reduction, and operational resilience while enabling resource reallocation toward strategic activities. Advanced artificial intelligence and machine learning technologies present unprecedented opportunities for dynamic index weighting optimization through reinforcement learning algorithms and anomaly detection systems that enhance data quality and market intelligence. The evolution toward automated index construction represents a fundamental transformation in financial market infrastructure, enabling institutions to maintain competitive advantages while meeting regulatory requirements and client expectations in rapidly evolving market environments.</p> 2025-08-13T00:00:00+08:00 Copyright (c) 2025 Tarun Chataraju https://www.inlibrary.uz/index.php/tajet/article/view/133471 Real-time Data Streaming using Kafka, Kinesis, and RabbitMQ 2025-08-12T22:03:49+08:00 Vladyslav Vodopianov vladyslav@theamericanjournals.com <p>In the present work a comprehensive comparative analysis of the three leading platforms for organizing message streaming — Apache Kafka, Amazon Kinesis and RabbitMQ — is performed with the aim of identifying their architectural features, operational strengths and limitations under conditions of peak loads and stringent latency requirements. The study relies on a comprehensive methodological approach, including a systematic review of current scientific publications, the conduct of comparative performance measurements in laboratory settings and the synthesis of practical case studies of integrating the systems under consideration into real IT landscapes. The obtained results demonstrate that a reasoned choice of platform for stream processing depends on a multitude of interrelated factors: the volume of messages processed, the required throughput metrics and maximum response time, the preferred deployment model (on-premises solution, cloud service or their hybrid), the capabilities for seamless integration with existing services and infrastructure, as well as the project’s budgetary constraints. On the basis of the conducted analysis a unified decision-making methodology is proposed for selecting tools for streaming data processing, adapted to the tasks of data engineers, distributed systems architects and researchers of high-performance information platforms. The material is of practical interest to specialists designing fault-tolerant and scalable distributed message queues, as well as to experts in real-time analytics and cloud solution developers seeking to gain a deeper understanding of the architectural schemes and methods for optimizing throughput applied in Kafka, Kinesis and RabbitMQ. 
In addition, the research results may be useful to scientists in the field of distributed computing and the Internet of Things, focusing on the theoretical foundations and practical aspects of constructing reliable event-data pipelines.</p> 2025-08-12T00:00:00+08:00 Copyright (c) 2025 Vladyslav Vodopianov https://www.inlibrary.uz/index.php/tajet/article/view/133470 Best Practices for Leading Front-End Development Teams: Balancing Technical Excellence and Team Growth 2025-08-12T22:03:48+08:00 Stanislav Antipov stanislav@theamericanjournals.com <p>Managing a front-end development team does not concern writing clean code or following rigid processes exclusively — it is a mix of engineering precision and people skills. This paper takes a closer look at how those two elements come together and offers a set of practical approaches drawn from real experience and recent research. Instead of sticking only to the technical side, the study pulls in ideas from agile leadership, team psychology, and modern software practices to give advice that actually fits how front-end teams work today. Key ideas that keep surfacing include shared ownership, creating a safe space for open communication (psychological safety), and leadership styles rooted in service and ethics. Continuous integration and deployment (CI/CD) also plays a big role. What is especially worth noting is how things like code reviews and automated testing — which are usually thought of as purely technical tasks — can double as learning moments and mentoring tools. They offer a chance for developers to support each other, grow together, and build a stronger team culture along the way.</p> 2025-08-12T00:00:00+08:00 Copyright (c) 2025 Stanislav Antipov https://www.inlibrary.uz/index.php/tajet/article/view/133469 Risk Management and Compliance Strategies for Legacy IT Infrastructure 2025-08-12T22:03:47+08:00 Yurii Shevchuk yurii@theamericanjournals.com <p>This article examines strategies for risk management and regulatory compliance in the context of modernizing legacy IT infrastructure. The relevance of the topic arises from the growing need to integrate new technological solutions in environments characterized by limited flexibility, high maintenance costs, and technical debt—factors that contribute to operational, technical, and regulatory risks. The study analyzes the current state of legacy infrastructure, identifies key threats and vulnerabilities, and develops methodological approaches including modular migration, AI-driven analytics, the adoption of cloud technologies, and the integration of advanced security measures to ensure compliance with regulatory frameworks such as GDPR and HIPAA. <br>The scientific novelty lies in proposing a new perspective on integrating strategic planning, change management, and modern security technologies to reduce operational costs and improve business resilience. This perspective emerged through a critical review of existing literature. The author’s hypothesis suggests that a comprehensive approach to modernizing legacy IT infrastructure—grounded in modular transition to cloud solutions and automated security monitoring—will lead to a reduction in risk and an increase in the efficiency of business processes. <br>The findings of this study may be of interest to IT management professionals, risk and compliance officers, and researchers seeking to integrate the latest analytical methods into the evaluation and modernization of inherited IT systems. 
The presented material is also expected to be useful to other scholars developing theoretical models for managing complex IT environments, as well as to practitioners implementing strategies for mitigating operational and regulatory risks amid continuous technological and legal transformation.</p> 2025-08-12T00:00:00+08:00 Copyright (c) 2025 Yurii Shevchuk https://www.inlibrary.uz/index.php/tajet/article/view/133468 Paradigms of Generative Artificial Intelligence in Automating Corporate Code Writing 2025-08-12T22:03:45+08:00 Ankit Agarwal ankit@theamericanjournals.com <p>This paper examines the paradigm shifts in leveraging generative artificial intelligence for automated code generation at the enterprise level. It offers a critical review of prevailing prescriptions for integrating LLM agents into the software development lifecycles of modern enterprises, assessing their impact on team productivity and the new risks they introduce for confidentiality and licensing. The study is particularly timely: organizational adoption of generative AI is advancing rapidly, from simple IDE autocompletion to autonomous agents capable of opening pull requests without a human in the loop, demanding new forms of oversight both organizationally and technically. The novelty of this research lies in its integration of material from scholarly works, industry reports, and case studies, along with lab pilot runs of Copilot and actual DevSecOps implementations, to triangulate the current state and future promise of this technology on a practical business level. Key findings include: a reduction of development cycle time by 50–60% without compromising code quality thanks to the integration of AI agents into IDEs and CI/CD pipelines; a shift of developers’ roles toward architects and reviewers as routine tasks are delegated to digital co‑programmers; and a necessity for phased implementation that accounts for private code protection and compliance with licensing norms. Significant barriers identified include model hallucination management, ensuring the traceability of changes, and adapting organizational culture and regulations to new roles such as prompt designers and AI-agent curators. The article will be of use to IT department heads, software architects, DevSecOps specialists, and researchers in the field of artificial intelligence.</p> 2025-08-12T00:00:00+08:00 Copyright (c) 2025 Ankit Agarwal https://www.inlibrary.uz/index.php/tajet/article/view/133096 From Concierge to Cloud: Reimagining Hospitality Through SaaS-Driven Experiences 2025-08-11T11:20:47+08:00 Vishesh Goel vishesh@theamericanjournals.com <p>Cloud-based Software-as-a-Service (SaaS) solutions are transforming the hospitality sector at a fast rate. They enable services to be delivered 30% faster and boost guest satisfaction by 25% through real-time personalization and automation. This paper examines how SaaS impacts operational efficiency, guest experience, alignment with ESG goals, and strategic agility in the hospitality sector. With increasing customer demands and sustainability regulations, hotel brands are moving away from legacy systems to new, subscription-based solutions that are simple to scale and have a smaller environmental footprint. 
This paper demonstrates, via a literature review, comparisons, and actual cases such as HotelKey, CitizenM, and Marriott, how SaaS in hospitality enables agility, digital inclusion, and ethical conduct in a post-pandemic world.</p> 2025-08-07T00:00:00+08:00 Copyright (c) 2025 Vishesh Goel https://www.inlibrary.uz/index.php/tajet/article/view/133095 The Role of Open-Source Contributions in the Development of the Frontend Ecosystem 2025-08-11T11:02:59+08:00 Karen Sarkisyan karen@theamericanjournals.com <p>This paper examines how open source shapes the frontend ecosystem by accelerating progress and setting uniform client-side web standards. Its importance comes from massive growth—more than 5.2 billion actions on GitHub and over 2.5 million packages on npm—the community’s move toward TypeScript, and React's supremacy. This work defines a broader model of open-source contribution that covers not just code but also CI/CD setups, plugins, translated documentation, and meetup communities, while examining how these contributions affect the lifecycle of frontend tools and standards. Its novelty lies in bringing together data from GitHub Octoverse, the npm Registry, and corporate OSPO reports using a method that unifies descriptive statistics, contribution typologies, network analysis of pull-request workflows, assessment of ecosystem strength through bus-factor measures, and maintainer burnout metrics. The proposed three-tier participation model—individual maintainers, corporate contributors, and institutional bodies (OpenJS Foundation, TC39)—enables the description of mechanisms for idea generation, resource scaling, and API standardization. Key findings demonstrate that open source contributions deliver unprecedented speed and flexibility in adopting innovations: the median pull-request merge time is 9 hours, large-scale events such as Hacktoberfest attract tens of thousands of newcomers, and the “Vite + Vitest + Storybook” toolchain sets the DevEx standard, enhancing the network effect of package publication and consumption. It also exposes systemic risks: 60% of maintainers report burnout and 65% of projects have a low bus factor, underscoring the need for sustainable funding, OSPO initiatives, and mentorship programs to maintain ecosystem health. This article will help OSS project leaders, DevEx specialists, and OSPO teams strategize long-term support and scaling for open frontend tools.</p> 2025-08-10T00:00:00+08:00 Copyright (c) 2025 Karen Sarkisyan https://www.inlibrary.uz/index.php/tajet/article/view/131657 Revolutionizing Last-Mile Logistics: Integrating Autonomous Vertical Delivery Systems in High-Rise Urban Environments 2025-08-05T23:24:24+08:00 Stanislav Markovich stanislav@theamericanjournals.com <p>This article examines the integration of autonomous vertical delivery systems (AVDS) in high-rise urban buildings, discussing the last-mile challenges posed by reliance on traditional elevator systems, which increase operational costs and CO₂ emissions. The study’s relevance stems from the boom in e-commerce and online food delivery: up to 85% of a courier’s delivery cycle time can be spent on vertical movement inside a building, so in high-rise settings the benefits of digitally optimized routing are largely lost, causing substantial economic and reputational losses for both developers and delivery services. 
The novelty of this research lies in proposing full automation of vertical delivery based on a patented gateway for receiving couriers’ carts, a modular silent conveyor, and integrated microlockers on each floor, eliminating elevator frustration and ensuring end-to-end digital parcel tracking. The solution is easily scalable to any high-rise parameters and integrates with the IT infrastructure of major delivery operators. Following AVDS deployment, the courier time inside the building is reduced to less than one minute, empty elevator trips decrease by one-third, and the building’s carbon footprint is reduced by 3–4%. Econometric analysis reveals an 8.6% increase in tenants’ lease-renewal willingness and a 2.43 percentage point reduction in vacancy, while apartment values rise by USD 4,000–6,000 due to improved Environmental, Social, and Governance (ESG) metrics. This article will help developers, property managers, and logistics operators who want to cut costs and boost service quality in tall building projects.</p> 2025-08-05T00:00:00+08:00 Copyright (c) 2025 Stanislav Markovich https://www.inlibrary.uz/index.php/tajet/article/view/130518 Explainable AI (XAI) in Business Intelligence: Enhancing Trust and Transparency in Enterprise Analytics 2025-08-02T14:35:16+08:00 Indraneel Madabhushini indraneel@theamericanjournals.com <p>The integration of Artificial Intelligence in Business Intelligence systems has fundamentally transformed enterprise analytics capabilities, enabling sophisticated pattern recognition, predictive modeling, and automated decision-making processes. However, the opaque nature of many AI algorithms presents significant challenges in business contexts where transparency, accountability, and regulatory compliance remain paramount concerns. This comprehensive technical review examines the role of Explainable AI in addressing these critical challenges, providing detailed insights into current methodologies, implementation frameworks, and practical applications across enterprise analytics environments. The content explores theoretical foundations distinguishing interpretability from explainability, emphasizing their crucial roles for different stakeholder groups within organizations. Technical frameworks encompass model-agnostic and model-specific methods, including LIME, SHAP, and attention mechanisms, alongside implementation tools ranging from open-source libraries to enterprise platforms. Real-world applications demonstrate XAI effectiveness across financial services, healthcare, retail, manufacturing, and human resources sectors, highlighting regulatory compliance benefits and stakeholder trust improvements. Current challenges include computational complexity, explanation fidelity, multi-modal data integration, and scalability issues, while emerging trends focus on automated explanation generation, interactive interfaces, and causal reasoning methods. Regulatory and ethical considerations address compliance evolution, bias detection, and fairness metrics, while technical advancements explore foundation model interpretability and privacy-preserving techniques.</p> 2025-08-01T00:00:00+08:00 Copyright (c) 2025 Indraneel Madabhushini
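The final abstract above surveys model-agnostic explanation methods such as LIME and SHAP for enterprise analytics. As an illustration only, not code from the reviewed article, the following minimal Python sketch shows how SHAP attributions are typically produced for a tree-based model; the synthetic data, model choice, and variable names are assumptions made for the example.

```python
# Minimal illustrative sketch of SHAP-based explanation for a tree model.
# All data, names, and the model choice are synthetic/hypothetical,
# not taken from the article under review.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(500, 4))                    # synthetic "enterprise" features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic binary outcome

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes per-feature SHAP attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Each row gives one prediction's attributions: positive values push the score
# up, negative values push it down, and they sum to (model output - base value).
print(np.round(shap_values, 3))
```

In a business intelligence dashboard, attributions of this kind would typically be surfaced alongside each automated decision so that analysts can see which inputs drove it, which is the trust-and-transparency use case the abstract describes.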