International Journal of Signal Processing, Embedded Systems and VLSI Design
https://www.inlibrary.uz/index.php/ijvsli/issue/feed

The aim of the International Journal of Signal Processing, Embedded Systems, and VLSI Design is to provide a comprehensive platform for researchers, academics, and industry professionals to disseminate and exchange cutting-edge research findings, advancements, and innovations in the interdisciplinary fields of signal processing, embedded systems, and VLSI (Very Large Scale Integration) design. The journal strives to foster collaboration among experts from academia and industry, facilitating the exchange of ideas, methodologies, and practical applications.

MACsec on 400G Links: Hardware Acceleration for Financial Networks
Ashutosh Chandra Jha (ashutosh@academicpublishers.org)
https://www.inlibrary.uz/index.php/ijvsli/article/view/119311
Published 2025-07-07

In financial networks, the explosive growth of high-frequency trading (HFT), market data feeds, and real-time clearing platforms has heightened the demand for data transmission that is both ultra-low-latency and secure. With 400G Ethernet becoming the core of modern financial infrastructure, implementing robust encryption without degrading deterministic performance is a complex technical challenge. This article discusses the deployment of MACsec on 400G links using hardware acceleration, including FPGAs, SmartNICs, and ASICs. The study compares software-based MACsec with hardware-accelerated alternatives through emulation, simulation, and benchmarking in lab environments that mimic real-world financial traffic, using metrics such as latency, jitter, CPU utilization, throughput, and power efficiency. Hardware offloading significantly reduces encryption-induced latency, keeps secure communication within microsecond bounds, and improves system scalability, a crucial property for compliance-sensitive financial applications. The article proposes a comprehensive architecture that integrates with both legacy and next-generation data center fabrics, and offers deployment recommendations, key lifecycle management principles, and a guide to component selection tailored to operational needs. It also highlights emerging trends such as post-quantum MACsec hardware and AI-driven encrypted traffic visibility. For financial institutions seeking to balance security and speed in a world of terabit-scale networking, this research offers valuable insights.

Copyright (c) 2025 Ashutosh Chandra Jha
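To make the per-frame cost concrete, the sketch below is a minimal software model of MACsec (IEEE 802.1AE) frame protection: the AES-GCM work that the FPGA/SmartNIC/ASIC offload described in the abstract removes from the CPU path. The key, SCI, addresses, frame size, and timing loop are illustrative assumptions, not taken from the article.

```python
# Minimal software model of MACsec (IEEE 802.1AE) frame protection: the
# per-frame AES-GCM work that hardware offload removes from the CPU path.
# Key, SCI, addresses, and frame size are illustrative assumptions.
import os
import struct
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=256)  # stand-in for a Secure Association Key (SAK)
SCI = os.urandom(8)                        # Secure Channel Identifier (8 bytes)

def protect_frame(aesgcm: AESGCM, pn: int, dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Encrypt one Ethernet payload roughly as a MACsec SecY would."""
    # SecTAG: EtherType 0x88E5, TCI/AN byte (E and C bits set), SL = 0,
    # 32-bit packet number, 8-byte SCI -- 16 bytes total.
    sectag = struct.pack("!HBBI8s", 0x88E5, 0x0C, 0, pn, SCI)
    iv = SCI + struct.pack("!I", pn)       # 96-bit GCM IV = SCI || packet number
    aad = dst + src + sectag               # headers are authenticated, not encrypted
    return dst + src + sectag + aesgcm.encrypt(iv, payload, aad)

aesgcm = AESGCM(KEY)
dst, src, payload = b"\x02" * 6, b"\x04" * 6, b"\x00" * 1500

t0 = time.perf_counter_ns()
for pn in range(1, 10_001):
    protect_frame(aesgcm, pn, dst, src, payload)
print(f"software path: {(time.perf_counter_ns() - t0) / 10_000:.0f} ns/frame")
```

Whatever figure this prints on a given machine, it shows why per-frame crypto on a general-purpose core competes directly with the microsecond latency budgets the abstract describes, and why moving the AES-GCM stage into line-rate hardware is attractive at 400G.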
Reducing Latency and Enhancing Accuracy in LLM Inference through Firmware-Level Optimization
Reena Chandra (reena@academicpublishers.org)
https://www.inlibrary.uz/index.php/ijvsli/article/view/127060
Published 2025-07-21

Many edge and embedded platforms now rely on Large Language Models (LLMs) to handle natural language processing on modest hardware. Slow inference, hardware constraints, and the trade-off between accuracy and efficiency still make real-time operation difficult. This research analyzes firmware-level optimizations that address these constraints, with the primary goal of reducing latency without any loss in model accuracy. The study develops a framework that combines targeted firmware operations, scheduled memory accesses, and microarchitecture-specific instructions. We use 4-bit and 8-bit quantized operations, memory-access prediction, and instruction scheduling tuned for ARM NEON and x86 AVX hardware. For validation, a dedicated hardware-in-the-loop (HIL) framework runs tests in real time, combining fault injection with memory, accuracy, and latency tracking. We observe that our approach achieves major improvements in execution time and energy use while maintaining over 95% of the original model's performance. This work provides practical guidance for developers and system architects deploying LLMs in applications that require fast responses.

Copyright (c) 2025 Reena Chandra
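As a concrete illustration of the 8-bit path this abstract alludes to, here is a minimal sketch (not from the paper) of symmetric int8 quantization with int32 accumulation, the pattern that ARM NEON dot-product (SDOT) and x86 AVX-VNNI instructions execute natively. Shapes, seeds, and scales are illustrative assumptions.

```python
# Minimal sketch of an 8-bit quantized matrix-vector product: int8 storage,
# int32 accumulation (the pattern NEON SDOT / AVX-VNNI run natively), then
# dequantization back to float. All shapes and values are illustrative.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization to int8."""
    scale = max(float(np.max(np.abs(x))), 1e-8) / 127.0  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256)).astype(np.float32)  # weight matrix
x = rng.standard_normal(256).astype(np.float32)         # activation vector

qW, sW = quantize_int8(W)
qx, sx = quantize_int8(x)

# Widen to int32 before the multiply-accumulate to avoid int8 overflow;
# this is the widening step the vector units perform in hardware.
acc = qW.astype(np.int32) @ qx.astype(np.int32)
y = acc.astype(np.float32) * (sW * sx)                  # dequantize

print("max abs error vs. fp32:", float(np.max(np.abs(y - W @ x))))
```

A 4-bit path follows the same pattern with packed nibbles and an extra unpack step; on embedded parts the win comes as much from halved memory traffic as from the arithmetic itself, which is why quantization pairs naturally with the memory-access scheduling the abstract describes.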