A Review of Large Language Models in Edge Computing: Applications, Challenges, Benefits, and Deployment Strategies

Venkata Srinivas Kompally

Large Language Models (LLMs) have achieved remarkable success in natural language processing, but deploying these powerful models on edge computing devices presents unique challenges. This paper reviews the state of LLMs in edge computing, focusing on four key aspects: their emerging applications across various sectors, the technical challenges of running LLMs on resource-constrained edge devices, the potential benefits of bringing LLM capabilities closer to data sources, and effective deployment strategies for enabling LLMs at the edge. We discuss how edge deployment of LLMs can offer low-latency, privacy-preserving intelligent assistance across a range of domains, including healthcare, IoT, and industrial automation. We also examine techniques and architectures that can overcome the limitations of edge devices, such as cloud-edge collaboration, federated learning, model compression, and on-device inference. By examining current practices and their trade-offs, this review identifies practical ways to integrate LLMs into edge environments and provides guidance for future research to address the remaining issues in this rapidly expanding field.
