IEEE Military Communications Conference
28 October – 1 November 2024 // Washington, DC, USA
C5I Technologies for Military and Intelligence Operations Today and Tomorrow

Distributed AI/ML at the Resource-Constrained Edge

Workshop theme
A massive amount of data is generated at edge networks by emerging devices such as self-driving cars, drones, robots, smartphones, wireless sensors, smart meters, and health monitoring devices. This vast data is expected to be processed via artificial intelligence and machine learning (AI/ML) algorithms, which is extremely challenging with the existing centralized cloud. For example, self-driving cars generate around 10 GB of data per mile [1]. Transmitting such massive data to the centralized cloud and expecting timely processing is not realistic given the limited bandwidth between the edge network and the centralized cloud [1–3]. An approach with the potential to address a number of problems in this space is edge computing, where AI/ML algorithms are processed in a distributed fashion at the end devices, with possible help from edge servers and the cloud.

The feasibility and scalability of AI/ML algorithms at the computing edge face three principal challenges: (i) communication and computation cost, (ii) the heterogeneity and time-varying nature of edge devices, and (iii) heightened security and privacy concerns given the vulnerability of edge devices. In this context, research is needed to tailor AI/ML mechanisms to edge computing so that they securely harvest heterogeneous resources, including computing power, storage, battery, and networking resources, scattered across end devices, edge servers, and the cloud, with a minimum amount of communication. There is increasing interest in addressing these problems for AI/ML mechanisms over edge networks by exploiting techniques including communication-efficient distributed AI/ML, conditional computation, and decentralized learning. These approaches form the main themes of our workshop, summarized below:

Communication-Efficient Distributed AI/ML. The traditional approach to distributing AI/ML is data-distributed learning, which partitions and distributes data across workers; each worker updates the AI/ML model independently of the other workers [4], and the updates are then combined. Although this approach has good convergence properties, it incurs a high communication cost, which puts a strain especially on edge systems and increases delay. An emerging approach is model-distributed learning [5–7], which distributes the layers of an AI/ML model across multiple workers and has been shown to reduce communication cost in large-scale computing systems. Similar mechanisms should be developed for AI/ML at the edge, where transmission bandwidth is much scarcer. Concurrently, there is a growing body of work studying information-theoretic techniques for reducing the communication cost of iterative ML algorithms and characterizing the fundamental theoretical limits. For instance, recent work has developed coding-theoretic techniques for minimizing communication bottlenecks encountered during the training phase of distributed machine learning for gradient descent algorithms [8–10], as well as for generic modern distributed machine learning pipelines, e.g., MapReduce [11–13] and graph analytics [14]. Closely connected are recent approaches to gradient quantization, particularly for deep neural networks, based on fundamental information theory and coding concepts [15, 16].
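To illustrate the gradient quantization idea in its simplest form, the following NumPy sketch (function names are ours for illustration, not taken from the cited works) compresses a worker's gradient to one sign bit per coordinate plus a single scale factor, instead of a 32-bit float per coordinate:

```python
import numpy as np

def compress_sign(grad):
    # Transmit one bit per coordinate plus one shared scale
    # (the mean magnitude), rather than a full float per coordinate.
    scale = float(np.mean(np.abs(grad)))
    return np.sign(grad), scale

def decompress_sign(signs, scale):
    # Reconstruct a coarse gradient estimate at the receiver.
    return signs * scale

rng = np.random.default_rng(0)
grad = rng.normal(size=1_000)            # a worker's local gradient
signs, scale = compress_sign(grad)
estimate = decompress_sign(signs, scale)
```

The estimate preserves the direction of every coordinate while cutting the payload by roughly the bit width of a float; more refined schemes in the literature add error feedback or multi-level quantization on top of this basic template.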

Conditional Computation at the Edge. Conditional computation, or dynamic neural networks, refers to adapting a network's operations or parameters depending on the specific input under consideration. This is in contrast to static neural networks, where every input undergoes the same sequence of operations. We solicit contributions that highlight the benefits of conditional architectures in the context of edge networks; for example, conditional computation can provide several advantages relevant to edge networks, including computational and communication efficiency. Also of interest to this workshop are scenarios where the conditioning is with respect to the heterogeneous and time-varying resources of the edge devices, which potentially change more slowly than the inputs.
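One common instance of conditional computation is an early-exit network: auxiliary classifier heads attached to intermediate layers let a confident prediction leave the network before the remaining, more expensive layers run. A minimal sketch with random toy weights (all shapes and names are our own assumptions, using prediction entropy as the confidence signal):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    # Low entropy indicates a confident prediction.
    return float(-np.sum(p * np.log(p + 1e-12)))

def early_exit_forward(x, blocks, heads, threshold=0.3):
    # Run blocks in sequence; after each block an auxiliary head
    # predicts the label, and the forward pass stops as soon as the
    # prediction is confident enough, skipping the remaining blocks.
    h = x
    for depth, (W, H) in enumerate(zip(blocks, heads), start=1):
        h = np.tanh(W @ h)
        p = softmax(H @ h)
        if entropy(p) < threshold:
            break
    return p, depth

rng = np.random.default_rng(0)
dim, classes, n_blocks = 8, 4, 3
blocks = [rng.normal(size=(dim, dim)) for _ in range(n_blocks)]
heads = [rng.normal(size=(classes, dim)) for _ in range(n_blocks)]
p, depth = early_exit_forward(rng.normal(size=dim), blocks, heads)
```

On an edge device, `threshold` could itself be conditioned on battery level or link quality rather than fixed, matching the resource-conditioned scenarios described above.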

Decentralized and Distributed Learning at the Edge. Traditional distributed learning paradigms such as federated learning rely on a central server to collect model information from multiple clients. One disadvantage of such systems is that a failure of the aggregator terminates the entire learning process. Decentralized learning algorithms such as gossip or random walk learning eliminate the need for a central entity and provide a robust alternative for learning on general networks. However, several challenges remain unresolved for decentralized learning. For example, model convergence tends to be poorer, and a lack of network connectivity at the edge can greatly hurt the performance of random-walk-type learning algorithms. The workshop solicits original work addressing the challenges of decentralized learning algorithms for edge applications. Distributed algorithms in the traditional federated or related sense will also be considered.
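The gossip idea mentioned above can be sketched in a few lines: nodes on a communication graph repeatedly average parameters with a random neighbor, and the models drift toward the network-wide average without any central aggregator. A toy version on a ring topology (topology and names are our illustrative choices):

```python
import numpy as np

def gossip_round(models, edges, rng):
    # One round of randomized gossip: a random edge (i, j) activates
    # and the two endpoints average their model parameters.
    i, j = edges[rng.integers(len(edges))]
    avg = 0.5 * (models[i] + models[j])
    models[i] = avg.copy()
    models[j] = avg.copy()

rng = np.random.default_rng(1)
n = 5                                   # nodes in a ring network
models = [rng.normal(size=4) for _ in range(n)]
edges = [(k, (k + 1) % n) for k in range(n)]
target = np.mean(models, axis=0)        # pairwise averaging preserves the mean

for _ in range(200):
    gossip_round(models, edges, rng)
```

After enough rounds every node holds (approximately) the global average. The paragraph's caveat is visible here too: if an edge is removed and the ring disconnects, the two components converge to different averages.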

Security and Privacy at the Edge. Edge devices can be vulnerable to security attacks given their limited computation capabilities and their openness to interacting with other networks, such as IoT networks or the Internet. Traditional cryptographic methods for ensuring security at the edge and the privacy of the data there may not be viable, as they can be very taxing on computation resources. This has motivated recent work [17–20] on designing secure algorithms with low computational complexity that are amenable to implementation on edge devices, which are typically characterized by low computation power, bandwidth, battery, and storage.
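As one example of a low-complexity privacy mechanism in this spirit, pairwise additive masking (a simplified form of the secure aggregation idea; this sketch omits the key agreement and dropout handling a real protocol needs) hides each client's update from the server while leaving the aggregate intact:

```python
import numpy as np

def pairwise_masks(n, dim, rng):
    # Each pair (i, j) shares a random mask; i adds it and j subtracts
    # it, so all masks cancel when the server sums the updates.
    masks = [np.zeros(dim) for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

rng = np.random.default_rng(2)
n, dim = 4, 3
updates = [rng.normal(size=dim) for _ in range(n)]
masks = pairwise_masks(n, dim, rng)
masked = [u + m for u, m in zip(updates, masks)]

# The server sees only masked updates, yet their sum is the true sum.
server_sum = np.sum(masked, axis=0)
true_sum = np.sum(updates, axis=0)
```

The per-device cost is just vector additions, which is why masking-style schemes are attractive on hardware that cannot afford heavyweight cryptography.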

Motivation for Workshop: Despite the rapidly growing body of scientific work on implementing AI/ML mechanisms at edge networks, the area is still nascent, and much of the research is conducted by researchers whose instincts come from classical applications such as communications and networking. By exposing the challenges and ideas of modern AI/ML systems to the edge computing community, this workshop promises to provide a unique opportunity to shape the growth of the field and maximize the impact of bringing AI/ML together with edge computing. Furthermore, the workshop offers AI/ML researchers a unique opportunity to encounter a diverse set of new, relevant edge computing, networking, and distributed and decentralized learning ideas that cannot be found in textbooks or other standard venues.

This workshop aims to bring together researchers, developers, and practitioners in the AI/ML and edge computing areas to identify common interests in solving problems related to the deployment of AI/ML algorithms over resource-constrained computing platforms. For industry participants, we intend to create a forum to communicate which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The US Army Research Laboratory is very much interested in bringing attention to Army-specific AI/ML challenges through this workshop, with the potential to collaborate with academic and industry partners. Overall, the workshop will provide an opportunity to share the most recent and innovative work at the intersection of AI/ML and edge computing, and to discuss open problems and relevant approaches.

Format of the Workshop
The day-long workshop will involve around 2–3 invited talks, each 30 minutes long. We will pair the invited talks with 15-minute discussion sessions on the related topic; these discussions will be led and moderated by the organizers and aim to engage the audience with the speaker. The talks will be complemented by around 6 short oral presentations selected from the contributed works. Among the papers submitted to the workshop, several will be selected for a 1.5-hour poster session.