Details

Review and comparative evaluation of resource-adaptive collaborative training for heterogeneous edge devices
Radovič, Boris (Author), Canini, Marco (Author), Pejović, Veljko (Author)

PDF - Presentation file, Download (1.42 MB)
MD5: 2F04B57DBBF3A2B6ADF278C437DE4E48
URL - Source URL: https://dl.acm.org/doi/10.1145/3708983

Abstract
Growing concerns about centralized mining of personal data threaten to stifle further proliferation of machine learning (ML) applications. Consequently, a recent trend in ML training advocates for a paradigm shift – moving the computation of ML models from a centralized server to a federation of edge devices owned by the users whose data is to be mined. Though such decentralization aims to alleviate concerns related to raw data sharing, it introduces a set of challenges due to the hardware heterogeneity among the devices possessing the data. The heterogeneity may, in the most extreme cases, impede the participation of low-end devices in the training or even prevent the deployment of the ML model to such devices. Recent research in distributed collaborative machine learning (DCML) promises to address the issue of ML model training over heterogeneous devices. However, the actual extent to which the issue is solved remains unclear, especially as an independent investigation of the proposed methods’ performance in realistic settings is missing. In this paper, we present a detailed survey and an evaluation of algorithms that aim to enable collaborative model training across diverse devices. We explore approaches that harness three major strategies for DCML, namely Knowledge Distillation, Split Learning, and Partial Training, and we conduct a thorough experimental evaluation of these approaches on a real-world testbed of 14 heterogeneous devices. Our analysis compares algorithms based on the resulting model accuracy, memory consumption, CPU utilization, network activity, and other relevant metrics, and provides guidelines for practitioners as well as pointers for future research in DCML.
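
The abstract names Knowledge Distillation, Split Learning, and Partial Training as the three DCML strategies under evaluation. As a rough, hypothetical illustration of the last of these (not the algorithms benchmarked in the paper), the Python sketch below shows one width-scaled partial-training round in the spirit of HeteroFL-style sub-model extraction: a low-end device receives and trains only a leading slice of each global weight matrix, and the server averages the returned slices back into the full model. All function names, the slicing rule, and the aggregation logic are illustrative assumptions.

```python
# Illustrative sketch only: one width-scaled Partial Training round.
# Names (extract_submodel, aggregate, capacity) are hypothetical and
# not taken from the surveyed algorithms.
import numpy as np

def extract_submodel(global_w, capacity):
    """Slice the leading `capacity` fraction of each weight matrix."""
    sub = {}
    for name, w in global_w.items():
        rows = max(1, int(w.shape[0] * capacity))
        cols = max(1, int(w.shape[1] * capacity))
        sub[name] = w[:rows, :cols].copy()
    return sub

def aggregate(global_w, client_updates):
    """Average each returned sub-matrix back into its slice of the global model."""
    new_w = {name: w.copy() for name, w in global_w.items()}
    sums = {name: np.zeros_like(w) for name, w in global_w.items()}
    counts = {name: np.zeros_like(w) for name, w in global_w.items()}
    for update in client_updates:
        for name, w in update.items():
            r, c = w.shape
            sums[name][:r, :c] += w
            counts[name][:r, :c] += 1
    for name in new_w:
        mask = counts[name] > 0  # only positions some client actually trained
        new_w[name][mask] = sums[name][mask] / counts[name][mask]
    return new_w

# Toy round: one full-capacity client and one half-capacity (low-end) client.
rng = np.random.default_rng(0)
global_w = {"layer1": rng.normal(size=(8, 8))}
updates = []
for capacity in (1.0, 0.5):
    sub = extract_submodel(global_w, capacity)
    sub = {n: w - 0.01 * w for n, w in sub.items()}  # stand-in for local SGD
    updates.append(sub)
global_w = aggregate(global_w, updates)
```

In a real deployment the capacity fraction would be chosen per device from its memory and compute budget, which is the heterogeneity dimension the paper's evaluation varies across its 14-device testbed.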

Language:English
Keywords:federated learning, split learning, distributed collaborative learning, ubiquitous and mobile computing, device heterogeneity
Work type:Article
Typology:1.01 - Original Scientific Article
Organization:FRI - Faculty of Computer and Information Science
Publication status:Published
Publication version:Version of Record
Year:2025
Number of pages:35
Numbering:Vol. 10, no. 1, art. 4
PID:20.500.12556/RUL-166789
UDC:004
ISSN on article:2376-3639
DOI:10.1145/3708983
COBISS.SI-ID:221443843
Publication date in RUL:24.01.2025
Views:459
Downloads:228

Record is a part of a journal

Title:ACM Transactions on Modeling and Performance Evaluation of Computing Systems
Shortened title:ACM trans. model. perform. eval. comput. syst.
Publisher:Association for Computing Machinery, Inc.
ISSN:2376-3639
COBISS.SI-ID:68955395

Licences

License:CC BY 4.0, Creative Commons Attribution 4.0 International
Link:http://creativecommons.org/licenses/by/4.0/
Description:This is the standard Creative Commons license that gives others maximum freedom to do what they want with the work as long as they credit the author.

Secondary language

Language:Slovenian
Keywords:zvezno učenje (federated learning), porazdeljeno učenje (distributed learning), vseprisotno računalništvo (ubiquitous computing), mobilno računalništvo (mobile computing)

Projects

Funder:Other - Other funder or multiple funders
Project number:ORA-CRG2021-4699

Funder:ARRS - Slovenian Research Agency
Project number:J2-3047
Name:Kontekstno-odvisno približno računanje na mobilnih napravah (Context-dependent approximate computing on mobile devices)
