Högskolan i Skövde

his.se Publications
Distributed and federated learning of support vector machines and applications
University of Skövde, School of Informatics. University of Skövde, Informatics Research Environment. Data Science and AI division, Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, Sweden. (Skövde Artificial Intelligence Lab (SAIL))
ORCID iD: 0000-0003-0669-9978
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Machine Learning (ML) has achieved remarkable success in solving classification, regression, and related problems over the past decade. In particular, the exponential growth of digital data makes using ML inevitable and necessary to exploit the wealth of information hidden inside the data. However, this poses new challenges for traditional serial learning methods concerning scalability, as the learning process becomes slow, if it is feasible at all. Distributed and parallel learning are natural approaches for improving the performance of ML algorithms in terms of running time. Support Vector Machines (SVMs) are successful and popular supervised machine learning models that enjoy the availability of parallel and distributed computational approaches to tackle the challenges arising from big data. However, efficient parallel and distributed implementation of SVMs is a complex endeavour. To make matters worse, distributed learning of SVMs may raise privacy concerns, particularly for models built from data distributed across multiple nodes where, due to confidentiality of data, intellectual property interests, or other reasons, the information cannot easily be shared. Despite the large amount of prior work, the problems regarding learning SVMs on big data are not fully solved. Therefore, in this dissertation, we try to shed light on efficient parallel SVM implementations and on how they can be further improved.

This dissertation is based on a collection of publications. The research presented here addresses problems in parallel and distributed computing of SVMs through four research questions, and its key contribution is to answer these questions through five research articles. For the first research question, we explore available parallel approaches for learning SVMs for large-scale problems. We investigate important factors such as algorithmic approaches, HPC tools, and the strategies and heuristics used for effective parallel SVM implementations. This helped identify the state-of-the-art parallel SVMs and their pros and cons, and led to suggestions for potential avenues for future studies. We conclude that it is the responsibility of the user to make judicious choices to balance the trade-offs. To address the second research question, we explore the impact of changes in the problem size and in the important SVM parameters that lead to significant performance improvements. It turns out that the problem size affects the optimal choice of the important SVM parameters. In addition, we show the existence of a threshold on the number of cores after which the training time does not improve. The third research question investigates the effect of network topology on the performance of network-based distributed SVMs in terms of convergence. The three key contributions are to 1) show the effect of the expansion property and the connectivity of the underlying communication network on the convergence of the algorithm, 2) present a preferable network topology, and 3) supply an implementation that makes the theoretical advances practically available. The results suggest that graphs with higher node degrees and larger spectral gaps, and thus higher connectivity, exhibit accelerated convergence. The fourth research question investigates federated learning of SVMs, in which data privacy is protected under limited and private communication between agents. The key contribution is incorporating differential privacy into the learning procedure.

The results show that the learning process 1) respects data privacy, 2) achieves accuracy comparable to that of the non-private algorithm, and 3) yields tight empirical privacy guarantees after convergence. Finally, we combine all the contributions of the dissertation and offer recommendations for implementing an efficient privacy-preserving framework for parallel computing of SVMs for large-scale problems.

Abstract [sv]

Machine Learning (ML) has seen remarkable success over the past decade, among other things in solving classification and regression problems. In particular, given the exponential growth of digital data, using ML is inevitable and necessary in order to exploit the wealth of information hidden in the data. However, this poses new challenges for traditional serial learning methods in terms of scalability, since the learning process becomes slow, if it is feasible at all. Distributed and parallel learning are promising approaches for improving the performance of ML algorithms in terms of running time. Support Vector Machines (SVMs) are successful and popular supervised machine learning models that benefit from parallel and distributed computing to handle the challenges arising from big data. However, efficient parallel and distributed implementation of SVMs is a complex problem. To make matters worse, distributed learning of SVMs can raise privacy concerns, particularly for models built from data distributed across multiple nodes where the information cannot easily be shared due to data confidentiality, intellectual property interests, or other reasons. Despite the large body of prior work, the problems of learning SVMs on big data are not fully solved; in this thesis we therefore try to shed light on efficient parallel SVM implementations and on how they can be improved further. This thesis is based on a collection of publications. The research presented in the thesis addresses problems in parallel and distributed computing of SVMs through four research questions, and its main contribution is to answer these research questions through five research articles. For the first research question, we explore available parallel approaches for learning SVMs for large-scale problems. We investigate important factors such as algorithmic approaches, HPC tools, and the strategies and heuristics used for effective parallel SVM implementations. All of this helped identify the state-of-the-art parallel SVMs and their pros and cons, and led to suggestions for potential avenues for future studies. We then conclude that it is the user's responsibility to make judicious choices to balance the trade-offs. To address the second research question, we explore the effects of changes in the problem size and in the important SVM parameters that lead to significant performance improvements. It turns out that the problem size affects the optimal choice of the important SVM parameters. In addition, we show that there is a threshold on the number of cores after which the training time does not improve. The third research question investigates the effect of network topology on the performance of network-based distributed SVMs in terms of convergence. The three key contributions are to 1) show the effect of the expansion property and the connectivity of the underlying network on the convergence of the algorithm, 2) present a preferable network topology, and 3) provide an implementation that makes the theoretical advances practically available. The results suggest that graphs with higher degrees and larger spectral gaps, and thus higher connectivity, exhibit accelerated convergence. The fourth research question investigates federated learning of SVMs, in which data privacy is protected under limited and private communication between agents. The key contribution is incorporating differential privacy into the learning procedure.
The results show that the learning process 1) respects data privacy, 2) achieves accuracy comparable to that of the non-private algorithm, and 3) yields tight empirical privacy guarantees after convergence. Finally, we combine all the contributions of the thesis and give recommendations for implementing an efficient privacy-preserving framework for parallel computing of SVMs for large-scale problems.
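Both abstracts centre on consensus-based, privacy-aware distributed training of SVMs with ADMM. As a rough illustration of that decomposition, and not the thesis implementation (whose algorithms are specified in the papers listed below), the following minimal sketch trains a linear soft-margin SVM by global-consensus ADMM over simulated nodes; all variable names and parameter values are illustrative assumptions.

```python
# Minimal sketch of global-consensus ADMM for a linear soft-margin SVM,
# with the data split across simulated "nodes". Illustrative only; not the
# thesis implementation. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data, split across 4 nodes.
n, d, n_nodes = 400, 5, 4
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))
parts = np.array_split(rng.permutation(n), n_nodes)

rho, C = 1.0, 1.0                              # ADMM penalty and SVM cost

def local_update(Xi, yi, z, u, w, inner=25, lr=0.01):
    """Approximately solve the node subproblem
       min_w  C * sum(hinge) + (rho/2) * ||w - z + u||^2  by subgradient steps."""
    for _ in range(inner):
        mask = yi * (Xi @ w) < 1               # points violating the margin
        grad = -C * (yi[mask, None] * Xi[mask]).sum(axis=0) + rho * (w - z + u)
        w = w - lr * grad
    return w

w = np.zeros((n_nodes, d))                     # local models, one per node
u = np.zeros((n_nodes, d))                     # scaled dual variables
z = np.zeros(d)                                # shared consensus model

for _ in range(50):
    for i, idx in enumerate(parts):            # nodes share models, not data
        w[i] = local_update(X[idx], y[idx], z, u[i], w[i])
    # z-update: argmin_z (1/2)||z||^2 + (rho/2) * sum_i ||w_i - z + u_i||^2
    z = rho * (w + u).sum(axis=0) / (1.0 + n_nodes * rho)
    u += w - z                                 # dual ascent

print(f"training accuracy of consensus model: {np.mean(np.sign(X @ z) == y):.3f}")
```

Each node exchanges only its local model, never its raw data; the federated-learning contribution of the thesis adds differential privacy on top of exactly this kind of exchange.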

Place, publisher, year, edition, pages
Skövde: University of Skövde, 2022, p. xv, 180.
Series
Dissertation Series ; 44
National Category
Computer Sciences; Computer Systems; Other Computer and Information Science
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
URN: urn:nbn:se:his:diva-21776; ISBN: 978-91-984919-8-2 (print); OAI: oai:DiVA.org:his-21776; DiVA id: diva2:1693851
Public defence
2022-09-30, G110, Högskolan i Skövde, Skövde, 14:00 (English)
Available from: 2022-09-08. Created: 2022-09-08. Last updated: 2022-09-08. Bibliographically approved.
List of papers
1. Parallel Computing of Support Vector Machines: A Survey
2019 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 51, no. 6, p. 123:1-123:38, article id 123. Article, review/survey (Refereed). Published.
Abstract [en]

The immense amount of data created by digitalization requires parallel computing for machine-learning methods. While there are many parallel implementations for support vector machines (SVMs), there is no clear suggestion for every application scenario. Many factors, including the optimization algorithm, problem size and dimension, kernel function, parallel programming stack, and hardware architecture, impact the efficiency of implementations. It is up to the user to balance these trade-offs, particularly between computation time and classification accuracy. In this survey, we review the state-of-the-art implementations of SVMs, discuss their pros and cons, and suggest possible avenues for future research.
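As a small, hedged illustration of the time/accuracy trade-off the survey asks users to balance, the snippet below times scikit-learn's serial linear and kernel SVM solvers on synthetic data; the surveyed systems are parallel implementations, which this sketch does not reproduce, and the dataset sizes and parameters are assumptions.

```python
# Illustrative timing of serial SVM solvers (scikit-learn); sizes and
# parameters are assumptions chosen only to make the trade-off visible.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("linear (LinearSVC)", LinearSVC(max_iter=5000)),
                  ("RBF kernel (SVC)", SVC(kernel="rbf"))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    print(f"{name:>20}: {time.perf_counter() - t0:5.2f}s, "
          f"test accuracy {clf.score(X_te, y_te):.3f}")
```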

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2019
Keywords
Dual optimization, primal optimization, decomposition, CPU parallelism, GPU parallelism, speedup, data movement
National Category
Engineering and Technology; Computer Sciences; Computer and Information Sciences; Information Systems
Research subject
INF301 Data Science; Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-16554 (URN); 10.1145/3280989 (DOI); 000460376100014 (ISI); 2-s2.0-85061196907 (Scopus ID)
Funder
Knowledge Foundation
Available from: 2019-01-14. Created: 2019-01-14. Last updated: 2022-09-08. Bibliographically approved.
2. Empirical Study of Time Efficiency and Accuracy of Support Vector Machines Using an Improved Version of PSVM
2015 (English). In: Proceedings of the 2015 International Conference on Parallel and Distributed Processing Techniques and Applications: PDPTA 2015: Volume 1 / [ed] Hamid R. Arabnia; Hiroshi Ishii; Kazuki Joe; Hiroaki Nishikawa; Havaru Shouno. Printed in the United States of America: CSREA Press, 2015, Vol. 1, p. 177-183. Conference paper, Published paper (Refereed).
Abstract [en]

We present a significantly improved implementation of a parallel SVM algorithm (PSVM) together with a comprehensive experimental study. Support Vector Machines (SVMs) are among the most well-known machine learning classification techniques. PSVM employs the Interior Point Method, a solver for SVM problems with a high potential for parallelism. We improve PSVM with respect to its structure and memory management for contemporary processor architectures. We perform a number of experiments and study the impact of the reduced column size p and of other important parameters, such as C and gamma, on the class-prediction accuracy and training time. The experimental results show that there exists a threshold on the number of computational cores beyond which the training time does not improve, and that choosing an appropriate value of p affects the choice of the C and gamma parameters as well as the accuracy.
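PSVM's key approximation is to represent the full kernel matrix by a reduced number of columns p before the Interior Point Method solve. The hedged sketch below uses scikit-learn's Nystroem transform as a stand-in for that rank-p reduction (an assumption for illustration, not the paper's factorization), only to make the accuracy/size trade-off around p observable; the values of p, C, and gamma are assumptions, not the paper's experimental settings.

```python
# Hedged sketch: a rank-p kernel approximation (Nystroem) feeding a linear
# SVM, to show how the reduced column size p trades accuracy for work.
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=3000, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for p in (16, 64, 256):                 # stand-in for the reduced column size
    model = make_pipeline(
        Nystroem(kernel="rbf", gamma=0.05, n_components=p, random_state=1),
        LinearSVC(C=1.0, max_iter=5000),
    )
    model.fit(X_tr, y_tr)
    print(f"p={p:4d}: test accuracy {model.score(X_te, y_te):.3f}")
```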

Place, publisher, year, edition, pages
Printed in the United States of America: CSREA Press, 2015
Keywords
parallel svm, processor technology, training time
National Category
Computer Sciences
Research subject
Technology; Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-11644 (URN); 1-60132-400-6 (ISBN); 1-60132-401-4 (ISBN); 1-60132-402-2 (ISBN)
Conference
PDPTA'15 - The 21st International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas, July 27-30, 2015
Available from: 2015-10-30. Created: 2015-10-30. Last updated: 2023-02-01. Bibliographically approved.
3. Effect Of Network Topology On The Performance Of ADMM-based SVMs
2018 (English). In: Proceedings 2018 30th International Symposium on Computer Architecture and High Performance Computing, SBAC-PAD 2018: Lyon, France, 24-27 September 2018. IEEE Computer Society, 2018, p. 388-393. Conference paper, Published paper (Other academic).
Abstract [en]

The Alternating Direction Method of Multipliers (ADMM) is one of the promising frameworks for training Support Vector Machines (SVMs) on large-scale data in a distributed manner. In consensus-based ADMM, nodes may only communicate with one-hop neighbors, which may cause slow convergence. In this paper, we investigate the impact of network topology on the convergence speed of ADMM-based SVMs using expander graphs. In particular, we investigate how much the expansion property of the network influences the convergence and which topology is preferable. In addition, we supply an implementation that makes these theoretical advances practically available. The results of the experiments show that graphs with large spectral gaps and higher degrees exhibit accelerated convergence.
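The topology criterion studied here, that larger spectral gaps indicate stronger expansion and thus faster consensus, can be checked numerically. A minimal sketch (the specific graphs below are assumptions, not the paper's test set):

```python
# Illustrative comparison of spectral gaps across topologies; graphs with
# larger gaps (stronger expansion) are expected to accelerate consensus.
import networkx as nx
import numpy as np

n = 32
graphs = {
    "cycle":        nx.cycle_graph(n),
    "4-reg random": nx.random_regular_graph(4, n, seed=0),
    "complete":     nx.complete_graph(n),
}

for name, g in graphs.items():
    # Spectral gap of the normalized Laplacian: its second-smallest eigenvalue.
    lam = np.sort(nx.normalized_laplacian_spectrum(g))
    avg_deg = 2 * g.number_of_edges() // n
    print(f"{name:>14}: degree {avg_deg:2d}, spectral gap {lam[1]:.3f}")
```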

Place, publisher, year, edition, pages
IEEE Computer Society, 2018
Series
International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), ISSN 1550-6533
Keywords
ADMM, SVMs, Expander Graphs, Distributed Optimization, Convergence
National Category
Engineering and Technology; Computer and Information Sciences; Information Systems; Computer Sciences
Identifiers
urn:nbn:se:his:diva-16555 (URN); 10.1109/CAHPC.2018.8645857 (DOI); 2-s2.0-85063145524 (Scopus ID); 978-1-5386-7769-8 (ISBN); 978-1-5386-7770-4 (ISBN)
Conference
High Performance Machine Learning (HPML2018) workshop, September 24, 2018, Lyon, France
Available from: 2019-01-14. Created: 2019-01-14. Last updated: 2022-09-08. Bibliographically approved.
4. Effects of network topology on the performance of consensus and distributed learning of SVMs using ADMM
2021 (English). In: PeerJ Computer Science, E-ISSN 2376-5992, Vol. 7, article id e397. Article in journal (Refereed). Published.
Abstract [en]

The Alternating Direction Method of Multipliers (ADMM) is a popular and promising distributed framework for solving large-scale machine learning problems. We consider decentralized consensus-based ADMM, in which nodes may only communicate with one-hop neighbors; this may cause slow convergence. We investigate the impact of network topology on the performance of ADMM-based learning of Support Vector Machines using expander and mean-degree graphs, as well as some common modern network topologies. In particular, we investigate to which degree the expansion property of the network influences convergence in terms of iterations, training time, and communication time, and we suggest which topology is preferable. Additionally, we provide an implementation that makes these theoretical advances easily available. The results show that the convergence of decentralized ADMM-based learning of SVMs improves on graphs with large spectral gaps and higher, homogeneous degrees.
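A hedged way to see why topology matters for one-hop communication: in plain decentralized averaging with a Metropolis-weight mixing matrix, disagreement contracts roughly like the second-largest eigenvalue modulus of that matrix, which is smaller (better) for well-connected graphs. The sketch below illustrates that mechanism only; it is not the paper's ADMM algorithm, and the graphs and iteration counts are assumptions.

```python
# Sketch: decentralized averaging with Metropolis weights; contraction of the
# consensus error is governed by the mixing matrix's second eigenvalue.
import networkx as nx
import numpy as np

def metropolis_matrix(g):
    n = g.number_of_nodes()
    W = np.zeros((n, n))
    for i, j in g.edges():
        W[i, j] = W[j, i] = 1.0 / (1 + max(g.degree[i], g.degree[j]))
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))   # rows sum to one
    return W

rng = np.random.default_rng(2)
x0 = rng.normal(size=32)                       # one local value per node

for name, g in [("cycle", nx.cycle_graph(32)),
                ("6-reg random", nx.random_regular_graph(6, 32, seed=2))]:
    W, x = metropolis_matrix(g), x0.copy()
    for _ in range(50):
        x = W @ x                              # one round of neighbor averaging
    err = np.max(np.abs(x - x0.mean()))        # distance from exact consensus
    slem = np.sort(np.abs(np.linalg.eigvalsh(W)))[-2]
    print(f"{name:>13}: 2nd eigenvalue {slem:.3f}, error after 50 rounds {err:.2e}")
```

The poorly connected cycle mixes slowly while the random regular graph converges orders of magnitude further in the same number of rounds, matching the paper's topology conclusion.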

Place, publisher, year, edition, pages
PeerJ, 2021
Keywords
Data Mining and Machine Learning, Distributed and Parallel Computing, Network Science and Online Social Networks, Machine learning, Parallel and distributed computing, SVMs, ADMM, Expander graphs, Distributed optimization, Convergence
National Category
Computer Sciences; Computer Systems
Identifiers
urn:nbn:se:his:diva-21403 (URN); 10.7717/peerj-cs.397 (DOI); 000626728000001 (ISI); 2-s2.0-85103306951 (Scopus ID)
Note

CC BY 4.0

Corresponding author Shirin Tavara, tavara@chalmers.se

The authors received no funding for this work.

Available from: 2022-06-27. Created: 2022-06-27. Last updated: 2022-09-08. Bibliographically approved.
5. Federated Learning of Oligonucleotide Drug Molecule Thermodynamics with Differentially Private ADMM-Based SVM
2022 (English). In: Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021, Proceedings, Part II / [ed] Michael Kamp; Irena Koprinska; Adrien Bibal; Tassadit Bouadi; Benoît Frénay; Luis Galárraga; José Oramas; Linara Adilova; Yamuna Krishnamurthy; Bo Kang; Christine Largeron; Jefrey Lijffijt; Tiphaine Viard; Pascal Welke; Massimiliano Ruocco; Erlend Aune; Claudio Gallicchio; Gregor Schiele; Franz Pernkopf; Michaela Blott; Holger Fröning; Günther Schindler; Riccardo Guidotti; Anna Monreale; Salvatore Rinzivillo; Przemyslaw Biecek; Eirini Ntoutsi; Mykola Pechenizkiy; Bodo Rosenhahn; Christopher Buckley; Daniela Cialfi; Pablo Lanillos; Maxwell Ramstead; Tim Verbelen; Pedro M. Ferreira; Giuseppina Andresini; Donato Malerba; Ibéria Medeiros; Philippe Fournier-Viger; M. Saqib Nawaz; Sebastian Ventura; Meng Sun; Min Zhou; Valerio Bitetta; Ilaria Bordino; Andrea Ferretti; Francesco Gullo; Giovanni Ponti; Lorenzo Severini; Rita Ribeiro; João Gama; Ricard Gavaldà; Lee Cooper; Naghmeh Ghazaleh; Jonas Richiardi; Damian Roqueiro; Diego Saldana Miranda; Konstantinos Sechidis; Guilherme Graça. Springer Nature Switzerland AG, 2022, Vol. 1, p. 459-467. Conference paper, Published paper (Refereed).
Abstract [en]

A crucial step to assure drug safety is predicting off-target binding. For oligonucleotide drugs this requires learning the relevant thermodynamics from often large-scale data distributed across different organisations. This process will respect data privacy if distributed and private learning under limited and private communication between local nodes is used. We propose an ADMM-based SVM with differential privacy for this purpose. We empirically show that this approach achieves accuracy comparable to the non-private one, i.e. ∼86%, while yielding tight empirical privacy guarantees even after convergence. 
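The paper integrates differential privacy into ADMM-based SVM training. The hedged sketch below shows only the basic output-perturbation idea (calibrated Gaussian noise added to a locally trained weight vector before it is shared), not the paper's mechanism or privacy accounting; the sensitivity and (epsilon, delta) values are illustrative assumptions.

```python
# Sketch of output perturbation with the Gaussian mechanism; sensitivity,
# epsilon, and delta are assumed values, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=3)
clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
w = clf.coef_.ravel()                          # locally trained weights

epsilon, delta = 1.0, 1e-5
sensitivity = 0.1                              # assumed L2 sensitivity of w
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
w_private = w + np.random.default_rng(3).normal(scale=sigma, size=w.shape)

# Training accuracy with the privatized weights, for illustration only.
acc_priv = np.mean((X @ w_private + clf.intercept_ > 0) == (y == 1))
print(f"privatized weights: {acc_priv:.3f} (non-private {clf.score(X, y):.3f})")
```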

Place, publisher, year, edition, pages
Springer Nature Switzerland AG, 2022
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1525
National Category
Computer Sciences; Computer Systems
Identifiers
urn:nbn:se:his:diva-21404 (URN); 10.1007/978-3-030-93733-1_34 (DOI); 000771712800033 (ISI); 2-s2.0-85126255150 (Scopus ID); 978-3-030-93732-4 (ISBN); 978-3-030-93733-1 (ISBN)
Conference
International Workshops of ECML PKDD 2021, Virtual Event, September 13-17, 2021
Available from: 2022-06-27. Created: 2022-06-27. Last updated: 2022-09-08. Bibliographically approved.

Open Access in DiVA

fulltext (8640 kB), 703 downloads
File information
File name: FULLTEXT01.pdf. File size: 8640 kB. Checksum: SHA-512
67bc0399dfc06b91781f06b1ad6c9b1095da43e5facb1cbe98e9eacd196ec6ef0a038748e19f84a75d4bea1a5d54b034925ef4f35da5c2a092d71baf28f1b257
Type: fulltext. Mimetype: application/pdf
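A downloaded copy can be checked against the SHA-512 checksum above; a minimal sketch, assuming the file was saved as FULLTEXT01.pdf in the working directory:

```python
# Verify the downloaded full text against the checksum listed above.
import hashlib

expected = ("67bc0399dfc06b91781f06b1ad6c9b1095da43e5facb1cbe98e9eacd196ec6ef"
            "0a038748e19f84a75d4bea1a5d54b034925ef4f35da5c2a092d71baf28f1b257")

h = hashlib.sha512()
with open("FULLTEXT01.pdf", "rb") as f:        # assumed local download path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

print("checksum OK" if h.hexdigest() == expected else "checksum MISMATCH")
```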

Authority records

Tavara, Shirin
