Addressing the challenges of privacy preserving machine learning in the context of data anonymization
University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. (Skövde Artificial Intelligence Lab (SAIL)). ORCID iD: 0000-0002-2564-0683
2019 (English) Report (Other academic)
Abstract [en]

Machine learning (ML) models trained on sensitive data pose a distinct threat to privacy with the emergence of numerous threat models exploiting their privacy vulnerabilities. Therefore, privacy preserving machine learning (PPML) has gained increased attention over the past couple of years. Existing PPML techniques introduced in the literature are mainly based on differential privacy or cryptography-based techniques. Respectively, they are criticized for the poor predictive accuracy of the derived ML models and for the extensive computational cost. Moreover, they operate under the assumption that original data are always available for training the ML models. However, there exist scenarios where anonymized data are available instead of the original data. Anonymization of sensitive data is required before publishing them in order to preserve the privacy of the underlying data subjects. Nevertheless, there are valid organizational and legal requirements for data publishing. In this case, it is important to understand the impact of data anonymization on ML in general and how this can be used as a stepping stone towards PPML. The proposed research is aimed at understanding the opportunities and challenges for PPML in the context of data anonymization, and at addressing them effectively by developing a unified solution to serve the objectives of both data anonymization and PPML.
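
As a rough illustration of the data anonymization setting referred to in the abstract, the sketch below checks k-anonymity, one common anonymization model, on a toy table in Python. The column names, the generalization rules, and the choice of k = 2 are illustrative assumptions and are not taken from the report.

    # Minimal sketch of tabular anonymization via k-anonymity.
    # Columns, generalization rules and k = 2 are assumptions for illustration only.
    from collections import Counter

    def generalize_age(age: int) -> str:
        """Coarsen an exact age into a 10-year band (a simple generalization)."""
        low = (age // 10) * 10
        return f"{low}-{low + 9}"

    def is_k_anonymous(records, quasi_identifiers, k=2):
        """True if every combination of quasi-identifier values occurs at least k times."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    # Toy data: 'age' and 'zip' act as quasi-identifiers, 'diagnosis' is the sensitive attribute.
    raw = [
        {"age": 34, "zip": "54321", "diagnosis": "flu"},
        {"age": 37, "zip": "54321", "diagnosis": "asthma"},
        {"age": 52, "zip": "54322", "diagnosis": "flu"},
        {"age": 58, "zip": "54322", "diagnosis": "diabetes"},
    ]

    # Generalize ages and truncate zip codes before checking k-anonymity.
    anonymized = [
        {**r, "age": generalize_age(r["age"]), "zip": r["zip"][:4] + "*"}
        for r in raw
    ]

    print(is_k_anonymous(raw, ["age", "zip"]))         # False: exact values are unique
    print(is_k_anonymous(anonymized, ["age", "zip"]))  # True: each group has >= 2 rows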

Place, publisher, year, edition, pages
Skövde: University of Skövde, 2019, p. 60
Keywords [en]
privacy preserving machine learning, privacy preserving data publishing, data anonymization, privacy vulnerabilities in machine learning
National Category
Computer Systems
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
URN: urn:nbn:se:his:diva-16815
OAI: oai:DiVA.org:his-16815
DiVA, id: diva2:1306763
Note

Research proposal, PhD programme, University of Skövde

Available from: 2019-04-24 Created: 2019-04-24 Last updated: 2019-05-02 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records

Senavirathne, Navoda
