his.se Publications
1 - 4 of 4
  • 1.
    Senavirathne, Navoda
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
Addressing the challenges of privacy preserving machine learning in the context of data anonymization (2019). Report (Other academic)
    Abstract [en]

Machine learning (ML) models trained on sensitive data pose a distinct threat to privacy with the emergence of numerous threat models exploiting their privacy vulnerabilities. Therefore, privacy preserving machine learning (PPML) has gained increased attention over the past couple of years. Existing PPML techniques introduced in the literature are mainly based on differential privacy or cryptography-based techniques. Respectively, they are criticized for the poor predictive accuracy of the derived ML models and for the extensive computational cost. Moreover, they operate under the assumption that original data are always available for training the ML models. However, there exist scenarios where anonymized data are available instead of the original data. Anonymization of sensitive data is required before publishing them in order to preserve the privacy of the underlying data subjects. Nevertheless, there are valid organizational and legal requirements for data publishing. In this case, it is important to understand the impact of data anonymization on ML in general and how this can be used as a stepping stone towards PPML. The proposed research is aimed at understanding the opportunities and challenges for PPML in the context of data anonymization, and at addressing them effectively by developing a unified solution to serve the objectives of both data anonymization and PPML.
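
    The report's abstract is mostly motivational, but since it centers on anonymizing data before publication, a minimal sketch may help make that concrete. The following is a hypothetical illustration of k-anonymity, the classical anonymization model; nothing here comes from the report, and all names (is_k_anonymous, generalize_age, the toy records) are invented for the example.

    ```python
    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        """True if every combination of quasi-identifier values occurs in
        at least k records, so no individual is uniquely re-identifiable."""
        groups = Counter(
            tuple(rec[qi] for qi in quasi_identifiers) for rec in records
        )
        return all(count >= k for count in groups.values())

    def generalize_age(rec, width=10):
        """One typical anonymization step: coarsen a numeric attribute into
        a range so that more records share the same quasi-identifier values."""
        rec = dict(rec)
        lo = (rec["age"] // width) * width
        rec["age"] = f"{lo}-{lo + width - 1}"
        return rec

    # Hypothetical toy records, not data from the report.
    records = [
        {"age": 34, "zip": "541 28", "diagnosis": "flu"},
        {"age": 36, "zip": "541 28", "diagnosis": "cold"},
        {"age": 38, "zip": "541 28", "diagnosis": "flu"},
    ]
    anonymized = [generalize_age(r) for r in records]
    print(is_k_anonymous(anonymized, ["age", "zip"], k=3))  # True: all in 30-39
    ```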

  • 2.
    Senavirathne, Navoda
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Torra, Vicenç
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
Approximating Robust Linear Regression With An Integral Privacy Guarantee (2018). In: 2018 16th Annual Conference on Privacy, Security and Trust (PST) / [ed] Kieran McLaughlin, Ali Ghorbani, Sakir Sezer, Rongxing Lu, Liqun Chen, Robert H. Deng, Paul Miller, Stephen Marsh, Jason Nurse, IEEE, 2018, p. 85-94. Conference paper (Refereed)
    Abstract [en]

Most privacy-preserving techniques suffer from an inevitable utility loss due to the different perturbations carried out on the input data or the models in order to gain privacy. When it comes to machine learning (ML) based prediction models, accuracy is the key criterion for model selection. Thus, an accuracy loss due to privacy implementations is undesirable. The motivation of this work is to implement the privacy model "integral privacy" and to evaluate its eligibility as a technique for machine learning model selection while preserving model utility. In this paper, a linear regression approximation method is implemented based on integral privacy which ensures high accuracy and robustness while maintaining a degree of privacy for ML models. The proposed method uses a re-sampling based estimator to construct a linear regression model, which is coupled with a rounding based data discretization method to support integral privacy principles. The implementation is evaluated in comparison with differential privacy in terms of privacy, accuracy and robustness of the output ML models. In comparison, the integral privacy based solution performs better with respect to the above criteria.
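
    A minimal sketch of the two ingredients the abstract names: a re-sampling based estimator coupled with rounding based discretization. This is my reading of the idea, not the paper's implementation; the parameter names (step, n_resamples, coef_step) and the specific choices (bootstrap resampling, rounding the coefficients onto a grid) are assumptions made for illustration. The intuition: if many distinct resamples produce exactly the same rounded model, an intruder who observes that model learns little about which dataset generated it.

    ```python
    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(0)

    def discretize(a, step=0.5):
        """Rounding-based discretization: snap every value to a grid."""
        return np.round(a / step) * step

    def fit_ols(X, y):
        """Ordinary least squares with an intercept column."""
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def integrally_private_regression(X, y, n_resamples=200, coef_step=0.1):
        """Fit OLS on many bootstrap resamples of the discretized data and
        return the rounded coefficient vector that recurs most often."""
        Xd, yd = discretize(X), discretize(y)
        counts = Counter()
        for _ in range(n_resamples):
            idx = rng.integers(0, len(Xd), size=len(Xd))  # bootstrap resample
            coef = fit_ols(Xd[idx], yd[idx])
            counts[tuple(np.round(coef / coef_step) * coef_step)] += 1
        # The most frequently generated model: many different resamples
        # could have produced it, so it reveals little about any one sample.
        return np.array(counts.most_common(1)[0][0])

    X = rng.normal(size=(100, 2))
    y = X @ np.array([2.0, -1.0]) + 0.3 + rng.normal(scale=0.1, size=100)
    print(integrally_private_regression(X, y))  # ~ [0.3, 2.0, -1.0]
    ```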

  • 3.
    Senavirathne, Navoda
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Torra, Vicenç
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre. Maynooth University Hamilton Institute, Kildare, Ireland.
Integrally private model selection for decision trees (2019). In: Computers & security (Print), ISSN 0167-4048, E-ISSN 1872-6208, Vol. 83, p. 167-181. Article in journal (Refereed)
    Abstract [en]

Privacy attacks targeting machine learning models are evolving. One of the primary goals of such attacks is to infer information about the training data used to construct the models. "Integral Privacy" focuses on machine learning and statistical models and explains how we can utilize the intruder's uncertainty to provide a privacy guarantee against model comparison attacks. Through experimental results, we show how the distribution of models can be used to achieve integral privacy. Here, we observe two categories of machine learning models based on their frequency of occurrence in the model space. Then we explain the privacy implications of selecting each of them based on a new attack model and empirical results. Also, we provide recommendations for private model selection based on the accuracy and stability of the models, along with the diversity of training data that can be used to generate the models.
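
    A minimal sketch of the frequency-of-occurrence idea for decision trees. Assuming, and this is my assumption rather than the article's code, that a tree's identity in the model space is its serialized structure, we can train shallow trees on many subsamples and count recurring structures; a high-frequency tree can be generated from many different training sets, which is what makes it attractive for integrally private selection.

    ```python
    from collections import Counter
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    X, y = load_iris(return_X_y=True)
    X = np.round(X)  # coarse rounding so identical trees recur more often

    def tree_signature(X, y):
        """Fit a shallow tree and return a hashable description of it."""
        clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        return export_text(clf)

    counts = Counter()
    for _ in range(300):
        idx = rng.choice(len(X), size=int(0.7 * len(X)), replace=False)
        counts[tree_signature(X[idx], y[idx])] += 1

    # High-frequency trees are produced by many distinct training sets, so
    # observing one tells an intruder little about the actual data.
    for sig, freq in counts.most_common(2):
        print(f"generated by {freq}/300 subsamples:\n{sig}")
    ```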

  • 4.
    Torra, Vicenç
    et al.
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
    Senavirathne, Navoda
    University of Skövde, School of Informatics. University of Skövde, The Informatics Research Centre.
Maximal c consensus meets (2019). In: Information Fusion, ISSN 1566-2535, E-ISSN 1872-6305, Vol. 51, p. 58-66. Article in journal (Refereed)
    Abstract [en]

Given a set S of subsets of a reference set X, we define the problem of finding c subsets of X that maximize the size of the intersection among the included subsets. Maximizing the size of the intersection means that the c subsets are subsets of the sets in S and are as large as possible. We can understand the result of this problem as c consensus sets of S, or c consensus representatives of S. From the perspective of lattice theory, each representative will be a meet of some sets in S. In this paper we formally define this problem and present heuristic algorithms to solve it. We also discuss the relationship with other established problems in the literature.
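
    As an illustration of what a heuristic for this problem can look like, here is a small greedy sketch: seed c clusters with mutually dissimilar sets, then assign each remaining set where it shrinks the running intersection (the meet) the least. This is a heuristic of my own under the abstract's definitions, not one of the paper's algorithms.

    ```python
    def maximal_c_consensus_meets(S, c):
        """Greedily build c meets (intersections) over the collection S,
        trying to keep the total size of the meets large."""
        sets = [set(s) for s in S]
        # Farthest-first seeding: start from the largest set, then repeatedly
        # pick the set with the smallest maximum overlap with existing seeds.
        seeds = [max(sets, key=len)]
        rest = [s for s in sets if s is not seeds[0]]
        while len(seeds) < c:
            nxt = min(rest, key=lambda s: max(len(s & t) for t in seeds))
            seeds.append(nxt)
            rest.remove(nxt)
        meets = [set(s) for s in seeds]
        for s in rest:
            # Assign s to the cluster whose meet it shrinks the least.
            best = max(range(c), key=lambda i: len(meets[i] & s))
            meets[best] &= s
        return meets

    S = [{1, 2, 3, 4}, {1, 2, 3}, {2, 3, 4}, {5, 6}, {5, 6, 7}]
    print(maximal_c_consensus_meets(S, c=2))  # [{2, 3}, {5, 6}]
    ```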
