Högskolan i Skövde

his.se Publications
Publications (10 of 82)
Senavirathne, N. & Torra, V. (2023). Rounding based continuous data discretization for statistical disclosure control. Journal of Ambient Intelligence and Humanized Computing, 14(11), 15139-15157
Rounding based continuous data discretization for statistical disclosure control
2023 (English). In: Journal of Ambient Intelligence and Humanized Computing, ISSN 1868-5137, E-ISSN 1868-5145, Vol. 14, no 11, p. 15139-15157. Article in journal (Refereed). Published
Abstract [en]

“Rounding” can be understood as a way to coarsen continuous data. That is, low-level and infrequent values are replaced by high-level and more frequent representative values. This concept is explored as a method for data privacy in the statistical disclosure control literature with perturbative techniques like rounding and microaggregation and non-perturbative methods like generalisation. Even though “rounding” is well known as a numerical data protection method, it has not been studied in depth or evaluated empirically to the best of our knowledge. This work is motivated by three objectives: (1) to study alternative methods of obtaining the rounding values to represent a given continuous variable, (2) to empirically evaluate rounding as a data protection technique based on information loss (IL) and disclosure risk (DR), and (3) to analyse the impact of data rounding on machine learning based models. Here, in order to obtain the rounding values we consider discretization methods introduced in the unsupervised machine learning literature along with microaggregation and re-sampling based approaches. The results indicate that microaggregation based techniques are preferred over unsupervised discretization methods due to their fair trade-off between IL and DR.
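A minimal sketch of the microaggregation-style rounding the abstract prefers (the helper name and the fixed minimum group size `k` are our assumptions; the paper also evaluates unsupervised discretization and re-sampling variants not shown here):

```python
from statistics import mean

def microaggregate(values, k=3):
    """Univariate microaggregation: sort the values, split them into
    consecutive groups of at least k elements, and replace each value
    by its group mean. The group means act as the rounding values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())  # keep every group at size >= k
    rounded = list(values)
    for g in groups:
        m = mean(values[i] for i in g)
        for i in g:
            rounded[i] = m
    return rounded
```

Each distinct output value is then shared by at least k records, which is what limits disclosure risk at the cost of some information loss.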

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Micro data protection, Rounding for micro data, Unsupervised discretization, Discrete event simulation, Economic and social effects, Machine learning, Numerical methods, Volume measurement, Data protection techniques, Discretization method, Numerical data protection methods, Perturbative techniques, Statistical disclosure Control, Unsupervised machine learning, Data privacy
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-17858 (URN); 10.1007/s12652-019-01489-7 (DOI); 2-s2.0-85074009425 (Scopus ID)
Funder
Swedish Research Council, 2016-03346
Note

CC BY 4.0

Published: 25 September 2019

Correspondence to Navoda Senavirathne.

This work is supported by Vetenskapsrådet project: “Disclosure risk and transparency in big data privacy” (VR 2016-03346, 2017-2020)

DRIAT

Available from: 2019-11-07. Created: 2019-11-07. Last updated: 2024-02-13. Bibliographically approved
Senavirathne, N. & Torra, V. (2022). Dissecting Membership Inference Risk in Machine Learning. In: Weizhi Meng; Mauro Conti (Ed.), Cyberspace Safety and Security: 13th International Symposium, CSS 2021, Virtual Event, November 9–11, 2021, Proceedings. Paper presented at CSS 2021, 13th International Symposium on Cyberspace Safety and Security, Copenhagen, Denmark (Online), 9-11 November 2021 (pp. 36-54). Springer
Dissecting Membership Inference Risk in Machine Learning
2022 (English). In: Cyberspace Safety and Security: 13th International Symposium, CSS 2021, Virtual Event, November 9–11, 2021, Proceedings / [ed] Weizhi Meng; Mauro Conti, Springer, 2022, p. 36-54. Conference paper, Published paper (Refereed)
Abstract [en]

Membership inference attacks (MIA) have been identified as a distinct threat to privacy when sensitive personal data are used to train machine learning (ML) models. This work aims at deepening our understanding of the existing black-box MIAs while introducing a new label-only MIA model. The proposed MIA model can successfully exploit well-generalized models, challenging the conventional wisdom that generalized models are immune to membership inference. Through systematic experimentation, we show that the proposed MIA model can outperform the existing attack models while being more resilient towards manipulations of the membership inference results caused by the selection of membership validation data.
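For orientation, the classic black-box baseline that label-only MIAs improve on can be sketched as a simple confidence threshold (illustrative only; the function names and `tau` are our assumptions, and the paper's label-only model by definition does not use confidences at all):

```python
def predict_membership(top_confidence, tau=0.9):
    """Confidence-threshold MIA: flag a record as a training member when
    the model's top softmax score on it is at least tau, exploiting the
    tendency of models to be overconfident on training data."""
    return top_confidence >= tau

def attack_accuracy(member_scores, nonmember_scores, tau=0.9):
    """Balanced-set attack accuracy: true positives on members plus
    true negatives on non-members, over all probed records."""
    hits = sum(s >= tau for s in member_scores)
    hits += sum(s < tau for s in nonmember_scores)
    return hits / (len(member_scores) + len(nonmember_scores))
```

A well-generalized model narrows the confidence gap between members and non-members, which is why such threshold attacks were thought to fail there.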

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13172
Keywords
Data privacy, Membership inference attack, Privacy preserving machine learning, Privacy-preserving techniques, Attack modeling, Black boxes, Generalized models, Inference attacks, Inference risk, Machine learning models, Machine-learning, Privacy preserving, Machine learning
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-20889 (URN); 10.1007/978-3-030-94029-4_3 (DOI); 2-s2.0-85123431800 (Scopus ID); 978-3-030-94028-7 (ISBN); 978-3-030-94029-4 (ISBN)
Conference
CSS 2021, 13th International Symposium on Cyberspace Safety and Security, Copenhagen, Denmark (Online), 9-11 November 2021
Note

© 2022, Springer Nature Switzerland AG.

Also part of the Security and Cryptology book sub series (LNSC, volume 13172)

Available from: 2022-02-03. Created: 2022-02-03. Last updated: 2022-04-22. Bibliographically approved
Torra, V., Galván, E. & Navarro-Arribas, G. (2022). PSO + FL = PAASO: particle swarm optimization + federated learning = privacy-aware agent swarm optimization. International Journal of Information Security, 21(6), 1349-1359
PSO + FL = PAASO: particle swarm optimization + federated learning = privacy-aware agent swarm optimization
2022 (English). In: International Journal of Information Security, ISSN 1615-5262, E-ISSN 1615-5270, Vol. 21, no 6, p. 1349-1359. Article in journal (Refereed). Published
Abstract [en]

In this paper, we present a unified framework that encompasses both particle swarm optimization (PSO) and federated learning (FL). This unified framework shows that we can understand both PSO and FL in terms of a function to be optimized by a set of agents, but in which agents have different privacy requirements. PSO is the most relaxed case, and FL considers slightly stronger constraints. Even stronger privacy requirements can be considered, which will lead to still stronger privacy-preserving solutions. Differentially private solutions, as well as local differential privacy/reidentification privacy for agents' opinions, are the additional privacy models to be considered. In this paper, we discuss this framework and the different privacy-related alternatives. We present experiments that show how the additional privacy requirements degrade the results of the system. To that end, we consider optimization problems compatible with both PSO and FL.
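The PSO side of the framework can be illustrated with the textbook one-dimensional velocity/position update (standard PSO with assumed inertia and acceleration constants, not the paper's privacy-aware variant; what each agent reveals of `pbest`/`gbest` is where the privacy models would enter):

```python
import random

def pso_step(xs, vs, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO update: each particle keeps some inertia and is
    pulled toward its personal best (pbest) and the swarm best (gbest)."""
    rng = rng or random.Random()
    new_xs, new_vs = [], []
    for x, v, pb in zip(xs, vs, pbest):
        v = w * v + c1 * rng.random() * (pb - x) + c2 * rng.random() * (gbest - x)
        new_xs.append(x + v)
        new_vs.append(v)
    return new_xs, new_vs
```

In FL terms, `gbest` plays the role of the aggregated global model, and restricting how accurately agents report the terms feeding into it yields the stronger privacy variants the abstract describes.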

Place, publisher, year, edition, pages
Springer Nature Switzerland AG, 2022
Keywords
Swarm intelligence, Differential privacies, Differentially private social choice, Federated learning, Masking, Particle swarm, Particle swarm optimization, Privacy requirements, Social choice, Swarm optimization, Unified framework, Particle swarm optimization (PSO), Differential privacy
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-21918 (URN); 10.1007/s10207-022-00614-6 (DOI); 000859123600001 (); 2-s2.0-85138534515 (Scopus ID)
Note

CC BY 4.0

© 2022, The Author(s)


Published online: 22 September 2022

Vicenç Torra: vtorra@ieee.org

Edgar Galván: Edgar.Galvan@mu.ie

Guillermo Navarro-Arribas: guillermo.navarro@uab.cat

Available from: 2022-10-06. Created: 2022-10-06. Last updated: 2023-01-16. Bibliographically approved
Torra, V. (2021). Andness directedness for operators of the OWA and WOWA families. Fuzzy sets and systems (Print), 414, 28-37
Andness directedness for operators of the OWA and WOWA families
2021 (English). In: Fuzzy sets and systems (Print), ISSN 0165-0114, E-ISSN 1872-6801, Vol. 414, p. 28-37. Article in journal (Refereed). Published
Abstract [en]

Andness directed aggregation is about selecting aggregators from a desired andness level. In this paper we consider operators of the OWA and WOWA families: aggregation functions that permit us to represent some degree of compensation of the input values. In addition to compensation, WOWA permits us to represent importance (weights) of the input values. Selection of appropriate parameters given an andness level will be based on families of fuzzy quantifiers.
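The orness/andness degree that drives this parameter selection has a simple closed form for OWA weights (standard definitions from the aggregation literature; the function names are ours):

```python
def owa(weights, values):
    """Ordered weighted averaging: sort the inputs in descending order and
    take the weighted sum, so weights attach to positions, not sources."""
    return sum(w * x for w, x in zip(weights, sorted(values, reverse=True)))

def orness(weights):
    """orness(w) = (1/(n-1)) * sum_i (n-i) * w_i, with i starting at 1:
    1.0 for max (pure 'or'), 0.0 for min (pure 'and'), 0.5 for the mean."""
    n = len(weights)
    return sum((n - i) * w for i, w in enumerate(weights, start=1)) / (n - 1)

def andness(weights):
    return 1.0 - orness(weights)
```

Andness-directed selection then means searching a weight family for a vector whose andness matches the requested level; the paper does this via parameterised families of fuzzy quantifiers.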

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Aggregation operators, Andness, Orness, OWA, WOWA, Fuzzy sets, Aggregation functions, Fuzzy quantifiers, Input values, Fuzzy inference
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-19143 (URN); 10.1016/j.fss.2020.09.004 (DOI); 000645901400002 (); 2-s2.0-85091252122 (Scopus ID)
Note

CC BY 4.0

Available online 14 September 2020

Available from: 2020-10-01. Created: 2020-10-01. Last updated: 2021-07-02. Bibliographically approved
Torra, V. & Navarro-Arribas, G. (2021). Fuzzy Meets Privacy: A Short Overview. In: Cengiz Kahraman, Sezi Cevik Onar, Basar Oztaysi, Irem Ucal Sari, Selcuk Cebi, A. Cagri Tolga (Ed.), Intelligent and Fuzzy Techniques: Smart and Innovative Solutions: Proceedings of the INFUS 2020 Conference, Istanbul, Turkey, July 21-23, 2020. Paper presented at INFUS 2020 Conference, Istanbul, Turkey, July 21-23, 2020 (pp. 3-9). Cham: Springer
Fuzzy Meets Privacy: A Short Overview
2021 (English). In: Intelligent and Fuzzy Techniques: Smart and Innovative Solutions: Proceedings of the INFUS 2020 Conference, Istanbul, Turkey, July 21-23, 2020 / [ed] Cengiz Kahraman, Sezi Cevik Onar, Basar Oztaysi, Irem Ucal Sari, Selcuk Cebi, A. Cagri Tolga, Cham: Springer, 2021, p. 3-9. Conference paper, Published paper (Refereed)
Abstract [en]

The amount of information currently available is a threat to individual privacy. Data privacy is the area that studies how to process data and propose methods so that disclosure does not take place. Fuzzy sets have been extensively used in all kinds of applications. In this paper we review the areas in which fuzzy sets and systems can have an impact on the field of data privacy. We also review some of our contributions in the field and some open problems.

Place, publisher, year, edition, pages
Cham: Springer, 2021
Series
Advances in Intelligent Systems and Computing, ISSN 2194-5357, E-ISSN 2194-5365 ; 1197
Keywords
Data privacy, Fuzzy meets privacy, Fuzzy sets and systems, Fuzzy sets, Amount of information, Individual privacy, Process data
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-18892 (URN)10.1007/978-3-030-51156-2_1 (DOI)2-s2.0-85088750814 (Scopus ID)978-3-030-51155-5 (ISBN)978-3-030-51156-2 (ISBN)
Conference
INFUS 2020 Conference, Istanbul, Turkey, July 21-23, 2020
Note

INFUS: International Conference on Intelligent and Fuzzy Systems

Available from: 2020-08-11. Created: 2020-08-11. Last updated: 2020-10-28. Bibliographically approved
Senavirathne, N. & Torra, V. (2021). Systematic evaluation of probabilistic K-anonymity for privacy preserving micro-data publishing and analysis. In: Sabrina De Capitani di Vimercati; Pierangela Samarati (Ed.), Proceedings of the 18th International Conference on Security and Cryptography, SECRYPT 2021: . Paper presented at 18th International Conference on Security and Cryptography, SECRYPT 2021, Virtual, Online, 6 July 2021 - 8 July 2021 (pp. 307-320). SciTePress
Systematic evaluation of probabilistic K-anonymity for privacy preserving micro-data publishing and analysis
2021 (English). In: Proceedings of the 18th International Conference on Security and Cryptography, SECRYPT 2021 / [ed] Sabrina De Capitani di Vimercati; Pierangela Samarati, SciTePress, 2021, p. 307-320. Conference paper, Published paper (Refereed)
Abstract [en]

In the light of stringent privacy laws, data anonymization not only supports privacy preserving data publication (PPDP) but also improves the flexibility of micro-data analysis. Machine learning (ML) is widely used for personal data analysis in the present day; thus, it is paramount to understand how to effectively use data anonymization in the ML context. In this work, we introduce an anonymization framework based on the notion of “probabilistic k-anonymity” that can be applied to mixed datasets while addressing the challenges brought forward by the existing syntactic privacy models in the context of ML. Through systematic empirical evaluation, we show that the proposed approach can effectively limit the disclosure risk in micro-data publishing while maintaining a high utility for the ML models induced from the anonymized data.

Place, publisher, year, edition, pages
SciTePress, 2021
Series
International Joint Conference on e-Business and Telecommunications - SECRYPT, ISSN 2184-7711
Keywords
Anonymization, Data Privacy, Privacy Preserving Machine Learning, Statistical Disclosure Control, Cryptography, Information analysis, Data anonymization, Data publishing, Disclosure risk, Empirical evaluations, Privacy models, Privacy preserving, Privacy-preserving data publications, Systematic evaluation, Privacy by design
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-20485 (URN); 10.5220/0010560703070320 (DOI); 000720102500025 (); 2-s2.0-85111886138 (Scopus ID); 978-989-758-524-1 (ISBN)
Conference
18th International Conference on Security and Cryptography, SECRYPT 2021, Virtual, Online, 6 July 2021 - 8 July 2021
Funder
Swedish Research Council, 2016-03346
Note

CC BY-NC-ND 4.0

Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved

This work is supported by Vetenskapsrådet project: “Disclosure risk and transparency in big data privacy” (VR 2016-03346, 2017-2020)

DRIAT

Available from: 2021-08-19. Created: 2021-08-19. Last updated: 2022-01-26. Bibliographically approved
Torra, V., Taha, M. & Navarro-Arribas, G. (2021). The space of models in machine learning: using Markov chains to model transitions. Progress in Artificial Intelligence, 10(3), 321-332
The space of models in machine learning: using Markov chains to model transitions
2021 (English). In: Progress in Artificial Intelligence, ISSN 2192-6352, Vol. 10, no 3, p. 321-332. Article in journal (Refereed). Published
Abstract [en]

Machine and statistical learning is about constructing models from data. Data is usually understood as a set of records, a database. Nevertheless, databases are not static but change over time. We can understand this as follows: there is a space of possible databases and a database during its lifetime transits this space. Therefore, we may consider transitions between databases, and the database space. NoSQL databases also fit with this representation. In addition, when we learn models from databases, we can also consider the space of models. Naturally, there are relationships between the space of data and the space of models. Any transition in the space of data may correspond to a transition in the space of models. We argue that a better understanding of the space of data and the space of models, as well as the relationships between these two spaces is basic for machine and statistical learning. The relationship between these two spaces can be exploited in several contexts as, e.g., in model selection and data privacy. We consider that this relationship between spaces is also fundamental to understand generalization and overfitting. In this paper, we develop these ideas. Then, we consider a distance on the space of models based on a distance on the space of data. More particularly, we consider distance distribution functions and probabilistic metric spaces on the space of data and the space of models. Our modelization of changes in databases is based on Markov chains and transition matrices. This modelization is used in the definition of distances. We provide examples of our definitions. 
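Computationally, the Markov-chain modelization of transitions between databases reduces to multiplying a distribution over states by a transition matrix (a generic sketch with hypothetical function names; the paper's contribution is the distance distribution functions built on top of such chains, not the chain mechanics):

```python
def markov_step(dist, P):
    """One transition of the chain: new_dist[j] = sum_i dist[i] * P[i][j],
    where dist is a distribution over databases (states) and P[i][j] is
    the probability that database i changes into database j."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def markov_run(dist, P, t):
    """Distribution over databases after t transitions."""
    for _ in range(t):
        dist = markov_step(dist, P)
    return dist
```

Because every database induces a learned model, the distribution over databases after t steps also induces a distribution over models, which is what makes distances on the model space definable from the chain.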

Place, publisher, year, edition, pages
Springer, 2021
Keywords
Hypothesis space, Machine and statistical learning models, Probabilistic metric spaces, Space of data, Space of models, Data privacy, Distribution functions, Machine learning, Markov chains, Constructing models, Distance distribution functions, Model Selection, Model transition, Nosql database, Statistical learning, Transition matrices, Database systems
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-19666 (URN); 10.1007/s13748-021-00242-6 (DOI); 000639627000001 (); 2-s2.0-85104447939 (Scopus ID)
Funder
Swedish Research Council, 2016-03346; Knut and Alice Wallenberg Foundation
Note

CC BY 4.0

© 2021, The Author(s).

Correspondence Address: Torra, V.; School of Informatics, Sweden; email: vtorra@ieee.org

Published: 12 April 2021

Acknowledgements: This study was partially funded by Vetenskapsrådet project “Disclosure risk and transparency in big data privacy” (VR 2016-03346, 2017-2020), Spanish project TIN2017-87211-R is gratefully acknowledged, and by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2021-04-29. Created: 2021-04-29. Last updated: 2021-09-13. Bibliographically approved
Koloseni, D., Helldin, T. & Torra, V. (2020). AHP-Like Matrices and Structures: Absolute and Relative Preferences. Mathematics, 8(5), Article ID 813.
AHP-Like Matrices and Structures: Absolute and Relative Preferences
2020 (English). In: Mathematics, E-ISSN 2227-7390, Vol. 8, no 5, article id 813. Article in journal (Refereed). Published
Abstract [en]

Aggregation functions are extensively used in decision making processes to combine available information. Arithmetic mean and weighted mean are some of the most used ones. In order to use a weighted mean, we need to define its weights. The Analytical Hierarchy Process (AHP) is a well known technique used to obtain weights based on interviews with experts. From the interviews we define a matrix of pairwise comparisons of the importance of the weights. We call these AHP-like matrices absolute preferences of weights. We propose another type of matrix that we call a relative preference matrix. We define this matrix with the same goal—to find the weights for weighted aggregators. We discuss how it can be used for eliciting the weights for the weighted mean and define a similar approach for the Choquet integral.
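For concreteness, one common way to turn such a pairwise-comparison ("absolute preference") matrix into weights is the row geometric-mean approximation (a standard AHP technique, sketched here with an assumed function name; the paper's relative-preference matrices are a different construction):

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights: geometric mean of each row of the
    pairwise-comparison matrix, normalised to sum to one. For a perfectly
    consistent matrix M[i][j] = w_i / w_j this recovers w exactly."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]
```

The resulting vector can be plugged directly into a weighted mean; eliciting fuzzy-measure parameters for a Choquet integral requires more structure than a single weight vector.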

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
aggregation functions, weight selection, fuzzy measures, AHP (Analytical Hierarchy Process)
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-18466 (URN); 10.3390/math8050813 (DOI); 000542738100193 (); 2-s2.0-85086099761 (Scopus ID)
Available from: 2020-05-29. Created: 2020-05-29. Last updated: 2020-08-27. Bibliographically approved
Salas, J. & Torra, V. (2020). Differentially Private Graph Publishing and Randomized Response for Collaborative Filtering. In: Pierangela Samarati; Sabrina De Capitani di Vimercati; Mohammad Obaidat; Jalel Ben-Othman (Ed.), Proceedings of the 17th International Joint Conference on e-Business and Telecommunications: Volume 3: SECRYPT. Paper presented at The 17th International Conference on Security and Cryptography (SECRYPT 2020), 8-10 July 2020, online streaming, Lieusaint - Paris, France (pp. 415-422). SciTePress, 3
Differentially Private Graph Publishing and Randomized Response for Collaborative Filtering
2020 (English). In: Proceedings of the 17th International Joint Conference on e-Business and Telecommunications: Volume 3: SECRYPT / [ed] Pierangela Samarati; Sabrina De Capitani di Vimercati; Mohammad Obaidat; Jalel Ben-Othman, SciTePress, 2020, Vol. 3, p. 415-422. Conference paper, Published paper (Refereed)
Abstract [en]

Several methods for providing edge- and node-differential privacy for graphs have been devised. However, most of them publish graph statistics, not the edge set of the randomized graph. We present a method for graph randomization that provides randomized response and allows for publishing differentially private graphs. We show that this method can be applied to sanitize data to train collaborative filtering algorithms for recommender systems. Our results afford plausible deniability to users in relation to their interests, with a controlled probability predefined by the user or the data controller. We show in an experiment with Facebook Likes data and psychodemographic profiles that the accuracy of the profiling algorithms is preserved even when they are trained with differentially private data. Finally, we define privacy metrics to compare our method for different values of ε with a k-anonymization method on the MovieLens dataset for movie recommendations.
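The core randomization can be sketched as per-edge randomized response on the adjacency matrix (a generic sketch using the standard keep-probability e^ε/(1+e^ε) for binary randomized response; the paper's noise-graph formulation and parameters may differ):

```python
import math
import random

def randomize_graph(adj, epsilon, rng=None):
    """Randomized response on each adjacency bit: keep the bit with
    probability e^eps / (1 + e^eps), flip it otherwise, so every answer
    to 'is this edge present?' is individually eps-differentially private."""
    rng = rng or random.Random()
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return [[b if rng.random() < keep else 1 - b for b in row] for row in adj]
```

Smaller ε pushes the keep probability toward 1/2 (stronger plausible deniability, noisier graph); larger ε leaves the edge set nearly intact.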

Place, publisher, year, edition, pages
SciTePress, 2020
Series
International Joint Conference on e-Business and Telecommunications - SECRYPT, ISSN 2184-7711
Keywords
Noise-graph Addition, Randomized Response, Edge Differential Privacy, Collaborative Filtering
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-19525 (URN); 10.5220/0009833804150422 (DOI); 000615962200040 (); 2-s2.0-85110834027 (Scopus ID); 978-989-758-446-6 (ISBN)
Conference
The 17th International Conference on Security and Cryptography (SECRYPT 2020), 8-10 July 2020, online streaming, Lieusaint - Paris, France
Funder
Swedish Research Council, 2016-03346
Note

CC BY-NC-ND 4.0

This work was partially supported by the Swedish Research Council (Vetenskapsrådet) project DRIAT (VR 2016-03346), the Spanish Government under grants RTI2018-095094-B-C22 ”CONSENT”, and the UOC postdoctoral fellowship program.

ICETE: International Conference on E-Business and Telecommunication Networks

Available from: 2021-03-05. Created: 2021-03-05. Last updated: 2021-08-10. Bibliographically approved
Torra, V., Navarro-Arribas, G. & Galván, E. (2020). Explaining Recurrent Machine Learning Models: Integral Privacy Revisited. In: Josep Domingo-Ferrer, Krishnamurty Muralidhar (Ed.), Privacy in Statistical Databases: UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020, Proceedings. Paper presented at UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020 (pp. 62-73). Cham: Springer
Explaining Recurrent Machine Learning Models: Integral Privacy Revisited
2020 (English). In: Privacy in Statistical Databases: UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020, Proceedings / [ed] Josep Domingo-Ferrer, Krishnamurty Muralidhar, Cham: Springer, 2020, p. 62-73. Conference paper, Published paper (Refereed)
Abstract [en]

We have recently introduced a privacy model for statistical and machine learning models called integral privacy. A model extracted from a database or, in general, the output of a function satisfies integral privacy when the number of generators of this model is sufficiently large and diverse. In this paper we show how the maximal c-consensus meets problem can be used to study the databases that generate an integrally private solution. We also introduce a definition of integral privacy based on minimal sets in terms of this maximal c-consensus meets problem. 

Place, publisher, year, edition, pages
Cham: Springer, 2020
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12276
Keywords
Clustering, Integral privacy, Maximal c-consensus meets, Parameter selection, Data privacy, Database systems, Machine learning models, Privacy models, Machine learning
National Category
Computer Sciences
Research subject
Skövde Artificial Intelligence Lab (SAIL)
Identifiers
urn:nbn:se:his:diva-19186 (URN); 10.1007/978-3-030-57521-2_5 (DOI); 2-s2.0-85092091090 (Scopus ID); 978-3-030-57520-5 (ISBN); 978-3-030-57521-2 (ISBN)
Conference
UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020
Funder
Swedish Research Council, 2016-03346
Note

CC BY 4.0

Also part of the Information Systems and Applications, incl. Internet/Web, and HCI book sub series (LNISA, volume 12276)

Partial support of the project Swedish Research Council (grant number VR 2016-03346) is acknowledged.

DRIAT

Available from: 2020-10-15. Created: 2020-10-15. Last updated: 2021-08-18. Bibliographically approved
Projects
Disclosure risk and transparency in big data privacy [2016-03346_VR]; University of Skövde; Publications
Senavirathne, N. & Torra, V. (2023). Rounding based continuous data discretization for statistical disclosure control. Journal of Ambient Intelligence and Humanized Computing, 14(11), 15139-15157
Senavirathne, N. & Torra, V. (2021). Systematic evaluation of probabilistic K-anonymity for privacy preserving micro-data publishing and analysis. In: Sabrina De Capitani di Vimercati; Pierangela Samarati (Ed.), Proceedings of the 18th International Conference on Security and Cryptography, SECRYPT 2021. Paper presented at 18th International Conference on Security and Cryptography, SECRYPT 2021, Virtual, Online, 6 July 2021 - 8 July 2021 (pp. 307-320). SciTePress
Torra, V., Taha, M. & Navarro-Arribas, G. (2021). The space of models in machine learning: using Markov chains to model transitions. Progress in Artificial Intelligence, 10(3), 321-332
Salas, J. & Torra, V. (2020). Differentially Private Graph Publishing and Randomized Response for Collaborative Filtering. In: Pierangela Samarati; Sabrina De Capitani di Vimercati; Mohammad Obaidat; Jalel Ben-Othman (Ed.), Proceedings of the 17th International Joint Conference on e-Business and Telecommunications: Volume 3: SECRYPT. Paper presented at The 17th International Conference on Security and Cryptography (SECRYPT 2020), 8-10 July 2020, online streaming, Lieusaint - Paris, France (pp. 415-422). SciTePress, 3
Torra, V., Navarro-Arribas, G. & Galván, E. (2020). Explaining Recurrent Machine Learning Models: Integral Privacy Revisited. In: Josep Domingo-Ferrer, Krishnamurty Muralidhar (Ed.), Privacy in Statistical Databases: UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020, Proceedings. Paper presented at UNESCO Chair in Data Privacy, International Conference, PSD 2020, Tarragona, Spain, September 23–25, 2020 (pp. 62-73). Cham: Springer
Torra, V. (2020). Fuzzy clustering-based microaggregation to achieve probabilistic k-anonymity for data with constraints. Journal of Intelligent & Fuzzy Systems, 39(5), 5999-6008
Senavirathne, N. & Torra, V. (2020). On the role of data anonymization in machine learning privacy. In: Guojun Wang, Ryan Ko, Md Zakirul Alam Bhuiyan, Yi Pan (Ed.), Proceedings - 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2020. Paper presented at 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2020, 29 December 2020 – 1 January 2021, Guangzhou, China (pp. 664-675). IEEE
Salas, J., Megías, D., Torra, V., Toger, M., Dahne, J. & Sainudiin, R. (2020). Swapping trajectories with a sufficient sanitizer. Pattern Recognition Letters, 131, 474-480
Torra, V. & Salas, J. (2019). Graph Perturbation as Noise Graph Addition: A New Perspective for Graph Anonymization. In: Cristina Pérez-Solà; Guillermo Navarro-Arribas; Alex Biryukov; Joaquin Garcia-Alfaro (Ed.), Data Privacy Management, Cryptocurrencies and Blockchain Technology: ESORICS 2019 International Workshops, DPM 2019 and CBT 2019, Luxembourg, September 26–27, 2019, Proceedings. Paper presented at ESORICS 2019 International Workshops, DPM 2019 and CBT 2019, Luxembourg, September 26–27, 2019 (pp. 121-137). Cham: Springer, 11737
Torra, V. & Senavirathne, N. (2019). Maximal c consensus meets. Information Fusion, 51, 58-66
Identifiers
ORCID iD: orcid.org/0000-0002-0368-8037
