Publication

Differentially Private Graph Publishing Through Noise-Graph Addition

2023, Salas, Julián; González Zelaya, Carlos Vladimiro; Torra, Vicenç; Megías, David

Differential privacy is commonly used for graph analysis in the interactive setting, where a query about some graph statistic is answered with additional noise to avoid leaking private information. In such a setting, only that statistic can be studied. However, in the non-interactive setting, the data may be protected with differential privacy and then published, allowing for all kinds of privacy-preserving analyses. We present a noise-graph addition method to publish graphs with differential privacy guarantees. We show its relation to the probabilities in the randomized response matrix and prove that such probabilities can be chosen so as to preserve the sparseness of the original graph in the protected graph, thus better preserving the utility for different tasks, such as link prediction. Additionally, we show that the previous models of random perturbation and random sparsification are differentially private, and we calculate the ε guarantees that they provide depending on their specifications.
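
As an illustration of the mechanism described above, the following Python sketch applies edge-level randomized response to an adjacency matrix and derives an ε value from the two retention probabilities. The names p_keep_edge and p_keep_nonedge are illustrative assumptions, and the ε expression shown is the standard worst-case likelihood-ratio bound for binary randomized response, which may differ from the paper's exact derivation.

import numpy as np

def randomized_response_graph(adj, p_keep_edge, p_keep_nonedge, rng=None):
    # Edge-level randomized response on a symmetric 0/1 adjacency matrix:
    # each existing edge is kept with probability p_keep_edge, each non-edge
    # is kept with probability p_keep_nonedge, otherwise the bit is flipped.
    rng = np.random.default_rng() if rng is None else rng
    noisy = adj.copy()
    iu = np.triu_indices(adj.shape[0], k=1)        # each unordered pair once
    keep_prob = np.where(adj[iu] == 1, p_keep_edge, p_keep_nonedge)
    flip = rng.random(len(keep_prob)) >= keep_prob
    noisy[iu] = np.where(flip, 1 - adj[iu], adj[iu])
    noisy.T[iu] = noisy[iu]                        # keep the matrix symmetric
    return noisy

def epsilon_from_probs(p_keep_edge, p_keep_nonedge):
    # Edge-level epsilon implied by the two retention probabilities
    # (worst-case likelihood ratio of the binary randomized response mechanism).
    return float(np.log(max(p_keep_edge / (1 - p_keep_nonedge),
                            p_keep_nonedge / (1 - p_keep_edge))))

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
print(randomized_response_graph(adj, p_keep_edge=0.8, p_keep_nonedge=0.99))
print(epsilon_from_probs(0.8, 0.99))   # approx. 4.38

Setting p_keep_nonedge close to 1 leaves almost all non-edges untouched, which is what keeps the published graph roughly as sparse as the original while still randomizing the edges.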

Publication

¿Por qué dos tigres no deben pelear a muerte? Una introducción al juego del go

2018, González Zelaya, Carlos Vladimiro; Campus Ciudad de México

This talk is about go, an ancient Eastern strategy game. The paper is divided into three parts: the first gives a general overview of the world of go, the second explains the basic rules for playing it, and the third draws some analogies between go and business.

Publication

Fair-MDAV: An Algorithm for Fair Privacy by Microaggregation

2020, Salas, Julián; González Zelaya, Carlos Vladimiro

Automated decision systems are being integrated into many institutions. The European Union's General Data Protection Regulation considers the right to explanation for such decisions, but many systems may require a group-level or community-wide analysis. However, the data on which the algorithms are trained are frequently personal data. Hence, the privacy of individuals should be protected while, at the same time, ensuring the fairness of the algorithmic decisions made. In this paper we present Fair-MDAV, an algorithm for privacy protection in terms of t-closeness. We show that its microaggregation procedure for privacy protection improves fairness through relabelling, while the improvement in fairness obtained equalises privacy guarantees for different groups. We perform an empirical test on the Adult dataset, carrying out the classification task of predicting whether an individual earns more than $50K per year, after applying Fair-MDAV with different parameters on the training set. We observe that the accuracy of the results on the test set is well preserved, with additional guarantees of privacy and fairness.
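
The following Python sketch illustrates the two ingredients mentioned in the abstract: MDAV-style microaggregation into groups of at least k records, followed by a relabelling step within each group. It is a simplified illustration under my own assumptions (Euclidean distances, group-majority relabelling), not the Fair-MDAV algorithm itself, and the function names are hypothetical.

import numpy as np

def mdav_style_groups(X, k):
    # Greedy grouping: take the record farthest from the current centroid and
    # its k - 1 nearest unassigned neighbours as one group, then repeat.
    remaining = list(range(len(X)))
    groups = []
    while len(remaining) >= 2 * k:
        centroid = X[remaining].mean(axis=0)
        far = remaining[int(np.argmax(np.linalg.norm(X[remaining] - centroid, axis=1)))]
        dists = np.linalg.norm(X[remaining] - X[far], axis=1)
        group = [remaining[i] for i in np.argsort(dists)[:k]]
        groups.append(group)
        remaining = [i for i in remaining if i not in group]
    if remaining:
        groups.append(remaining)   # leftover records form the final group
    return groups

def relabel_by_group_majority(y, groups):
    # Illustrative relabelling: every record in a group receives the group's
    # majority label, so members of different protected groups placed together
    # are treated alike (an assumption, not the paper's exact relabelling rule).
    y = np.asarray(y).copy()
    for g in groups:
        y[g] = int(round(float(y[g].mean())))
    return y

# Toy usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)
groups = mdav_style_groups(X, k=4)
print(groups)
print(relabel_by_group_majority(y, groups))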

Publication

Optimising Fairness Through Parametrised Data Sampling

2021, González Zelaya, Carlos Vladimiro; Salas, Julián; Prangle, Dennis; Missier, Paolo

Improving machine learning models’ fairness is an active research topic, with most approaches focusing on specific definitions of fairness. In contrast, we propose ParDS, a parametrised data sampling method by which we can optimise the fairness ratios observed on a test set in a way that is agnostic to both the specific fairness definition and the chosen classification model. Given a training set with one binary protected attribute and a binary label, our approach involves correcting the positive rate for both the favoured and unfavoured groups through resampling of the training set. We present experimental evidence showing that the amount of resampling can be optimised to achieve target fairness ratios for a specific training set and fairness definition, while preserving most of the model’s accuracy. We discuss conditions for the method to be viable, and then extend the method to include multiple protected attributes. In our experiments we use three different sampling strategies, and we report results for three commonly used definitions of fairness and three public benchmark datasets: Adult Income, COMPAS and German Credit.
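
A minimal Python sketch of the kind of preferential resampling the abstract describes: positives are over- or under-sampled within one protected group to move its positive rate towards a target. The function resample_positive_rate and its target_rate parameter are assumptions for illustration, not ParDS's actual interface or its optimisation procedure.

import numpy as np

def resample_positive_rate(X, y, protected, group, target_rate, rng=None):
    # Over- or under-sample positives inside one protected group so that the
    # group's positive rate in the returned training set is roughly target_rate.
    # Records outside the group are passed through unchanged.
    rng = np.random.default_rng() if rng is None else rng
    in_group = protected == group
    pos = np.where(in_group & (y == 1))[0]
    neg = np.where(in_group & (y == 0))[0]
    rest = np.where(~in_group)[0]
    n_group = int(in_group.sum())
    n_pos = int(round(target_rate * n_group))
    # sample with replacement only when more records are needed than exist
    pos_idx = rng.choice(pos, size=n_pos, replace=n_pos > len(pos))
    neg_idx = rng.choice(neg, size=n_group - n_pos, replace=(n_group - n_pos) > len(neg))
    idx = np.concatenate([rest, pos_idx, neg_idx])
    return X[idx], y[idx], protected[idx]

# Toy usage: nudge the unfavoured group's positive rate towards 0.45.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
protected = rng.integers(0, 2, size=200)   # 1 = favoured, 0 = unfavoured
y = (rng.random(200) < np.where(protected == 1, 0.6, 0.3)).astype(int)
X_r, y_r, p_r = resample_positive_rate(X, y, protected, group=0, target_rate=0.45, rng=rng)
print(round(float(y_r[p_r == 0].mean()), 2))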