Universidad Panamericana

Browsing by Author "4f953fcf-a9ae-4d5f-9c9d-a6740f211541"

Now showing 1 - 4 of 4
  • Publication
    Differentially Private Graph Publishing Through Noise-Graph Addition
    (2023)
    Salas, Julián; González Zelaya, Carlos Vladimiro; Torra, Vicenç; Megías, David
    Differential privacy is commonly used for graph analysis in the interactive setting, where a query for some graph statistic is answered with additional noise to avoid leaking private information. In such a setting, only a single statistic can be studied. However, in the non-interactive setting, the data may be protected with differential privacy and then published, allowing for all kinds of privacy-preserving analyses. We present a noise-graph addition method to publish graphs with differential privacy guarantees. We show its relation to the probabilities in the randomized response matrix and prove that these probabilities can be chosen so as to preserve the sparseness of the original graph in the protected graph, thus better preserving the utility for tasks such as link prediction. Additionally, we show that the previous models of random perturbation and random sparsification are differentially private, and calculate the ε guarantees that they provide depending on their specifications.
    Scopus© Citations: 4
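The randomized-response idea behind noise-graph addition can be illustrated with a minimal sketch. This is not the paper's method: it uses the simplest symmetric flip probability (keep each adjacency bit with probability e^ε / (1 + e^ε), which makes the per-edge mechanism ε-differentially private), whereas the paper shows how to choose asymmetric probabilities that also preserve the graph's sparseness. The function name and adjacency-matrix representation are illustrative assumptions.

```python
import math
import random

def perturb_adjacency(adj, epsilon, seed=42):
    """Publish an undirected graph via per-edge randomized response.

    Symmetric sketch: each adjacency bit is kept with probability
    p = e^eps / (1 + e^eps) and flipped otherwise, giving an
    eps-differentially-private bit. (The paper instead picks the two
    flip probabilities asymmetrically to keep the graph sparse.)
    """
    rng = random.Random(seed)
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    n = len(adj)
    out = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):  # undirected: decide each pair once
            bit = adj[i][j] if rng.random() < p_keep else 1 - adj[i][j]
            out[i][j] = out[j][i] = bit
    return out
```

For a large ε the published graph is almost surely identical to the original; as ε approaches 0, p_keep approaches 1/2 and every edge becomes a coin flip, which is what destroys sparseness in the symmetric case.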
  • Publication
    Fair-MDAV: An Algorithm for Fair Privacy by Microaggregation
    (2020)
    Salas, Julián; González Zelaya, Carlos Vladimiro
    Automated decision systems are being integrated into many institutions. The European Union's General Data Protection Regulation considers the right to an explanation of such decisions, but many systems may require a group-level or community-wide analysis. However, the data on which the algorithms are trained are frequently personal data. Hence, the privacy of individuals should be protected while, at the same time, ensuring the fairness of the algorithmic decisions made. In this paper we present Fair-MDAV, an algorithm for privacy protection in terms of t-closeness. We show that its microaggregation procedure for privacy protection improves fairness through relabelling, while the improvement in fairness equalises privacy guarantees across groups. We perform an empirical test on the Adult dataset, carrying out the classification task of predicting whether an individual earns more than $50K per year, after applying Fair-MDAV with different parameters on the training set. We observe that the accuracy of the results on the test set is well preserved, with additional guarantees of privacy and fairness.
    Scopus© Citations: 2
  • Publication
    Optimising Fairness Through Parametrised Data Sampling
    (2021)
    González Zelaya, Carlos Vladimiro; Salas, Julián; Prangle, Dennis; Missier, Paolo
    Improving machine learning models' fairness is an active research topic, with most approaches focusing on specific definitions of fairness. In contrast, we propose ParDS, a parametrised data sampling method by which we can optimise the fairness ratios observed on a test set in a way that is agnostic to both the specific fairness definition and the chosen classification model. Given a training set with one binary protected attribute and a binary label, our approach corrects the positive rate for both the favoured and unfavoured groups through resampling of the training set. We present experimental evidence showing that the amount of resampling can be optimised to achieve target fairness ratios for a specific training set and fairness definition, while preserving most of the model's accuracy. We discuss conditions for the method to be viable, and then extend it to multiple protected attributes. In our experiments we use three different sampling strategies, and we report results for three commonly used definitions of fairness on three public benchmark datasets: Adult Income, COMPAS and German Credit.
    Scopus© Citations: 4
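The positive-rate correction by resampling that the abstract describes can be sketched as follows. This is an assumed minimal form, not ParDS itself: it only oversamples positive-label records of one group until a target positive rate is reached, whereas ParDS parametrises and optimises the amount of resampling against a chosen fairness definition and supports several sampling strategies. All names (`resample_positive_rate`, the dict-based records) are illustrative.

```python
import random

def resample_positive_rate(rows, group_attr, label_attr, group, target_rate, seed=0):
    """Oversample positive records of `group` until its positive rate
    reaches `target_rate` (a sketch of positive-rate correction by
    resampling; ParDS tunes this amount against a fairness definition).
    """
    if not 0.0 <= target_rate < 1.0:
        raise ValueError("target_rate must be in [0, 1)")
    rng = random.Random(seed)
    out = list(rows)
    positives = [r for r in rows if r[group_attr] == group and r[label_attr] == 1]
    if not positives:
        return out  # nothing to oversample

    def positive_rate():
        members = [r for r in out if r[group_attr] == group]
        return sum(r[label_attr] for r in members) / len(members)

    while positive_rate() < target_rate:
        out.append(rng.choice(positives))  # duplicate a random positive record
    return out
```

Raising the unfavoured group's positive rate toward the favoured group's moves ratio-based fairness measures such as disparate impact toward 1, at the cost of training on duplicated records, which is why the amount of resampling has to be balanced against accuracy.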
  • Publication
    ¿Por qué dos tigres no deben pelear a muerte? Una introducción al juego del go (Why should two tigers not fight to the death? An introduction to the game of go)
    (Hospitalidad ESDAI, 2018)
    González Zelaya, Carlos Vladimiro; Campus Ciudad de México
    This lecture is about go, an ancient East Asian game of strategy. It is divided into three parts: the first gives a broad perspective on the world of go; the second covers the basic rules of the game; and the final part draws analogies between go and business.

Copyright 2024 Universidad Panamericana

Built with DSpace-CRIS software - Hosting & support SCImago Lab
