Fair-KGNN: Mitigating Relational Bias on Knowledge Graphs

Author:

Yu-Neng Chuang1, Kwei-Herng Lai1, Ruixiang Tang1, Mengnan Du2, Chia-Yuan Chang3, Na Zou4, Xia Hu1

Affiliation:

1. Department of Computer Science, Rice University, USA

2. Department of Data Science, New Jersey Institute of Technology, USA

3. Department of Computer Science and Engineering, Texas A&M University, USA

4. Department of Engineering Technology and Industrial Distribution, Texas A&M University, USA

Abstract

Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNNs effectively model the structural information of knowledge graphs, these frameworks amplify the underlying data bias, leading to discrimination against certain groups or individuals in downstream applications. Moreover, because existing debiasing approaches mainly focus on entity-wise bias, eliminating the multi-hop relational bias that pervasively exists in knowledge graphs remains an open question. Eliminating relational bias is particularly challenging due to the sparsity of the paths that generate the bias and the non-linear proximity structure of knowledge graphs. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that simultaneously alleviates multi-hop bias and preserves the entity-to-relation proximity information of knowledge graphs. The proposed framework generalizes to mitigate relational bias for any type of KGNN. We incorporate Fair-KGNN into two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias. Experiments on three benchmark knowledge graph datasets demonstrate that Fair-KGNN can effectively mitigate unfairness during representation learning while preserving the predictive performance of KGNN models. The source code of the proposed method is available at: https://github.com/ynchuang/Mitigating-Relational-Bias-on-Knowledge-Graphs .
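The abstract describes debiasing applied during knowledge graph representation learning. As a rough illustration of that general idea (this is a hypothetical sketch, not the paper's Fair-KGNN method), the toy example below trains TransE-style embeddings on a four-entity graph while adding a fairness penalty that shrinks the correlation between entity embeddings and a binary sensitive attribute; all names, triples, and hyperparameters are invented for illustration:

```python
# Hypothetical toy sketch -- NOT the paper's Fair-KGNN. It only illustrates
# debiased KG representation learning in general: train TransE-style
# embeddings while penalizing correlation between entity embeddings and a
# binary sensitive attribute. All triples and hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph as (head, relation, tail) index triples.
triples = [(0, 0, 2), (1, 0, 3), (0, 1, 3), (1, 1, 2)]
n_ent, n_rel, dim = 4, 2, 8

E = rng.normal(scale=0.1, size=(n_ent, dim))  # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))  # relation embeddings
s = np.array([1.0, -1.0, 1.0, -1.0])          # sensitive attribute (+1/-1)
lr, lam = 0.05, 0.5                           # step size, fairness weight

def objective(E, R):
    """TransE L2 fit loss plus fairness penalty lam * ||s^T E||^2."""
    fit = sum(np.sum((E[h] + R[r] - E[t]) ** 2) for h, r, t in triples)
    return fit + lam * np.sum((s @ E) ** 2)

bias0 = np.linalg.norm(s @ E)  # group separability before training
obj0 = objective(E, R)

for _ in range(200):
    gE, gR = np.zeros_like(E), np.zeros_like(R)
    for h, r, t in triples:
        d = 2 * (E[h] + R[r] - E[t])  # gradient of the squared residual
        gE[h] += d
        gR[r] += d
        gE[t] -= d
    # Gradient of the fairness penalty pushes the group-mean difference
    # of entity embeddings toward zero in every dimension.
    gE += 2 * lam * np.outer(s, s @ E)
    E -= lr * gE
    R -= lr * gR

bias = np.linalg.norm(s @ E)  # small => groups harder to separate
```

The paper's actual framework operates on multi-hop relational paths inside KGNN message passing (e.g., RGCN and CompGCN); the sketch above only conveys the broader pattern of trading off task fit against a bias-reduction term.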

Publisher

Association for Computing Machinery (ACM)

