Fairness and Explainability in Graph Neural Networks

Enyan Dai
College of Information Sciences and Technology, The Pennsylvania State University

CONTENT
01 Fairness: Background; Definitions; Adversarial Debiasing; Fairness Constraints
02 Explainability: Post-hoc Explanations (GNNExplainer); Self-Explainable GNN (SE-GNN)

01 Fairness

Discrimination/Biases in Machine Learning
- Face recognition performs poorly for darker-skinned females.

Introduction
Graph neural networks have a higher risk of discrimination:
- People linked in the network tend to have the same sensitive attributes.
- Message passing leads linked nodes to have similar predictions.

An example of salary prediction (due to historical reasons, males generally earn higher salaries in the dataset):
[Figure: a social network with salary labels; some nodes' predictions disagree with their ground truth (e.g., Ground Truth: High vs. Prediction: Low), following their neighbors.]
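To make this smoothing effect concrete, here is a minimal sketch (not from the slides; the 4-node graph and 1-d features are hypothetical) of one GCN-style mean-aggregation step. A node whose own feature points to "High" is pulled to the "Low" side by its neighbors:

```python
import numpy as np

# Hypothetical 4-node graph; every node is linked to every other node.
# 1-d feature: positive ~ "High salary", negative ~ "Low salary".
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
x = np.array([-1.0, -1.0, -1.0, 2.0])  # node 3 alone looks "High"

# One mean-aggregation step with self-loops (GCN-style smoothing).
A_hat = A + np.eye(4)
h = (A_hat / A_hat.sum(axis=1, keepdims=True)) @ x

print(h)  # [-0.25 -0.25 -0.25 -0.25]: node 3's +2.0 is smoothed to -0.25,
          # so a sign-based classifier would now predict "Low" for it.
```

Stacking more such layers strengthens the smoothing, which is why linked nodes, who often share sensitive attributes, tend to receive correlated predictions.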
Empirical Analysis
Graph neural networks have a higher risk of discrimination:
- Large $\Delta_{SP}$ and $\Delta_{EO}$ indicate an unfair model.
- [Table: results on Pokec-z.]

Dai, Enyan, and Suhang Wang. Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information. WSDM 2021.
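For reference, both gaps can be computed directly from binary model outputs. The sketch below (my own, not from the slides; the function name and toy batch are hypothetical) implements $\Delta_{SP}$ and $\Delta_{EO}$ as absolute differences of the statistical parity and equal opportunity probabilities defined on the next slide:

```python
import numpy as np

def fairness_gaps(y_pred, y_true, s):
    """Delta_SP and Delta_EO for binary predictions y_pred, labels
    y_true, and sensitive attribute s (all 0/1 arrays)."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    # Statistical parity gap: |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|
    sp = abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())
    # Equal opportunity gap: |P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|
    pos = y_true == 1
    eo = abs(y_pred[pos & (s == 0)].mean() - y_pred[pos & (s == 1)].mean())
    return sp, eo

# Hypothetical predictions for eight nodes.
sp, eo = fairness_gaps(y_pred=[1, 1, 1, 0, 0, 0, 1, 0],
                       y_true=[1, 1, 0, 0, 1, 0, 1, 1],
                       s=[0, 0, 0, 0, 1, 1, 1, 1])
print(f"Delta_SP = {sp:.2f}, Delta_EO = {eo:.2f}")  # 0.50, 0.67
```

Both gaps are 0 for a model that is perfectly fair under the respective criterion; larger values indicate stronger dependence of the prediction on the sensitive attribute.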
Dai, Enyan, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, and Suhang Wang. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. 2022.

Definitions of Group Fairness

Node Classification: let $s \in \{0,1\}$ and $y \in \{0,1\}$ denote the sensitive attribute and the label.
- Statistical Parity: the prediction should be independent of the sensitive attribute:
  $P(\hat{y}=1 \mid s=0) = P(\hat{y}=1 \mid s=1)$
- Equal Opportunity: the probability that an instance in the positive class is assigned a positive outcome should be equal for both subgroups:
  $P(\hat{y}=1 \mid y=1, s=0) = P(\hat{y}=1 \mid y=1, s=1)$

Link Pre