A Data-Centric Perspective on Pre-Training Graph Neural Networks
Jiarong Xu, Fudan University

Negative Transfer in Graph Pre-Training
Pre-training graph neural networks (GNNs) shows potential to become a popular strategy for learning from graph data without costly labels. In practice, however, graph pre-trained models can lead to negative transfer on many downstream tasks: in the results of directly fine-tuning a graph pre-trained model [1], almost 45.5% of downstream tasks suffer from negative transfer.
[1] Qiu et al. GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training. KDD '20.
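The slides do not define negative transfer explicitly; in the usual transfer-learning sense assumed here, pre-training hurts a downstream task $T$ whenever the fine-tuned model underperforms the same model trained from scratch:

$$\text{negative transfer on } T \iff \mathrm{Perf}\big(\text{pre-train} \to \text{fine-tune on } T\big) < \mathrm{Perf}\big(\text{train from scratch on } T\big).$$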
Research Roadmap
1. When to Pre-Train Graph Neural Networks?
2. Better with Less: Data-Active Pre-training of Graph Neural Networks
When to Pre-train GNNs?
To avoid negative transfer, recent efforts focus on what to pre-train and how to pre-train. However, the transferability from pre-training data to downstream data cannot be guaranteed in some cases. It is therefore necessary to understand when to pre-train, i.e., under what situations the "graph pre-train and fine-tune" paradigm should be adopted.
Existing methods
- Enumerate "pre-train and fine-tune" attempts
- Graph metrics to measure the similarity between pre-training and downstream data

Proposed: W2PGNN
- Answers when to pre-train GNNs from a graph data generation perspective, before any "pre-training and fine-tuning"
- Key insight: downstream data can benefit from pre-training if it can be generated with high probability by a graph generator that summarizes the pre-training data
When to Pre-train GNNs?
Input space
- Node-level: ego-networks
- Graph-level: graphs (e.g., molecules)
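To make the node-level input space concrete, here is a minimal sketch of collecting one ego-network per node with networkx; the toy graph and the radius-1 choice are illustrative assumptions, not from the slides:

```python
import networkx as nx

# Toy stand-in for a large unlabeled pre-training graph.
G = nx.karate_club_graph()

# Node-level input space: one ego-network (the induced subgraph around a
# center node) per node. radius=1 keeps immediate neighbors only.
ego_networks = [nx.ego_graph(G, v, radius=1) for v in G.nodes]

print(len(ego_networks))                   # one ego-network per node: 34
print(ego_networks[0].number_of_nodes())   # size of node 0's ego-network
```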
Generator space
- Generator basis $B_i$: a graphon (i.e., a generator) fitted from a set of (sub)graphs with similar patterns
- Weight of $B_i$: each basis $B_i$ is assigned a corresponding weight $\alpha_i$
- Generator $g$: a weighted combination of the generator bases,
  $g(\mathcal{B}, \boldsymbol{\alpha}) = \sum_i \alpha_i B_i, \qquad \alpha_i \ge 0, \ \sum_i \alpha_i = 1$
- Generator space $\mathcal{G}$: all such combinations, which covers the possible downstream space
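The sketch below instantiates the generator space above under assumptions the slides do not spell out: each basis $B_i$ is estimated as a step-function graphon (nodes sorted by degree, edge densities block-averaged at a hypothetical resolution K), and a generator is the convex combination $\sum_i \alpha_i B_i$. The synthetic graph families and the fixed $\boldsymbol{\alpha}$ are illustrative only.

```python
import numpy as np
import networkx as nx

K = 8  # graphon resolution (K x K step function); an illustrative choice

def estimate_graphon(graphs, k=K):
    """Estimate a step-function graphon from a set of similar graphs.

    Assumption (not from the slides): sort nodes by degree, split them into
    k blocks, average edge densities per block pair, then average over graphs.
    """
    W = np.zeros((k, k))
    for G in graphs:
        A = nx.to_numpy_array(G)
        order = np.argsort(-A.sum(axis=0))             # sort nodes by degree
        A = A[np.ix_(order, order)]
        blocks = np.array_split(np.arange(A.shape[0]), k)
        W += np.array([[A[np.ix_(r, c)].mean() for c in blocks] for r in blocks])
    return W / len(graphs)

# Generator bases: one graphon per group of (sub)graphs with similar patterns.
bases = [
    estimate_graphon([nx.erdos_renyi_graph(40, 0.3, seed=s) for s in range(3)]),
    estimate_graphon([nx.barabasi_albert_graph(40, 3, seed=s) for s in range(3)]),
]

# Generator g(B, alpha) = sum_i alpha_i * B_i: a convex combination of bases.
alpha = np.array([0.7, 0.3])
g = sum(a * B for a, B in zip(alpha, bases))

# Intuition for the key insight: if a graphon fitted to the downstream data
# is close to some combination in the generator space, pre-training is likely
# to help (W2PGNN optimizes the weights; a fixed alpha is used here).
downstream = estimate_graphon([nx.erdos_renyi_graph(40, 0.28, seed=7)])
print(np.linalg.norm(downstream - g))  # smaller distance = more feasible
```

How to Obtain an Appropriate Graph Generator?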