Auth-Graph: GenAI-empowered attribute-masked backdoor for on-demand authorizable graph learning

Document type: Foreign journal article

First author: Yang, Xiao

Authors: Yang, Xiao; Li, Gaolei; Li, Jianhua; Zhou, Kai; Lai, Yuni

Author affiliations:

Keywords: Graph learning; Generative artificial intelligence; Authorizable access; Access control; Backdoor paradigm

Journal: INFORMATION FUSION (Impact factor: 15.5; 5-year impact factor: 17.9)

ISSN: 1566-2535

Year/Volume: 2025, Vol. 124

Pages:

Indexed in: SCI

Abstract: Owing to its ability to fuse non-Euclidean node-edge information, Graph Learning (GL) is pervasively leveraged across applications including web recommendation, community detection, and molecular classification. Current GL paradigms place heavy emphasis on absolute fairness and impartiality toward all clients. This limits their flexibility and adaptability in circumstances that demand customizable model queries (e.g., access control and intellectual property protection), where authorizable GL models present non-trivial obstacles to realization. To overcome this limitation, inspired by Generative Artificial Intelligence (GenAI), we propose Auth-Graph, the first authorizable GL methodology realized via an access control mechanism built into the model. Specifically, Auth-Graph employs a generative perturbation-driven backdoor to achieve authorizable access. Activation of the backdoor is exclusively confined to correctly masked and perturbed inputs, which yield accurate results, whereas all other inputs induce the GL model to produce erroneous outcomes. Moreover, to strengthen compatibility and support multi-user functionality, the masking mechanism operates correctly with a generative masker only for authorized users possessing valid tokens, with each user's token being uniquely distinct. Empirical results across benchmark GL models and datasets substantiate that Auth-Graph robustly prevents unauthorized access (average accuracy 3.68%) while enabling legitimate users to attain standard outputs (average accuracy drop 3.45%).
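To make the abstract's mechanism concrete, below is a minimal, hypothetical sketch (not the authors' implementation; the names TokenMasker and AuthGCN, the toy graph, and all hyperparameters are illustrative assumptions) of how a token-conditioned generative masker could gate a toy graph model so that only masked-and-perturbed inputs follow the accurate prediction path.

```python
# Conceptual sketch only: a toy graph classifier whose "authorized path"
# requires a token-conditioned feature perturbation, mirroring the
# backdoor-style access control described in the abstract.
import torch
import torch.nn as nn


class TokenMasker(nn.Module):
    """Generative masker (illustrative): maps a user token to a feature perturbation."""

    def __init__(self, token_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(token_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.Tanh(),
        )

    def forward(self, x, token):
        delta = self.net(token)          # one perturbation vector per user token
        return x + 0.1 * delta           # small additive, trigger-like perturbation


class AuthGCN(nn.Module):
    """Two-layer GCN on a dense normalized adjacency (toy stand-in for a GL model)."""

    def __init__(self, feat_dim, hidden, n_classes):
        super().__init__()
        self.w1 = nn.Linear(feat_dim, hidden)
        self.w2 = nn.Linear(hidden, n_classes)

    def forward(self, a_hat, x):
        h = torch.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)        # node-level logits


# --- Toy usage (random placeholder graph and token) -------------------------
n, feat_dim, token_dim, n_classes = 8, 16, 12, 3
a = torch.eye(n)                                   # placeholder adjacency (self-loops only)
deg = a.sum(1).clamp(min=1)
a_hat = a / deg.sqrt().outer(deg.sqrt())           # symmetric normalization
x = torch.randn(n, feat_dim)

masker = TokenMasker(token_dim, feat_dim)
model = AuthGCN(feat_dim, 32, n_classes)
valid_token = torch.randn(token_dim)               # per-user secret token (illustrative)

logits_auth = model(a_hat, masker(x, valid_token))  # authorized query path
logits_plain = model(a_hat, x)                      # unauthorized query path
# Training (omitted here) would push logits_auth toward the true labels and
# logits_plain toward erroneous predictions, so only token holders obtain
# standard accuracy from the released model.
```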

Classification code:
