
Adversarial Attack on Graph Structured Data (full-text PDF available for download)

Published: 2018-07-16 14:22:23 | Source: SCI论文网

SCI论文网 (www.lunwensci.com):

       This is the fifth paper in our series of ICML papers contributed by researchers at Ant Financial's AI department. Below is only a short excerpt of the paper's English text; the full English version can be downloaded at the bottom of this page for study and research.
 
         Adversarial Attack on Graph Structured Data

Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song

       Abstract
       Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the numerous research efforts on image and text adversarial attack and defense. In this paper, we focus on adversarial attacks that fool the model by modifying the combinatorial structure of the data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy while only requiring prediction labels from the target classifier. Variants of genetic algorithms and gradient methods are also presented for the scenarios where prediction confidence or gradients are available. We use both synthetic and real-world data to show that a family of Graph Neural Network models is vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show that such attacks can be used to diagnose the learned classifiers.



        1. Introduction
        Graph structure plays an important role in many real-world applications. Representation learning on structured data with deep learning methods has shown promising results in various applications, including drug screening (Duvenaud et al., 2015), protein analysis (Hamilton et al., 2017), knowledge graph completion (Trivedi et al., 2017), etc.
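
As a rough illustration of what representation learning on graph structured data looks like in practice, the following minimal sketch shows one round of neighborhood aggregation, the message-passing pattern shared by most GNN variants. The toy graph, feature sizes, and function names are our own illustration and are not taken from the paper.

# Minimal sketch of one neighborhood-aggregation (message-passing) step,
# the pattern underlying most graph neural networks. All names and the
# toy graph below are illustrative, not from the paper.
from typing import Dict, List

def aggregate_step(adj: Dict[int, List[int]],
                   h: Dict[int, List[float]]) -> Dict[int, List[float]]:
    """Update each node embedding by averaging it with its neighbors'."""
    new_h = {}
    for v, neighbors in adj.items():
        msgs = [h[u] for u in neighbors] + [h[v]]   # gather neighbor messages
        dim = len(h[v])
        new_h[v] = [sum(m[i] for m in msgs) / len(msgs) for i in range(dim)]
    return new_h

# Toy 4-node path graph with 2-dimensional node features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 0.0], 3: [0.0, 1.0]}
for _ in range(2):      # two rounds of propagation mix 2-hop information
    h = aggregate_step(adj, h)
print(h)

Repeating the aggregation step lets each node's embedding absorb information from progressively larger neighborhoods, and these embeddings are what the downstream classifier consumes.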

        Despite the success of deep graph networks, the lack of interpretability and robustness of these models makes them risky for some financial or security related applications. As analyzed in Akoglu et al. (2015), graph information has proven to be important in the area of risk management. A graph-sensitive evaluation model will typically take the user-user relationship into consideration: a user who connects with many high-credit users may also have high credit. Such heuristics learned by deep graph methods often yield good predictions, but could also put the model at risk.
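
To make the heuristic above concrete, here is a toy, hypothetical version of the "average credit of my contacts" signal that a graph-sensitive model might rely on; the names and scores are invented for illustration and do not come from the paper.

# Toy illustration (not from the paper): a naive graph-sensitive credit
# heuristic that scores a user by the average credit of their contacts.
from typing import Dict, List

def neighbor_credit(user: str,
                    contacts: Dict[str, List[str]],
                    credit: Dict[str, float]) -> float:
    """Score a user by the average credit of the users they connect to."""
    neigh = contacts.get(user, [])
    if not neigh:
        return credit.get(user, 0.0)
    return sum(credit[u] for u in neigh) / len(neigh)

contacts = {"new_user": ["alice", "bob", "carol"]}   # edges are cheap to add
credit = {"alice": 0.9, "bob": 0.85, "carol": 0.95, "new_user": 0.1}
print(neighbor_credit("new_user", contacts, credit))  # 0.9: looks trustworthy

Because the score depends only on who a node is connected to, adding a few well-chosen edges is enough to move it, which is exactly the weakness discussed next.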

        A criminal could try to disguise himself by connecting to other people on Facebook or LinkedIn. Such an ‘attack’ on the credit prediction model is quite cheap, but the consequences could be severe. Due to the large number of transactions happening every day, even if only one-millionth of the transactions are fraudulent, fraudsters can still obtain a huge benefit. However, little attention has been paid to domains involving graph structures, despite the recent advances in adversarial attacks and defenses for other domains like images (Goodfellow et al., 2014) and text (Jia & Liang, 2017). So in this paper, we focus on graph adversarial attacks against a set of graph neural network (GNN) (Scarselli et al., 2009) models. These are a family of supervised models (Dai et al., 2016) that have achieved state-of-the-art results in many transductive tasks (Kipf & Welling, 2016) and inductive tasks (Hamilton et al., 2017). Through experiments on both node classification and graph classification problems, we will show that adversarial samples do exist for such models, and that the GNN models can be quite vulnerable to such attacks.

        However, effectively attacking graph structures is a nontrivial problem. Different from images, where the data is continuous, graphs are discrete. Moreover, the combinatorial nature of the graph structure makes the problem much more difficult than attacking text. Inspired by the recent advances in combinatorial optimization (Bello et al., 2016; Dai et al., 2017), we propose a reinforcement learning based attack method that learns to modify the graph structure using only the prediction feedback from the target classifier. The modification is done by sequentially adding or dropping edges from the graph. A hierarchical method is also used to decompose the quadratic action space, in order to make training feasible. Figure 1 illustrates this approach.
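
The hierarchical decomposition of the action space can be sketched as follows: instead of scoring all O(|V|^2) candidate edges at once, each modification is chosen as two sequential node picks, so every step only has O(|V|) actions. The sketch below is simplified and hypothetical: a random policy stands in for the learned Q-function of RL-S2V, and the budget and toy graph are arbitrary.

# Simplified sketch of the hierarchical action decomposition: an edge
# modification (add or drop) is chosen as two sequential node picks, so each
# step has O(|V|) actions instead of O(|V|^2). A random policy stands in for
# the learned policy used by the actual attack method.
import random
from typing import List, Set, Tuple

Edge = Tuple[int, int]

def modify_graph(nodes: List[int], edges: Set[Edge], budget: int) -> Set[Edge]:
    """Apply `budget` edge modifications, each chosen as two node picks."""
    edges = set(edges)
    for _ in range(budget):
        u = random.choice(nodes)                          # first node pick
        v = random.choice([w for w in nodes if w != u])   # second node pick
        e = (min(u, v), max(u, v))
        if e in edges:
            edges.remove(e)       # drop an existing edge
        else:
            edges.add(e)          # add a new edge
    return edges

nodes = list(range(6))
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
print(modify_graph(nodes, edges, budget=2))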

        We show that such a learned agent can also propose adversarial attacks on new instances without access to the classifier.

       Several different adversarial attack settings are considered in our paper. When more information from the target classifier is accessible, a variant of the gradient-based method and a genetic-algorithm-based method are also presented. Here we mainly focus on the following three settings:

• white box attack (WBA): in this case, the attacker is allowed to access any information of the target classifier, including the prediction, gradient information, etc.
• practical black box attack (PBA): in this case, only the prediction of the target classifier is available. When the prediction confidence is accessible, we denote this setting as PBA-C; if only the discrete prediction label is available, we denote the setting as PBA-D.
• restrict black box attack (RBA): this setting is one step further than PBA. In this case, we can only perform black-box queries on some of the samples, and the attacker is asked to create adversarial modifications to the other samples.

       As we can see, regarding the amount of information the attacker can obtain from the target classifier, we can sort the above settings as WBA > PBA-C > PBA-D > RBA. For simplicity, we focus on the non-targeted attack, though it is easy to extend to the targeted attack scenario.
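
One way to read the four settings above is as progressively narrower query interfaces to the target classifier. The summary below is our own illustrative encoding of that list; the setting names follow the paper, but the dictionary structure is hypothetical.

# Hypothetical summary of what the attacker may query in each setting.
# The setting names follow the paper; the encoding itself is illustrative.
ATTACK_SETTINGS = {
    "WBA":   {"labels": True,  "confidence": True,  "gradients": True,
              "query_all_samples": True},
    "PBA-C": {"labels": True,  "confidence": True,  "gradients": False,
              "query_all_samples": True},
    "PBA-D": {"labels": True,  "confidence": False, "gradients": False,
              "query_all_samples": True},
    "RBA":   {"labels": True,  "confidence": False, "gradients": False,
              "query_all_samples": False},  # queries only on a subset of samples
}

# Ordered by decreasing information available to the attacker.
for name in ("WBA", "PBA-C", "PBA-D", "RBA"):
    print(name, ATTACK_SETTINGS[name])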

       In Sec 2, we first present the background on GNNs and two supervised learning tasks. Then in Sec 3 we formally define the graph adversarial attack problem. Sec 3.1 presents the attack method RL-S2V, which learns a generalizable attack policy over the graph structure. We also propose other attack methods with different levels of access to the target classifier in Sec 3.2. We experimentally show the vulnerability of GNN models in Sec 4, and also present a way of defending against such attacks.


       Only a small excerpt of the paper is reproduced above; please click below to download the full text for research and study.
       This is the fifth paper in the series of ICML papers contributed by researchers at Ant Financial's AI department.

      Full-text PDF download link for "Adversarial Attack on Graph Structured Data":
       http://www.lunwensci.com/uploadfile/2018/0716/20180716023922488.pdf

        For SCI paper writing and publication, or for SCI paper editing, polishing, and submission services, follow SCI论文网!

This article is from SCI论文网; please credit the source when reposting: https://www.lunwensci.com/jisuanjilunwen/162.html

