




Attack and Defense
Hung-yi Lee

Motivation

We want to deploy machine learning classifiers not only in the lab but in the real world. Classifiers that are robust to random noise and work "most of the time" are not sufficient; we want classifiers that are robust to inputs built deliberately to fool them. This is especially important for spam classification, malware detection, network intrusion detection, and similar applications. Being accurate on benign inputs is not enough; the classifier must withstand malice from humans.

Attack: What Do We Want to Do?

Given an original image that the network classifies as Tiger Cat with confidence 0.64, find an attacked image, visually close to the original, that the network classifies as something else.

Loss Function for Attack

For training, the network parameters θ are the variable and the loss is L_train(θ) = C(y^0, y_true), the cross-entropy between the prediction y^0 = f_θ(x^0) and the true label. For an attack, θ is fixed and the input x' is the variable:

- Non-targeted attack: L(x') = -C(y', y_true). Push the prediction y' = f_θ(x') far from the true class.
- Targeted attack: L(x') = -C(y', y_true) + C(y', y_false). Push y' far from the true class (e.g. cat) and close to a chosen false class (e.g. fish).
- Constraint: d(x^0, x') <= ε, so that the attack is not discovered: x' must stay close to the original input x^0.

Constraint

With Δx = x' - x^0, two common choices of distance are:
- L2-norm: d(x^0, x') = ||Δx||_2
- L-infinity: d(x^0, x') = ||Δx||_∞ = max_i |Δx_i|

Changing one pixel a lot and changing every pixel a little can give the same L2 distance, but the former has a large L-infinity distance and the latter a small one. For images, L-infinity usually matches human perception better.

How to Attack: Gradient Descent (Modified Version)

Just like training a neural network, except the network parameters are fixed and the input is optimized:

Start from the original image: x^0
For t = 1 to T:
    x^t ← x^(t-1) - η ∇L(x^(t-1))
    If d(x^0, x^t) > ε: x^t ← fix(x^t)

def fix(x^t): among all x fulfilling d(x^0, x) <= ε, return the one closest to x^t. For the L2-norm constraint this projects x^t back onto the ε-ball around x^0; for L-infinity it clips each dimension into the interval [x^0 - ε, x^0 + ε].

Example

Attacking ResNet-50 with true class "tiger cat" and target false class "star fish": the original image is classified as Tiger Cat (0.64), while the attacked image, visually indistinguishable from it, is classified as Star Fish (1.00). The pixel difference between the two images only becomes visible when magnified 50 times. Another attacked image is classified as Keyboard (0.98).

What happened? Moving x^0 in a random direction barely changes the prediction: the region classified as "tiger cat" is wide, with related classes such as tabby cat and Persian cat nearby. But along certain specific directions that region is extremely narrow, and a tiny step crosses into a region classified as something unrelated, such as fire screen or keyboard.

Attack Approaches

To list only a few: FGSM, Basic Iterative Method, L-BFGS, Deepfool, JSMA, C&W, Elastic Net Attack, Spatially Transformed attack, One Pixel Attack. They differ mainly in their optimization methods and their constraints.

Fast Gradient Sign Method (FGSM)

FGSM takes a single step: x' = x^0 - ε Δx, where Δx = sign(∇L(x^0)), so every component of Δx is +1 or -1. It can be viewed as the L-infinity attack above with a very large learning rate: one step overshoots the ε-box, and the fix operation clips the result onto a corner of the box.

White Box v.s. Black Box

The attacks above fix the network parameters and optimize x', so the attacker needs to know those parameters. This is called a White Box Attack. Are we safe if we do not release the model? After all, you cannot obtain model parameters from most online APIs. No, because Black Box Attack is possible.

Black Box Attack

If you have the training data of the target (black) network, train a proxy network yourself on the same data; otherwise, query the target network and use the collected input-output pairs to train the proxy. Then use the proxy network to generate attacked objects; adversarial examples crafted on the proxy often fool the black network as well.

Universal Adversarial Attack

A single perturbation can fool the network on most inputs, and this attack is also possible in the black-box setting!

Adversarial Reprogramming

An attacker can even repurpose a deployed network to perform a task it was never trained for: Gamaleldin F. Elsayed, Ian Goodfellow, Jascha Sohl-Dickstein, "Adversarial Reprogramming of Neural Networks", ICLR, 2019.

Attack in the Real World

Black-box attacks also work on physical objects, e.g. against face recognition (rs/face-rec-ccs16.pdf). To survive the camera and printing pipeline:
1. An attacker would need to find perturbations that generalize beyond a single image.
2. Extreme differences between adjacent pixels in the perturbation are unlikely to be accurately captured by cameras.
3. It is desirable to craft perturbations that are comprised mostly of colors reproducible by the printer.
Similar physical attacks exist against road-sign recognition (s/1707.08945).

Beyond Images

You can attack audio (e.g. speech recognition), and you can attack text (e.g. reading comprehension; 07.07328.pdf).

Defense

Adversarial attack cannot be defended by weight regularization, dropout, or model ensembles. There are two types of defense:
- Passive defense: find the attacked image without modifying the model; a special case of anomaly detection.
- Proactive defense: train a model that is robust to adversarial attack.

Passive Defense

Place a filter, e.g. light smoothing, in front of the network. Smoothing barely influences the classification of a normal image but makes the attack signal less harmful: the original image drops only from tiger cat 0.64 to tiger cat 0.45 after smoothing, while the attacked image recovers from Keyboard 0.98 to tiger cat 0.37. Other passive defenses include Feature Squeezing (s/1704.01155) and randomization at the inference phase (s/1711.01991).

Proactive Defense

The spirit: find the holes, then patch them.

Given training data X = {(x^1, ŷ^1), ..., (x^N, ŷ^N)}, use X to train your model.
For t = 1 to T:
    For n = 1 to N:
        Find an adversarial input x̃^n given x^n by an attack algorithm   (find the holes)
    We obtain new training data X' = {(x̃^1, ŷ^1), ..., (x̃^N, ŷ^N)}
    Use X' to update your model   (patch the holes)

This is a form of data augmentation; the x̃^n are different in each iteration because the model keeps changing. If the adversarial inputs are found by algorithm A, this training stops algorithm A but is still vulnerable to algorithm B.

Concluding Remarks

Attack: given the network parameters, attacking is easy, and black-box attacks mean that withholding the parameters is not enough. Defense: passive and proactive approaches exist, but today defense remains harder than attack.