Speaker: Shen Chaomin, School of Computer Science and Technology, East China Normal University
Time: Monday, November 9, 10:00-12:00
Venue: Room 402, School of Computer Science, East Area, Main Campus
Host: Associate Professor Han Yuexing
Abstract:
We propose a scheme, named SEAL (Suppressing Eigenvalue in Adversarial Learning), for defending against adversarial attacks by suppressing the largest eigenvalue of the Fisher information matrix (FIM). SEAL is based on the following observation: the adversarial phenomenon may occur when the FIM, which connects the input and the output of a neural network, has large eigenvalue(s). This observation makes adversarial defense possible by controlling the eigenvalues of the FIM. Our solution is to add a regularization term to the loss function of the original network. The term represents either the maximum eigenvalue of the FIM or its trace; since the FIM is positive semi-definite, all of its eigenvalues are bounded by the trace. SEAL does not require any modification of the network structure. The adversarial robustness of SEAL is verified by experiments with a variety of standard attack methods on typical deep neural networks, e.g., LeNet, VGG, and ResNet, using the MNIST, CIFAR10, and German Traffic Sign Recognition Benchmark (GTSRB) datasets. SEAL significantly decreases the fooling ratio of generated adversarial examples while maintaining the classification accuracy of the original network.
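To make the trace-based regularizer concrete, the following is a minimal PyTorch sketch, not the speaker's implementation. It assumes the FIM is taken with respect to the input (following the abstract's description of the FIM as a connection between input and output); the function names fim_trace and seal_loss and the weight lam are hypothetical.

```python
import torch
import torch.nn.functional as F

def fim_trace(model, x):
    # Trace of the FIM of p(y|x) with respect to the input x:
    #   tr(G) = sum_y p(y|x) * || grad_x log p(y|x) ||^2
    x = x.clone().requires_grad_(True)
    log_p = F.log_softmax(model(x), dim=-1)   # log p(y|x), shape (B, C)
    p = log_p.exp().detach()                  # class probabilities, no grad
    trace = x.new_zeros(())
    for c in range(log_p.shape[-1]):
        # Per-class input gradient, kept in the graph so the
        # regularizer itself is differentiable w.r.t. the weights.
        (g,) = torch.autograd.grad(log_p[:, c].sum(), x, create_graph=True)
        trace = trace + (p[:, c] * g.flatten(1).pow(2).sum(dim=1)).sum()
    return trace / x.shape[0]

def seal_loss(model, x, y, lam=0.01):
    # Cross-entropy plus lam * tr(FIM); lam is a hypothetical weight.
    return F.cross_entropy(model(x), y) + lam * fim_trace(model, x)
```

During training one would minimize seal_loss in place of the plain cross-entropy; consistent with the abstract, this changes only the loss, not the network architecture.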
Speaker Bio:
Shen Chaomin is an Associate Professor at the School of Computer Science and Technology, East China Normal University. His research focuses on the theory and applications of artificial intelligence in image processing, including adversarial defense in deep learning, orthopedic surgical navigation, and fast MRI reconstruction. He is the principal investigator of a General Program project of the National Natural Science Foundation of China (NSFC) and of several industry-funded projects, and has served as a key member on a 973 Program project and an NSFC Key Program project. He has published more than 40 papers in leading international journals and conferences, including several in CCF-A venues and SCI Q1 journals. He serves as Secretary-General of the Union of Mathematical Imaging (UMI) and as a member of the Traffic Flow and Data Science Committee of the Shanghai Society of Mechanics. As an advisor, he led students to a second prize in the liver segmentation challenge at the Third International Symposium on Image Computing and Digital Medicine (ISICDM 2019).