Deep neural networks can predict well even when fitting noisy data, a phenomenon known as benign overfitting. In this seminar, we analyze overparametrized models under adversarial perturbation and show that fitting the noise makes the models sensitive to adversarial perturbations. In contrast to the natural risk, where the noise cancels out across dimensions, under an adversarial attack a small perturbation of each feature accumulates into a significant change of the output. We also study adversarial training in these overparametrized models, showing that while it increases the robustness of the model, it yields parameters distinct from the oracle and degrades performance on natural data.
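The contrast drawn in the abstract, noise canceling in the natural risk but accumulating under an adversarial attack, can be seen in a minimal sketch (not from the talk): a minimum-norm linear interpolator of noisy labels in a regime with far more features than samples. The sparse oracle, the noise level, and the l_inf budget eps below are illustrative assumptions; for a linear model, the worst-case l_inf perturbation shifts the output by eps times the l1-norm of the learned weights, which grows with the number of noise-fitting coordinates.

```python
# Sketch: benign overfitting vs. adversarial sensitivity for an
# overparametrized linear model (illustrative assumptions, not the talk's setup).
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100, 2000, 0.05               # n samples, d >> n features, l_inf budget

theta_star = np.zeros(d)
theta_star[:5] = 1.0                       # sparse "oracle" uses only 5 features
X = rng.standard_normal((n, d))
y = X @ theta_star + 0.5 * rng.standard_normal(n)   # noisy training labels

# Minimum-l2-norm interpolator (ridgeless least squares): fits the noise exactly.
theta_hat = np.linalg.pinv(X) @ y

# Fresh test data from the same distribution (noiseless targets for evaluation).
X_te = rng.standard_normal((1000, d))
y_te = X_te @ theta_star
pred = X_te @ theta_hat
natural_mse = np.mean((pred - y_te) ** 2)

# Worst-case l_inf attack on a linear model: each feature moves by at most eps,
# so the output can shift by eps * ||theta_hat||_1, pushed to increase the error.
shift = eps * np.sum(np.abs(theta_hat))
adv_pred = pred + np.sign(pred - y_te) * shift
adv_mse = np.mean((adv_pred - y_te) ** 2)

print(f"natural MSE     : {natural_mse:.3f}")
print(f"adversarial MSE : {adv_mse:.3f}  (output shift eps*||theta||_1 = {shift:.2f})")
```

In this sketch the many small noise-fitting weights barely affect the natural test error, yet their absolute values sum in the l1-norm, so even a tiny per-feature budget produces a large adversarial error.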

6 May
10am - 11am
Venue
https://hkust.zoom.us/j/92129409608 (Passcode: 568117)
Speaker/Performer
Mr. Zhichao HUANG
Organizer
Department of Mathematics
Contact
Payment Details
Audience
Alumni, Faculty and staff, PG students, UG students
Language
English