ÏÕЩ×îÓŵÄVCά¶ÈºÍαά¶È½çÏÞÓÃÓÚÉî¶ÈÉñ¾­ÍøÂçµ¼Êý

2023.10.16

Submitted by: Gong Huiying    Department: School of Science

Event Information

Title: Nearly Optimal VC-Dimension and Pseudo-Dimension Bounds for Deep Neural Network Derivatives

Speaker: Dr. Yahong Yang (Pennsylvania State University)

Time: Thursday, October 19, 2023, 9:00

Place: Tencent Meeting (ID: 696406234)

Inviter: Qin Xiaoxue

Organizer: Department of Mathematics, School of Science

Abstract: This paper addresses the problem of nearly optimal Vapnik–Chervonenkis dimension (VC-dimension) and pseudo-dimension estimations of the derivative functions of deep neural networks (DNNs). Two important applications of these estimations are: 1) establishing a nearly tight approximation result for DNNs in the Sobolev space; 2) characterizing the generalization error of machine learning methods whose loss functions involve function derivatives. This theoretical investigation fills the gap in learning error estimation for a wide range of physics-informed machine learning models and applications, including generative models, solving partial differential equations, operator learning, network compression, distillation, and regularization.
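To make the second application concrete, the sketch below shows what a loss function involving a network derivative typically looks like in practice: a toy physics-informed objective in PyTorch where the derivative of the network output enters the training loss via automatic differentiation. This is an illustrative assumption of mine, not material from the talk; the network architecture, the toy equation u'(x) = cos(pi x) with u(0) = 0, and all variable names are hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed, not from the talk): a physics-informed style loss
# whose value depends on the derivative of the network output. The toy
# problem u'(x) = cos(pi*x) on (0, 1) with u(0) = 0 is purely illustrative.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

x = torch.rand(128, 1, requires_grad=True)   # collocation points in (0, 1)
u = net(x)                                   # network approximation u_theta(x)

# du/dx by automatic differentiation; create_graph=True keeps the graph so
# the derivative term itself can be backpropagated through during training.
du_dx = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                            create_graph=True)[0]

residual = du_dx - torch.cos(torch.pi * x)   # equation residual at collocation points
boundary = net(torch.zeros(1, 1))            # enforce u(0) = 0

loss = residual.pow(2).mean() + boundary.pow(2).mean()
loss.backward()                              # gradients w.r.t. network parameters
```

As described in the abstract, the generalization analysis in the talk concerns losses of exactly this kind, where the hypothesis class effectively consists of network derivatives rather than the networks themselves.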
