Title: Uniform Optimality for Convex and Nonconvex Optimization
Speaker: Prof. Guanghui Lan (Georgia Institute of Technology, USA)
Time: Friday, January 5, 2024, 15:00
Place: Room F309, Main Campus
Inviter: Prof. Zi Xu
Host: Department of Mathematics, College of Sciences
Abstract: The past few years have witnessed growing interest in the development of easily implementable, parameter-free first-order methods to facilitate their applications, e.g., in data science and machine learning. In this talk, I will discuss some recent progress we have made on uniformly optimal methods for convex and nonconvex optimization. By uniform optimality, we mean that these algorithms do not require the input of any problem parameters yet still achieve the best possible iteration complexity bounds for solving different classes of optimization problems. We first consider convex optimization problems under different smoothness levels and show that neither such smoothness information nor line search procedures are needed to achieve uniform optimality. We then consider regularity conditions (e.g., strong convexity and lower curvature) that are imposed over a global scope and are thus notoriously more difficult to estimate. By presenting novel methods that achieve tight complexity bounds for computing solutions with verifiably small (projected) gradients, we show that such regularity information is in fact superfluous for handling strongly convex and nonconvex problems. It is worth noting that our complexity bound for nonconvex problems also appears to be new in the literature.
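Background: as a rough guide to what "best possible iteration complexity bounds" refers to, the classical optimal first-order bounds for the standard problem classes are sketched below in LaTeX. This is a summary of well-known results under textbook assumptions (an L-smooth objective f, minimizer x_*, initial gap Delta_0 = f(x_0) - f(x_*)), not a statement of the speaker's specific bounds, which may cover broader problem classes.

% Classical optimal first-order complexity bounds (standard background,
% not the speaker's results). Assumptions: f is L-smooth, x_* minimizes f,
% \Delta_0 = f(x_0) - f(x_*), target accuracy \epsilon.
\begin{align*}
  &\text{smooth convex, } f(x_N)-f(x_*)\le\epsilon:
      && N = O\!\Big(\sqrt{\tfrac{L\,\|x_0-x_*\|^2}{\epsilon}}\Big),\\
  &\text{smooth strongly convex with modulus } \mu:
      && N = O\!\Big(\sqrt{\tfrac{L}{\mu}}\,\log\tfrac{1}{\epsilon}\Big),\\
  &\text{smooth nonconvex, } \|\nabla f(x_N)\|\le\epsilon:
      && N = O\!\Big(\tfrac{L\,\Delta_0}{\epsilon^{2}}\Big).
\end{align*}

A uniformly optimal method, in the sense described in the abstract, matches such bounds simultaneously without being given L, mu, or Delta_0 as inputs.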