Non-convergence Analysis of Randomized Direct Search

2025.05.19

Submitted by: Gong Huiying    Department: School of Science

Event Information

Title: Non-convergence Analysis of Randomized Direct Search

Speaker: Prof. Zaikun Zhang (Sun Yat-sen University)

Time: May 17, 2025 (Saturday), 14:00

Place: GJ303, Main Campus

Inviter: Prof. Zi Xu

Organizer: Department of Mathematics, School of Science

Abstract: Direct search is a popular method in derivative-free optimization. Randomized direct search has attracted increasing attention in recent years due to both its practical success and theoretical appeal. It has been proved to converge, under certain conditions, at the same global rate as its deterministic counterpart, while its cost per iteration is much lower, leading to significant advantages in practice. However, a fundamental question has lacked a systematic theoretical investigation: when will randomized direct search fail to converge? We answer this question by establishing a non-convergence theory of randomized direct search. We prove that randomized direct search fails to converge if the searching set is probabilistically ascent. Our theory not only deepens our understanding of the behavior of the algorithm, but also clarifies the limits of reducing the cost per iteration by randomization, and hence provides guidance for practical implementations of randomized direct search.
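To make the abstract's setting concrete, here is a minimal sketch of a generic randomized direct-search loop of the kind the talk concerns: at each iteration the method polls the objective along a few randomly sampled directions (the randomized "searching set"), accepts a trial point that achieves sufficient decrease, and otherwise shrinks the step size. All names, parameter values, and the quadratic forcing function below are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def randomized_direct_search(f, x0, alpha0=1.0, m=2, max_iter=200,
                             gamma=2.0, theta=0.5, seed=0):
    """Generic randomized direct search (illustrative sketch).

    At each iteration, poll f along m random unit directions; accept a
    trial point achieving sufficient decrease (measured by the forcing
    function rho(alpha) = alpha**2), otherwise shrink the step size.
    """
    rng = np.random.default_rng(seed)
    x, alpha = np.asarray(x0, dtype=float), alpha0
    for _ in range(max_iter):
        # Randomized searching set: m directions drawn uniformly on the sphere.
        dirs = rng.standard_normal((m, x.size))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        success = False
        for d in dirs:
            trial = x + alpha * d
            if f(trial) < f(x) - alpha**2:  # sufficient decrease
                x, success = trial, True
                break
        # Expand the step on success, contract it on failure.
        alpha = gamma * alpha if success else theta * alpha
    return x

x_min = randomized_direct_search(lambda x: np.sum(x**2), [2.0, -1.5])
```

When the random directions are "probabilistically descent" (with positive probability, some polled direction makes an acute angle with the negative gradient), loops of this form converge at the known global rates; the talk's result concerns the opposite regime, where the searching set is probabilistically ascent and convergence fails.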

This is a joint work with Cunxin Huang, a Ph.D. student funded by the Hong Kong Ph.D. Fellowship Scheme.
