Big Visual Data for Dynamic Physical World Monitoring

2014.12.22

Ͷ¸å£º¶¡ÇಿÃÅ£º¿Æ¼¼·¢Õ¹×êÑÐÔºä¯ÀÀ´ÎÊý£º

Event Information

Time: 10:00, December 26, 2014

Location: Room J101, Main Campus

Ñݽ²±êÌ⣺Big Visual Data for Dynamic Physical World Monitoring


 

Speaker: Prof. Jenq-Neng Hwang
 

About the Speaker:

Prof. Jenq-Neng Hwang has been appointed an "Overseas Distinguished Professor" of the host university for three consecutive years. He received his B.S. and M.S. degrees in electrical engineering from National Taiwan University in 1981 and 1983, respectively, and then pursued graduate study in the United States, earning his Ph.D. from the University of Southern California. After graduation he joined the faculty of the University of Washington, Seattle, where he was promoted to full professor in 1999. He is currently a tenured professor in the Department of Electrical Engineering there and the Associate Chair for Research.

Prof. Hwang is a renowned scientist of considerable international standing in his field. He has published more than 300 papers and books in multimedia signal processing, multimedia system integration, and networking. He has also chaired numerous academic committees, served as an editor of several prominent electrical engineering journals, and has been an IEEE Fellow since 2001.

Prof. Hwang has received numerous talent and academic awards and has extensive international collaboration experience. He has maintained a long-standing collaboration and exchange with the host university and is widely praised by its faculty and students.

 

Abstract:

With the huge number of networked video cameras installed everywhere nowadays, there is an urgent need for embedded intelligence that can automatically track video objects and understand events. In this talk, I will present an automatic system that dynamically tracks human objects and creates their 3-D visualizations from big visual data collected from either a single camera or an array of static/moving cameras. These cameras are continuously calibrated against one another in a fully unsupervised manner, so that tracking across multiple cameras can be effectively integrated and reconstructed via 3-D open map services such as Google Earth or Microsoft Virtual Earth.
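The last step the abstract describes, placing tracked people onto an open 3-D map service, is commonly done by projecting each camera's image-plane detections onto the ground plane with a per-camera homography obtained from calibration. A minimal sketch of that projection step (the matrix `H` and all coordinates below are made-up illustrative values, not from the talk):

```python
import numpy as np

# Hypothetical 3x3 ground-plane homography for one camera, as would be
# produced by the unsupervised cross-camera calibration described in the
# talk; it maps image pixels (u, v) to map-plane coordinates (x, y).
H = np.array([[0.02, 0.00, -5.0],
              [0.00, 0.03, -8.0],
              [0.00, 0.00,  1.0]])

def image_to_map(H, u, v):
    """Project an image-plane point to ground-plane map coordinates."""
    p = H @ np.array([u, v, 1.0])   # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]  # normalize by the third component

# A detection's foot point in a 1280x720 frame, mapped to the ground plane.
x, y = image_to_map(H, 640.0, 360.0)
```

Detections mapped this way from all cameras share one ground-plane frame, which is what lets the per-camera tracks be fused and overlaid on a service like Google Earth.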

 

Location: Room J101, Main Campus

Time: 10:00-11:00, December 26, 2014 (Friday)

 

Hosted by: Research Institute of Smart Cities and the Office of International Affairs
