Unleashing Video Search
Google Tech Talks
October 25, 2007
Video is rapidly becoming a regular part of our digital lives. Its tremendous growth is raising expectations that video will be as easy to search as text. Unfortunately, it is still difficult to find relevant video content, and today's solutions are not keeping pace with problems ranging from video search to content classification to automatic filtering. In this talk we describe recent techniques that leverage the computer's ability to analyze the visual features of video and apply statistical machine learning to classify video scenes automatically. We examine related efforts to model large video semantic spaces and review public evaluations such as TRECVID, which are greatly facilitating research and development in video retrieval. Finally, we discuss the role of MPEG-7 as a way to store the metadata generated for video in a fully standards-based, searchable representation. Overall, we show how these approaches together go a long way toward truly unleashing video search.
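To make the "extract visual features, then classify scenes with statistical machine learning" idea concrete, here is a minimal illustrative sketch (not the speaker's actual system): it quantizes frame pixel intensities into a normalized histogram as a crude visual feature, learns a per-class centroid from labeled examples, and assigns new frames to the nearest centroid. All data, function names, and parameters below are hypothetical toy examples.

```python
# Illustrative sketch only: scene classification via color-intensity
# histograms and a nearest-centroid classifier (all names/data hypothetical).
from collections import defaultdict

def histogram(frame, bins=4):
    """Quantize pixel intensities (0-255) into a normalized histogram."""
    counts = [0] * bins
    for p in frame:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def train(labeled_frames):
    """Average the histograms of each class into a per-class centroid."""
    sums, counts = {}, defaultdict(int)
    for label, frame in labeled_frames:
        h = histogram(frame)
        if label not in sums:
            sums[label] = h[:]
        else:
            sums[label] = [a + b for a, b in zip(sums[label], h)]
        counts[label] += 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def classify(frame, centroids):
    """Assign the class whose centroid is nearest in squared distance."""
    h = histogram(frame)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(h, centroids[lbl])))

# Toy data: "dark" indoor frames vs. "bright" outdoor frames.
training = [
    ("indoor", [10, 20, 30, 40, 15]),
    ("indoor", [5, 25, 35, 20, 10]),
    ("outdoor", [200, 220, 240, 210, 230]),
    ("outdoor", [190, 250, 245, 225, 235]),
]
centroids = train(training)
print(classify([12, 18, 28, 33, 22], centroids))       # → indoor
print(classify([205, 215, 235, 228, 240], centroids))  # → outdoor
```

A real system would use far richer features (color, texture, motion, edges) and stronger models, but the pipeline shape — feature extraction followed by a learned classifier — is the same.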
Speaker: John R. Smith, IBM Research