When searching for videos depicting a specific sentence or event, current systems do not process the actual video; they only search the textual annotations provided by the users who uploaded it. Other state-of-the-art systems that do search video content can only detect objects and events, but they cannot distinguish events based on the roles the objects play, nor can they handle adjectives, adverbs, or prepositions.
Researchers at Purdue University have developed a sentence-tracking system that can identify video clips from a database depicting a queried sentence. The system uses video processing to recognize combinations of people and other objects whose combined motion corresponds to the sentence. Using compositional semantics, it can capture subtle distinctions that are lost in other systems, such as distinguishing two sentences that contain the same words but have different meanings. The system handles complex queries containing multiple phrases and modifiers, such as prepositional phrases and adverbs, allowing more focused and accurate search and retrieval of specific videos.
-More focused and accurate video search and retrieval
-Recognizes combinations of people and other objects corresponding to specific sentences
-Queries can contain adjectives, adverbs, and prepositions
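To illustrate the role-sensitivity described above, the sketch below shows (in hypothetical, greatly simplified form — not Purdue's actual implementation) how two queries built from the same words can map to different (agent, action, patient) role assignments, so that each query retrieves only the clip whose tracked objects play the right roles. All names (`Event`, `parse_query`, `search`) and the toy parser are assumptions for illustration.

```python
# Hypothetical sketch of role-sensitive video query matching.
# Not the actual Purdue system: real sentence tracking scores object
# tracks against a compositional semantic model of the whole sentence.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    agent: str    # object performing the action
    action: str   # recognized motion verb
    patient: str  # object acted upon

def parse_query(sentence: str) -> Event:
    """Toy subject-verb-object parser; assumes 'The X VERBED the Y.'"""
    words = sentence.lower().replace(".", "").split()
    # words: ['the', subject, verb, 'the', object]
    return Event(agent=words[1], action=words[2], patient=words[4])

def search(query: str, video_index: dict) -> list:
    """Return ids of clips containing an event matching the query's roles."""
    target = parse_query(query)
    return [vid for vid, events in video_index.items() if target in events]

# Two clips containing the same objects, but with the roles swapped.
index = {
    "clip1": [Event("person", "approached", "horse")],
    "clip2": [Event("horse", "approached", "person")],
}

print(search("The person approached the horse.", index))  # ['clip1']
print(search("The horse approached the person.", index))  # ['clip2']
```

A keyword-based system would return both clips for either query; distinguishing them requires exactly the role information the compositional approach preserves.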
Apr 4, 2016
Dec 6, 2013
Nov 10, 2015
Jun 15, 2013
Purdue Office of Technology Commercialization
The Convergence Center
101 Foundry Drive, Suite 2500
West Lafayette, IN 47906
Phone: (765) 588-3475
Fax: (765) 463-3486