User-enhanced semantic concept detection via active context-based concept fusion
Technology #M06-087
This technology is a method for detecting high-level features (referred to as concepts) in images and videos. Predictions from individual concept detectors are combined with user annotations that correctly identify other concepts to train a context-based Support Vector Machine (SVM) classifier. The technology then actively selects concepts for the user to annotate that are sufficiently related to the other concepts under consideration. The resulting detection accuracy can exceed what is possible with either no annotations or annotation of randomly selected concepts.
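The fusion step described above can be illustrated with a minimal sketch. The data here is synthetic, and a linear least-squares model stands in for the SVM classifier the technology actually uses; the concept names ("outdoor", "sky", "building") and the noise levels are illustrative assumptions, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Synthetic ground truth for 3 concepts: "outdoor" (target), "sky", "building".
# "sky" co-occurs strongly with "outdoor"; "building" is unrelated.
outdoor = rng.integers(0, 2, n)
sky = np.where(rng.random(n) < 0.9, outdoor, 1 - outdoor)
building = rng.integers(0, 2, n)
labels = np.stack([outdoor, sky, building], axis=1).astype(float)

# Noisy base detector scores in [0, 1] (weak individual detectors).
scores = np.clip(labels + rng.normal(0, 0.8, labels.shape), 0, 1)

# Suppose the user has annotated "sky": replace its noisy score with truth.
fused_input = scores.copy()
fused_input[:, 1] = labels[:, 1]

# Context-based fusion: fit a linear model (stand-in for the SVM) that
# predicts the target concept from the full concept-score vector.
X = np.hstack([fused_input, np.ones((n, 1))])  # add bias column
w, *_ = np.linalg.lstsq(X, labels[:, 0], rcond=None)
fused_pred = (X @ w) > 0.5

base_pred = scores[:, 0] > 0.5
base_acc = (base_pred == labels[:, 0]).mean()
fused_acc = (fused_pred == labels[:, 0]).mean()
print(f"base detector accuracy: {base_acc:.2f}")
print(f"context-fused accuracy: {fused_acc:.2f}")
```

Because the annotated "sky" concept is strongly correlated with the target, the fused model outperforms the noisy base detector on its own.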
Active selection of concepts enables detection performance superior to that obtained by passive schemes utilizing user annotation.
Context-Based Concept Fusion (CBCF) methods aim to enhance concept detection by exploiting the statistical interdependence of the different concepts present in visual content. Incorporating human input to identify specific concepts during detection can increase accuracy, but passive approaches provide no means of determining which concepts are most valuable to annotate. By contrast, this technology actively selects the concepts whose annotation by a user is predicted to yield the greatest gain in detection accuracy.
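As a rough illustration of active concept selection, one can rank candidate concepts by how informative their annotations would be about a target concept. The mutual-information criterion below is an illustrative stand-in for the patent's actual selection rule, and the concept names and correlation strengths are invented for the example.

```python
import numpy as np

def mutual_information(a, b):
    """Mutual information (in bits) between two binary label vectors."""
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = np.mean((a == x) & (b == y))
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(1)
n = 1000
target = rng.integers(0, 2, n)  # target concept, e.g. "outdoor"

# Candidate concepts the user could be asked to annotate:
# "sky" strongly related, "road" weakly related, "noise" unrelated.
candidates = {
    "sky": np.where(rng.random(n) < 0.9, target, 1 - target),
    "road": np.where(rng.random(n) < 0.65, target, 1 - target),
    "noise": rng.integers(0, 2, n),
}

# Active selection: rank candidates by estimated mutual information with
# the target, then request annotation of the most informative concept.
ranked = sorted(candidates,
                key=lambda c: mutual_information(target, candidates[c]),
                reverse=True)
print("annotation priority:", ranked)
```

A passive scheme would pick annotation targets at random; ranking by informativeness is what lets the active scheme concentrate user effort where it improves detection most.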
The technology’s effectiveness relative to other methods has been confirmed on the TRECVID 2005 video test set.
- User-aided indexing of large image or video collections.
- Classification of images or videos based upon specific high-level features.
- Accurate categorization of visual content in consumer photo/video management systems.
- More effective use of user input than obtained in passive concept fusion systems.
- Improved semantic concept detection accuracy.
Patent Issued (US 7,720,851)
Tech Ventures Reference: IR M06-087