Joint Inference of Objects and Scenes with Efficient Learning of Text-Object-Scene Relations


Botao Wang, Dahua Lin, Hongkai Xiong, Senior Member, IEEE, and Yuan F. Zheng, Fellow, IEEE


Abstract: The rapid growth of web images poses new challenges, as well as opportunities, for image understanding. Conventional approaches rely heavily on fine-grained annotations, such as bounding boxes and semantic segmentations, which are not available at web scale. Images on the Internet, however, are generally accompanied by descriptive texts that are relevant to their contents. To bridge the gap between textual and visual analysis for image understanding, this paper presents an algorithm that learns the relations among scenes, objects, and texts using only image-level annotations. In particular, the relation between texts and objects is modeled as the matching probability between nouns and object classes, which is estimated by solving a constrained bipartite matching problem. The relations between scenes and objects/texts, in turn, are modeled as conditional distributions of their co-occurrence. Built upon the learned cross-domain relations, an integrated model brings together scenes, objects, and texts for joint image understanding, including scene classification, object classification and localization, and the prediction of object cardinalities. The proposed cross-domain learning algorithm and the integrated model improve image understanding for web images accompanied by textual descriptions. Experimental results show that the proposed algorithm significantly outperforms conventional methods on a variety of computer vision tasks.
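The text-object relation above amounts to assigning nouns to object classes so that the total matching probability is maximized, i.e., a bipartite matching. A minimal illustrative sketch of that idea (not the paper's implementation; the score matrix and the noun labels are made up) using brute-force search over assignments:

```python
from itertools import permutations

def best_matching(score):
    """Return the noun-to-object-class assignment that maximizes the
    total matching score, by brute force over all permutations.

    score[i][j] is an (assumed) matching probability between noun i
    and object class j; rows = nouns, columns = classes (square matrix).
    """
    n = len(score)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

# Toy example: 3 nouns vs. 3 object classes (made-up probabilities).
score = [
    [0.9, 0.1, 0.0],   # "dog"
    [0.2, 0.7, 0.1],   # "car"
    [0.1, 0.2, 0.8],   # "tree"
]
assignment, total = best_matching(score)
print(assignment)  # (0, 1, 2): each noun matched to its best class
```

Brute force is exponential in the number of nouns; for realistic vocabularies the same objective is solved in polynomial time by assignment algorithms such as the Hungarian method (e.g., `scipy.optimize.linear_sum_assignment`).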





Citation: Botao Wang, Dahua Lin, Hongkai Xiong, and Yuan F. Zheng, "Joint Inference of Objects and Scenes with Efficient Learning of Text-Object-Scene Relations", IEEE Transactions on Multimedia (TMM), vol. 18, no. 3, pp. 507-520, March 2016.

[PDF] [Dataset] [BibTeX] [IEEE Xplore]

Institute of Media, Information, and Network (MIN Lab)