Thursday, September 12, 2019
15:30 - 17:00

Alladi Ramakrishnan Hall

Brain Inspired Automated Concept and Object Learning: Vision, Text, and Beyond

Vwani Roychowdhury

University of California, Los Angeles

Brains are endowed with innate models that learn effective informational and reasoning prototypes of the objects and concepts in the world around us. A distinctive hallmark of the brain, for example, is its ability to automatically discover and model objects, at multiple scales of resolution, from repeated exposures to unlabeled contextual data, and then to robustly detect the learned objects under non-ideal circumstances such as partial occlusion and varying view angles. Replicating such capabilities in a machine requires three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data.

The first part of the talk will focus on our recent work, which leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes: brain-inspired, flexible, scale- and shift-invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) composed of parts, their different configurations and views, and their spatial relationships. We apply the framework to several datasets and show that our approach is computationally scalable and constructs accurate, operational part-aware object models far more efficiently than many methods in the recent computer vision literature. We also present efficient algorithms for detecting and localizing objects, and partial views of them, in new scenes.

The second part of the talk will focus on processing large-scale textual data, where our algorithms build semantic, concept-level maps from unstructured data sets. Finally, we will conclude with the outline of a general framework for contextual unsupervised learning that can remove many of the scalability and robustness limitations of existing supervised frameworks, which require large labeled training sets and mostly act as impressive memorization engines.
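
To make the notion of a part-aware object prototype concrete, the following is a minimal, hypothetical sketch in Python. It is not the framework presented in the talk: the ObjectPrototype and PartRelation structures, the expected pairwise offsets, and the exponential scoring rule are illustrative assumptions only. The sketch shows one way an object can be represented as a set of parts plus pairwise spatial relations, and how scoring can tolerate a missing (occluded) part.

# Hypothetical sketch of a part-aware object prototype (for illustration only,
# not the speaker's framework): an object is a set of named parts plus pairwise
# spatial relations, and a candidate set of detected part locations is scored
# while tolerating missing (occluded) parts.

from dataclasses import dataclass
import math

@dataclass
class PartRelation:
    part_a: str
    part_b: str
    expected_offset: tuple  # expected (dx, dy) of part_b relative to part_a
    tolerance: float        # scale of allowed deviation from the expected offset

@dataclass
class ObjectPrototype:
    name: str
    parts: list      # part names
    relations: list  # list of PartRelation

    def score(self, detections: dict) -> float:
        """Score detected part locations {part_name: (x, y)} against the prototype.
        Relations involving undetected (e.g., occluded) parts are simply skipped."""
        total, used = 0.0, 0
        for rel in self.relations:
            if rel.part_a in detections and rel.part_b in detections:
                ax, ay = detections[rel.part_a]
                bx, by = detections[rel.part_b]
                ex, ey = rel.expected_offset
                deviation = math.hypot((bx - ax) - ex, (by - ay) - ey)
                total += math.exp(-deviation / rel.tolerance)
                used += 1
        return total / used if used else 0.0

# Toy example: a "motorcycle" prototype with two wheels and a handlebar.
motorcycle = ObjectPrototype(
    name="motorcycle",
    parts=["front_wheel", "rear_wheel", "handlebar"],
    relations=[
        PartRelation("rear_wheel", "front_wheel", (100, 0), 20.0),
        PartRelation("front_wheel", "handlebar", (-10, -60), 20.0),
    ],
)

# A partially occluded view: the handlebar was not detected.
observed = {"front_wheel": (305, 200), "rear_wheel": (198, 204)}
print(motorcycle.score(observed))  # nonzero score from the wheel-to-wheel relation alone

In this toy setup, detection in a new scene would amount to proposing part locations and keeping configurations whose score exceeds a threshold; the talk's framework additionally learns the parts, views, and spatial relations themselves from unlabeled data.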


