Understanding Scenes using Vision and Range Sensing
The Curious George project aims to construct a spatial-semantic modeling system featuring automated learning of object appearance and object-place relations from online annotated databases, and the application of these relations to a variety of real-world tasks. The physical system currently developed at UBC, a visually guided mobile robot, can recognize objects in an environment based on imagery collected from the World Wide Web, as demonstrated in an international contest known as the Semantic Robot Vision Challenge. The UBC team won that contest both at AAAI 2007 in Vancouver and at CVPR 2008 in Anchorage, and won the software division in 2009.
Recently, we have begun work on labeling novel scenes with place information using object-place relationships automatically extracted from the web. A summer student working in our lab would help us integrate our existing technologies to give our robotic system the ability to perform state-of-the-art object recognition and create semantic place maps of realistic environments. Ultimately, the recognition and place mapping system on our robots will form the basis for home robots and assistive robots for home care. The project offers a wide range of opportunities, from mining the Web for images and semantic information about scenes to working with mobile robots that explore areas in search of objects.
The student will work closely with senior graduate students to develop and maintain subsystems that enable the robot to understand range images. New developments in range sensing, such as the Microsoft range interface (due in November), promise to revolutionize sensing.
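As a rough illustration of the kind of range-image processing involved, the sketch below back-projects a depth image into a 3D point cloud using the standard pinhole camera model, a common first step before higher-level scene understanding. The function name and camera parameters here are hypothetical, not part of the project's actual codebase.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (in meters) to an (N, 3) array of 3D points.

    fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
    Pixels with zero or negative depth (no sensor reading) are dropped.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u varies along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy example: a flat 4x4 depth image, every pixel 2 m from the camera.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
```

In practice the resulting cloud would be fed to downstream modules (plane fitting, segmentation, object detection) rather than used directly.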