This fundamental research project investigates semantic visual navigation tasks, such as asking a household robot to “go find my keys”. We seek to improve the efficiency of repeated search tasks within the same environment by explicitly building, maintaining, and exploiting a map of locations the robot has previously explored. We also seek to exploit prior location-to-location, object-within-location, and object-to-object relationships learned from similar environments (e.g. within a common cultural region) to improve semantic visual navigation in unseen environments.
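To make the map-reuse idea concrete, the following is a minimal illustrative sketch (not the project's actual system) of a semantic map that remembers where objects were last seen and falls back on object-within-location co-occurrence counts when the target has never been observed in this environment. All names (`SemanticMap`, `observe`, `search_order`) are hypothetical.

```python
from collections import defaultdict


class SemanticMap:
    """Toy semantic map for repeated object search (illustrative only).

    Stores each object's last-known location, plus per-location
    observation counts that serve as a crude prior for ranking
    candidate locations when the target object is unknown.
    """

    def __init__(self):
        self.last_seen = {}  # object label -> location label
        # location label -> {object label -> observation count}
        self.location_counts = defaultdict(lambda: defaultdict(int))

    def observe(self, obj, location):
        """Record an observation of `obj` at `location`."""
        self.last_seen[obj] = location
        self.location_counts[location][obj] += 1

    def search_order(self, obj):
        """Return candidate locations to visit, best first.

        A previously seen object's last-known location comes first;
        remaining locations are ranked by how many distinct objects
        were observed there (a simple object-within-location prior).
        """
        ranked = sorted(self.location_counts,
                        key=lambda loc: len(self.location_counts[loc]),
                        reverse=True)
        if obj in self.last_seen:
            best = self.last_seen[obj]
            ranked = [best] + [loc for loc in ranked if loc != best]
        return ranked


# Example: repeated search benefits from the stored map.
m = SemanticMap()
m.observe("keys", "hallway")
m.observe("mug", "kitchen")
m.observe("plate", "kitchen")
print(m.search_order("keys"))  # hallway first: keys were seen there
print(m.search_order("fork"))  # kitchen first: most objects seen there
```

In a real system the co-occurrence prior would be learned across many environments, which is what allows the prior to transfer to an unseen home.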