Gated and RGB Fusion for Robust Perception

Robust perception in all weather conditions is a critical requirement for autonomous vehicles. This project proposes fusing gated and conventional RGB camera images for robust scene encoding, depth estimation and trajectory prediction. Conventional approaches that rely on lidar and RGB cameras fail to perform robustly in rain, fog and snow. By extending existing computer vision algorithms to a gated-RGB camera pair, the fusion algorithms developed will exploit features that are robust in one sensor modality but not in the other.
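
To make the fusion idea concrete, the sketch below shows one possible form of per-pixel weighting between gated-camera and RGB feature maps, so the network can lean on whichever modality is more reliable at each location. The module name, channel count and sigmoid gate are illustrative assumptions, not the project's actual architecture.

import torch
import torch.nn as nn

class GatedRGBFusion(nn.Module):
    """Minimal sketch: fuse feature maps from a gated-camera branch and an
    RGB branch with a learned per-pixel, per-channel weighting."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict fusion weights from the concatenated features.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_gated: torch.Tensor, feat_rgb: torch.Tensor) -> torch.Tensor:
        # w near 1 favours the gated-camera features, w near 0 the RGB features.
        w = self.gate(torch.cat([feat_gated, feat_rgb], dim=1))
        return w * feat_gated + (1.0 - w) * feat_rgb

if __name__ == "__main__":
    fusion = GatedRGBFusion(channels=64)
    f_gated = torch.randn(1, 64, 32, 32)  # hypothetical gated-camera feature map
    f_rgb = torch.randn(1, 64, 32, 32)    # hypothetical RGB feature map
    print(fusion(f_gated, f_rgb).shape)   # torch.Size([1, 64, 32, 32])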

Image Deblurring for Mobile Devices (Year Two)

The wide proliferation of digital cameras has resulted in a massive flow of images captured and then shared online in various forms. At the same time, capturing high-quality images with embedded cameras remains a significant challenge. One of the most common image-degrading effects is motion blur, produced by camera movement while taking a photograph. The problem is widespread and has attracted considerable research and development effort, yet there is still no satisfactory solution suitable for a wide variety of conditions.
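
As background, a common formation model for camera-shake blur (assuming, for simplicity, spatially uniform blur) is

    b = k * s + n,

where b is the observed blurry photograph, s is the latent sharp image, k is the blur kernel induced by the camera motion, * denotes convolution, and n is sensor noise. Deblurring then amounts to recovering s (and, in the blind case, also k) from b alone.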

An Efficient Implementation of a Computational Camera for Smartphones

A camera is a device that captures light from a scene. Over the last century, the evolution of cameras has been truly remarkable, driven largely by better optical lenses. However, even improved optical lenses have remained essentially fixed in size and weight, which makes them difficult to use in portable devices. In contrast to this optical trend, the number of transistors on a chip doubles approximately every two years according to Moore's law, which has led to enormous improvements in computational capability.