Real-time multi-view rendering from 3D stereoscopic video

With the steady production of 3D movies, 3D stereoscopic video content is increasingly available, paving the way for the deployment of 3DTV in the general consumer market. Auto-stereoscopic displays, which produce a 3D effect without the need to wear glasses, are a viable technology for the home environment and are attracting considerable attention from the media production and consumer electronics industries. One of the main obstacles to introducing auto-stereoscopic 3DTVs into the home market is the lack of content specifically prepared for this kind of display. Auto-stereoscopic displays require several views, more than two and typically around nine, to present multiple points of view to the viewer, producing a more realistic and comfortable 3D representation of the scene. Since most current content is produced in a two-view format (S3D), there is a strong need for high-quality algorithms that convert two-view stereoscopic video to the multi-view format required by auto-stereoscopic displays. The additional requirement of real-time conversion for the broadcasting industry adds a further layer of complexity to the conversion problem.

In this project we aim to develop a set of tools to convert video content from two-view to multi-view format in real time, with the broadcasting industry as the intended application. The amount of data to process, HD and UHD (4K), and the high image and depth quality required for broadcasting applications make this a difficult problem. The conversion process comprises three main steps: recovering depth information from the stereoscopic video source; rendering the additional views required by auto-stereoscopic displays, based on the recovered depth and the viewing conditions; and post-processing the images for presentation on the auto-stereoscopic display. Each of these steps requires a number of image and video processing algorithms to obtain the desired results.
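As an illustration of the view rendering step, the sketch below forward-warps one input view into a virtual viewpoint using a per-pixel disparity map. The function name, the linear `alpha` baseline parameter, and the naive left-neighbour hole filling are illustrative assumptions, not part of the project description; a production renderer would handle occlusions and holes far more carefully.

```python
import numpy as np

def synthesize_view(image, disparity, alpha):
    """Forward-warp a single-channel left-eye image to a virtual viewpoint.

    alpha = 0 reproduces the input view and alpha = 1 lands on the
    right-eye position; intermediate values give the in-between views
    an auto-stereoscopic display needs. Illustrative sketch only.
    """
    h, w = disparity.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Shift each pixel horizontally by a fraction of its disparity.
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w:
                # Later writes overwrite earlier ones; a real renderer
                # would resolve such conflicts by depth ordering.
                out[y, xt] = image[y, x]
                filled[y, xt] = True
    # Naive disocclusion filling: copy the nearest pixel to the left.
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out

# Usage: nine evenly spaced views for an auto-stereoscopic display.
# views = [synthesize_view(img, disp, a) for a in np.linspace(0.0, 1.0, 9)]
```

With a constant disparity map the warp reduces to a horizontal shift, which makes the behaviour easy to verify.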

This particular project will focus on the joint extraction of depth information and rendering of the new views in real time. The extraction of depth from stereoscopic video is a well-studied problem, normally referred to as the stereo matching or stereo correspondence problem. There is, however, no single algorithm that performs well for all types of video. Another problem to tackle is the complexity of the available algorithms: most of them are highly complex and unsuitable for real-time implementation. Combining the depth extraction step with the view rendering process is a promising avenue for improving both the quality of the reconstructed images and the processing time of the whole pipeline.
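For reference, the simplest formulation of the stereo correspondence search is winner-take-all block matching with a sum-of-absolute-differences cost. The sketch below (function name, block size, and search range are all illustrative choices, and it is far too slow for the real-time goal discussed above) shows the structure of that search.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=8, block=5):
    """Winner-take-all SAD block matching on grayscale images.

    For each block in the left image, search horizontally in the right
    image and keep the disparity with the lowest sum of absolute
    differences. Illustrative sketch; real-time systems use far more
    efficient and robust formulations.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            # Clamp the search so candidate windows stay inside the image.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left image shifted by a known amount, the recovered disparity matches that shift in the image interior.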

Another aspect that will be considered in the project is the choice of depth parameters to ensure viewer comfort. Viewer comfort for stereoscopic imaging is one of the central research topics in the 3DTV community, and ensuring that the produced depth is both consistent with the original scene and comfortable to view is a key requirement of the project.
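One common way to control such depth parameters is to remap the estimated disparity range into a display-dependent comfort budget. The sketch below does this with a simple linear rescaling; the ±30 px limits are purely illustrative, since a real budget depends on screen size, pixel pitch, and viewing distance.

```python
import numpy as np

def remap_disparity(disp, budget_px=(-30.0, 30.0)):
    """Linearly rescale a disparity map into a comfort budget.

    budget_px gives the allowed (min, max) screen disparity in pixels.
    The ±30 px default is an illustrative placeholder, not a standard
    value; a linear remap is only one of several possible mappings.
    """
    d_min, d_max = float(disp.min()), float(disp.max())
    lo, hi = budget_px
    if d_max == d_min:
        # Flat scene: place everything at the middle of the budget.
        return np.full_like(disp, 0.5 * (lo + hi))
    return lo + (disp - d_min) * (hi - lo) / (d_max - d_min)
```

After remapping, the scene's nearest and farthest points sit exactly at the budget limits, so no pixel exceeds the chosen comfort zone.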

Faculty Supervisor:

Carlos Vazquez
