Efficient and low-complexity video coding for virtual reality and 360-degree video streaming
Virtual reality (VR) and augmented reality (AR) offer a uniquely immersive video experience by providing 360-degree video in a panoramic view. Limited bandwidth, high quality demands, encoder delay, network latency, and a lack of standards are the main obstacles to delivering a truly immersive VR experience. To address these challenges, in this research we intend to design a learning-based VR system that uses bandwidth efficiently: the encoder makes smart decisions to assign different qualities to different parts of the spherical frame based on the user's view, using features such as video content and the user's movement patterns. The 360-degree frame is split into segments such as tiles, where the size, number, and quality of the tiles are determined adaptively and online.
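One way to picture the viewport-dependent quality assignment described above is a minimal sketch in which each tile of an equirectangular frame receives an H.265-style quantization parameter (QP) based on its angular distance from the predicted viewing direction. The function name, tile grid size, and QP range here are illustrative assumptions, not values from the proposed system, and the linear QP ramp stands in for the learned decision logic.

```python
import math

def tile_qualities(pred_yaw, pred_pitch, n_cols=8, n_rows=4,
                   qp_best=22, qp_worst=42):
    """Assign a QP to each tile of an equirectangular 360-degree frame.

    Tiles near the predicted viewport centre get a low QP (high quality);
    tiles far from it get a high QP (low quality). All parameters are
    illustrative defaults.
    """
    qps = []
    for row in range(n_rows):
        # Tile centre in spherical coordinates (degrees).
        pitch = 90.0 - (row + 0.5) * 180.0 / n_rows
        for col in range(n_cols):
            yaw = -180.0 + (col + 0.5) * 360.0 / n_cols
            # Great-circle angular distance to the predicted gaze point.
            cos_d = (math.sin(math.radians(pitch)) * math.sin(math.radians(pred_pitch))
                     + math.cos(math.radians(pitch)) * math.cos(math.radians(pred_pitch))
                     * math.cos(math.radians(yaw - pred_yaw)))
            d = math.degrees(math.acos(max(-1.0, min(1.0, cos_d))))
            # Linearly ramp QP from best (0 degrees away) to worst (180 degrees).
            qps.append(round(qp_best + (qp_worst - qp_best) * d / 180.0))
    return qps

qps = tile_qualities(pred_yaw=0.0, pred_pitch=0.0)
print(min(qps), max(qps))  # tiles facing the viewport get the lowest QPs
```

In a full system, the predicted yaw and pitch would come from the learned model of the user's movement patterns, and the QP mapping itself could be replaced by any rate-distortion-aware policy.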