A no-reference framework for evaluating the quality of video streamed over wireless networks

In this work, a no-reference framework is proposed for estimating the quality of video streamed over a wireless network. The work presents a comprehensive survey of existing full-reference (FR), reduced-reference (RR), and no-reference (NR) algorithms. The existing algorithms are compared in terms of their correlation with subjective scores and their feasibility for use in a wireless architecture, to motivate the proposed framework and show how it overcomes their limitations. A brief summary of our previously published algorithms, i.e. NR blockiness, NR blur, NR network, NR just-noticeable-distortion, and RR, is also presented; these algorithms are used as function modules in the proposed framework. The framework measures video quality by taking into account the major spatial, temporal, and network impairments, together with human visual system effects, for a comprehensive quality evaluation. It can assess video compressed by different codecs, e.g. MPEG-x/H.26x, Motion JPEG, and Motion JPEG2000, and can work with two different kinds of received data, i.e. bit streams and decoded pixels. The framework integrates the RR and NR methods and can operate in three different modes depending on the availability of RR data: 1) RR measurement only, 2) a hybrid of RR and NR measurement, and 3) NR estimation only. In addition, any individual function block, e.g. blurring, can be used independently to measure a specific distortion. A new subjective video quality database containing compressed videos and videos distorted by channel-induced impairments has been developed to test the proposed framework, which has also been tested on the publicly available LIVE Video Quality Database.
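The three operating modes above amount to a dispatch on whether RR data arrived with the stream. A minimal sketch of that selection logic is shown below; the function names and the placeholder scoring formulas are illustrative assumptions, not the paper's actual metrics.

```python
# Hypothetical sketch of the framework's mode selection. The scoring
# functions below are toy placeholders standing in for the real RR and
# NR function modules (blockiness, blur, network, JND, etc.).

def rr_quality(features, rr_data):
    # Placeholder reduced-reference score: penalize the mean absolute
    # difference between received features and the RR side information.
    return 1.0 - sum(abs(f - r) for f, r in zip(features, rr_data)) / len(features)

def nr_quality(features):
    # Placeholder no-reference score computed from the received data alone.
    return sum(features) / len(features)

def estimate_quality(features, rr_data=None, hybrid=False):
    """Select among the three operating modes based on RR data availability."""
    if rr_data is None:
        return nr_quality(features)                # mode 3: NR estimation only
    if hybrid:
        # mode 2: combine RR and NR scores (equal weighting assumed here)
        return 0.5 * (rr_quality(features, rr_data) + nr_quality(features))
    return rr_quality(features, rr_data)           # mode 1: RR measurement only
```

Because the mode is chosen per measurement, the same deployment can fall back from RR to NR whenever the side channel carrying RR data is unavailable or bandwidth-constrained.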
Overall test results show that our framework achieves a stronger correlation with subjective evaluations on the two video databases than other existing algorithms. The framework also performs well when operating in NR mode alone, compared with existing RR and FR algorithms. Because it can operate in different modes using different function modules, the proposed framework is more scalable than other algorithms and feasible to deploy under any available network bandwidth.