No Reference Image and Video Quality Assessment


Laboratory for Image & Video Engineering



Please go here to download our quality assessment databases and free software releases of our quality assessment algorithms.

Introduction

Objective quality assessment is a very complicated task, and even full-reference QA methods have had only limited success in making accurate quality predictions. Researchers therefore tend to break the NR QA problem into smaller, domain-specific problems by targeting a limited class of artifacts, i.e., distortion-specific IQA. The most common target is the blocking artifact, which usually results from block-based compression algorithms running at low bit rates. At LIVE we have conducted research into NR QA for blocking distortion, as well as pioneering research into NR measurement of distortion introduced by wavelet-based compression algorithms, based on natural scene statistics (NSS) modeling.

Recently, we have tackled the distortion-agnostic no-reference/blind IQA problem, i.e., we have designed algorithms that are capable of assessing the quality of an image without the need for a reference and without knowledge of the distortion that affects the image.

Video BLIINDS


We propose "Video BLIINDS", a non-distortion-specific blind video quality evaluation approach. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform (DCT) domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. The video quality assessment (VQA) algorithm does not require a pristine video to compare against in order to predict a quality score. The contributions of this work are three-fold.

1) We propose a spatio-temporal natural scene statistics (NSS) model for videos.
2) We propose a motion model that quantifies motion coherency in video scenes.
3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality.

The proposed algorithm, called Video BLIINDS, is tested on the LIVE VQA Database. We demonstrate that its performance approaches the performance of the top performing reduced and full reference algorithms.
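To make the spatio-temporal NSS idea concrete, the sketch below (our own illustrative Python, not the authors' released code) computes one such feature: the difference between consecutive frames is transformed block-by-block into the DCT domain, and a generalized Gaussian shape parameter is fit to the pooled AC coefficients by simple moment matching. The function names, the block size and the moment-matching fit are assumptions for illustration; the published Video BLIINDS model uses a richer feature set plus the motion coherency measure described above.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.special import gamma

def ggd_shape(coeffs):
    """Estimate the generalized Gaussian shape parameter b by matching the
    moment ratio E[x^2] / (E|x|)^2 to its theoretical value
    Gamma(1/b) * Gamma(3/b) / Gamma(2/b)^2 over a grid of candidate b."""
    coeffs = np.asarray(coeffs, dtype=np.float64)
    r = np.mean(coeffs ** 2) / (np.mean(np.abs(coeffs)) ** 2 + 1e-12)
    b = np.linspace(0.2, 10.0, 2000)
    rho = gamma(1.0 / b) * gamma(3.0 / b) / gamma(2.0 / b) ** 2
    return b[np.argmin(np.abs(rho - r))]

def frame_difference_dct_shape(prev_frame, curr_frame, block=8):
    """Block DCT of the difference between consecutive (grayscale) frames;
    the GGD shape of the pooled AC coefficients serves as one simple
    spatio-temporal NSS feature."""
    diff = curr_frame.astype(np.float64) - prev_frame.astype(np.float64)
    h, w = diff.shape
    ac = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = diff[i:i + block, j:j + block]
            d = dct(dct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
            ac.append(d.ravel()[1:])  # drop the DC coefficient
    return ggd_shape(np.concatenate(ac))
```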

Relevant Publications:

1. M. A. Saad and A. C. Bovik, "Blind Quality Assessment of Videos Using a Model of Natural Scene Statistics and Motion Coherency", Asilomar Conference on Signals, Systems, and Computers, November 2012.

Natural Image Quality Evaluator (NIQE)


The Natural Image Quality Evaluator (NIQE) is a completely blind image quality analyzer that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images. By contrast, all other current general-purpose no-reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores.


NIQE is based on the construction of a 'quality aware' collection of statistical features derived from a simple and successful space-domain natural scene statistic (NSS) model. These features are learned from a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to that of top-performing NR IQA models that require training on large databases of human opinions of distorted images.
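A minimal sketch of how such a 'completely blind' index can score an image, assuming an NSS feature extractor is available (the extractor itself is not shown): a multivariate Gaussian (MVG) is fit to feature vectors from a corpus of pristine images, another MVG is fit to the features of the test image, and the quality score is the distance between the two models. The helper names below are illustrative.

```python
import numpy as np

def fit_mvg(feature_matrix):
    """Fit a multivariate Gaussian (mean, covariance) to rows of NSS
    feature vectors, e.g. one vector per image patch."""
    feature_matrix = np.asarray(feature_matrix, dtype=np.float64)
    return feature_matrix.mean(axis=0), np.cov(feature_matrix, rowvar=False)

def mvg_distance(mu_pristine, cov_pristine, mu_test, cov_test):
    """Distance between the pristine and test MVG models,
    sqrt((mu1 - mu2)^T ((S1 + S2) / 2)^{-1} (mu1 - mu2));
    larger values indicate a less 'natural' image."""
    diff = mu_pristine - mu_test
    pooled = (cov_pristine + cov_test) / 2.0
    # pinv keeps the sketch stable if the pooled covariance is near-singular
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

In use, fit_mvg would be applied once, offline, to pristine-corpus patch features, then to the test image's patch features, with mvg_distance reported as the quality score.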

Relevant Publications:

1. A. Mittal, R. Soundararajan and A. C. Bovik, "Making a 'Completely Blind' Image Quality Analyzer", IEEE Signal Processing Letters, pp. 209-212, vol. 20, no. 3, March 2013.

Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE)


Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) is a natural scene statistic (NSS)-based, distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain. It does not compute distortion-specific features such as ringing, blur or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of 'naturalness' in the image due to the presence of distortions, thereby leading to a holistic measure of quality.


The underlying features derive from the empirical distribution of locally normalized luminances, and of products of neighboring locally normalized luminances, under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing BRISQUE from prior no-reference IQA approaches. Despite its simplicity, BRISQUE is statistically better than the full-reference peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and is highly competitive with all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real-time applications. BRISQUE features may also be used for distortion identification.
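The locally normalized luminance (MSCN) computation that these features build on can be sketched in a few lines. This is an illustrative implementation, not the released BRISQUE code; the Gaussian window width and the stabilizing constant C are implementation choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    I_hat(i, j) = (I(i, j) - mu(i, j)) / (sigma_local(i, j) + C),
    where mu and sigma_local are Gaussian-weighted local statistics."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    local_var = gaussian_filter(image * image, sigma) - mu * mu
    local_std = np.sqrt(np.abs(local_var))  # abs() guards tiny negatives
    return (image - mu) / (local_std + c)

def horizontal_pairwise_products(mscn):
    """Products of horizontally adjacent MSCN coefficients; BRISQUE-style
    features come from fitting (asymmetric) generalized Gaussians to the
    MSCN map and to such pairwise-product maps along several orientations."""
    return mscn[:, :-1] * mscn[:, 1:]
```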


To illustrate a new practical application of BRISQUE, we describe how a non-blind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over the state-of-the-art.

Relevant Publications:

1. A. Mittal, A. K. Moorthy and A. C. Bovik, "No-Reference Image Quality Assessment in the Spatial Domain", IEEE Transactions on Image Processing, 2012 (to appear).

2. A. Mittal, A. K. Moorthy and A. C. Bovik, "Referenceless Image Spatial Quality Evaluation Engine", 45th Asilomar Conference on Signals, Systems and Computers, November 2011.

Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE)


DIIVINE is a distortion-agnostic approach to blind IQA that utilizes concepts from natural scene statistics (NSS) not only to quantify the distortion, and hence the quality of the image, but also to qualify the distortion type afflicting the image. The Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index utilizes a two-stage framework for blind IQA that first identifies the distortion afflicting the image and then performs distortion-specific quality assessment.


Our computational theory for distortion-agnostic blind IQA is based on the regularity of natural scene statistics (NSS); for example, it is known that the power spectrum of natural scenes falls off approximately as 1/f^b, where f is spatial frequency. NSS models for natural images seek to capture and describe the statistical relationships that are common across natural (undistorted) images. Our hypothesis is that the presence of distortion in natural images alters these natural statistical properties, thereby rendering the image 'un-natural'. NR IQA can then be accomplished by quantifying this 'un-naturalness' and relating it to perceived quality.
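As a small, self-contained illustration of the kind of statistical regularity mentioned above, the sketch below estimates the exponent b of the 1/f^b power-spectrum fall-off by a log-log linear fit to the radially averaged spectrum (for natural images b is typically close to 2). This is our own illustrative code, not a component of DIIVINE.

```python
import numpy as np

def power_spectrum_exponent(image):
    """Estimate b in the approximate 1/f^b fall-off of the radially
    averaged power spectrum via a log-log linear fit."""
    image = image.astype(np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(spectrum.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    # Radially average the power spectrum, skipping the DC bin.
    radial = (np.bincount(r.ravel(), weights=spectrum.ravel())
              / np.bincount(r.ravel()))
    rmax = min(cy, cx)
    freqs = np.arange(1, rmax)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:rmax]), 1)
    return -slope  # power ~ 1/f^b  =>  log power ~ -b * log f
```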


The DIIVINE index divines the quality of an image without any need for a reference or the benefit of distortion models, with such precision that its performance is statistically indistinguishable from popular FR algorithms such as the structural similarity index (SSIM). The DIIVINE approach is distortion-agnostic, since it does not compute distortion-specific indicators of quality, but utilizes an NSS-based approach to qualify as well as quantify the distortion afflicting the image. The approach is modular, in that it can easily be extended beyond the pool of distortions considered here; a sketch of the two-stage structure is given below.
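The sketch below shows the two-stage structure in miniature, using generic SVM/SVR components from scikit-learn as stand-ins (the class name, the choice of learners and the omission of feature scaling are all simplifying assumptions): a probabilistic distortion classifier is trained in stage one, per-distortion quality regressors in stage two, and the final score is the probability-weighted combination of the regressor outputs.

```python
import numpy as np
from sklearn.svm import SVC, SVR

class TwoStageBlindIQA:
    """Toy DIIVINE-style pipeline: distortion identification followed by
    distortion-specific quality regression, combined by class probability."""

    def __init__(self, distortion_names):
        self.names = list(distortion_names)
        self.classifier = SVC(probability=True)           # stage 1
        self.regressors = {n: SVR() for n in self.names}  # stage 2

    def fit(self, features, distortion_labels, quality_scores):
        features = np.asarray(features, dtype=np.float64)
        distortion_labels = np.asarray(distortion_labels)
        quality_scores = np.asarray(quality_scores, dtype=np.float64)
        self.classifier.fit(features, distortion_labels)
        for name in self.names:
            mask = distortion_labels == name
            self.regressors[name].fit(features[mask], quality_scores[mask])

    def predict(self, features):
        features = np.asarray(features, dtype=np.float64)
        probs = self.classifier.predict_proba(features)    # shape (n, k)
        per_distortion = np.column_stack(
            [self.regressors[n].predict(features)
             for n in self.classifier.classes_])           # shape (n, k)
        return (probs * per_distortion).sum(axis=1)
```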


Relevant Publications:

1. A. K. Moorthy and A. C. Bovik, "Blind Image Quality Assessment: From Scene Statistics to Perceptual Quality", IEEE Transactions on Image Processing, pp. 3350-3364, vol. 20, no. 12, 2011.

2. A. K. Moorthy and A. C. Bovik, "A Two-step Framework for Constructing Blind Image Quality Indices", IEEE Signal Processing Letters, pp. 587-599, vol. 17, no. 5, May 2010.

3. A. K. Moorthy and A. C. Bovik, "A Two-stage Framework for Blind Image Quality Assessment", IEEE International Conference on Image Processing (ICIP), September 2010.

4. A. K. Moorthy and A. C. Bovik, "Statistics of Natural Image Distortions", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2010.


BLind Image Integrity Notator using DCT-Statistics (BLIINDS)


BLIINDS is an efficient, general-purpose, non-distortion-specific, blind/no-reference image quality assessment (NR IQA) algorithm that uses natural scene statistics models of discrete cosine transform (DCT) coefficients to perform distortion-agnostic NR IQA.


We derive a generalized NSS-based model of local DCT coefficients, and transform the model parameters into features suitable for perceptual image quality score prediction. The statistics of the DCT features vary in a natural and predictable manner as the image quality changes. A generalized probabilistic model is applied to these features, and used to make probabilistic predictions of visual quality. We show that the method correlates highly with human subjective judgements of quality.
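One simple way to realize such a probabilistic quality predictor (an assumption on our part, not necessarily the exact model used in the BLIINDS papers) is to fit a joint multivariate Gaussian over the DCT-domain NSS features and the subjective score on training data, and then predict quality as the conditional mean of the score given a test image's features.

```python
import numpy as np

def fit_joint_gaussian(train_features, train_scores):
    """Fit a joint multivariate Gaussian over (NSS features, quality score)."""
    data = np.column_stack([np.asarray(train_features, dtype=np.float64),
                            np.asarray(train_scores, dtype=np.float64)])
    return data.mean(axis=0), np.cov(data, rowvar=False)

def predict_quality(mu, cov, test_features):
    """Conditional mean of the score given the features under the joint
    Gaussian: mu_s + S_sf S_ff^{-1} (x - mu_f)."""
    d = len(mu) - 1
    mu_f, mu_s = mu[:d], mu[d]
    S_ff, S_sf = cov[:d, :d], cov[d, :d]
    w = S_sf @ np.linalg.pinv(S_ff)
    x = np.atleast_2d(np.asarray(test_features, dtype=np.float64))
    return mu_s + (x - mu_f) @ w
```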


The contributions of our approach are as follows:

1) The proposed method inherits the advantages of the NSS approach to IQA. While the goal of IQA research is to produce algorithms that accord with human visual perception of quality, one can to some degree avoid modeling poorly understood functions of the human visual system (HVS) and instead derive models of the natural environment.
2) BLIINDS is non-distortion-specific; while most NR IQA algorithms quantify a specific type of distortion, the features used in our algorithm are derived independently of the type of distortion of the image and are effective across multiple distortion types. Consequently, it can be deployed in a wide range of applications.
3) We propose a novel model for the statistics of DCT coefficients.
4) Since the framework operates entirely in the DCT domain, one can exploit the availability of platforms devised for the fast computation of DCT transforms.
5) The method requires minimal training and relies on a simple probabilistic model for quality score prediction, which leads to further computational gains.
6) Finally, the method correlates highly with human visual perception of quality and yields highly competitive performance, even with respect to state-of-the-art FR IQA algorithms.


Relevant Publications:

1. M. A. Saad, A. C. Bovik and C. Charrier, "Model-Based Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain", IEEE Transactions on Image Processing, pp. 3339-3352, vol. 21, no. 8, 2012.

2. M. A. Saad, A. C. Bovik and C. Charrier, "DCT Statistics Model-based Blind Image Quality Assessment", IEEE International Conference on Image Processing (ICIP), September 2011.

3. M. A. Saad, A. C. Bovik and C. Charrier, "A DCT Statistics-Based Blind Image Quality Index", IEEE Signal Processing Letters, pp. 583-586, vol. 17, no. 6, June 2010.

4. M. A. Saad, A. C. Bovik and C. Charrier, "Natural DCT Statistics Approach to No-Reference Image Quality Assessment", IEEE International Conference on Image Processing (ICIP), September 2010.

No-Reference Quality Assessment Algorithm for Block-Based Compression Artifacts

Perhaps the most common distortion type encountered in real-world applications is that introduced by lossy compression algorithms such as JPEG (for images) or MPEG/H.263 (for videos). These compression algorithms reduce spatial redundancy using the block-based Discrete Cosine Transform (DCT). When they are pushed to higher compression ratios, a visible 'blocking' artifact appears.

Blocking resulting from DCT-based compression algorithms running at low bit rates has a very regular profile. It manifests itself as an edge every 8 pixels (for the typical block size of 8 x 8 pixels), oriented in the horizontal and vertical directions. The strength of the blocking artifact can therefore be measured by estimating the strength of these block edges. At LIVE, we have developed frequency-domain algorithms for measuring blocking artifacts in JPEG-compressed images, with the algorithm having no information about the reference image.
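For intuition only, a simple spatial-domain blockiness proxy (not the LIVE frequency-domain algorithm cited below) compares the average absolute luminance difference across every 8th pixel boundary with the average difference elsewhere; a ratio well above 1 signals visible blocking.

```python
import numpy as np

def blockiness_score(image, block=8):
    """Ratio of mean absolute differences at block boundaries (every
    `block` pixels) to mean absolute differences inside blocks, averaged
    over the horizontal and vertical directions."""
    img = image.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1))  # differences between adjacent columns
    dv = np.abs(np.diff(img, axis=0))  # differences between adjacent rows
    h_boundary = dh[:, block - 1::block].mean()
    h_interior = np.delete(dh, np.s_[block - 1::block], axis=1).mean()
    v_boundary = dv[block - 1::block, :].mean()
    v_interior = np.delete(dv, np.s_[block - 1::block], axis=0).mean()
    return 0.5 * (h_boundary / (h_interior + 1e-12)
                  + v_boundary / (v_interior + 1e-12))
```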

Relevant Publications

  1. Z. Wang, H. R. Sheikh and A. C. Bovik, "No-reference perceptual quality assessment of JPEG compressed images", Proc. IEEE International Conference on Image Processing, September 2002.
  2. L. Lu, Z. Wang, A. C. Bovik and J. Kouloheris, "Full-Reference Video Quality Assessment Considering Structural Distortion and No-Reference Quality Evaluation of MPEG Video", Proc. IEEE International Conference on Multimedia and Expo, August 2002.
  3. S. Liu and A. C. Bovik, "DCT domain blind measurement of blocking artifacts in DCT-coded images", Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 2001.
  4. Z. Wang, A. C. Bovik, and B. L. Evans, "Blind measurement of blocking artifacts in images", Proc. IEEE International Conference on Image Processing, September 2000.

No-Reference Quality Assessment for JPEG2000 Compressed Images Using Natural Scene Statistics

Not all compression algorithms are block-based. Research in image and video coding has shown that greater compression can be achieved for the same visual quality if the block-based DCT approach is replaced by a Discrete Wavelet Transform (DWT). JPEG2000 is an image compression standard that uses the DWT for compression. However, DWT-based algorithms also suffer from artifacts at low bit rates, specifically blurring and ringing. Unlike the blocking artifact, whose spatial location is predictable, blurring and ringing artifacts are image dependent. This makes distortion resulting from DWT-based compression algorithms (such as JPEG2000) much harder to quantify. At LIVE we have proposed a unique and innovative solution to this problem: we use Natural Scene Statistics models to quantify the departure of a distorted image from "expected" natural behavior.
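To make the 'departure from expected natural behavior' idea concrete, the sketch below computes one level of a Haar wavelet decomposition and summarizes each detail subband by its kurtosis. Natural images have heavy-tailed (high-kurtosis) subband histograms; JPEG2000 quantization, which zeroes many small wavelet coefficients, changes the shape of these histograms. This is an illustrative feature only, not the metric of the papers cited below.

```python
import numpy as np

def haar_detail_subbands(image):
    """One level of a separable Haar transform; returns the three detail
    subbands (horizontal, vertical and diagonal high-pass)."""
    img = image.astype(np.float64)
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]  # even size
    lo_v = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    hi_v = (img[0::2, :] - img[1::2, :]) / 2.0   # difference of adjacent rows
    lh = (lo_v[:, 0::2] - lo_v[:, 1::2]) / 2.0   # horizontal detail
    hl = (hi_v[:, 0::2] + hi_v[:, 1::2]) / 2.0   # vertical detail
    hh = (hi_v[:, 0::2] - hi_v[:, 1::2]) / 2.0   # diagonal detail
    return lh, hl, hh

def subband_kurtosis(image):
    """Kurtosis of each detail subband; values drifting away from the
    range observed on undistorted images indicate 'un-natural' statistics."""
    feats = []
    for band in haar_detail_subbands(image):
        x = band.ravel() - band.mean()
        feats.append(np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12))
    return feats
```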

Relevant Publications

  1. H. R. Sheikh, A. C. Bovik, and L. K. Cormack, "No-Reference Quality Assessment Using Natural Scene Statistics: JPEG2000," IEEE Transactions on Image Processing, vol. 14, no. 12, December 2005.
  2. H. R. Sheikh, A. C. Bovik, and L. Cormack, "Blind Quality Assessment of JPEG2000 Compressed Images Using Natural Scene Statistics," Proc. IEEE Asilomar Conf. on Signals, Systems, and Computers, November 2003.
  3. H. R. Sheikh, Z. Wang, L. K. Cormack and A. C. Bovik, "Blind quality assessment for JPEG2000 compressed images," Proc. Thirty-Sixth Annual Asilomar Conference on Signals, Systems, and Computers, November 2002.


