RGB "Bayer" Color and MicroLenses, conversion between RGB and YUV


The original article is worth tracking down; the formulas for converting between RGB, Bayer, and YUV are reproduced here so that you can find the authoritative versions.

Original link: (click to open link)


RGB "Bayer" Color and MicroLenses

The Bayer color filter array is a popular format for digital acquisition of color images [1]. The pattern of the color filters is shown below. Half of the total number of pixels are green (G), while a quarter is assigned to red (R) and a quarter to blue (B).

 

G R G R
B G B G
G R G R
B G B G

In order to obtain this color information, the  color image sensor is covered with either a red, a green, or a blue filter, in a repeating pattern. This pattern, or sequence, of filters can vary, but the widely adopted “Bayer”  pattern, which was invented at Kodak, is a repeating 2x2 arrangement.


Photographs & Text adapted from Photobit

When the image sensor is read out, line by line, the pixel sequence comes out GRGRGR, etc., and then the alternate line sequence is BGBGBG, etc. This output is called sequential RGB (sometimes abbreviated sRGB, not to be confused with the sRGB color space).

Since each pixel has been made sensitive only to one color (one spectral band), the overall sensitivity of a color image sensor is lower than a monochrome (panchromatic) sensor, and in fact is typically 3x less sensitive. As a result, monochrome sensors are better for low-light applications, such as security cameras. (It is also why human eyes switch to black and white mode in the dark).

 

MicroLenses

Microlenses and an opaque metal layer above the silicon funnel light to the photo-sensitive portion of each pixel. On their way, the photons of light pass through a color filter array (CFA), where the process of obtaining color from the inherently “monochrome” chip begins. (Actually, “panchromatic” is the more apt term, since sensors respond across the spectrum; the word monochrome comes from television use and refers to black and white.)

 

White Balance, Bayer Interpolation and Color matrix Processing

White Balance and Color Correction are processing operations performed to ensure proper color fidelity in a captured digital camera image. In digital cameras an array of light detectors with color filters over them is used to detect and capture the image. This sensor does not detect light exactly as the human eye does, and so some processing or correction of the detected image is necessary to ensure that the final image realistically represents the colors of the original scene. 

 

G R G R
B G B G
G R G R
B G B G

 

Each pixel represents only a portion of the color spectrum, so the two missing components must be interpolated to obtain an RGB value per pixel. The Bayer color filter array (CFA) pattern, shown above, is a popular format for digital acquisition of color images [1]. Half of the total number of pixels are green (G), while a quarter is assigned to red (R) and a quarter to blue (B).

This note describes conversions from Bayer format data to RGB and between RGB and YUV (YCrCb) color spaces. We also discuss two color processing operations (white balance and color correction) in the RGB domain, and derive the corresponding operations in the YUV domain. Using derived operations in the YUV domain, one can perform white balance and color correction directly in the YUV domain, without switching back to the RGB domain.

1.)  White Balance & Bayer Interpolation

The first step in processing the raw pixel data is to perform a white balance operation. A white object will have equal values of reflectivity for each primary color, i.e.:

 

R = G = B

 

An image of a white object can be captured and its histogram analyzed. The color channel that has the highest level is set as the target mean, and the remaining two channels are increased with a gain multiplier to match. For example, if the green channel has the highest mean, gain ‘a’ is applied to red and gain ‘b’ is applied to blue:

 

G’ = a·R’ = b·B’

 

The white balance will vary based on the color of the lighting source (sunlight, fluorescent, tungsten) and the amount of each color component within the scene. A full-color natural scene can also be processed in the same fashion. This “Gray World” method assumes that the world is gray and that the distributions of the primary colors will be equal.

 

The “White Patch” method attempts to locate objects that are truly white within the scene, by assuming the white pixels are also the brightest (I = R + G + B). Then, only the top percentage of intensity pixels are included in the calculation of the means, while excluding any pixels that have a saturated channel.
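As a rough illustration of the Gray World method above, the gains a and b can be computed from the channel means; a minimal Python sketch (the function names and the clipping choice are illustrative, not from the original note):

```python
# Gray-world white balance sketch: scale R and B so their means match G's.

def gray_world_gains(pixels):
    """pixels: iterable of (R, G, B) tuples; returns gains (a, b) for R and B."""
    n = 0
    sums = [0.0, 0.0, 0.0]
    for r, g, b in pixels:
        sums[0] += r; sums[1] += g; sums[2] += b
        n += 1
    mean_r, mean_g, mean_b = (s / n for s in sums)
    # Gray-world assumption: channel means should be equal (G' = a*R' = b*B').
    return mean_g / mean_r, mean_g / mean_b

def apply_white_balance(pixels, a, b):
    # Clip to the 8-bit range after applying the gains.
    return [(min(255, r * a), g, min(255, b_ * b)) for r, g, b_ in pixels]
```

The same two functions implement the White Patch variant if the pixel list is first filtered to the brightest, unsaturated pixels.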

 

Bayer Interpolation

To convert an image from the Bayer format to an RGB-per-pixel format, we need to interpolate the two missing color values at each pixel. Several standard interpolation methods (nearest neighbor, linear, cubic, cubic spline, etc.) were evaluated on this problem in [2]. The authors measured interpolation accuracy as well as speed, and concluded that the best performance is achieved by a correlation-adjusted version of linear interpolation. The suggested method is presented here.

 

Interpolating red and blue components

 

(a)        (b)        (c)        (d)
G B G      G R G      B G B      R G R
R G R      B G B      G R G      G B G
G B G      G R G      B G B      R G R

 Figure 1: Four possible cases for interpolating R and B components

As suggested in [2], R and B values are interpolated linearly from the nearest neighbors of the same color. There are four possible cases, as shown in Figure 1. When interpolating the missing values of R and B on a green pixel, as in Figure 1 (a) and (b), we take the average of the two nearest neighbors of the same color. For example, in Figure 1 (a), the value for the blue component on the center G pixel is the average of the blue pixels above and below it, while the value for the red component is the average of the two red pixels to its left and right.

Figure 1 (c) shows the case where the value of the blue component is to be interpolated on an R pixel. In this case, we take the average of the four nearest blue pixels cornering the R pixel. Similarly, to determine the value of the red component on a B pixel in Figure 1 (d), we take the average of the four nearest red pixels cornering the B pixel.
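The four averaging rules above can be sketched in Python. The mosaic layout (G at the origin of a GR/BG pattern) and the function names are assumptions for illustration, and border pixels are ignored:

```python
# Sketch of the neighbor-averaging rules for the R and B components.

def bayer_color(y, x):
    """Color of the filter at (row y, col x) for a GR/BG Bayer pattern."""
    if y % 2 == 0:
        return 'G' if x % 2 == 0 else 'R'
    return 'B' if x % 2 == 0 else 'G'

def interp_rb(mosaic, y, x, want):
    """Interpolate the missing component `want` ('R' or 'B') at (y, x)."""
    here = bayer_color(y, x)
    if here == want:
        return mosaic[y][x]          # component already measured here
    if here == 'G':
        # Cases (a)/(b): average the two nearest same-color neighbors,
        # which lie either left/right or above/below the green pixel.
        if bayer_color(y, x - 1) == want:
            return (mosaic[y][x - 1] + mosaic[y][x + 1]) / 2
        return (mosaic[y - 1][x] + mosaic[y + 1][x]) / 2
    # Cases (c)/(d): B on an R pixel or R on a B pixel -> average 4 corners.
    return (mosaic[y - 1][x - 1] + mosaic[y - 1][x + 1] +
            mosaic[y + 1][x - 1] + mosaic[y + 1][x + 1]) / 4
```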

Interpolating the green component

Following [2], the green component is adaptively interpolated from a pair of nearest neighbors. To illustrate the procedure, consider the two possible cases in Figure 2.

 

 

 .   .   R1  .   .
 .   .   G1  .   .
 R4  G4  R   G2  R2
 .   .   G3  .   .
 .   .   R3  .   .

(a)

 .   .   B1  .   .
 .   .   G1  .   .
 B4  G4  B   G2  B2
 .   .   G3  .   .
 .   .   B3  .   .

(b)

Figure 2: Two possible cases for interpolating G component

In Figure 2 (a), the value of the green component is to be interpolated on an R pixel. The value used for the G component here is

G(R) = (G1 + G3)/2,             if |R1 - R3| < |R2 - R4|
G(R) = (G2 + G4)/2,             if |R1 - R3| > |R2 - R4|
G(R) = (G1 + G2 + G3 + G4)/4,   if |R1 - R3| = |R2 - R4|

In other words, we take into account the correlation in the red component to adapt the interpolation method. If the difference between R1 and R3 is smaller than the difference between R2 and R4, indicating that the correlation is stronger in the vertical direction, we use the average of the vertical neighbors G1 and G3 to interpolate the required value. If the horizontal correlation is larger, we use horizontal neighbors. If neither direction dominates the correlation, we use all four neighbors.

Similarly, for Figure 2 (b) we will have

G(B) = (G1 + G3)/2,             if |B1 - B3| < |B2 - B4|
G(B) = (G2 + G4)/2,             if |B1 - B3| > |B2 - B4|
G(B) = (G1 + G2 + G3 + G4)/4,   if |B1 - B3| = |B2 - B4|

To conclude this section, note that if speed of execution is an issue, one can safely use simple linear interpolation of the green component from the four nearest neighbors, without any adaptation:

G = (G1 + G2 + G3 + G4)/4

According to [2], this method of interpolation executes twice as fast as the adaptive method, and achieves only slightly worse performance on real images. For even faster updates, only two of the four green values are averaged. However, this method produces false colors at edges and zipper artifacts.
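A minimal sketch of the adaptive rule above, using the neighbor naming of Figure 2 (the function name and argument order are illustrative):

```python
# Correlation-adjusted green interpolation from [2]: pick the direction
# with the smaller same-color gradient.

def interp_green(g_up, g_right, g_down, g_left,
                 c_up2, c_right2, c_down2, c_left2):
    """g_* are the four green neighbors of an R (or B) pixel; c_*2 are the
    same-color (R or B) pixels two steps away in each direction."""
    dv = abs(c_up2 - c_down2)      # vertical measure, |R1 - R3|
    dh = abs(c_left2 - c_right2)   # horizontal measure, |R2 - R4|
    if dv < dh:                    # correlation stronger vertically
        return (g_up + g_down) / 2
    if dh < dv:                    # correlation stronger horizontally
        return (g_left + g_right) / 2
    return (g_up + g_right + g_down + g_left) / 4
```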

 

Color Saturation Matrix

The operation for saturation can be applied at the same time as the color correction matrix.  Unlike the color correction matrix, the saturation matrix does not rotate the vectors in the color wheel:

[m00  m01  m02]   [ R ]
[m10  m11  m12] * [ G ]
[m20  m21  m22]   [ B ]

m00 = 0.299 + 0.701*K
m01 = 0.587 * (1-K)
m02 = 0.114 * (1-K)

m10 = 0.299 * (1-K)
m11 = 0.587 + 0.413*K
m12 = 0.114 * (1-K)

m20 = 0.299 * (1-K)
m21 = 0.587 * (1-K)
m22 = 0.114 + 0.886*K

K is the saturation factor:
K = 1 leaves the image unchanged
K > 1 increases saturation
0 < K < 1 decreases saturation
K = 0 produces black & white
K < 0 inverts color

A sample table of matrix values is calculated and shown below:

 

 

            Saturation K
            1         1.7       1.9       2
R' =  +R *  1.0       1.4907    1.6309    1.701
      +G *  0        -0.4109   -0.5283   -0.587
      +B *  0        -0.0798   -0.1026   -0.114

G' =  +R *  0        -0.2093   -0.2691   -0.299
      +G *  1.0       1.2891    1.3717    1.413
      +B *  0        -0.0798   -0.1026   -0.114

B' =  +R *  0        -0.2093   -0.2691   -0.299
      +G *  0        -0.4109   -0.5283   -0.587
      +B *  1.0       1.6202    1.7974    1.886

  
The saturated image can be further processed with an additional color correction matrix to compensate for crosstalk induced by the micro-lens and color filter process, as well as lighting and temperature effects. The combined matrix ([color correction matrix] * [saturation matrix]) results in a color representation closer to the real world, but also increases noise. Typically, the blue pixel has the lowest response and the highest crosstalk from green and red light, so the noise remaining after the matrix operation is dominated by blue noise.
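The K-dependent matrix above can be generated directly from the luminance weights (0.299, 0.587, 0.114); a minimal sketch (function names are illustrative):

```python
# Build the 3x3 saturation matrix for a given factor K: identity scaled
# by K plus (1 - K) times the luminance projection.

LUMA = (0.299, 0.587, 0.114)

def saturation_matrix(k):
    return [[LUMA[j] * (1 - k) + (k if i == j else 0.0) for j in range(3)]
            for i in range(3)]

def apply_matrix(m, rgb):
    # Plain 3x3 matrix times RGB column vector.
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]
```

Note that saturation_matrix(0) reproduces the monochrome matrix given below: every row collapses to the luminance weights, so R', G' and B' all equal Y.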

  

A monochrome image can now be easily obtained from a color image by setting K = 0:

 

m00 = 0.299

m01 = 0.587

m02 = 0.114

m10 = 0.299

m11 = 0.587

m12 = 0.114

m20 = 0.299

m21 = 0.587

m22 = 0.114

2. Conversion between RGB and YUV

We give two commonly used forms of equations for conversion between RGB and YUV formats. The first one is recommended by CCIR [3]:

Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B = 0.492(B - Y)        (2.1)
V = 0.615R - 0.515G - 0.100B = 0.877(R - Y)

The second form is used by Intel in their image processing library [4], and may be more suitable for implementation:

Y = 0.299R + 0.587G + 0.114B
U = 0.565(B - Y)                                    (2.2)
V = 0.713(R - Y)

In either case, the resulting values of Y, U and V should be clipped to fit the appropriate range for the YUV format (e.g. [0, 255] for a 24-bit YUV format). The inverse conversion may be accomplished by:

R = Y + 1.403V
G = Y - 0.344U - 0.714V                             (2.3)
B = Y + 1.770U
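A round-trip sketch of the Intel-style conversion (2.2)/(2.3) in Python; clipping to the target range is left to the caller, and because the coefficients are rounded the round trip is only accurate to a fraction of a code value:

```python
# Forward and inverse RGB <-> YUV conversion using the coefficients of
# equations (2.2) and (2.3); all values are floats, no clipping applied.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.565 * (b - y)
    v = 0.713 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 1.403 * v
    g = y - 0.344 * u - 0.714 * v
    b = y + 1.770 * u
    return r, g, b
```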

 

3. White balance operation in RGB and YUV domains

The white balance operation is defined as a gain correction for the red, green and blue components by gain factors AR, AG and AB, respectively, i.e.

Rwb = AR·R,  Gwb = AG·G,  Bwb = AB·B        (3.1)

The new (white-balanced) values for red, green and blue are Rwb, Gwb and Bwb. To derive the equivalent form of this operation in the YUV domain, we proceed as follows. First, write equation (2.1) as

y = C x        (3.2)

where x = (R, G, B)^T is the vector in the RGB space, y = (Y, U, V)^T is the corresponding vector in the YUV space, and C is the appropriate matrix of conversion coefficients. Similarly, (3.1) can be written as

xwb = A x        (3.3)

where xwb is the vector in the RGB space modified by the white balance operation (3.1), and A = diag(AR, AG, AB). We want to determine the corresponding vector ywb in the YUV domain, without having to revert back to the RGB domain. Vector ywb is found by substituting xwb for x in (3.2):

ywb = C xwb = C A x.

Let w = A x, so that x = A^(-1) w. Then ywb = C w, hence w = C^(-1) ywb and x = A^(-1) C^(-1) ywb. Substituting this expression for x back into (3.2) gives:

y = C A^(-1) C^(-1) ywb        (3.4)

This equation provides the connection between y and ywb without involving x or xwb (i.e. without going back to the RGB domain). Manipulating (3.4) and using the fact that for nonsingular matrices (AB)^(-1) = B^(-1) A^(-1) [5], we get that the white balance operation in the YUV domain is

ywb = C A C^(-1) y        (3.5)


Expressing the components of ywb from (3.5), with C and C^(-1) taken from (2.2) and (2.3), we get

Ywb = (0.299AR + 0.587AG + 0.114AB)·Y + 0.202(AB - AG)·U + 0.419(AR - AG)·V
Uwb = (-0.169AR - 0.332AG + 0.501AB)·Y + (0.114AG + 0.887AB)·U + 0.237(AG - AR)·V
Vwb = (0.500AR - 0.419AG - 0.081AB)·Y + (0.144AG - 0.143AB)·U + (0.702AR + 0.299AG)·V

Terms with leading coefficient less than 10^-3 have been dropped.
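Equation (3.5) can be checked numerically: applying the gains in RGB and then converting to YUV should match applying C·A·C^(-1) directly in YUV. A sketch using the rounded coefficients of (2.2)/(2.3) (the rounding is why the two paths agree only approximately, and the gain values are illustrative):

```python
# Verify that white balance in YUV via C * A * C^-1 (equation 3.5)
# matches white balance applied in RGB followed by conversion to YUV.

C = [[0.299, 0.587, 0.114],       # RGB -> YUV, expanded form of (2.2)
     [-0.169, -0.332, 0.501],
     [0.500, -0.419, -0.081]]
C_INV = [[1.0, 0.0, 1.403],       # YUV -> RGB, from (2.3)
         [1.0, -0.344, -0.714],
         [1.0, 1.770, 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def wb_in_yuv(yuv, gains):
    a = [[gains[0], 0, 0], [0, gains[1], 0], [0, 0, gains[2]]]
    m = matmul(matmul(C, a), C_INV)   # C * A * C^-1 from (3.5)
    return matvec(m, yuv)
```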

References

[1]     B. E. Bayer, Color imaging array, US Patent No. 3971065.

[2]     T. Sakamoto, C. Nakanishi and T. Hase, “Software pixel interpolation for digital still cameras suitable for a 32-bit MCU,” IEEE Trans. Consumer Electronics, vol. 44, no. 4, November 1998.

[3]     http://www.northpoleengineering.com/rgb2yuv.htm
