Introduction to the LAB color model
What is LAB?
The LAB color model consists of three components: L is the lightness, while A and B are two color channels. A ranges from green (low values) through grey (middle values) to magenta/red (high values); B ranges from blue (low values) through grey (middle values) to yellow (high values). Because lightness is kept separate from color, this kind of mixing can describe bright colors well, and LAB makes up for some shortcomings of the RGB color space. Therefore, I will try to use LAB to extract the HOG features of the fire.
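As a quick sanity check (a minimal sketch, not from the original post), we can convert a few pure BGR colors and print their LAB values. In OpenCV's 8-bit representation, L is scaled to 0-255 and the A/B channels are offset so that 128 corresponds to neutral grey:

import cv2
import numpy as np

# Print the 8-bit LAB encoding of a few pure BGR colors.
colors = {'green': (0, 255, 0), 'red': (0, 0, 255),
          'blue': (255, 0, 0), 'yellow': (0, 255, 255)}
for name, bgr in colors.items():
    pixel = np.uint8([[bgr]])                       # 1x1 BGR image
    l, a, b = cv2.cvtColor(pixel, cv2.COLOR_BGR2LAB)[0, 0]
    print(name, 'L =', l, 'A =', a, 'B =', b)

Green comes out with a low A value and yellow with a high B value, which matches the channel description above.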
How to use LAB in OpenCV?
OpenCV provides an interface for converting between different color spaces:
# BGR ==> LAB
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
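The converted image can then be split into its three channels to inspect them individually (a small usage sketch that assumes lab holds the result of the conversion above):

L, A, B = cv2.split(lab)
cv2.imshow('L channel', L)
cv2.imshow('A channel', A)
cv2.imshow('B channel', B)
cv2.waitKey(0)
cv2.destroyAllWindows()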
ROI region division
Step 1: Convert the image from the RGB (BGR in OpenCV) color model to the LAB color model.
import cv2
import numpy as np

frame = cv2.imread('picture/fire_2.jpg')
frame = cv2.resize(frame, (400, 400))
frame = cv2.GaussianBlur(frame, (3, 3), 1)      # light denoising before conversion
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
Step 2: Build the mask.
Through experiments, we obtained the LAB range of the fire region and used it to build a mask. To enhance the brightness differences in the image, we apply a morphological operation and a high-pass filtering operation to the mask.
'''
L ==> (200, 255)
A ==> (120, 185)
B ==> (135, 255)
'''
l_m = np.array([200, 120, 135])
u_m = np.array([255, 185, 255])
mask = cv2.inRange(lab, l_m, u_m)
kernel1 = np.ones((15, 15), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel1)  # close small holes and filter out non-ROI areas
mask1 = cv2.GaussianBlur(mask, (3, 3), 0)
mask = mask - mask1  # high-pass filter: subtract the blurred mask to enhance the brightness difference
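The L/A/B bounds above were found experimentally. One way such bounds could be tuned interactively is with OpenCV trackbars; this is only a sketch, and the window and trackbar names are mine, not from the original post:

import cv2
import numpy as np

def nothing(_):
    pass

frame = cv2.imread('picture/fire_2.jpg')
frame = cv2.resize(frame, (400, 400))
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)

cv2.namedWindow('tune')
for name, init in [('L_min', 200), ('A_min', 120), ('B_min', 135)]:
    cv2.createTrackbar(name, 'tune', init, 255, nothing)

while True:
    l_m = np.array([cv2.getTrackbarPos('L_min', 'tune'),
                    cv2.getTrackbarPos('A_min', 'tune'),
                    cv2.getTrackbarPos('B_min', 'tune')])
    u_m = np.array([255, 185, 255])                  # upper bounds kept fixed for brevity
    mask = cv2.inRange(lab, l_m, u_m)
    cv2.imshow('tune', cv2.bitwise_and(frame, frame, mask=mask))
    if cv2.waitKey(30) & 0xFF == 27:                 # press Esc to quit
        break
cv2.destroyAllWindows()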
Step 3: Map the ROI region.
res = cv2.bitwise_and(frame, frame, mask=mask)
img = frame.copy()
ret, thresh = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
# OpenCV 3.x returns three values here; in OpenCV 4.x use:
# contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
binary, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for cnt in contours:
    l = cv2.arcLength(cnt, True)
    if l > 100:  # filter out contours whose perimeter is too small
        x, y, w, h = cv2.boundingRect(cnt)
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
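For reuse on multiple test images, the three steps above could also be wrapped into one helper; this is only a sketch, and the function name detect_fire_regions is my own:

import cv2
import numpy as np

def detect_fire_regions(frame):
    # Step 1: preprocess and convert to LAB.
    frame = cv2.resize(frame, (400, 400))
    frame = cv2.GaussianBlur(frame, (3, 3), 1)
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    # Step 2: threshold in LAB, close small holes, high-pass the mask.
    mask = cv2.inRange(lab, np.array([200, 120, 135]), np.array([255, 185, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    mask = mask - cv2.GaussianBlur(mask, (3, 3), 0)
    # Step 3: find contours and keep the sufficiently large ones.
    _, thresh = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_NONE)[-2]   # works in OpenCV 3.x and 4.x
    return [cv2.boundingRect(c) for c in contours if cv2.arcLength(c, True) > 100]

boxes = detect_fire_regions(cv2.imread('picture/fire_2.jpg'))

The returned boxes are relative to the resized 400x400 image, matching the steps above.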
Result:
We tested three scenarios: a single fire, multiple fires, and a daytime scene. The daytime result is the poorest, but the fire area can still be segmented. Since an artificial neural network will be added at a later stage, the impact of this weakness is relatively small.
HOG features of LAB
Referring to the HOG feature calculation method covered earlier (https://blog.youkuaiyun.com/qq_40776179/article/details/104992748), we can obtain three feature vectors for the fire, one per L/A/B channel. For example:
By combining the three sets of feature vectors, we obtain the overall flame feature. Therefore, LAB gives a more accurate description of the fire.
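A minimal sketch of how such per-channel HOG vectors could be computed with OpenCV's HOGDescriptor and concatenated (the window/block/cell sizes and the function name lab_hog_feature are my own assumptions, not taken from the post above):

import cv2
import numpy as np

def lab_hog_feature(roi_bgr):
    # Resize the candidate fire region to a fixed window and convert to LAB.
    roi = cv2.resize(roi_bgr, (64, 64))
    lab = cv2.cvtColor(roi, cv2.COLOR_BGR2LAB)
    # One HOG descriptor per channel: 64x64 window, 16x16 blocks, 8x8 stride/cells, 9 bins.
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
    feats = [hog.compute(ch).ravel() for ch in cv2.split(lab)]
    # Concatenate the three per-channel vectors into one flame feature.
    return np.concatenate(feats)

It could then be applied to each bounding box found in Step 3, e.g. lab_hog_feature(frame[y:y + h, x:x + w]).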
Conclusion
- The LAB color model has a clear separation between color and brightness, which is helpful for fire recognition. However, when a camera captures a fire, the flame region typically saturates at a brightness of 255 and appears nearly white, so the algorithm still needs to be improved.
- I am now debugging the neural network, and I believe it will give a better recognition result.