Poisson Image Blending: Study Notes

Table of Contents

Stitching two images with a gradient mask to smooth the transition seam (not tested yet)

Principle of Poisson image blending

Usage

Python OpenCV Poisson blending example

Results

Result image

Poisson blending with a custom mask image

Error and fix

Poisson blending, C++ version


Stitching two images with a gradient mask to smooth the transition seam (not tested yet)

OpenCV Development Notes (82): Stitching two images with a gradient mask to smooth the transition seam (by 长沙红胖子Qt)

Principle of Poisson image blending:

From solving the Poisson equation to Poisson image blending - Zhihu
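
For quick reference, the standard formulation (Pérez et al., "Poisson Image Editing", which the linked article walks through) minimizes the difference between the gradient of the result and a guidance field inside the masked region, with the background fixed on the boundary:

$$\min_{f}\iint_{\Omega}\lVert\nabla f-\mathbf{v}\rVert^{2}\,dx\,dy \qquad \text{s.t. } f\big|_{\partial\Omega}=f^{*}\big|_{\partial\Omega},$$

where $\Omega$ is the mask region, $f^{*}$ is the background image and $\mathbf{v}$ is the gradient field of the foreground. The Euler-Lagrange equation of this problem is the Poisson equation with Dirichlet boundary conditions:

$$\Delta f=\operatorname{div}\mathbf{v}\ \text{ over }\Omega,\qquad f\big|_{\partial\Omega}=f^{*}\big|_{\partial\Omega}.$$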

Usage

Three images are needed: a foreground (source) image, a background (destination) image, and a mask image.

The mask image marks the region of the foreground to be blended. The simplest choice is a mask of the same size as the foreground, with the region to blend in white and everything else in black.

center is the coordinate in the background image at which the center of the blended region is placed.
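
As a minimal sketch of these three inputs (file names and the rectangle coordinates below are placeholders, not from the original post), blending only a rectangular sub-region of the foreground could look like this:

# Minimal sketch: blend only a rectangular region of the foreground.
# File names and the rectangle are placeholders - adjust to your own images.
import cv2
import numpy as np

src = cv2.imread("foreground.jpg")        # foreground / source image
dst = cv2.imread("background.jpg")        # background / destination image

# Start with an all-black mask, then mark the region to blend in white
mask = np.zeros(src.shape, src.dtype)
mask[50:200, 80:300] = 255                # hypothetical region of interest

# Place the center of the blended region at the middle of the background
h, w = dst.shape[:2]
center = (w // 2, h // 2)                 # (x, y), must be integers

result = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("result.jpg", result)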

Python OpenCV Poisson blending example

PNG format is not required; ordinary JPEG images work.

# Note: adjust the image paths before running!
import cv2
import numpy as np

# Read images : src image will be cloned into dst
im = cv2.imread("images/wood-texture.jpg")
obj= cv2.imread("images/iloveyouticket.jpg")

# Create an all white mask
mask = 255 * np.ones(obj.shape, obj.dtype)

# The location of the center of the src in the dst
height, width, channels = im.shape
# Note: newer OpenCV versions require integer coordinates here,
# e.g. (width // 2, height // 2); see the error section below.
center = (width/2, height/2)

# Seamlessly clone src into dst and put the results in output
normal_clone = cv2.seamlessClone(obj, im, mask, center, cv2.NORMAL_CLONE)
mixed_clone = cv2.seamlessClone(obj, im, mask, center, cv2.MIXED_CLONE)

# Write results
cv2.imwrite("images/opencv-normal-clone-example.jpg", normal_clone)
cv2.imwrite("images/opencv-mixed-clone-example.jpg", mixed_clone)

Results:

Foreground image:

Background image:

Result image:

Quick summary: when the background of the foreground image is nearly a solid color, it is removed automatically in the blend. This is most noticeable with MIXED_CLONE, which keeps the stronger of the source and destination gradients at each pixel, so flat source regions contribute almost nothing to the result.

Poisson blending with a custom mask image

GitHub - samousavizade/Poisson-Blending: Poisson Blending (poisson equation with dirichlet boundary conditions poisson blending)
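
The code below blends by solving one sparse linear system per color channel. To spell out the system that get_helper_matrix, set_out_submatrix_to_identity and calculate_b_vector assemble, it solves $Af=b$ with one row per pixel:

$$
(Af)_p=
\begin{cases}
4f_p-\sum_{q\in N(p)}f_q, & p\in\Omega\ (\text{mask}\neq 0)\\
f_p, & p\notin\Omega
\end{cases}
\qquad
b_p=
\begin{cases}
4s_p-\sum_{q\in N(p)}s_q, & p\in\Omega\\
t_p, & p\notin\Omega
\end{cases}
$$

where $N(p)$ are the four neighbors of pixel $p$, $s$ is the (translated) source image and $t$ is the target image. Pixels outside the mask are pinned to the target values, which enforces the Dirichlet boundary condition; spsolve then recovers $f$ inside the mask.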

import cv2 as cv
import cv2
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from tqdm import trange

class PoisonBlender:
    MAX_INTENSITY = 255

    def __init__(self, source, target, mask, delta):
        self.source = source
        self.CHANNEL_SIZE = self.source.shape[2]

        self.target = target
        self.height, self.width, _ = self.target.shape

        self.mask = mask
        self.delta_x, self.delta_y = delta

    @staticmethod
    def get_helper_matrix(height, width):
        block = sp.lil_matrix((width, width))
        PoisonBlender.laplacian_taylor_approximation_block(block)
        A = sp.block_diag([block] * height).tolil()
        PoisonBlender.set_semi_main_diameter(A, width)
        return A

    def get_blended_channel(self, A, channel, f_laplacian, flatten_mask):
        flatten_target = self.target[:self.height, :self.width, channel].flatten()
        flatten_source = self.source[:self.height, :self.width, channel].flatten()
        b = self.calculate_b_vector(flatten_mask, f_laplacian, flatten_source, flatten_target)
        f = spsolve(A, b).reshape((self.height, self.width))
        f = PoisonBlender.outlier_intensities_correction(f).astype('uint8')
        return f

    @staticmethod
    def flatten2rectangular(matrix, height, width):
        return matrix.reshape((height, width))

    def translate(self, input, x, y):
        translation_matrix = np.float32([[1, 0, x],
                                         [0, 1, y]])

        return cv.warpAffine(input, translation_matrix, (self.width, self.height))

    @staticmethod
    def calculate_b_vector(flatten_mask, laplacian, source_flat, target_flat):
        b = laplacian.dot(source_flat)
        b[flatten_mask == 0] = target_flat[flatten_mask == 0]
        return b

    @staticmethod
    def outlier_intensities_correction(matrix):
        matrix[matrix <= 0] = 0
        matrix[matrix >= PoisonBlender.MAX_INTENSITY] = PoisonBlender.MAX_INTENSITY
        return matrix

    @staticmethod
    def set_semi_main_diameter(A, width):
        A.setdiag(-1, width)
        A.setdiag(-1, -width)

    @staticmethod
    def laplacian_taylor_approximation_block(block):
        # laplacian in x,y coordinate = 4f(x,y)-f(x-1,y)-f(x+1,y)-f(x,y-1)-f(x,y+1)
        block.setdiag(-1, -1)
        block.setdiag(-1, 1)
        block.setdiag(4)

    def set_out_pixel(self, coefficient_matrix, counter):
        coefficient_matrix[counter, counter + self.width] = 0
        coefficient_matrix[counter, counter - self.width] = 0
        coefficient_matrix[counter, counter + 1] = 0
        coefficient_matrix[counter, counter - 1] = 0
        # set to identity (out of mask region)
        coefficient_matrix[counter, counter] = 1

    def blend(self):
        # translate source image
        self.source = self.translate(self.source, self.delta_x, self.delta_y)
        # binary mask
        self.mask[:self.height, :self.width][self.mask != 0] = 1
        # initiate coefficient matrix in Af=b
        A = self.get_helper_matrix(self.height, self.width)
        # keep a copy of the pure Laplacian (before identity rows are set);
        # it is used to compute the guidance term on the right-hand side
        f_l = A.tocsc()

        # rows of pixels outside the mask are replaced by identity rows
        self.set_out_submatrix_to_identity(A)
        A = A.tocsc()

        # calculate f from Af=b based on flatten matrices
        flatten_mask = self.mask.flatten()
        for channel in trange(self.CHANNEL_SIZE):
            print('blending channel ' + str(channel))
            self.target[:self.height, :self.width, channel] = self.get_blended_channel(A,
                                                                                       channel,
                                                                                       f_l,
                                                                                       flatten_mask)

        return self.target

    def set_out_submatrix_to_identity(self, coefficient_matrix):
        # set to identity submatrix in out of mask region
        for row in range(1, self.height - 1):
            for col in range(1, self.width - 1):
                if self.mask[row, col] == 0:
                    counter = col + row * self.width
                    # set to zero/one
                    self.set_out_pixel(coefficient_matrix, counter)


class PolygonMaker:
    def __init__(self, points, mask_shape):
        self.points = points
        h, w, _ = mask_shape
        self.mask = np.zeros((h, w), np.uint8)

    class ClickHandler:
        image = None
        POINTS_SIZE = 0

        def __init__(self, image, window_name):
            self.image = image.copy()
            self.window_name = window_name
            cv.imshow(self.window_name, image)

            h, w, _ = self.image.shape
            self.counter = 0
            self.points = []
            print('clicked vertices of polygon:')

        def get_points(self):
            return np.array([[x, y] for x, y in self.points], np.int32)  # np.int is removed in recent NumPy; int32 is what fillConvexPoly expects

        def click_event(self, event, clicked_x, clicked_y, flags, params):
            if event == cv.EVENT_LBUTTONDOWN:
                print(clicked_x, clicked_y)
                point = np.array([clicked_x, clicked_y])
                cv.imshow(self.window_name, self.image)
                self.points.append(point)

    def get_filled_polygon(self):
        return cv.fillConvexPoly(self.mask, self.points, 255)



def main():
    source_path = r"C:\Users\Administrator\Downloads\masks\part_3.jpg"
    target_path = r"C:\Users\Administrator\Downloads\masks\part_2.jpg"
    mask_path=r"C:\Users\Administrator\Downloads\masks\mask.jpg"
    result_path = 'res2.jpg'

    # read source and target image
    source = cv.imread(source_path)
    target = cv.imread(target_path)
    mask_img = cv.imread(mask_path, 0)   # read the mask as grayscale
    mask_img = 255 - mask_img            # invert: the region to blend must be non-zero

    kernel = np.ones((15, 15), np.uint8)

    # Apply morphological dilation to enlarge the mask region
    dilated_image = cv2.dilate(mask_img, kernel, iterations=1)

    # Show the dilated mask and the target image
    # (dilated_image is only displayed here; pass it to PoisonBlender instead of
    #  mask_img if the enlarged region should be blended)
    cv2.imshow('Dilated Image', dilated_image)
    cv2.imshow('target', target)
    cv2.waitKey()
    cv2.destroyAllWindows()

    delta = (0, 0)                       # no extra translation of the source

    poisson_blend_result = PoisonBlender(source,
                                         target,
                                         mask_img,
                                         delta).blend()

    # save
    cv.imwrite(result_path, poisson_blend_result)


if __name__ == '__main__':
    main()
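
Side note: the PolygonMaker and ClickHandler classes defined above are not used in main(). A minimal sketch (hypothetical wiring, not part of the original script) of how they could be combined to draw a convex polygon mask interactively:

# Hypothetical helper: build a convex polygon mask by clicking its vertices
# in a window, then pressing any key to finish.
def make_mask_interactively(source):
    window_name = 'select polygon'
    handler = PolygonMaker.ClickHandler(source, window_name)
    cv.setMouseCallback(window_name, handler.click_event)
    cv.waitKey(0)                       # click the vertices, then press a key
    cv.destroyAllWindows()
    points = handler.get_points()       # clicked vertices as an Nx2 int array
    return PolygonMaker(points, source.shape).get_filled_polygon()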

Error and fix:

Traceback (most recent call last):
  File "F:/biadu_down/yolov5-face-master/data/ronghe/bosong.py", line 20, in <module>
    mixed_clone = cv2.seamlessClone(obj, dst, mask, center, cv2.MIXED_CLONE)
cv2.error: OpenCV(4.5.4) :-1: error: (-5:Bad argument) in function 'seamlessClone'
> Overload resolution failed:
>  - Can't parse 'p'. Sequence item with index 0 has a wrong type
>  - Can't parse 'p'. Sequence item with index 0 has a wrong type

Cause: the data type is wrong. With

center = (width/2, height/2)

the coordinates are floats, but seamlessClone expects integer coordinates. The fix is to use integer division:

center = (width//2, height//2)

Complete code after the fix:


import cv2
import numpy as np

if __name__ == '__main__':

    # Read images : src image will be cloned into dst
    dst = cv2.imread("diban.jpg")
    obj= cv2.imread("zimu.jpg")

    # Create an all white mask
    mask = 255 * np.ones(obj.shape, obj.dtype)

    # The location of the center of the src in the dst
    height,width, channels = dst.shape
    center = (width//2,height//2)

    # Seamlessly clone src into dst and put the results in output
    normal_clone = cv2.seamlessClone(obj, dst, mask, center, cv2.NORMAL_CLONE)
    mixed_clone = cv2.seamlessClone(obj, dst, mask, center, cv2.MIXED_CLONE)

    # Write results
    cv2.imshow("opencv-normal-clone-example.jpg", normal_clone)
    cv2.imshow("opencv-mixed-clone-example.jpg", mixed_clone)

    cv2.waitKey()

Poisson blending, C++ version:

https://github.com/Erkaman/poisson_blend

(The linked repository is a standalone C++ implementation; the snippet below uses OpenCV's built-in seamlessClone instead.)

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

using namespace std;
using namespace cv;

int main()
{
    Mat imgL = imread("data/apple.jpg");
    Mat imgR = imread("data/orange.jpg");

    int imgH = imgR.rows;
    int imgW = imgR.cols;

    // Mask covering the left half of the source image
    Mat mask = Mat::zeros(imgL.size(), CV_8UC1);
    mask(Rect(0, 0, imgW / 2, imgH)).setTo(255);
    cv::imshow("mask", mask);

    // Place the cloned region centered on the left quarter of the destination
    Point center(imgW / 4, imgH / 2);

    Mat blendImg;
    seamlessClone(imgL, imgR, mask, center, blendImg, NORMAL_CLONE);

    cv::imshow("blendimg", blendImg);
    waitKey();
    return 0;
}
