1 Introduction
Multimodal image fusion aims to combine relevant information from images acquired with different sensors. In medical imaging, fused images play an essential role in both standard and automated diagnosis. In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning. The proposed method is general and can be employed for different medical imaging modalities. Unlike many current medical fusion methods, the proposed approach does not suffer from intensity attenuation or loss of critical information. Specifically, the images to be fused are decomposed into coupled and independent components estimated using sparse representations with identical supports and a Pearson correlation constraint, respectively. An alternating minimization algorithm is designed to solve the resulting optimization problem. The final fusion step uses the max-absolute-value rule. Experiments are conducted using various pairs of multimodal inputs, including real MR-CT and MR-PET images. The resulting performance and execution times show the competitiveness of the proposed method in comparison with state-of-the-art medical image fusion methods.
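In short, each input is split into a coupled component (shared structure, enforced through sparse codes with identical supports) and an independent component (modality-specific content, kept distinct through the Pearson correlation constraint), and the final merging uses the max-absolute-value rule: at each position, the entry with the larger magnitude is kept. Below is a minimal MATLAB sketch of that rule only, applied to two toy coefficient matrices A1 and A2; in the released code the rule is applied inside its own fusion routine (Fuse_color in the script of Section 2), which may differ in detail.

% max-absolute-value fusion rule on two toy coefficient matrices (illustration only)
A1 = [ 0.2 -0.9;  0.0  0.4];
A2 = [-0.5  0.3;  0.7 -0.1];
mask = abs(A1) >= abs(A2);      % true where the first input dominates in magnitude
AF = A1.*mask + A2.*(~mask);    % fused coefficients: [-0.5 -0.9; 0.7 0.4]
disp(AF)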
2 Code Excerpt
%%% color-greyscale multimodal image fusion (functional-anatomical)
clear
% clc
addpath('utilities');

%% fusion problem
% fusion_mods = 'T2-PET';
% fusion_mods = 'T2-TC';
fusion_mods = 'T2-TI';
% fusion_mods = 'Gad-PET';

%% parameters
opts.k = 5;         % maximum nonzero entries in sparse vectors
opts.rho = 10;      % optimization penalty term
opts.plot = false;  % plot decomposition components

%% loading input images
I1rgb = double(imread(['Source_Images\' fusion_mods '_A.png']))/255;
I1ycbcr = rgb2ycbcr(I1rgb);
I1 = I1ycbcr(:,:,1);  % luminance channel of the first (color) input
I2 = double(imread(['Source_Images\' fusion_mods '_B.png']))/255;
if size(I2,3)>1, I2 = rgb2gray(I2); end  % ensure the second input is greyscale

%% performing decomposition and fusion
n = 32; b = 8;
D0 = DCT(n,b); % initializing the dictionaries with DCT matrices
tic;
[~,~,Ie1,Ie2,D1,D2,A1,A2] = perform_Corr_Ind_Decomp(I1,I2,D0,D0,opts); % Decomposition
[IF, IF_int] = Fuse_color(Ie2,Ie1,D2,D1,A2,A1,I1ycbcr); % Fusion
toc; % runtime

%% results
F = uint8(IF*255);
imwrite(F,['Results\' fusion_mods '_F.png']);
figure(23)
subplot 131
imshow(I1rgb,[])
xlabel('I_1')
subplot 132
imshow(I2,[])
xlabel('I_2')
subplot 133
imshow(IF,[])
xlabel('I^F')

%% dictionary atoms
% ID1 = displayPatches(D1);
% ID2 = displayPatches(D2);
%
% figure(37)
% subplot 121
% imshow(ID1)
% xlabel('D1')
% subplot 122
% imshow(ID2)
% xlabel('D2')
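The color input is handled in YCbCr space: only its luminance channel I1 enters the decomposition, and the chrominance channels are reattached afterwards. The sketch below shows the color-reconstruction step that Fuse_color presumably performs at the end; this is an assumption about the routine in the utilities folder, not its actual implementation. The fused intensity IF_int replaces the Y channel of the color input, and the result is converted back to RGB for display and saving.

% hedged sketch: rebuild a color fused image from the fused intensity
IFycbcr = I1ycbcr;               % YCbCr version of the color input
IFycbcr(:,:,1) = IF_int;         % substitute the fused luminance
IF_sketch = ycbcr2rgb(IFycbcr);  % back to RGB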
3 Simulation Results
Running the script on the selected 'T2-TI' pair displays the two source images I_1 and I_2 next to the fused image I^F (figure 23) and writes the fused result to Results\T2-TI_F.png; the tic/toc pair reports the overall runtime of the decomposition and fusion.
4 References
[1] Veshki F G, Ouzir N, Vorobyov S A, et al. Coupled Feature Learning for Multimodal Medical Image Fusion[J]. 2021.
This paper proposes a novel multimodal medical image fusion method that uses coupled dictionary learning to handle images acquired by different sensors. The method avoids intensity attenuation and loss of critical information and is applicable to a variety of imaging modalities. Experiments cover several modality combinations, including MR-CT and MR-PET, and the results demonstrate its competitiveness for medical diagnosis.