Deformable Probability Maps


Model-based image segmentation

CRF-driven deformable model. We developed a topology-independent solution for segmenting objects with texture patterns of any scale, using an implicit deformable model driven by Conditional Random Fields (CRFs). Our model integrates region and edge information as image-driven terms, whereas the probabilistic shape and internal (smoothness) terms use representations similar to those of level-set based methods. The evolution of the model is solved as a MAP estimation problem, where the target conditional probability is decomposed into the internal term and the image-driven term. For the latter, we use discriminative CRFs at two scales, pixel- and patch-based, to obtain smooth probability fields from the corresponding image features.
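As a concrete illustration of this decomposition, the sketch below evolves a level-set function under a curvature (internal smoothness) force plus a region force derived from a CRF foreground posterior. This is a minimal numpy sketch under assumed names (`phi`, `p_fg`, the step sizes are illustrative), not the paper's implementation:

```python
import numpy as np

def evolve_level_set(phi, p_fg, n_iters=200, dt=0.5, mu=0.2):
    """Minimal sketch of a level-set evolution driven by a smooth
    probability field, loosely following the MAP decomposition above:
    an internal (curvature/smoothness) term plus an image-driven term.
    `p_fg` stands in for the pixel/patch CRF posterior of 'object' at
    each pixel; names and constants are illustrative."""
    for _ in range(n_iters):
        # spatial gradients of the embedding function
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx**2 + gy**2) + 1e-8
        # curvature = div(grad(phi)/|grad(phi)|): the internal smoothness term
        ny, nx = gy / norm, gx / norm
        curvature = np.gradient(ny, axis=0) + np.gradient(nx, axis=1)
        # image-driven term: expand where the CRF says 'object' (p_fg > 0.5),
        # shrink where it says 'background'
        region_force = 2.0 * p_fg - 1.0
        phi = phi + dt * (mu * curvature + region_force) * norm
    return phi  # the zero level set of phi is the segmentation boundary
```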

Deformable Probability Maps. Going a step beyond coupling deformable models with classification, we developed Deformable Probability Maps (DPMs) for object segmentation: graphical learning models that incorporate deformable model properties among the sites (cliques). The DPM configuration is described by probabilistic energy functionals, which incorporate shape and appearance, and determine 1D and 2D (boundary and surface) smoothness, consistency of image region features, and topology with respect to the image's salient edges. Similarly to deformable models, DPMs are dynamic, and their evolution is solved as a MAP inference problem.
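Schematically, the DPM posterior over site configurations can be written as a Gibbs distribution whose energy collects the terms listed above; the notation below is a reconstruction for illustration, not the authors' exact formulation:

```latex
% Schematic DPM posterior: x is the configuration of the map sites, I the image.
% E_smooth: 1D/2D (boundary/surface) smoothness; E_region: image region
% feature consistency; E_edge: topology w.r.t. the image's salient edges.
p(\mathbf{x} \mid I) \propto
  \exp\!\big( -[\, E_{\mathrm{smooth}}(\mathbf{x})
  + E_{\mathrm{region}}(\mathbf{x}, I)
  + E_{\mathrm{edge}}(\mathbf{x}, I) \,] \big),
\qquad
\mathbf{x}^{*} = \arg\max_{\mathbf{x}} p(\mathbf{x} \mid I).
```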




Machine Vision-assisted In Situ Ichthyoplankton Imaging System


A collaboration with RSMAS, U of Miami: http://web.mac.com/gavriil/Gavriil_Tsechpenakis/MVISIIS/

R.K. Cowen's team at RSMAS, U of Miami, has designed and built a plankton imaging system (the In Situ Ichthyoplankton Imaging System, ISIIS) capable of imaging large water volumes, with the goal of quantifying even rare plankton in situ. ISIIS produces very high resolution imagery for extended periods of time, necessitating automated data analysis and recognition.

Since we require the identification and quantification of a large number of organisms, we are developing fully automated software for the detection and recognition of organisms of interest using machine vision and learning tools. Our framework aims at (i) detecting all organisms of interest automatically, directly from the raw data, while filtering out noise and out-of-focus instances, (ii) extracting and modeling the appearance of each segmented organism, and (iii) recognizing all the detected organisms simultaneously, in a fully automated manner, using appearance and topology information in a novel classification framework. What differentiates our work from existing systems is that we aim at recognizing all existing organisms simultaneously.
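The three stages can be read as a pipeline; the skeleton below makes the data flow explicit. All callables (`detector`, `feature_extractor`, `joint_classifier`) are placeholders for the components described above, not the actual system's API:

```python
def process_frame(raw_frame, detector, feature_extractor, joint_classifier):
    """Illustrative pipeline skeleton for the three stages:
    (i)   detect candidate organisms in the raw frame, filtering out
          noise and out-of-focus segments;
    (ii)  model the appearance of each segmented organism;
    (iii) classify all detections jointly, so that appearance and the
          topology (relative layout) of co-occurring detections inform
          each other, instead of labeling each organism in isolation."""
    segments = detector(raw_frame)                       # stage (i)
    features = [feature_extractor(s) for s in segments]  # stage (ii)
    labels = joint_classifier(features)                  # stage (iii): joint, not per-segment
    return list(zip(segments, labels))
```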




Integration of active learning in a collaborative Conditional Random Field

We developed an active learning approach for visual multiple object class recognition, using a Conditional Random Field (CRF) formulation. We call our graphical model 'collaborative' because it infers class posteriors in instances of occlusion and missing information by assessing the joint appearance and geometric arrangement of neighboring sites. The model inherently handles scenes containing multiple classes and multiple objects, while using the confidence of its predictions to enforce label uniformity in areas where the evidence supports similarity. Our method uses classification uncertainty to dynamically select new training samples for retraining the discriminative classifiers used in the CRF. We demonstrated the performance of our approach on cluttered scenes containing multiple objects and multiple class instances.
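A minimal sketch of the uncertainty-driven retraining loop is shown below, assuming an sklearn-style classifier with `fit`/`predict_proba` and an `oracle` callable that supplies ground-truth labels; these interfaces are assumptions for illustration, and the actual retraining of the CRF's classifiers is more involved:

```python
import numpy as np

def active_learning_round(model, labeled, unlabeled, oracle, batch_size=10):
    """One round of least-confident uncertainty sampling: the sites whose
    class posterior is least confident are queried, labeled, and added to
    the training set, and the classifier is retrained."""
    X_l, y_l = labeled
    model.fit(X_l, y_l)
    probs = model.predict_proba(unlabeled)           # per-site class posteriors
    uncertainty = 1.0 - probs.max(axis=1)            # 1 - max posterior
    query = np.argsort(uncertainty)[-batch_size:]    # most uncertain sites
    X_new = unlabeled[query]
    y_new = oracle(X_new)                            # ask for the true labels
    X_l = np.vstack([X_l, X_new])
    y_l = np.concatenate([y_l, y_new])
    model.fit(X_l, y_l)                              # retrain the CRF's classifiers
    return model, (X_l, y_l), np.delete(unlabeled, query, axis=0)
```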




Learning-based dynamic coupling of pose estimation (static) and tracking (temporal)

There are generally two major approaches in deformable and articulated object tracking: (i) continuous (or temporal) methods that use both temporal and static information from the input sequence, and (ii) discrete methods, which handle each frame separately, using only static information and some kind of prior knowledge.

Continuous trackers provide high accuracy and low complexity, exploiting the continuity constraints over time, but when they lose track, they usually cannot recover easily. On the other hand, discrete approaches do not suffer from error accumulation over time, giving independent solutions at each time instance, but their accuracy depends on the generality of the prior knowledge they utilize; also, when this prior knowledge is derived from databases, the computational time increases dramatically.

We developed a new framework for 3D tracking that achieves both high accuracy and robustness by combining the aforementioned advantages of the continuous and discrete approaches. Our approach consists of a data-driven dynamic coupling between a continuous tracker and a novel discrete shape estimation method. Our discrete tracker utilizes a database that contains object shape sequences, instead of single shape samples, introducing a temporal continuity constraint. The two trackers work in parallel, giving solutions for each frame separately. While a tightly coupled system would require high computational complexity, our framework instantly chooses the better of the two solutions at each frame, based on an error measure: the actual 3D error, i.e., the difference between the expected 3D shape and the estimated one. When tracking objects with many degrees of freedom and abrupt motions, it is difficult to obtain such 3D information, since no ground-truth shape is available. In our framework, we therefore learn the 3D shape error off-line from the 2D appearance error, i.e., the difference between the tracked object's edges and the edges of the utilized model's projection onto the image plane.
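The per-frame arbitration could look like the sketch below, where `continuous_tracker`, `discrete_tracker`, and `error_regressor` are hypothetical interfaces standing in for the components just described; the key point is that the selection uses the predicted 3D error, obtained from the observable 2D appearance error:

```python
def track_frame(frame, continuous_tracker, discrete_tracker, error_regressor):
    """Sketch of the per-frame selection: both trackers propose a 3D
    shape independently; a regressor, trained offline to map the
    observable 2D appearance error (edge mismatch between the projected
    model and the image) to the unobservable 3D shape error, scores each
    proposal, and the lower predicted 3D error wins."""
    candidates = []
    for tracker in (continuous_tracker, discrete_tracker):
        shape3d = tracker.estimate(frame)
        err2d = tracker.appearance_error(frame, shape3d)  # edge mismatch on the image plane
        err3d_hat = error_regressor.predict(err2d)        # learned 2D -> 3D error mapping
        candidates.append((err3d_hat, shape3d))
    predicted_err, best_shape = min(candidates, key=lambda c: c[0])
    # feed the winning solution back so the continuous tracker can recover from drift
    continuous_tracker.reset_state(best_shape)
    return best_shape
```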




Dynamically adaptive tracking of gestures and facial expressions

Behavioral indicators of deception and behavioral states are extremely difficult for humans to analyze. Our framework aims at analyzing nonverbal behavior in video, by tracking the gestures and facial expressions of an individual who is being interviewed.

The system uses two cameras (one for the face and one for the whole-body view), for analysis at two different scales, and consists of the following modules: (a) head and hands tracking, using Kalman filtering and a data-driven skin-region detection method that adapts to each specific individual, (b) shoulder tracking, based on a novel texture-based edge localization method, (c) 2D facial feature tracking, using a fusion of the KLT tracker and different Active Shape Models, and (d) 3D face and facial feature tracking, using the 2D tracking results and a model-based 3D face tracker. The main advantage of our framework is that we can track both gestures and facial expressions with high accuracy and robustness, at rates above 20 fps.
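For module (a), a constant-velocity Kalman filter over the detected skin-blob centroid is the textbook construction; the sketch below shows one such filter (the adaptive skin detection that supplies the measurements is outside this sketch, and the noise levels are illustrative):

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2D constant-velocity Kalman filter of the kind that can
    drive head/hands tracking from skin-blob centroid measurements."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)                                     # state: [px, py, vx, vy]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0   # dt = 1 frame
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)                                   # process noise
        self.R = r * np.eye(2)                                   # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured blob centroid z = [px, py]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                        # filtered position
```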




The infinite Hidden Markov Random Field model

Hidden Markov random field (HMRF) models are parametric statistical models widely used for image segmentation, as they arise naturally in problems that call for a spatially constrained clustering scheme. A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of segments derived by the image segmentation procedure. Typically, various likelihood-based criteria are employed for this purpose. Nevertheless, such methods often fail to yield satisfactory results, while their use entails a significant computational burden. Recently, Dirichlet process mixture (DPM) models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori.

Inspired by these advances, to resolve the aforementioned issues of HMRF models, we introduced a novel, nonparametric Bayesian formulation for the HMRF model, the infinite HMRF (iHMRF) model, formulated on the basis of a joint DPM and HMRF construction. We derived an efficient variational Bayesian inference algorithm for the proposed model, and we applied it to a series of image segmentation problems demonstrating its advantages over existing learning-based methodologies. 
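The nonparametric ingredient underlying the iHMRF is the Dirichlet process, which variational inference typically handles through a truncated stick-breaking construction. The sketch below illustrates how the (effectively unbounded) state weights arise; it is an illustration of the general construction, not the iHMRF inference code:

```python
import numpy as np

def stick_breaking_weights(alpha, truncation=20, rng=None):
    """Truncated stick-breaking construction of Dirichlet process mixture
    weights: `alpha` is the DP concentration parameter; larger alpha
    spreads mass over more components, i.e. more effective states."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.beta(1.0, alpha, size=truncation)   # stick-breaking fractions
    v[-1] = 1.0                                 # close the truncated stick
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    pi = v * remaining                          # mixture weights, sum to 1
    return pi   # effective number of states = weights with non-negligible mass
```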



Source: http://web.mac.com/gavriil/Gavriil_Tsechpenakis/research_vision.html
