[Paper Collection] HCOMP 2011: Overview and Accepted Papers

The third HCOMP workshop was held on August 8, 2011 in San Francisco, California. The one-day event brought together researchers from around the world to discuss how to build intelligent systems in which humans and computers collaborate, with particular attention to tasks that challenge even state-of-the-art AI algorithms, such as image classification, translation, and protein folding.

This is the third HCOMP; the first two were co-located with KDD, while this one was co-located with AAAI.

Full name: The 3rd Human Computation Workshop (HCOMP 2011)

Date: August 8, 2011 (a full-day event this time)

Location: San Francisco, CA

Accepted papers: one technical report containing 32 papers (16 of which are posters)

Some introductory information from the homepage and CFP

  • Description of human computation: "a relatively new research area that studies how to build intelligent systems that involves human computers, with each of them performing computation (e.g., image classification, translation, and protein folding) that leverage human intelligence, but challenges even the most sophisticated AI algorithms that exist today."
    • Compare with the description from the previous edition: "a relatively new research area that studies the process of channeling the vast internet population to perform tasks or provide data towards solving difficult problems that no known efficient computer algorithms can yet solve."
  • Intersecting fields mentioned:
    • machine learning
    • mechanism and market design
    • information retrieval
    • decision-theoretic planning
    • optimization
    • human computer interaction

Topics listed in the CFP:

• Programming languages, tools and platforms to support human computation

• Domain-specific challenges in human computation

• Methods for estimating the cost, reliability, and skill of labelers

• Methods for designing and controlling workflows for human computation tasks

• Empirical and formal models of incentives in human computation systems

• Benefits of one-time versus repeated labeling

• Design of manipulation-resistance mechanisms in human computation

• Concerns regarding the protection of labeler identities

• Active learning from imperfect human labelers

• Techniques for inferring expertise and routing tasks

• Theoretical limitations of human computation
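Several of these topics (labeler reliability, repeated labeling, quality assurance) revolve around aggregating noisy crowd labels and estimating worker skill. A minimal illustrative sketch in Python follows; the data, function names, and agreement-based reliability measure are my own assumptions for illustration, not taken from any of the papers below:

```python
from collections import Counter, defaultdict

def majority_vote(annotations):
    """Aggregate (worker, item, label) triples into a per-item
    consensus label by simple majority vote."""
    labels_by_item = defaultdict(list)
    for _, item, label in annotations:
        labels_by_item[item].append(label)
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_by_item.items()}

def worker_reliability(annotations, consensus):
    """Estimate each worker's reliability as their agreement rate
    with the consensus labels (a crude stand-in for gold data)."""
    correct, total = defaultdict(int), defaultdict(int)
    for worker, item, label in annotations:
        total[worker] += 1
        correct[worker] += label == consensus[item]
    return {w: correct[w] / total[w] for w in total}

# Hypothetical crowd annotations: (worker, item, label)
annotations = [
    ("w1", "img1", "cat"), ("w2", "img1", "cat"), ("w3", "img1", "dog"),
    ("w1", "img2", "dog"), ("w2", "img2", "dog"), ("w3", "img2", "dog"),
    ("w1", "img3", "cat"), ("w2", "img3", "cat"), ("w3", "img3", "cat"),
]
consensus = majority_vote(annotations)
reliability = worker_reliability(annotations, consensus)
```

Papers in the sessions below (e.g., the Z-score/weighted-voting and quality-control entries) study far more principled versions of this idea, such as jointly estimating true labels and per-worker accuracy.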


Tutorial offered by the workshop

Below is the list of accepted papers (TOC at AAAI; downloadable)

SESSION 1: AI / Machine Learning (4 papers)

  • Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds

          Sudheendra Vijayanarasimhan, Kristen Grauman (UT Austin)

  • Robust Active Learning using Crowdsourced Annotations for Activity Recognition

          Liyue Zhao, Gita Sukthankar (UCF); Rahul Sukthankar (Google Research/CMU)

  • Beat the Machine: Challenging workers to find the unknown unknowns

          Josh Attenberg, Panos Ipeirotis, Foster Provost (NYU)

  • Human Intelligence Needs Artificial Intelligence

          Daniel Weld, Mausam, Peng Dai (University of Washington)

Poster Session 1 (8 papers)

  • Towards Task Recommendation in Micro-Task Markets

          Vamsi Ambati, Stephan Vogel, Jaime Carbonell (CMU)

  • On Quality Control and Machine Learning in Crowdsourcing

          Matthew Lease (UT Austin)

  • CollabMap: Augmenting Maps using the Wisdom of Crowds

          Ruben Stranders, Sarvapali Ramchurn, Bing Shi, Nicholas Jennings (University of Southampton)

  • Improving Consensus Accuracy via Z-score and Weighted Voting

          Hyun Joon Jung, Matthew Lease (UT Austin)

  • Making Searchable Melodies: Human vs. Machine

          Mark Cartwright, Zafar Rafii, Jinyu Han, Bryan Pardo (Northwestern University)

  • PulaCloud: Using Human Computation to Enable Development at the Bottom of the Economic Ladder

          Andrew Schriner (University of Cincinnati); Daniel Oerther (Missouri University of Science and Technology); James Uber (University of Cincinnati)

  • Towards Large-Scale Processing of Simple Tasks with Mechanical Turk

          Paul Wais, Shivaram Lingamneni, Duncan Cook, Jason Fennell, Benjamin Goldenberg, Daniel Lubarov, David Marin, Hari Simons (Yelp, inc.)

  • Learning to Rank From a Noisy Crowd

          Abhimanu Kumar, Matthew Lease (UT Austin)

SESSION 2: The “Humans” in the Loop (3 papers)

  • Worker Motivation in Crowdsourcing and Human Computation

           Nicolas Kaufmann, Thimo Schulze (University of Mannheim)

  • Honesty in an Online Labor Market

          Winter Mason, Siddharth Suri, Daniel Goldstein (Yahoo! Research)

  • Building a Persistent Workforce on Mechanical Turk for Multilingual Data Collection

          David Chen (UT Austin); William Dolan (Microsoft Research)

SESSION 3: Tools / Applications (2 papers)

  • CrowdSight: Rapidly Prototyping Intelligent Visual Processing Apps

          Mario Rodriguez (UCSC); James Davis

  • Digitalkoot: Making Old Archives Accessible Using Crowdsourcing

          Otto Chrons, Sami Sundell (Microtask)

SESSION 4: Quality Control (4 papers)

  • Error Detection and Correction in Human Computation: Lessons from the WPA

          David Alan Grier (GWU)

  • Programmatic gold: targeted and scalable quality assurance in crowdsourcing

          Dave Oleson, Vaughn Hester, Alex Sorokin, Greg Laughlin, John Le, Lukas Biewald (CrowdFlower)

  • An Iterative Dual Pathway Structure for Speech-to-Text Transcription

          Beatrice Liem, Haoqi Zhang, Yiling Chen (Harvard University)

  • An Extendable Toolkit for Managing Quality of Human-based Electronic Services

          David Bermbach, Robert Kern, Pascal Wichmann, Sandra Rath, Christian Zirpins (KIT)

Poster Session 2 (8 papers)

  • Developing Scripts to Teach Social Skills: Can the Crowd Assist the Author?

          Fatima Boujarwah, Jennifer Kim, Gregory Abowd, Rosa Arriaga (Georgia Tech)

  • CrowdLang - First Steps Towards Programmable Human Computers for General Computation

          Patrick Minder, Abraham Bernstein (University of Zurich)

  • Ranking Images on Semantic Attributes using CollaboRank

          Jeroen Janssens, Eric Postma, Jaap Van den Herik (Tilburg University)

  • Artificial Intelligence for Artificial Artificial Intelligence

          Peng Dai, Mausam, Daniel Weld (University of Washington)

  • One Step beyond Independent Agreement: A Tournament Selection Approach for Quality Assurance of Human Computation Tasks

          Yu-An Sun, Shourya Roy (Xerox); Greg Little (MIT CSAIL)

  • Turkomatic: Automatic, Recursive Task and Workflow Design for Mechanical Turk

          Anand Kulkarni, Matthew Can, Bjoern Hartmann (UC Berkeley)

  • MuSweeper: Collect Mutual Exclusions with Extensive Game

          Tao-Hsuan Chang, Cheng-wei Chan, Jane Yung-jen Hsu (National Taiwan University)

  • MobileWorks: A Mobile Crowdsourcing Platform for Workers at the Bottom of the Pyramid

          Prayag Narula, Philipp Gutheim, David Rolnitzky, Anand Kulkarni, Bjoern Hartmann (UC Berkeley)

SESSION 5: Pricing and Allocation (3 papers)

  • What’s the Right Price? Pricing Tasks for Finishing on Time

          Siamak Faridani, Bjoern Hartmann (UC Berkeley); Panos Ipeirotis (NYU)

  • Pricing Mechanisms for Online Labor Market

          Yaron Singer, Manas Mittal (UC Berkeley EECS)

  • Labor Allocation in Paid Crowdsourcing: Experimental Evidence on Positioning, Nudges and Prices

          John Horton (oDesk); Dana Chandler (MIT)

Reposted from: https://www.cnblogs.com/yuquanlaobo/archive/2011/12/23/2295116.html
