What Computers Do: Model, Connect, Engage

This article discusses how, over the next thirty years, computers will interact with the physical world in non-traditional ways, through applications such as self-driving cars, smart glasses, and virtual-reality conferencing systems, and it explains the hardware, algorithmic, abstraction, and probability principles behind these innovations.
Butler Lampson

Microsoft Research

Abstract. Every 30 years there is a new wave of things that computers do. Around 1950 they began to model events in the world (simulation), and around 1980 to connect people (communication). Since 2010 they have begun to engage with the physical world in a non-trivial way (embodiment—giving them bodies). Today there are sensor networks like the Inrix traffic information system, robots like the Roomba vacuum cleaner, and cameras that can pick out faces and even smiles. But these are just the beginning. In a few years we will have cars that drive themselves, glasses that overlay the person you are looking at with their name and contact information, telepresence systems that make most business travel unnecessary, and other applications as yet unimagined.
Computer systems are built on the physical foundation of hardware (steadily improving according to Moore’s law) and the intellectual foundations of algorithms, abstraction and probability. Good systems use a few basic methods: approximate, incrementally change, and divide and conquer. Latency, bandwidth, availability and complexity determine performance. In the future systems will deal with uncertainty much better than today, and many of them will be safety critical and hence much more dependable.

Extended Abstract
The first uses of computers, around 1950, were to model or simulate other things. Whether the target is a nuclear weapon or a payroll, the method is the same: build a computer system that behaves in some important ways like the target, observe the system, and infer something about the behavior of the target. The key idea is abstraction: there is an ideal system, often defined by a system of equations, which behaves like both the target system and the computer model. Modeling has been enormously successful; today it is used to understand, and often control, galaxies, proteins, inventories, airplanes in flight and many other systems, both physical and conceptual, and it has only begun to be exploited.
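As a toy illustration of this pattern (my example, not from the paper), the "ideal system" below is a one-line differential equation for a falling object with air drag, and the computer model steps it forward in time to predict what the physical target would do:

```python
# A toy model (not from the paper): the "ideal system" is the differential
# equation dv/dt = g - (drag/mass) * v for a falling object with air drag;
# the computer model steps it forward in time to predict the target's behavior.

def simulate_fall(mass=1.0, drag=0.1, dt=0.01, t_end=5.0, g=9.81):
    """Euler-integrate dv/dt = g - (drag/mass) * v; return velocity at t_end."""
    v, t = 0.0, 0.0
    while t < t_end:
        v += (g - (drag / mass) * v) * dt
        t += dt
    return v

print(f"velocity after 5 s: {simulate_fall():.1f} m/s")
# Terminal velocity is mass * g / drag = 98.1 m/s; after 5 s the model predicts
# roughly 39 m/s, still well below it.
```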
Models can be very simple or enormously complex, quite sketchy or very detailed, so they can be adapted to the available hardware capacity even when it is very small. Using early computers to connect people was either impossible or too expensive, compared to letters, telephones and meetings. But around 1980 Moore’s law improvements in digital hardware made it economic to use computers for word processing, e-mail, mobile phones, the web, search, music, social networks, e-books, and video. Much of this communication is real time, but even more involves stored information, often many petabytes of it.
So modeling and connection are old stories—there must be little more to do. Not so. Both the physical and the conceptual worlds are enormously complex, and there are great opportunities to model them more accurately: chemical reactions, airplane wings, disposable diapers, economies, and social networks are still far from being well understood. Telepresence is still much worse than face-to-face meetings between people, real time translation of spoken language is primitive, and the machine can seldom understand what the user doing a search is actually looking for. So there’s still lots of opportunity for innovations in modeling and connection. This is especially true in education, where computers could provide teachers with power tools. Nonetheless, I think that the most exciting applications of computing in the next 30 years will engage with the physical world in a non-trivial way. Put another way, computers will become embodied. Today this is in its infancy, with surgical robots and airplanes that are operated remotely by people, autonomous vacuum cleaners, adaptive cruise control for cars, and cellphone-based sensor networks for traffic data. In a few years we will have cars that drive themselves, prosthetic eyes and ears, health sensors in our homes and bodies, and effective automated personal assistants. I have a very bad memory for people’s names and faces, so my own dream (easier than a car) is a tiny camera I can clip to my shirt that will whisper in my ear, “That’s John Smith, you met him in Los Angeles last year.” In addition to saving many lives, these systems will have vast economic consequences. Autonomous cars alone will make the existing road system much more productive, as well as freeing drivers to do something more useful or pleasant, and using less fuel.
What is it that determines when a new application of computing is feasible? Usually it’s improvements in the underlying hardware, driven by Moore’s law (2× gain / 18 months). Today’s what-you-see-is-what-you-get word processors were not possible in the 1960s, because the machines were too slow and expensive. The first machine that was recognizably a modern PC was the Xerox Alto in 1973, and it could support a decent word processor or spreadsheet, but it was much too small and slow to handle photographs or video, or to store music or books. Engagement needs vision, speech recognition, world modeling, planning, processing of large scale data, and many other things that are just beginning to become possible at reasonable cost. It’s not clear how to compare the capacity of a human brain with that of a computer, but the brain’s 10^15 synapses (connections) and cycle time of 5 ms yield 2×10^17 synapse events/sec, compared to 10^12 bit events/sec for a 2 GHz, 8 core, 64 bit processor. It will take another 27 years of Moore’s law to make these numbers equal, but a mouse has only 10^12 synapses, so perhaps we’ll have a digital mouse in 12 years (but it will draw more power than a real mouse).
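The arithmetic behind these estimates is easy to check. Here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above and rounding up to whole years of Moore's-law doublings:

```python
import math

# Figures quoted in the text above.
human_synapses = 1e15          # synapses in a human brain
mouse_synapses = 1e12          # synapses in a mouse brain
cycle_time = 5e-3              # 5 ms per synapse event
cpu_rate = 2e9 * 8 * 64        # bit events/sec: 2 GHz x 8 cores x 64 bits ~ 1e12

human_rate = human_synapses / cycle_time   # 2e17 synapse events/sec
mouse_rate = mouse_synapses / cycle_time   # 2e14 synapse events/sec

def years_to_catch_up(target_rate, doubling_period_years=1.5):
    """Years of Moore's-law doublings (2x per 18 months) to close the gap."""
    doublings = math.log2(target_rate / cpu_rate)
    return math.ceil(doublings * doubling_period_years)

print(f"human brain: {human_rate:.0e} events/sec, ~{years_to_catch_up(human_rate)} years to match")
print(f"mouse brain: {mouse_rate:.0e} events/sec, ~{years_to_catch_up(mouse_rate)} years to match")
# Prints roughly 27 years for the human brain and 12 years for the mouse.
```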
Hardware is not the whole story, of course. It takes software to make a computer do anything, and the intellectual foundations of software are algorithms (for making each machine cycle do more useful work) and abstraction (for mastering complexity). We measure a computer or communication system externally by its bandwidth (jobs done per unit time), latency (start to finish time for one job) and availability (probability that a job gets done on time). Internally we measure the complexity, albeit much less precisely; it has something to do with how many component parts there are, how many and how complex are the connections between parts, and how well we can organize groups of parts into a single part with only a few external connections. There are many methods for building systems, but most of them fit comfortably under one of three headings: Approximate, Increment, and Divide and conquer—AID for short.
• An approximate result is usually a good first step that’s easy to take, and often suffices. Even more important, there are many systems in which there is no right answer, or in which timeliness and agility are more important than correctness: internet packet delivery, search engines, social networks, even retail web sites. These systems are fundamentally different from the flight control, accounting, word processing and email systems that are the traditional bread and butter of computing.
• Incrementally adjusting the state as conditions change, rather than recomputing it from scratch, is the best way to speed up a system (lacking a better algorithm). Caches in their many forms, copy on write, load balancing, dynamic scale out, and just in time compilation are a few examples. In development, it’s best to incrementally change and test a functioning system. Device drivers, apps, browser plugins and JavaScript incrementally extend a platform, and plug and play and hot swapping extend the hardware.
• Divide and conquer is the best single rule: break a big problem down into smaller pieces. Recursion, path names such as file or DNS names, redo logs for failure recovery, transactions, striping and partitioning, and replication are examples. Modern systems are structured hierarchically, and they are built out of big components such as an operating system, a database, a browser or a vision system such as Kinect. (A toy sketch combining this method with an incremental cache appears after this list.)
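As a toy illustration of two of these methods working together (my example, not from the paper), the sketch below computes edit distance by divide and conquer, splitting the problem into subproblems on shorter prefixes, while a cache reuses subproblems already solved instead of recomputing them from scratch:

```python
from functools import lru_cache

# Divide and conquer: the answer for (a, b) is built from answers for shorter
# prefixes. Incremental: lru_cache keeps subproblems already solved, so they
# are reused rather than recomputed from scratch.

@lru_cache(maxsize=None)
def edit_distance(a: str, b: str) -> int:
    if not a:
        return len(b)
    if not b:
        return len(a)
    if a[-1] == b[-1]:
        return edit_distance(a[:-1], b[:-1])
    return 1 + min(
        edit_distance(a[:-1], b),        # delete the last character of a
        edit_distance(a, b[:-1]),        # insert b's last character into a
        edit_distance(a[:-1], b[:-1]),   # substitute one character
    )

print(edit_distance("kitten", "sitting"))   # 3
```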
For engagement, algorithms and abstraction are not enough. Probability is also essential, since the machine’s model of the physical world is necessarily uncertain. We are just beginning to learn how to write programs that can handle uncertainty. They use the techniques of statistics, Bayesian inference and machine learning to combine models of the connections among random variables, both observable and hidden, with observed data to learn parameters of the models and then to infer hidden variables such as the location of vehicles on a road from observations such as the image data from a camera.
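A deliberately simplified sketch of this kind of inference, assuming a toy road divided into five cells and a camera that reports the correct cell 80% of the time (the numbers and names here are invented for illustration):

```python
import numpy as np

# A toy Bayes filter (not from the paper): the hidden variable is which of
# five road cells a vehicle occupies; the observation is a noisy camera
# reading that reports the correct cell with probability 0.8.

N_CELLS = 5

def likelihood(obs, hit=0.8):
    """P(observation = obs | position) for every possible position."""
    lik = np.full(N_CELLS, (1.0 - hit) / (N_CELLS - 1))
    lik[obs] = hit
    return lik

def bayes_update(belief, obs):
    """Posterior is proportional to likelihood times prior, then normalized."""
    posterior = likelihood(obs) * belief
    return posterior / posterior.sum()

belief = np.full(N_CELLS, 1.0 / N_CELLS)   # start with no idea where the vehicle is
for obs in [2, 2, 3]:                      # three noisy camera readings
    belief = bayes_update(belief, obs)

print(np.round(belief, 3))   # most of the probability mass ends up on cell 2
```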
Some applications of engagement are safety critical, such as driving a car or performing surgery, and these need to be much more dependable than typical computer systems. There are methods for building dependable systems: writing careful specifications of their desired behavior, giving more or less formal proofs that their code actually implements the specs, and using replicated state machines to ensure that the system will work even when some of its components fail. Today these methods only work for fairly simple systems. There’s much to be learned about how to scale them up, and also about how to design systems so that the safety critical part is small enough to be dependable.
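A deliberately simplified sketch of one of these methods, masking a single faulty replica by majority vote over replicated outputs; real replicated state machine systems also need a consensus protocol such as Paxos so the replicas process the same inputs in the same order, which this toy omits:

```python
from collections import Counter

# Mask a single faulty replica by majority vote over the outputs of
# replicas that all ran the same deterministic computation.

def majority(outputs):
    """Return the output produced by a strict majority of replicas, else None."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

# Three replicas; one is faulty and returns garbage.
print(majority([42, 42, -1]))   # 42: the single failure is masked
```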
Engagement can be very valuable to users, and when it is they will put up with a lot of hassle to get the value; consider an artificial eye for a blind person, for example. But other applications, such as a system that tells you which of your friends are nearby, are examples of ubiquitous computing that although useful, have only modest value. These systems have to be very well engineered, so that the hassle of using them is less than their modest value. Many such systems have failed because they didn’t meet this requirement.
The computing systems of the next few decades will expand the already successful application domains that model the world and connect people, and exploit the new domain that engages computers with the physical world in non-trivial ways. They will continue to be a rich source of value to their users, who will include almost everyone in the world, and an exciting source of problems, both intellectual and practical, for their builders.


