1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors

The power limit has forced a dramatic change in the design of microprocessors. Figure 1.17 shows the improvement in response time of programs for desktop microprocessors over time. Since 2002, the rate has slowed from a factor of 1.5 per year to a factor of 1.2 per year.

  Rather than continuing to decrease the response time of a single program running on a single processor, as of 2006 all desktop and server companies are shipping microprocessors with multiple processors per chip, where the benefit is often more on throughput than on response time. To reduce confusion between the words processor and microprocessor, companies refer to processors as "cores," and such microprocessors are generically called multicore microprocessors. Hence, a "quadcore" microprocessor is a chip that contains four processors, or four cores.

  In the past, programmers could rely on innovations in hardware, architecture, and compilers to double the performance of their programs every 18 months without having to change a line of code. Today, for programmers to get significant improvement in response time, they need to rewrite their programs to take advantage of multiple processors. Moreover, to get the historic benefit of running faster on new microprocessors, programmers will have to continue to improve the performance of their code as the number of cores increases.
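
To make this concrete, the following is a minimal sketch of what such a rewrite can look like; it is not from the text, and the array size, thread count, and the sum_part name are illustrative choices. The same array sum is computed first by one processor and then by splitting the loop into roughly equal slices, one per core, using POSIX threads in C:

#include <pthread.h>
#include <stdio.h>

#define N      1000000
#define NCORES 4                  /* assume a quad-core chip, as in the text above */

static double a[N];

struct slice { int lo, hi; double partial; };

/* Each core sums its own slice of the array. */
static void *sum_part(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->lo; i < s->hi; i++)
        s->partial += a[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        a[i] = 1.0;

    /* Sequential version: one processor does all the work. */
    double seq = 0.0;
    for (int i = 0; i < N; i++)
        seq += a[i];

    /* Parallel version: divide the loop into NCORES roughly equal slices. */
    pthread_t tid[NCORES];
    struct slice s[NCORES];
    int chunk = N / NCORES;
    for (int t = 0; t < NCORES; t++) {
        s[t].lo = t * chunk;
        s[t].hi = (t == NCORES - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_part, &s[t]);
    }

    double par = 0.0;
    for (int t = 0; t < NCORES; t++) {
        pthread_join(tid[t], NULL);          /* wait for that core to finish */
        par += s[t].partial;
    }

    printf("sequential sum = %.0f, parallel sum = %.0f\n", seq, par);
    return 0;
}

On a quad-core microprocessor the threaded version can approach a four-fold speedup on this loop, but only if the slices really do represent equal work and the cost of creating and joining the threads stays small relative to the loop itself; the rest of this section explains why getting that balance right is the hard part.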

  To reinforce how the software and hardware systems work hand in hand, we use a special section, Hardware/Software Interface, throughout the book, with the first one appearing below. These elements summarize important insights at this critical interface.

Hardware/Software Interface

Parallelism has always been critical to performance in computing, but it was often hidden. Chapter 4 will explain pipelining, an elegant technique that runs programs faster by overlapping the execution of instructions. This is one example of instruction-level parallelism, where the parallel nature of the hardware is abstracted away so the programmer and compiler can think of the hardware as executing instructions sequentially.

  Forcing programmers to be aware of the parallel hardware and to explicitly rewrite their programs to be parallel had been the "third rail" of computer architecture, for companies in the past that depended on such a change in behavior failed (see Section 6.15). From this historical perspective, it's startling that the whole IT industry has bet its future that programmers will finally successfully switch to explicitly parallel programming.

  Why has it been so hard for programmers to write explicitly parallel programs?

The first reason is that parallel programming is by definition performance programming, which increases the difficulty of programming. Not only does the program need to be correct, solve an important problem, and provide a useful interface to the people or other programs that invoke it, the program must also be fast. Otherwise, if you don't need performance, just write a sequential program.

  The second reason is that fast for parallel hardware means that the programmer must divide an application so that each processor has roughly the same amount to do at the same time, and that the overhead of scheduling and coordination doesn't fritter away the potential performance benefits of parallelism.

  As an analogy, suppose the task was to write a newspaper story. Eight reporters working on the same story could potentially write a story eight times faster. To achieve this increased speed, one would need to break up the task so that each reporter had something to do at the same time. Thus, we must schedule the sub-tasks. If anything went wrong and just one reporter took longer than the seven others did, then the benefits of having eight writers would be diminished. Thus, we must balance the load evenly to get the desired speedup. Another danger would be if reporters had to spend a lot of time talking to each other to write their sections. You would also fall short if one part of the story, such as the conclusion, couldn't be written until all of the other parts were completed. Thus, care must be taken to reduce communication and synchronization overhead. For both this analogy and parallel programming, the challenges include scheduling, load balancing, time for synchronization, and overhead for communication between the parties. As you might guess, the challenge is stiffer with more reporters for a newspaper story and more processors for parallel programming.
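
To make the scheduling, load-balancing, and synchronization costs concrete, here is a minimal sketch; it is not part of the text, and the item counts and the grab_next_item/do_item names are hypothetical. Four worker threads pull work items of uneven cost from a shared counter protected by a lock: the lock is the synchronization and communication overhead, and handing out items one at a time is what keeps the load balanced when the items differ in size.

#include <pthread.h>
#include <stdio.h>

#define NITEMS   64
#define NWORKERS 4

static int next_item = 0;                        /* shared "assignment desk" */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Hand out the index of the next unprocessed item, or -1 when none remain.
   Every hand-off goes through this one lock, which is the synchronization
   overhead the analogy warns about. */
static int grab_next_item(void)
{
    pthread_mutex_lock(&lock);
    int i = (next_item < NITEMS) ? next_item++ : -1;
    pthread_mutex_unlock(&lock);
    return i;
}

/* Simulate work whose cost grows with the item number, so a static
   "first half / second half" split would leave one thread idle early. */
static void do_item(int i)
{
    volatile double x = 0.0;
    for (long k = 0; k < (i + 1) * 100000L; k++)
        x += 1.0 / (double)(k + 1);
}

static void *worker(void *arg)
{
    int id = *(int *)arg, i, done = 0;
    while ((i = grab_next_item()) >= 0) {
        do_item(i);
        done++;
    }
    printf("worker %d processed %d items\n", id, done);
    return NULL;
}

int main(void)
{
    pthread_t tid[NWORKERS];
    int id[NWORKERS];

    for (int t = 0; t < NWORKERS; t++) {
        id[t] = t;
        pthread_create(&tid[t], NULL, worker, &id[t]);
    }
    for (int t = 0; t < NWORKERS; t++)
        pthread_join(tid[t], NULL);   /* wait for every "reporter" to finish */
    return 0;
}

With this dynamic scheme each thread typically processes a different number of items but roughly the same total amount of work; a fixed, equal-count split of the 64 items would instead give the last thread most of the computation, which is exactly the load imbalance the reporter analogy describes.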

  To reflect this sea change in the industry, the next five chapters in this edition of the book each have a section on the implications of the parallel revolution to that chapter.
