Cache: a place for concealment and safekeeping

http://duartes.org/gustavo/blog/post/intel-cpu-caches

This post shows briefly how CPU caches are organized in modern Intel processors. Cache discussions often lack concrete examples, obfuscating the simple concepts involved. Or maybe my pretty little head is slow. At any rate, here’s half the story on how a Core 2 L1 cache is accessed:

Selecting an L1 cache set (row)

The unit of data in the cache is the line, which is just a contiguous chunk of bytes in memory. This cache uses 64-byte lines. The lines are stored in cache banks or ways, and each way has a dedicated directory to store its housekeeping information. You can imagine each way and its directory as columns in a spreadsheet, in which case the rows are the sets. Then each cell in the way column contains a cache line, tracked by the corresponding cell in the directory. This particular cache has 64 sets and 8 ways, hence 512 cells to store cache lines, which adds up to 32KB of space.
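
To keep the numbers concrete, here is a minimal sketch in C of the geometry just described (the constant names are mine, not Intel's); the arithmetic reproduces the 512 cells and 32KB figure:

    #include <stdio.h>

    /* Hypothetical constants describing the Core 2 L1 data cache geometry. */
    #define LINE_SIZE  64   /* bytes per cache line         */
    #define NUM_SETS   64   /* rows                         */
    #define NUM_WAYS    8   /* columns (banks) per set      */

    int main(void)
    {
        unsigned cells = NUM_SETS * NUM_WAYS;    /* 512 line slots   */
        unsigned bytes = cells * LINE_SIZE;      /* 32768 = 32KB     */
        printf("%u cells, %u bytes (%u KB)\n", cells, bytes, bytes / 1024);
        return 0;
    }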

In this cache’s view of the world, physical memory is divided into 4KB physical pages. Each page has 4KB / 64 bytes == 64 cache lines in it. When you look at a 4KB page, bytes 0 through 63 within that page are in the first cache line, bytes 64-127 in the second cache line, and so on. The pattern repeats for each page, so the 3rd line in page 0 is different from the 3rd line in page 1.
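
As a quick sanity check on that arithmetic, a byte's line number within its page is just its page offset divided by the line size; a small sketch (the function name and addresses are mine):

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define LINE_SIZE 64

    /* Line number (0..63) of a byte within its own 4KB page. */
    static unsigned line_in_page(unsigned long addr)
    {
        return (addr % PAGE_SIZE) / LINE_SIZE;
    }

    int main(void)
    {
        /* Byte 0x0C0 of page 0 and byte 0x10C0 (same offset, next page) have
           the same line number within their pages, but they are different
           lines in memory because the pages differ. */
        printf("%u %u\n", line_in_page(0x0C0), line_in_page(0x10C0));  /* 3 3 */
        return 0;
    }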

In a fully associative cache any line in memory can be stored in any of the cache cells. This makes storage flexible, but it becomes expensive to search for cells when accessing them. Since the L1 and L2 caches operate under tight constraints of power consumption, physical space, and speed, a fully associative cache is not a good trade-off in most scenarios.

Instead, this cache is set associative, which means that a given line in memory can only be stored in one specific set (or row) shown above. So the first line of any physical page (bytes 0-63 within a page) must be stored in row 0, the second line in row 1, etc. Each row has 8 cells available to store the cache lines it is associated with, making this an 8-way associative set. When looking at a memory address, bits 11-6 determine the line number within the 4KB page and therefore the set to be used. For example, physical address 0x800010a0 has 000010 in those bits so it must be stored in set 2.
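
Here is a small sketch of that set selection, using the example address from the text (the shift and mask follow from the 64-byte lines and 64 sets; the function name is mine):

    #include <stdio.h>
    #include <stdint.h>

    /* Set index = bits 11..6 of the physical address:
       skip the 6 offset bits, keep the next 6 bits (64 sets). */
    static unsigned set_index(uint64_t paddr)
    {
        return (unsigned)((paddr >> 6) & 0x3F);
    }

    int main(void)
    {
        printf("set = %u\n", set_index(0x800010a0ULL));   /* prints: set = 2 */
        return 0;
    }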

But we still have the problem of finding which cell in the row holds the data, if any. That’s where the directory comes in. Each cached line is tagged by its corresponding directory cell; the tag is simply the number for the page where the line came from. The processor can address 64GB of physical RAM, so there are 64GB / 4KB == 2^24 of these pages and thus we need 24 bits for our tag. Our example physical address 0x800010a0 corresponds to page number 524,289. Here’s the second half of the story:
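
And the tag is just the physical page number, i.e. the address shifted right by the 12 page-offset bits; a sketch under the same 64GB / 36-bit physical address assumption:

    #include <stdio.h>
    #include <stdint.h>

    /* Tag = physical page number = bits 35..12 of the physical address (24 bits). */
    static uint32_t cache_tag(uint64_t paddr)
    {
        return (uint32_t)((paddr >> 12) & 0xFFFFFF);
    }

    int main(void)
    {
        printf("tag = %u\n", cache_tag(0x800010a0ULL));   /* prints: tag = 524289 */
        return 0;
    }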

Finding cache line by matching tags

Since we only need to look in one set of 8 ways, the tag matching is very fast; in fact, electrically all tags are compared simultaneously, which I tried to show with the arrows. If there’s a valid cache line with a matching tag, we have a cache hit. Otherwise, the request is forwarded to the L2 cache, and failing that to main system memory. Intel builds large L2 caches by playing with the size and quantity of the ways, but the design is the same. For example, you could turn this into a 64KB cache by adding 8 more ways. Then increase the number of sets to 4096 and each way can store 256KB. These two modifications would deliver a 4MB L2 cache. In this scenario, you’d need 18 bits for the tags and 12 for the set index; the physical page size used by the cache is equal to its way size.
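
To see how the bit split falls out of the geometry, here is a sketch that recomputes it for the hypothetical 4MB L2 above (4096 sets, 16 ways, 64-byte lines, 36-bit physical addresses); the output should match the 18/12/6 split:

    #include <stdio.h>

    /* log2 for power-of-two values. */
    static unsigned ilog2(unsigned long x)
    {
        unsigned n = 0;
        while (x > 1) { x >>= 1; n++; }
        return n;
    }

    int main(void)
    {
        unsigned long line = 64, sets = 4096, ways = 16, phys_bits = 36;

        unsigned offset_bits = ilog2(line);                           /* 6  */
        unsigned index_bits  = ilog2(sets);                           /* 12 */
        unsigned tag_bits    = phys_bits - index_bits - offset_bits;  /* 18 */

        printf("cache = %lu KB, way = %lu KB, tag/index/offset = %u/%u/%u bits\n",
               line * sets * ways / 1024, line * sets / 1024,
               tag_bits, index_bits, offset_bits);
        return 0;
    }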

If a set fills up, then a cache line must be evicted before another one can be stored. To avoid this, performance-sensitive programs try to organize their data so that memory accesses are evenly spread among cache lines. For example, suppose a program has an array of 512-byte objects such that some objects are 4KB apart in memory. Fields in these objects fall into the same lines and compete for the same cache set. If the program frequently accesses a given field (e.g., the vtable by calling a virtual method), the set will likely fill up and the cache will start thrashing as lines are repeatedly evicted and later reloaded. Our example L1 cache can only hold the vtables for 8 of these objects due to set size. This is the cost of the set associativity trade-off: we can get cache misses due to set conflicts even when overall cache usage is not heavy. However, due to the relative speeds in a computer, most apps don’t need to worry about this anyway.
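
Here is a tiny sketch of that conflict pattern (illustrative only, with made-up addresses, not a benchmark): the first line of objects placed 4KB apart always maps to the same set, so once more than 8 of them are hot they start evicting each other.

    #include <stdio.h>
    #include <stdint.h>

    #define OBJ_STRIDE 4096   /* hypothetical: objects spaced 4KB apart */

    static unsigned set_index(uint64_t addr) { return (unsigned)((addr >> 6) & 0x3F); }

    int main(void)
    {
        uint64_t base = 0x10000000ULL;   /* made-up base address of the array */

        /* The field at offset 0 of every object (say, the vtable pointer)
           lands in the same set, but there are only 8 ways to hold them. */
        for (int i = 0; i < 10; i++)
            printf("object %d -> set %u\n", i,
                   set_index(base + (uint64_t)i * OBJ_STRIDE));
        return 0;
    }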

A memory access usually starts with a linear (virtual) address, so the L1 cache relies on the paging unit to obtain the physical page address used for the cache tags. By contrast, the set index comes from the least significant bits of the linear address and is used without translation (bits 11-6 in our example). Hence the L1 cache is physically tagged but virtually indexed, helping the CPU to parallelize lookup operations. Because the L1 way is never bigger than an MMU page, a given physical memory location is guaranteed to be associated with the same set even with virtual indexing. L2 caches, on the other hand, must be physically tagged and physically indexed because their way size can be bigger than MMU pages. But then again, by the time a request gets to the L2 cache the physical address was already resolved by the L1 cache, so it works out nicely.
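
The reason this works is that bits 11-6 sit entirely inside the 12-bit page offset, which translation never changes, so the virtual and physical addresses always agree on the set index. A sketch with made-up addresses:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_OFFSET_MASK 0xFFFULL   /* low 12 bits survive translation */

    static unsigned set_index(uint64_t addr) { return (unsigned)((addr >> 6) & 0x3F); }

    int main(void)
    {
        uint64_t vaddr = 0x7f3a12345a40ULL;                            /* made-up virtual address */
        uint64_t paddr = 0x1c0000000ULL | (vaddr & PAGE_OFFSET_MASK);  /* made-up translation     */

        printf("virtual set = %u, physical set = %u\n",
               set_index(vaddr), set_index(paddr));   /* always equal */
        return 0;
    }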

Finally, a directory cell also stores the state of its corresponding cached line. A line in the L1 code cache is either Invalid or Shared (which means valid, really). In the L1 data cache and the L2 cache, a line can be in any of the 4 MESI states: Modified, Exclusive, Shared, or Invalid. Intel caches are inclusive: the contents of the L1 cache are duplicated in the L2 cache. These states will play a part in later posts about threading, locking, and that kind of stuff. Next time we’ll look at the front side bus and how memory access really works. This is going to be memory week.

Update: Dave brought up direct-mapped caches in a comment below. They’re basically a special case of set-associative caches that have only one way. In the trade-off spectrum, they’re the opposite of fully associative caches: blazing fast access, lots of conflict misses.

