
1 Cache Simulation

A cache is a collection of cache lines, each of which may store a block of memory along with some additional information about the block (for example, which addresses it contains). Each block contains the same number of bytes, known as the block size. The block size will always be a power of two. The cache size is the block size multiplied by the number of cache lines (that is, the additional information is not counted in the cache size).

Consider a system with 48-bit addresses and a block size of 16 bytes. Each block will begin with an address divisible by 16. Because 16 = 2^4, the last 4 bits of an address will determine its offset within a particular block. For example, the address ffff0000abcd (hexadecimal) will have offset d. The remaining 44 bits of the address may be considered a block identifier.

If the block size were 8 bytes, then the last 3 bits of the address would be its block offset. The last three bits of ffff0000abcd are 101 (binary) or 5 (hexadecimal). The most-significant 45 bits will be the block identifier.
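The offset and block-identifier split above can be sketched in C with shifts and masks. The helper names here are illustrative, not part of the assignment:

```c
#include <assert.h>

/* For a block size of 2^b bytes, an address splits into a block offset
 * (the low b bits) and a block identifier (everything above them). */
unsigned long block_offset(unsigned long addr, int b) {
    return addr & ((1UL << b) - 1);   /* keep only the low b bits */
}

unsigned long block_id(unsigned long addr, int b) {
    return addr >> b;                 /* discard the low b bits */
}
```

With b = 4, block_offset(0xffff0000abcd) is 0xd and block_id is 0xffff0000abc; with b = 3, the offset is 5, matching the examples above.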


For a cache with a single cache line, that cache line will store the 16 bytes of the block (the data) along with the validity bit and block identifier. Each memory access will first check whether the address is part of the block currently in the cache (if any). Since we are only simulating memory reads and writes, your cache will not need to store the actual blocks.


1.1 Memory operations

Your simulator will simulate two memory operations: reading and writing individual bytes. Your program will read a trace file describing addresses to read from or write to, and will keep track of which blocks are stored in which cache lines in order to determine when these memory operations result in actual reads and writes to memory.

Note that your program only keeps enough information to simulate the behavior of the cache. It does not need to know the actual contents of memory and the blocks stored in the cache lines; this information is, in fact, not available.

Your program will simulate a write-through, write-allocate cache. The operations supported are reading and writing from an address A.

read A Simulate reading the byte at address A. If the block containing A is already loaded in the cache, this is a cache hit. Otherwise, this is a cache miss: note a memory read and load the block into a chosen cache line (see sections 1.2 and 1.4).

write A Simulate writing a byte at address A. If the block containing A is already loaded in the cache, this is a cache hit: note a memory write. Otherwise, this is a cache miss: note a memory read, load the block into a chosen cache line, and then note a memory write.

Your program will track the number of cache hits, cache misses, memory reads, and memory writes.

Note that loading an address or block into the cache means changing the information about a particular cache line to indicate that it holds a particular block. Since your simulator does not simulate the contents of memory, no data will be loaded or stored from the address itself.
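The read and write rules above can be sketched for the single-cache-line case. This is a minimal illustration, assuming a block size of 16 (b = 4); the structure and function names are not required by the assignment:

```c
#include <assert.h>

/* One cache line: a validity bit and a block identifier (no data). */
struct line  { int valid; unsigned long id; };
struct stats { long hits, misses, mem_reads, mem_writes; };

static int hit(struct line *l, unsigned long addr) {
    return l->valid && l->id == (addr >> 4);
}

static void load(struct line *l, unsigned long addr) {
    l->valid = 1;
    l->id = addr >> 4;     /* bookkeeping only; no bytes are stored */
}

void sim_read(struct line *l, unsigned long addr, struct stats *s) {
    if (hit(l, addr)) { s->hits++; }
    else { s->misses++; s->mem_reads++; load(l, addr); }
}

void sim_write(struct line *l, unsigned long addr, struct stats *s) {
    if (hit(l, addr)) { s->hits++; }
    else { s->misses++; s->mem_reads++; load(l, addr); }
    s->mem_writes++;       /* write-through: every write reaches memory */
}
```

Note how write-allocate shows up as the shared miss path (a memory read plus a load) and write-through as the unconditional memory write at the end.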

1.1.1 Prefetching

Prefetching is a technique used to increase the benefit of spatial locality for a cache. In the operations described above, blocks are read from memory only after a cache miss. If a cache uses prefetching, each cache miss will ensure that the next block is also loaded into the cache.

For example, a program accesses address A, which is part of block X. If this access is a cache miss (meaning X is not in the cache), the cache will load block X into the cache. A prefetching cache will also check whether block X + 1 is present in the cache, and load it into the cache if not. Note that the prefetching step is not considered a cache hit or a cache miss.

Algorithm 1 illustrates how to simulate a memory read when prefetching.

Prefetching occurs when loading blocks into the cache. Because cachesim simulates a write-allocate cache, a cache miss during a write will result in loading the relevant block, which may then result in prefetching the next block. Note that prefetching does not increase or decrease the number of writes.
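A prefetching read, in the spirit of Algorithm 1, can be sketched as follows. The cache here is a small fully-associative toy with simple rotating placement (a stand-in for the real replacement policy), and all names are illustrative assumptions:

```c
#include <assert.h>

#define NLINES 4
struct pcache { int valid[NLINES]; unsigned long id[NLINES]; int next; };
struct pstats { long hits, misses, mem_reads; };

static int find(struct pcache *c, unsigned long blk) {
    for (int i = 0; i < NLINES; i++)
        if (c->valid[i] && c->id[i] == blk) return 1;
    return 0;
}

static void insert(struct pcache *c, unsigned long blk) {
    c->valid[c->next] = 1;          /* rotating placement: sketch only */
    c->id[c->next] = blk;
    c->next = (c->next + 1) % NLINES;
}

/* On a miss, load block X, then ensure block X + 1 is also present.
 * The prefetch counts as a memory read but not as a hit or a miss. */
void pref_read(struct pcache *c, unsigned long addr, int b, struct pstats *s) {
    unsigned long blk = addr >> b;
    if (find(c, blk)) { s->hits++; return; }
    s->misses++;
    s->mem_reads++;
    insert(c, blk);
    if (!find(c, blk + 1)) {        /* prefetch the next block if absent */
        s->mem_reads++;
        insert(c, blk + 1);
    }
}
```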

1.2 Mapping

For caches with multiple cache lines, there are several ways of determining which cache lines will store which blocks. Generally, the cache lines will be grouped into one or more sets. Whenever a block is loaded into a cache line, it will always be stored in a cache line belonging to its set. The number of sets will always be a power of two.


1.3 Calculating block, set, and tag


We will simulate a cache in a system with 48-bit addresses. Since int has 32 bits on the iLab machines, you will want to use a long or unsigned long to represent the addresses. Using some other type, such as a string containing hexadecimal digits, is not recommended for efficiency reasons.

Your program will need to extract sequences of bits from the addresses it works with. When simulating a cache with block size B = 2^b, the least significant b bits of an address (the block offset) may be ignored. Code such as X >> b will effectively remove the block offset, leaving only the block identifier.
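Beyond removing the offset, the block identifier itself splits into a set index and a tag. A sketch, assuming 2^s sets (the function names are illustrative):

```c
#include <assert.h>

/* Drop the b offset bits, then take the low s bits of the block
 * identifier as the set index; the remaining high bits are the tag. */
unsigned long set_index(unsigned long addr, int b, int s) {
    return (addr >> b) & ((1UL << s) - 1);
}

unsigned long tag_bits(unsigned long addr, int b, int s) {
    return addr >> (b + s);
}
```

For example, with b = 4 and s = 2, the address 0x1234 has block identifier 0x123, set index 3, and tag 0x48.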


1.4 Replacement policies

Each cache line is either valid, meaning it contains a copy of a block from memory, or invalid, meaning it does not. Initially, all cache lines are invalid. As cache misses occur and blocks are loaded into the cache, those lines will become valid. Eventually, the cache will need to load a block into a set which has no invalid lines. To load this new block, it will select a line in the set according to its replacement policy and put the new block in that line, replacing the block that was already there.

fifo ("First-in, first-out") In this policy, the cache line that was loaded least recently will be replaced.

lru ("Least-recently used") In this policy, the cache line that was accessed least recently will be replaced.


Your simulator will implement fifo. Implementing lru is left for extra credit.

Note that each set contains a fixed, finite number of cache lines. To determine the oldest block in the cache, it is sufficient to store the relative age of each cache line. That is, the first time a block is loaded into a cache line, its relative age is set to 0 and the ages of all other valid cache lines are increased by 1. This ensures that every cache line will have a unique age.
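The relative-age scheme for fifo can be sketched as follows. The set size and names here are illustrative assumptions, not requirements:

```c
#include <assert.h>

#define LINES_PER_SET 4
struct cline { int valid; unsigned long id; int age; };

/* Load a block into a set using relative ages (fifo): prefer an
 * invalid line; otherwise evict the line with the largest age.
 * Other valid lines age by 1, and the new line is youngest (age 0). */
void fifo_load(struct cline set[LINES_PER_SET], unsigned long blk) {
    int victim = 0;
    for (int i = 0; i < LINES_PER_SET; i++) {
        if (!set[i].valid) { victim = i; break; }
        if (set[i].age > set[victim].age) victim = i;
    }
    for (int i = 0; i < LINES_PER_SET; i++)
        if (set[i].valid) set[i].age++;   /* every other line gets older */
    set[victim].valid = 1;
    set[victim].id = blk;
    set[victim].age = 0;
}
```

The linear scan for the victim is O(n), but with a small, fixed number of lines per set that cost is negligible (compare footnote 5 below). For lru, the same ages would instead be reset on every access, not just on loads.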

You will write a program cachesim that reads a trace of memory accesses and simulates the behavior of a cache for that trace, with and without prefetching. The program will be awarded up to 100 points.

3. For direct-mapped caches, no decision needs to be made, because each set only contains one cache line.

4. For a prefetching cache, the prefetched block is considered to be accessed if it is loaded from memory, but is not accessed if it is already in the cache.

5. Sequential search is O(n), but for our purposes n will always be fairly small.

