CS316 – Autumn 2013 – 2014

C. Kozyrakis





CS316 Advanced Multi-Core Systems
HW2 – Superscalar Techniques and Cache Coherence
Due Tuesday 11/12/13 at 5PM
(online via the Submission portal or in the box in front of Gates 303)


Notes

  • Collaboration on this assignment is encouraged (groups of 3 students)

  • This HW set provides practice for the exam. Make sure you work on or review ALL problems in this assignment.


Group: Member1 - _______________ Member2 - ________________ Member3 - ________________
Problem 1: Branch Prediction [9 points]
The figure below shows the control flow of a simple program. The CFG is annotated with three different execution trace paths. For each execution trace, circle which branch predictor (bimodal, local, or gshare) will best predict the branching behavior of the given trace. More than one predictor may perform equally well on a particular trace. However, you are to use each of the three predictors exactly once in choosing the best predictors for the three traces. Assume each trace is executed many times and every node in the CFG is a conditional branch. The branch history register for the local and gshare predictors is limited to 4 bits. Bimodal is a common name for a simple branch history table (BHT). Provide a brief explanation for your answer.
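To make the distinction between the three predictors concrete, here is a minimal sketch (an illustration only, not part of the assignment; the table sizes are assumptions, while the 4-bit history length comes from the problem statement) of how each one indexes its table of 2-bit saturating counters:

    def bimodal_index(pc, table_bits=10):
        # Bimodal (a simple BHT): indexed by the branch address alone, so it
        # can only learn a per-branch bias (mostly taken / mostly not taken).
        return pc & ((1 << table_bits) - 1)

    def local_index(local_history):
        # Local: the branch's own last 4 outcomes select the counter, so it
        # can learn a short repeating pattern of one branch (e.g. TTTN).
        return local_history & 0xF

    def gshare_index(pc, global_history):
        # gshare: the branch address XORed with the last 4 outcomes of all
        # branches, so it can learn correlation between different branches.
        return (pc ^ global_history) & 0xF

    def update_counter(counter, taken):
        # 2-bit saturating counter: 0-1 predict not taken, 2-3 predict taken.
        return min(3, counter + 1) if taken else max(0, counter - 1)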



Trace 1 (circle one):
Bimodal
Local
gshare



Trace 2 (circle one):
Bimodal
Local
gshare



Trace 3 (circle one):
Bimodal
Local
gshare


Problem 2: Renaming [6 points]

Consider a MIPS-like instruction set. All instructions are of the form:

LD DST, offset(addr)

SD SRC, offset(addr)

OP DST, SRC1, SRC2
Part A: [3 points]

Computers spend most of their time in loops, so multiple loop iterations are great places to speculatively find more work to keep CPU resources busy. Nothing is ever easy, though: the compiler emitted only one copy of the loop's code, so even though multiple iterations are handling distinct data, they appear to use the same registers. To keep the register usage of multiple iterations from colliding, we rename their registers. The following code segment shows an example that we would like our hardware to rename.

Loop: LD F2, 0(Rx)

I0: MULTD F5, F0, F2

I1: DIVD F8, F0, F2

I2: LD F4, 0(Ry)

I3: ADDD F6, F0, F4

I4: ADDD F10, F8, F2

I5: SD F4, 0(Ry)

A compiler could have simply unrolled the loop and used different registers to avoid conflicts, but if we expect our hardware to unroll the loop, it must also do the register renaming. How? Assume your hardware has a pool of temporary registers (call them T registers, and assume there are 64 of them, T0 through T63) that it can substitute for those registers designated by the compiler. This rename hardware is indexed by the source register designation, and the value in the table is the T register of the last destination that targeted that register. (Think of these table values as producers, and the src registers are the consumers; it doesn’t much matter where the producer puts its result as long as its consumers can find it.) Consider the code sequence. Every time you see a destination register in the code, substitute the next available T, beginning with T9. Then update all the src registers accordingly, so that true data dependences are maintained. Show the resulting code. (Hint: See following sample)

I0: LD T9, 0(Rx)
I1: MULTD T10, F0, T9
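The same rule can be cross-checked mechanically. Below is a minimal sketch (the tuple encoding and function name are mine, not part of the required answer format) of the renaming procedure described above:

    def rename(instructions, first_free=9):
        table = {}           # architectural register -> T register of last producer
        next_t = first_free  # next available T register (T9 first, per the problem)
        renamed = []
        for op, dst, srcs in instructions:
            # Sources read the table: consume the last producer's T register.
            new_srcs = [table.get(s, s) for s in srcs]
            # The destination gets the next available T register; later
            # readers of dst will now find this T register in the table.
            t = "T%d" % next_t
            next_t += 1
            table[dst] = t
            renamed.append((op, t, new_srcs))
        return renamed

    # The first two instructions of the loop reproduce the sample above:
    code = [("LD", "F2", ["0(Rx)"]), ("MULTD", "F5", ["F0", "F2"])]
    for op, dst, srcs in rename(code):
        print(op, dst, ", ".join(srcs))   # LD T9 0(Rx) / MULTD T10 F0, T9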

Part B: [3 points]

Part A explored simple register renaming: when the hardware register renamer sees a source register, it substitutes the destination T register of the last instruction to have targeted that source register. When the rename table sees a destination register, it substitutes the next available T for it. But superscalar designs need to handle multiple instructions per clock cycle at every stage in the machine, including the register renaming. A simple scalar processor would therefore look up both src register mappings for each instruction, and allocate a new destination mapping per clock cycle. Superscalar processors must be able to do that as well, but they must also ensure that any dest-to-src relationships between the two concurrent instructions are handled correctly. Consider the following sample code sequence:
I0: MULTD F5, F0, F2

I1: ADDD F9, F5, F4

I2: ADDD F5, F5, F2

I3: DIVD F2, F9, F0


Assume that we would like to rename the first two instructions simultaneously (2-way superscalar). Further assume that the next two available T registers to be used are known at the beginning of the clock cycle in which these two instructions are being renamed. Conceptually, what we want is for the first instruction to do its rename-table lookups and then update the table with its destination's T register; the second instruction would then do exactly the same thing, and any inter-instruction dependency would be handled correctly. But there is not enough time to write the T register designation into the rename table and then look it up again for the second instruction, all in the same clock cycle. That register substitution must instead be done live, in parallel with the rename-table update. Figure 2-1 shows a circuit, built from multiplexers and comparators, that accomplishes the necessary on-the-fly register renaming. Your task is to show the cycle-by-cycle state of the rename table and the destination/source register mappings for every instruction of the code. Assume the table starts out with every entry equal to its index (F0 = T0, F1 = T1, ...).
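Since the figure is easier to read alongside a concrete rule, here is a sketch of the bypass it implements (the encoding is mine, and the pre-allocated register names in the example are placeholders, not values you should assume in your answer): the younger instruction's sources are compared against the older instruction's destination, and on a match a mux selects the older instruction's newly allocated T register instead of the stale table entry.

    def rename_pair(i0, i1, table, t_new0, t_new1):
        # i0, i1 = (op, dst, src1, src2); t_new0, t_new1 are the two T
        # registers pre-allocated at the start of the cycle.
        op0, d0, s0a, s0b = i0
        op1, d1, s1a, s1b = i1
        # The older instruction reads the rename table normally.
        r0 = (op0, t_new0, table[s0a], table[s0b])
        # Younger instruction: comparators check each source against the
        # older instruction's destination; on a match, the mux selects the
        # newly allocated T register instead of the (stale) table entry.
        m1a = t_new0 if s1a == d0 else table[s1a]
        m1b = t_new0 if s1b == d0 else table[s1b]
        r1 = (op1, t_new1, m1a, m1b)
        # Both table writes land at the end of the cycle; if both target the
        # same architectural register, the younger instruction's write wins.
        table[d0] = t_new0
        table[d1] = t_new1
        return r0, r1

    table = {"F%d" % i: "T%d" % i for i in range(64)}  # every entry = its index
    r0, r1 = rename_pair(("MULTD", "F5", "F0", "F2"),
                         ("ADDD",  "F9", "F5", "F4"),
                         table, "Ta", "Tb")            # "Ta"/"Tb": placeholders
    print(r0)  # ('MULTD', 'Ta', 'T0', 'T2')
    print(r1)  # ('ADDD', 'Tb', 'Ta', 'T4')  - src F5 bypassed to Ta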

Figure 2-1. Rename table and on-the-fly register substitution logic for superscalar machines. (Note: “src” is source, “dst” is destination.)


You only need to fill in mappings for registers that have been renamed from their starting values (e.g. no need to write in F60=T60, but if F60=T3 that needs to be filled in). Not all fields may be used.
Cycle 0:

Instruction I0:   dst = T____    src1 = T____    src2 = T____

Rename-table entries that differ from their starting values (Architectural F = Machine T):
  F____ = T____    F____ = T____    F____ = T____    F____ = T____

Instruction I1:   dst = T____    src1 = T____    src2 = T____

Rename-table entries that differ from their starting values (Architectural F = Machine T):
  F____ = T____    F____ = T____    F____ = T____

Cycle 1:

Instruction I2:   dst = T____    src1 = T____    src2 = T____

Rename-table entries that differ from their starting values (Architectural F = Machine T):
  F____ = T____    F____ = T____    F____ = T____    F____ = T____

Instruction I3:   dst = T____    src1 = T____    src2 = T____

Rename-table entries that differ from their starting values (Architectural F = Machine T):
  F____ = T____    F____ = T____    F____ = T____


Problem 3: Coherence [30 points]

Part A: Single processor coherence [5 points]

A processor such as the PowerPC G3, widely deployed in Apple Macintosh systems, is primarily intended for use in uniprocessor systems, and hence has a very simple MEI cache coherence protocol. MEI is the same as MESI, except the Shared (S) state is omitted. Identify and discuss one reason why even a uniprocessor design should support cache coherence. Is the MEI protocol of the G3 adequate for this purpose? Why or why not? (Hint: think about Direct Memory Access (DMA))


Part B: MOESIF cache coherence protocol [10 points]

Many modern systems use cache-to-cache transfers to avoid the penalty of going off-chip for a memory access. The MOESIF cache coherence protocol extends the MESI protocol; the semantics of the two additional states are as follows. The O state indicates that the line is shared-dirty: multiple copies may exist, but the other copies are in the S state, and the cache that holds the line in the O state is responsible for writing the line back when it is evicted. The F state indicates that the line is shared-clean: multiple copies may exist in the S state, and the cache that holds the line in the F state is responsible for supplying the data on a fill request. Fill in the table below with the controller's action for every event trigger. If nothing needs to be done, write in "Do nothing." If an event is invalid for a given state, write in "Error."
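Purely as an organizational aid (a skeleton sketch; the two entries shown are standard MOESI behavior, and filling in the remaining entries is exactly the exercise), the controller can be viewed as a map from (state, event) to (action, next state):

    from enum import Enum, auto

    class State(Enum):
        I = auto(); S = auto(); F = auto(); E = auto(); O = auto(); M = auto()

    class Event(Enum):
        LOCAL_READ = auto(); LOCAL_WRITE = auto(); LOCAL_EVICTION = auto()
        BUS_READ = auto(); BUS_WRITE = auto(); BUS_UPGRADE = auto()

    transitions = {
        # Two illustrative entries only; the rest belong in your table.
        (State.I, Event.BUS_READ): ("Do nothing", State.I),   # no copy held here
        (State.M, Event.BUS_READ): ("Supply data to requester", State.O),
    }

    def handle(state, event):
        return transitions.get((state, event), ("<fill in>", state))

    print(handle(State.M, Event.BUS_READ))  # ('Supply data to requester', State.O)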




Event and Local Coherence Controller Responses and Actions (s' refers to the next state)

Current State s    Local Read    Local Write    Local Eviction    Bus Read    Bus Write    Bus Upgrade

Invalid (I)        __________    __________     __________        __________  __________   __________

Shared (S)         __________    __________     __________        __________  __________   __________

Forwarding (F)     __________    __________     __________        __________  __________   __________

Exclusive (E)      __________    __________     __________        __________  __________   __________

Owned (O)          __________    __________     __________        __________  __________   __________

Modified (M)       __________    __________     __________        __________  __________   __________

Part C: Snoopy Coherence [5 points]

Assuming a processor frequency of 1 GHz, a target CPI of 2, a level-2 cache miss rate of 1% per instruction, a snoop-based cache-coherent system with 32 processors, and 8-byte address messages (including command and snoop addresses), compute the inbound and outbound snoop bandwidth required at each processor node.
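One way to set up the arithmetic (a sketch; the parameters are the ones stated above, plus the usual snoopy-system assumption that every L2 miss places one address message on the bus that every other node must snoop):

    freq_hz   = 1e9    # 1 GHz
    cpi       = 2      # target CPI
    miss_rate = 0.01   # L2 misses per instruction
    n_proc    = 32
    msg_bytes = 8      # command + snoop address

    instr_per_sec  = freq_hz / cpi                # 0.5e9 instructions/s
    misses_per_sec = instr_per_sec * miss_rate    # 5e6 misses/s per node

    outbound = misses_per_sec * msg_bytes         # this node's own miss messages
    inbound  = (n_proc - 1) * outbound            # snoops from the other 31 nodes
    print("outbound: %.0f MB/s, inbound: %.0f MB/s"
          % (outbound / 1e6, inbound / 1e6))      # 40 MB/s and 1240 MB/s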



Part D: Memory Consistency [10 points]

Consider a simple multicore processor using a snoopy MSI cache coherence protocol. Each processor has a single, private cache that is direct-mapped with four blocks each holding two words. The initial cache state of the system is shown in the figure below. To simplify the illustration, the cache-address tag contains the full address.




P0
Block    Coherency State    Address Tag
B0       I                  100
B1       S                  108
B2       M                  110
B3       I                  118

P1
Block    Coherency State    Address Tag
B0       I                  100
B1       M                  128
B2       I                  110
B3       S                  118

P2
Block    Coherency State    Address Tag
B0       S                  120
B1       S                  108
B2       I                  110
B3       I                  118

Reads and writes will experience stall cycles depending on the state of the cache line:



  • CPU read and write hits generate no stall cycles

  • CPU read and write misses generate Nmemory and Ncache stall cycles if satisfied by memory and cache, respectively

  • CPU write hits that generate an invalidate incur Ninvalidate stall cycles

  • A write-back of a block, due to either a conflict or another processor’s request to an exclusive block, incurs an additional Nwriteback stall cycles

The exact cycle count for each event is given in the table below:

Parameter      Cycles
Nmemory        100
Ncache         40
Ninvalidate    15
Nwriteback     10
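For checking your arithmetic, the same cost model can be written down directly (a sketch; the helper and its argument names are mine):

    N_MEMORY     = 100  # miss satisfied by memory
    N_CACHE      = 40   # miss satisfied by another cache
    N_INVALIDATE = 15   # write hit that must invalidate other copies
    N_WRITEBACK  = 10   # extra cycles when a modified block must be written back

    def miss_stalls(satisfied_by_cache, writeback_needed=False):
        # Stall cycles for a read or write miss under the rules above.
        stalls = N_CACHE if satisfied_by_cache else N_MEMORY
        if writeback_needed:  # conflict eviction or remote request to an M block
            stalls += N_WRITEBACK
        return stalls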

Sequential consistency (SC) requires that all reads and writes appear to have executed in some total order. This may require the processor to stall in certain cases before committing a read or write instruction. Consider the following code sequence:

write A
read B

where write A results in a cache miss and read B results in a cache hit. Under SC, the processor must stall read B until it can order (and thus perform) write A. Simple implementations of SC stall the processor until the cache receives the data and can perform the write.

Weaker consistency models relax the ordering constraints on reads and writes, reducing the cases in which the processor must stall. The Total Store Order (TSO, or Processor Order) consistency model requires that all writes appear to occur in a total order but allows a processor's reads to pass its own writes. This allows the processor to implement write buffers that hold committed writes that have not yet been ordered with respect to other processors' writes. Reads are allowed to pass (and potentially be satisfied by) the write buffer in TSO, which they could not do under SC.

Assume that one memory operation can be performed per cycle and that operations that hit in the cache or that can be satisfied by the write buffer introduce no stall cycles. Operations that miss incur the latencies given above. How many stall cycles occur prior to each operation under both the SC and TSO consistency models for the cases listed below? Show your work; a correct answer without any work shown will receive no credit.
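As a worked instance of the write A / read B sequence above (an illustration, assuming write A misses and is satisfied by memory): under SC, read B cannot be performed until write A has been ordered, so read B sees roughly Nmemory = 100 stall cycles even though it hits in the cache; under TSO, write A retires into the write buffer and read B proceeds immediately, with 0 stall cycles.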



Instructions                         Stall cycles

P0: write 110 <- 80                  SC:  ________
P0: read 108                         TSO: ________

P0: write 100 <- 80                  SC:  ________
P0: read 108                         TSO: ________

P0: write 110 <- 80                  SC:  ________
P0: write 100 <- 90                  TSO: ________

P0: write 100 <- 80                  SC:  ________
P0: write 110 <- 90                  TSO: ________

P0: read 118                         SC:  ________
P0: write 110 <- 80                  TSO: ________

Problem 4: Instruction Flow and Branch Prediction [30 points]

This problem investigates the effects of branches and control flow changes on program performance for a scalar pipeline (to keep the focus on branch prediction). Branch penalties increase as the number of pipeline stages increases between instruction fetch and branch resolution (or condition and target resolution). This effect of pipelined execution drives the need for branch prediction. This problem explores both static branch prediction in Part C and dynamic branch prediction in Part D. For this problem the base machine is a 5-Stage pipeline.


The 5-Stage Pipeline without Dynamic Branch Prediction


Execution Assumptions:

  1. Unconditional branches execute in the decode stage

  2. Conditional branches execute in the execute stage

  3. Effective address calculation is performed in the execute stage

  4. All memory access is performed in the memory access stage

  5. All necessary forwarding paths exist

  6. The register file is read after write

The next fetch address is chosen between the sequential address generation logic and the branch correction logic: if a mispredicted branch is being corrected, the correction address is chosen over the sequential address as the next fetch address.
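As a sanity check for Part A below (a sketch, assuming the conventional stage numbering IF = 1, ID = 2, EX = 3, MEM = 4, WB = 5): the penalty is the number of sequentially fetched instructions that must be squashed between fetching a branch and resolving it.

    def branch_penalty(resolve_stage, fetch_stage=1):
        # One instruction is fetched per cycle in a scalar pipeline, so every
        # stage between fetch and resolution holds one squashed instruction.
        return resolve_stage - fetch_stage

    print(branch_penalty(2))  # unconditional: resolved in decode
    print(branch_penalty(3))  # conditional: resolved in execute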




Part A: Branch Penalties. [2 points]

What are the branch penalties for unconditional and conditional branches?

Unconditional ______________ Conditional _______________

Part B: No Branch Prediction. [4 points]
This problem will use the insertion sort program. An execution trace, or a sequence of executed basic blocks, is provided for this problem. A basic block is a group of consecutive instructions that are always executed together in sequence.

Example Code: Insertion Sort

BB  Line#  Label    Assembly_Instruction       Comment
1   1      main:    addi r2, r0, ListArray     r2 <- ListArray
    2               addi r3, r0, ListLength    r3 <- ListLength
    3               add  r4, r0, r0            i = 0;
2   4      loop1:   bge  r4, r3, end           while (i < Length) {
3   5               add  r5, r4, r0              j = i;
4   6      loop2:   ble  r5, r0, cont            while (j > 0) {
5   7               addi r6, r5, -1                k = j - 1;
    8               lw   r7, r5(r2)                temp1 = ListArray[j];
    9               lw   r8, r6(r2)                temp2 = ListArray[k];
    10              bge  r7, r8, cont              if (temp1 >= temp2) break;
6   11              sw   r8, r5(r2)                ListArray[j] <- temp2;
    12              sw   r7, r6(r2)                ListArray[k] <- temp1;
    13              addi r5, r5, -1                j--;
    14              ba   loop2                   }
7   15     cont:    addi r4, r4, 1               i++;
    16              ba   loop1                 }
8   17     end:     lw   r1, (sp)              r1 <- Return Pointer
    18              ba   r1


Execution Trace: Sequence of Basic Blocks Executed:

1 2 3 4 5 7 2 3 4 5 6 4 5 6 4 7 2 3 4 5 6 4 5 7 2 3 4 5 6 4 5 6 4 5 7 2 8
[Hint: an alternative way to represent the execution trace above is as the sequence of branch instructions executed, both conditional and unconditional (i.e., ba).]
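To help read the control flow, here is a Python rendering of the listing (a sketch; the comments map back to the line numbers and basic blocks above):

    def insertion_sort(a):                  # main: (BB 1)
        i = 0
        while i < len(a):                   # loop1, line 4 (BB 2): bge -> end
            j = i                           # line 5 (BB 3)
            while j > 0:                    # loop2, line 6 (BB 4): ble -> cont
                k = j - 1                   # line 7 (BB 5)
                if a[j] >= a[k]:            # line 10: bge -> cont (break)
                    break
                a[j], a[k] = a[k], a[j]     # lines 11-12 (BB 6): swap
                j -= 1                      # line 13
                                            # line 14: ba loop2
            i += 1                          # cont, line 15 (BB 7)
                                            # line 16: ba loop1
        return a                            # end (BB 8)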

1. Fill in the branch execution table with an N for not taken and a T for taken. This table records the execution pattern of each (static) branch instruction. Use the execution trace above.


Branch Execution - Assume No Branch Prediction:


Branch Instruction No.    Branch Instruction Execution (dynamic executions of each branch, in order)
(i.e. Line#)              1    2    3    4    5    6    7    8    9    10

4                         __   __   __   __   __   __   __   __   __   __

6                         __   __   __   __   __   __   __   __   __   __

10                        __   __   __   __   __   __   __   __   __   __

14                        __   __   __   __   __   __   __   __   __   __

16                        __   __   __   __   __   __   __   __   __   __

18                        __   __   __   __   __   __   __   __   __   __

Use the branch execution table above to calculate the statistics requested in the following table.


Branch Execution Statistics:


Branch Instr. No.    Times Executed    Times Taken    Times Not Taken    % Taken    % Not Taken

4                    ______            ______         ______             ______     ______

6                    ______            ______         ______             ______     ______

10                   ______            ______         ______             ______     ______

14                   ______            ______         ______             ______     ______

16                   ______            ______         ______             ______     ______

18                   ______            ______         ______             ______     ______

2. How many cycles does the trace take to execute (include all pipeline fill and drain cycles)? [Hint: you don’t need to physically simulate the execution trace, just compute the cycle count.]

3. How many cycles are lost to control dependency stalls?
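A useful identity for both questions (a sketch, assuming the scalar 5-stage pipeline above with no data or structural stalls): total cycles = 4 fill cycles + the number of dynamically executed instructions + the control-stall cycles, so the answer to question 3 is the answer to question 2 minus (4 + the dynamic instruction count).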

