store forwarding does not have fixed latency #81
Oh, and this is even faster: it runs at 4.1 cyc/iter on Sandybridge!

```asm
loop:
    mov edi, [rsp]
    dec ecx
    mov [rsp], edi
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    jnz loop
```
|
It's also possible to have two interleaved chains, each forwarded in exactly 4 cycles on Sandybridge, though I didn't manage to add a third one:

```asm
loop:
    mov eax, [rsp]
    mov [rsp], eax
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    nop
    dec ebp
    mov ebx, [rsp+4]
    mov [rsp+4], ebx
    jnz loop
```
|
Apparently this got improved in Ivybridge, as not just three, but four interleaved 4-cycle chains are easily achieved:

```asm
loop:
    mov eax, [rsp]
    mov [rsp], eax
    nop
    nop
    mov edx, [rsp+12]
    mov [rsp+12], edx
    nop
    nop
    mov ecx, [rsp+8]
    mov [rsp+8], ecx
    nop
    nop
    dec ebp
    mov ebx, [rsp+4]
    mov [rsp+4], ebx
    jnz loop
```
|
Ooh, and I can stick an ALU op in the dependency chain without increasing overall latency, making best-case forwarding latency 3 cycles on SNB and IVB:

```asm
loop:
    mov eax, [rsp]
    not eax
    mov [rsp], eax
    nop
    nop
    nop
    nop
    nop
    mov ebx, [rsp+4]
    not ebx
    mov [rsp+4], ebx
    nop
    dec ebp
    nop
    nop
    jnz loop
```
|
From Twitter:
Right, well, when I talk about replay and horizons, it is only really clear how it works (and even then not all that clear) for register dependencies. So this applies to load-load or load-ALU chains, etc., and there things mostly make sense. It also applies to load-store (e.g., the store data comes from the load, via a register). It doesn't, however, apply to store-load, since that dependency is through memory and there is some "other" mechanism to handle it.

So I think a load-store-load-store chain (with the dependency through the load and store data, as in your examples) is quite different. From a reg-reg scheduling-dependency point of view I would see it as many decoupled load-store pairs, with some "other" mechanism (described in part here) handling the store-load part. So at least to me it is not surprising that there are no load replays here.

Of course, this doesn't really explain why there are so many store replays: but a long replay chain involving multiple stores (which, as I think you are suggesting, does not occur) is not the only way: you could have the same store replayed multiple times. E.g., maybe in your second example the store is dispatched 2 or 3 cycles after the load (best-case latency?), then again every cycle thereafter when it finds that it didn't get its data. Or maybe multiple stores are awoken by one load, because of a limitation in the way waiting uops are tracked in the "load matrix" which holds uops waiting on loads. |
So this loop runs in 4 cycles per iteration? |
Amazingly, yes! |
Yeah, I think that indicates best-case latency is 3 cycles on those chips too, better than I had thought (not that I ever tested it: it's just that SKL advertised improvements in SLF latency: but maybe that was in the "other" scenario where the load arrives too early: I generally measure 4.5 cycles in that case). FWIW, maybe you can create a 3-cycle loop without the ALU op if you space out the load from the earlier store using a dependency chain, rather than nops, because this removes a lot of variables about what executes when. That's how I did all my testing. |
BTW, this effect is a cause of several "I added extra code and things sped up" paradoxes that I looked into. The extra code (say, an extra function call, etc.) changes the timing from the load arriving "too early" (slower forwarding) to "just right" (data is available, fast forwarding). For example, it explained this semi-famous case, which is where I originally started looking into this. It explains several similar questions on StackOverflow. |
Indeed, this runs at 3 cycles per iteration too. Perfection.

```asm
loop:
    mov [rsp], rdi
    imul rsp, 1
    mov rdi, [rsp]
    dec ecx
    jnz loop
```
|
BTW, if you want to run the basic store forwarding tests in uarch bench, they are available at:
Which gives results on my Skylake like:
The "delay" is how many cycles of delay there are between the store and the load that will read that store, enforced using a dependency chain on the load address to slow it down relative to the store. The magic number of 3 is clear here, as is the "+0.5" behavior of the shorter delays. The concurrency test runs multiple chains as you did, showing that there is no real throughput limit to store forwarding: you can get to 1 store-load per cycle once you have 4 chains. Here are the same results with p2347 ops shown, run using:
You can clearly see the extra p4 uops, almost 1 per cycle. They disappear at delay 3, as soon as you switch into the "other" mode. It would also be interesting to set up some benchmarks where the store address depends on the previous store, instead of the data. I did play around with that at one point, and recall the performance was much worse: I guess it's a less optimized case and harder to handle, because the hardware is not even sure whether forwarding is going to happen until the address arrives (though the disambiguation predictors should help). |
You note the effect for Skylake on the wiki ("Minimum store-forwarding latency is 3 on new(ish) chips, but the load has to arrive at exactly the right time to achieve this"), but I'm seeing a similar effect on earlier CPUs too; tested with Sandybridge and Ivybridge so far.
runs at 6.3 cyc/iter
runs at 5 cyc/iter
runs at 5 cyc/iter
I'm also seeing stores apparently being re-issued a lot. For the second variant:
What are your thoughts on this? Have you seen instruction replay explained somewhere? Did you see variable forwarding latency mentioned anywhere?