Slic pong frame #3273
Comments
I don't have a preference for one or the other.
The choice I made here was to check for bi-directional activity and this doesn't necessarily imply that a pong frame should be tied to a ping frame.
As of today, the Slic protocol doesn't say anything 🙂.
I would open another issue for looking into Slic performance improvements.
No real reason. If I recall correctly, I chose ping/pong because HTTP/2 uses Ping/Ping+Ack. Using the ice approach is fine with me.
And yet:
And the description for FrameType::Pong is "Acknowledges the receipt of a Ping frame." For me, two designs make sense:
Currently with Slic, SendPingAsync doesn't await anything from the peer. And Pong is definitely not an ack for a Ping frame.
This is not directly a performance issue but a design issue. Which activities are ok in the "read loop" of Slic/IceProtocolConnection? Ok and necessary:
Not acceptable:
It's not clear to me where the Pong frame should be sent from. See:
Implementing this proposal is probably best. It also allows measuring the latency, which can be useful for a dynamic flow control algorithm (see #3304).
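For illustration only, here is a rough sketch of how the latency could be derived from a ping/pong exchange and turned into a flow-control window size. None of these names (RttEstimator, OnPingSent, etc.) come from the repository; they are hypothetical stand-ins:

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch: estimate the RTT from a ping/pong exchange and size a
// stream window from it. Not code from icerpc-csharp.
internal sealed class RttEstimator
{
    private long _pingSentTimestamp;
    private TimeSpan _minRtt = TimeSpan.MaxValue;

    // Called just before the ping frame is written to the connection.
    public void OnPingSent() => _pingSentTimestamp = Stopwatch.GetTimestamp();

    // Called when the matching pong frame is read; returns the measured RTT.
    public TimeSpan OnPongReceived()
    {
        TimeSpan rtt = TimeSpan.FromSeconds(
            (Stopwatch.GetTimestamp() - _pingSentTimestamp) / (double)Stopwatch.Frequency);
        if (rtt < _minRtt)
        {
            _minRtt = rtt;
        }
        return rtt;
    }

    // A dynamic flow control algorithm could size the stream window from the
    // minimum observed RTT and the measured throughput (bandwidth-delay product).
    // Assumes at least one RTT measurement was recorded.
    public long ComputeWindowSize(long bytesPerSecond) =>
        (long)(bytesPerSecond * _minRtt.TotalSeconds);
}
```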
To me, it's an implementation-detail issue, not a protocol design issue (which is what this issue is about). Opened #3318.
It's actually designed after the WebSocket Ping/Pong frames: https://www.rfc-editor.org/rfc/rfc6455#section-5.5.2
I suggest that we implement Slic pings along the lines of the ping frame of the HTTP/2 protocol:
Our Slic implementation will send a new ping frame if there's no pending ping waiting for a pong frame and if writes have been idle for (idleTimeout / 2). Pings are only sent by the client side of the connection. For now, we could just use an incrementing counter for the payload, or even a constant long=0 payload, since we don't send a new ping frame before getting the pong frame for the previous ping.
The 64-bit payload can be used later by the Slic implementation to distinguish different uses of the Slic ping/pong mechanism. For instance, the .NET HTTP/2 stream dynamic window size is computed from the min RTT value, which is obtained by performing a bunch of pings on the HTTP/2 connection. It's important here to be able to tell apart the pings used for the idle timeout from the pings used for the RTT measurement. The .NET HTTP/2 implementation uses negative payload values for RTT pings and positive ones for idle-timeout pings.
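A minimal sketch of this proposed policy, assuming the connection tracks the time of the last write and whether a ping is still awaiting its pong. Every name here (PingPolicy, CheckKeepAlive, ...) is made up for illustration and is not the actual SlicConnection code:

```csharp
using System;

// Hedged sketch of the proposed ping policy, not the actual SlicConnection code.
internal sealed class PingPolicy
{
    private readonly TimeSpan _idleTimeout;
    private long _nextKeepAlivePayload;   // positive payloads: idle-timeout pings
    private long _nextRttPayload = -1;    // negative payloads: RTT measurement pings
    private bool _pingPending;

    public PingPolicy(TimeSpan idleTimeout) => _idleTimeout = idleTimeout;

    // Returns the payload of the keep-alive ping to send, or null if no ping is due:
    // only the client sends pings, only when writes have been idle for idleTimeout / 2,
    // and only when the previous ping was already acknowledged by a pong.
    public long? CheckKeepAlive(bool isClient, TimeSpan timeSinceLastWrite)
    {
        if (!isClient || _pingPending || timeSinceLastWrite < _idleTimeout / 2)
        {
            return null;
        }
        _pingPending = true;
        return ++_nextKeepAlivePayload;
    }

    // Payload for a ping used to measure the RTT (e.g. for dynamic window sizing).
    public long NextRttPayload() => _nextRttPayload--;

    // Called when a pong frame is received; the payload's sign tells which kind of
    // ping it acknowledges.
    public void OnPong(long payload)
    {
        if (payload > 0)
        {
            _pingPending = false; // keep-alive ping acknowledged
        }
        // payload < 0: match it against an outstanding RTT measurement instead.
    }
}
```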
Slic sends a pong frame to acknowledge the receipt of a ping frame.
There is no equivalent frame in QUIC: QUIC has ping frames but no pong frame. See https://www.rfc-editor.org/rfc/rfc9000.html#frame-ping.
Then, with Slic, only the client-side of a connection sends ping frames:
icerpc-csharp/src/IceRpc/Transports/Slic/Internal/SlicConnection.cs, line 236 (commit 9e443a6)
and, in turn, only the server-side sends pong frames.
Note: It's not clear if that's an implementation detail of this Slic implementation or a rule of the Slic protocol.
Now, we receive Ping frames here:
icerpc-csharp/src/IceRpc/Transports/Slic/Internal/SlicConnection.cs, line 1025 (commit 9e443a6)
There are several issues with this code.
Here, we lock a mutex and call SendPongFrameAsync: https://github.com/icerpc/icerpc-csharp/blob/9e443a67bef5e32804023b403881646b7b0ba125/src/IceRpc/Transports/Slic/Internal/SlicConnection.cs#LL1189C20-L1189C38
There is no Task.Yield, so the sending could actually occur synchronously. We should not do that in the "read loop" task.
It would make sense to use Task.Run to schedule the sending of the Pong frame instead of sending it from the "read loop" task (see the sketch below). Then, since Ping/Pong is Slic-specific (it doesn't come from QUIC), why not use the simpler Ping & no-Pong approach we use for ice? Is the goal to use Ping/Pong someday to measure latency?
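A minimal sketch of this suggestion, with SendPongFrameAsync represented by an injected delegate since the real method and its surrounding state live inside SlicConnection; PongScheduler and OnPingReceived are hypothetical names:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hedged sketch: schedule the pong write on the thread pool so the "read loop"
// task never blocks on it, even when the write completes synchronously.
internal sealed class PongScheduler
{
    private readonly Func<long, CancellationToken, Task> _sendPongFrameAsync;

    public PongScheduler(Func<long, CancellationToken, Task> sendPongFrameAsync) =>
        _sendPongFrameAsync = sendPongFrameAsync;

    // Called by the read loop when a Ping frame is received.
    public void OnPingReceived(long payload, CancellationToken cancellationToken)
    {
        // Fire-and-forget: Task.Run guarantees the write starts on a thread-pool
        // thread rather than running inline in the read loop.
        _ = Task.Run(
            async () =>
            {
                try
                {
                    await _sendPongFrameAsync(payload, cancellationToken)
                        .ConfigureAwait(false);
                }
                catch
                {
                    // Failing to send a pong usually means the connection is
                    // closing; a real implementation would log or abort here.
                }
            },
            cancellationToken);
    }
}
```

The try/catch around the delegate is there because a fire-and-forget task would otherwise surface its exception only as an unobserved task exception.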