unified sockets (1): server #2718
base: master
Conversation
Force-pushed from d8eae20 to 03f29f8
Got too tired to understand the sockets part, but I didn't want to lose my review progress; I'll take a proper look at that tomorrow.
@@ -475,7 +685,8 @@ fn try_connect(
     if let Err(e) = connection_pipeline(
         Arc::clone(&ctx),
         lifecycle_state,
-        proto_socket,
+        socket,
+        connection_result,
I think the error case of connection_result should already be handled directly here, though that might mean needing to repackage the data from the accepted case into its own struct.
Yeah, I can do that. It's all a matter of code formatting, whether to include this in the nested call or not.
It seems more unergonomic to handle the result before calling connection_pipeline, because we would need to pass more variables around. In any case, I moved this to be the first thing done in connection_pipeline.
That's why I mentioned you might need to repackage it into a struct. But passing around a Result is really weird. (I'll allow it if you fix it in a later PR, though.)
That will probably be fixed after v21, because we will remove the "standby" state, so it will not be called "result" anymore.
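For illustration, handling the error case at the call site and repackaging the accepted-case data into its own struct could look roughly like this. This is a minimal sketch under stated assumptions: `AcceptedConnection`, `ConnectionError`, and the field names are hypothetical stand-ins, not the actual ALVR types.

```rust
// Hypothetical types standing in for the data currently carried
// by `connection_result`; not the real ALVR definitions.
struct AcceptedConnection {
    client_hostname: String,
    // ...other data from the accepted case
}

#[derive(Debug)]
struct ConnectionError(String);

// Handle the error case directly at the call site, then pass only
// the success data into the pipeline.
fn try_connect(connection_result: Result<AcceptedConnection, ConnectionError>) -> Option<String> {
    let accepted = match connection_result {
        Ok(accepted) => accepted,
        Err(e) => {
            eprintln!("connection failed: {e:?}");
            return None;
        }
    };
    Some(connection_pipeline(accepted))
}

fn connection_pipeline(accepted: AcceptedConnection) -> String {
    format!("connected to {}", accepted.client_hostname)
}

fn main() {
    assert_eq!(
        try_connect(Ok(AcceptedConnection { client_hostname: "headset".into() })),
        Some("connected to headset".into())
    );
    assert!(try_connect(Err(ConnectionError("handshake timeout".into()))).is_none());
}
```

The point of the struct is that `connection_pipeline` then only ever receives valid data, instead of re-checking a Result that a caller already produced.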
// to this client, no other client can connect until handshake is finished. It will then be
// temporarily relocked while shutting down the threads.
let mut session_manager_lock = SESSION_MANAGER.write();
This lock is partly worthless since we're already starting the connection beforehand... Though I'm pretty sure it doesn't actually matter, because we seem to still only read config stuff after the lock is acquired; I'm not really sure.
This RwLock is released using a function I crafted, alvr_common::wait_rwlock, which mirrors the API for Mutex. The purpose of locking the session is to make sure the same device does not attempt more than one connection at the same time. You may have a point that we should lock the session before even acquiring a socket connection, although it's not possible to send a locked RwLock or Mutex across threads.
I tried, and unfortunately it doesn't work. In any case, yeah, everything except the initialization of the connection is done after the lock is acquired.
Same here; it's fine for the time being if you fix it in a later PR.
I literally can't fix this. Rust prohibits sending a lock guard across threads (the guard types are not Send).
Address the other comments and I'll let the actual socket mess slide for the time being.
I made a mistake in a comment: PR (2) will be about porting the client to SocketConnection, (3) will refactor the SocketConnection internals in a non-breaking way, and (4) will be breaking. Still, after this PR I will be able to work on the deferred HMD initialization.
I'm going to delay this PR until after the deferred HMD initialization PR.
This is the first of a series of PRs to refactor and merge the control socket and stream socket. Like the tracking rewrite, it will not be 100% complete before making breaking changes for v21. This PR also includes some refactoring in connection.rs and a fix for glitchy restarting, where the client would temporarily reconnect to the server while it's restarting.