red-knot: Introduce program.check
#11148
Conversation
@@ -32,7 +32,7 @@ impl Files {
        self.inner.read().try_get(path)
    }

    // TODO Can we avoid using an `Arc` here? salsa can return references for some reason.
We could, by using an unsafe transmute similar to what salsa does.
This is great! Much easier to follow; I think I got my head around all of it.
    receiver: crossbeam_channel::Receiver<OrchestratorMessage>,
}

impl Orchestrator {
Let me see if I understand the division of responsibilities between the `MainLoop` and `Orchestrator` threads. `MainLoop` does the actual checking, using a rayon thread pool to parallelize it. The job of `Orchestrator` is to a) collect file changes from a notification source, debounce them, and then hand them off to `MainLoop`, which applies them to the `Program` and then re-checks, and b) receive check results (diagnostics) from the rayon thread running the actual check, and hand those off to `MainLoop` for display?

The term "Orchestrator" initially had me thinking it would actually coordinate the checking work, but it doesn't really do that; that's `RayonCheckScheduler`. `Orchestrator` is really just orchestrating at the higher level of check requests and results.

It's not clear to me yet how these pieces will fit together in an editor context when we are handling requests much smaller than "check the whole program."
Yes, that's correct. The `Orchestrator` orchestrates the individual moving pieces. It needs to run on its own thread so that it can react to incoming events. But it's the main loop that does the actual work.

> It's not clear to me yet how these pieces will fit together in an editor context when we are handling requests much smaller than "check the whole program."

`program.check` should only recheck files that need rechecking since the last `program.check` call. That's what needs to be called inside of the LSP's pull-diagnostics/push-diagnostics handlers.

> The term "Orchestrator" initially had me thinking it would actually coordinate the checking work, but it doesn't really do that; that's RayonCheckScheduler. Orchestrator is really just orchestrating at the higher level of check requests and results.

I'm open to better names for `Orchestrator`.
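To make that division concrete, here is a minimal std-only sketch of the split (using `std::sync::mpsc` in place of `crossbeam_channel`; the message and function names are illustrative, not the actual red-knot API): the orchestrator runs on its own thread, reacts to incoming file-change events, and tells the main loop what to do next, while the main loop performs the actual work.

```rust
use std::sync::mpsc;
use std::thread;

// Instructions the orchestrator hands to the main loop. Illustrative names,
// not the actual red-knot types.
#[derive(Debug, PartialEq)]
enum MainLoopMessage {
    ApplyChanges(Vec<String>),
    CheckProgram,
    Exit,
}

// The orchestrator side: it reacts to incoming file-change events and tells
// the main loop what to do next; the main loop does the actual checking.
fn run_orchestrator(
    events: mpsc::Receiver<Vec<String>>,
    to_main: mpsc::Sender<MainLoopMessage>,
) {
    for changes in events {
        // The real design debounces changes before forwarding them.
        to_main.send(MainLoopMessage::ApplyChanges(changes)).unwrap();
        to_main.send(MainLoopMessage::CheckProgram).unwrap();
    }
    // Event source is gone: tell the main loop to shut down.
    to_main.send(MainLoopMessage::Exit).unwrap();
}

fn main() {
    let (event_tx, event_rx) = mpsc::channel();
    let (main_tx, main_rx) = mpsc::channel();
    // The orchestrator needs its own thread so it can react to events.
    let orchestrator = thread::spawn(move || run_orchestrator(event_rx, main_tx));

    event_tx.send(vec!["src/lib.rs".to_string()]).unwrap();
    drop(event_tx); // closing the event channel shuts the orchestrator down

    // The "main loop": drain and act on the instructions.
    let received: Vec<MainLoopMessage> = main_rx.iter().collect();
    orchestrator.join().unwrap();
    assert_eq!(received.len(), 3); // ApplyChanges, CheckProgram, Exit
}
```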
started_analysis.fetch_add(1, Ordering::SeqCst);

OrchestratorMessage::CheckProgramCompleted(diagnostics) => {
    self.pending_analysis
I guess `pending_analysis` represents the fact that we know `MainLoop` is currently running an analysis. Would it make sense for each analysis request to have a unique ID that gets carried along with all the relevant messages (`CheckProgram`, `CheckProgramStarted`, `CheckProgramCompleted`, `CheckProgramCancelled`, etc.) and also stored in the `PendingAnalysisState`, just to help us validate our assumptions about what corresponds to what, rather than implicitly relying on the two state machines always having their states matched up correctly?
I consider it a bug if the state machines are out of sync, and I would prefer that we panic rather than trying to make it work somehow (and leak memory).
That's reasonable. I wasn't suggesting we try to recover, more just that this could help clarify what happened in the bug case. But maybe tracing is sufficient for that.
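A sketch of the run-ID idea, with panic-on-mismatch as preferred above (all names here are hypothetical, not the actual red-knot types): each check run gets a monotonically increasing ID that is carried on the messages and validated against the pending state, so a desynchronized state machine fails loudly instead of being silently patched up.

```rust
// Hypothetical run identifier carried by every check-related message.
#[derive(Debug, Clone, Copy, PartialEq)]
struct RunId(u64);

#[derive(Debug)]
enum OrchestratorMessage {
    CheckProgramStarted(RunId),
    CheckProgramCompleted(RunId, Vec<String>),
}

#[derive(Default)]
struct Orchestrator {
    next_run: u64,
    pending: Option<RunId>,
}

impl Orchestrator {
    // Hand out a fresh ID and remember it as the pending run.
    fn start_run(&mut self) -> RunId {
        let id = RunId(self.next_run);
        self.next_run += 1;
        self.pending = Some(id);
        id
    }

    // Returns diagnostics when a run completes; panics if the two state
    // machines disagree, which we treat as a bug rather than recover from.
    fn handle(&mut self, message: OrchestratorMessage) -> Option<Vec<String>> {
        match message {
            OrchestratorMessage::CheckProgramStarted(id) => {
                assert_eq!(self.pending, Some(id), "started a run we never requested");
                None
            }
            OrchestratorMessage::CheckProgramCompleted(id, diagnostics) => {
                assert_eq!(self.pending.take(), Some(id), "completed an unknown run");
                Some(diagnostics)
            }
        }
    }
}

fn main() {
    let mut orchestrator = Orchestrator::default();
    let id = orchestrator.start_run();
    orchestrator.handle(OrchestratorMessage::CheckProgramStarted(id));
    let diagnostics =
        orchestrator.handle(OrchestratorMessage::CheckProgramCompleted(id, vec![]));
    assert_eq!(diagnostics, Some(vec![]));
}
```

The IDs cost one `u64` per message but make the "which run does this message belong to" assumption checkable, and the panics double as the tracing breadcrumb mentioned above.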
    })
    .unwrap();

rayon::in_place_scope(|scope| {
This blocks until checking is all done, right? So then we'll go to the next tick and probably find a `CheckCompleted` message waiting for us already? (Or else just a new `ApplyChanges` and `CheckProgram`, if this check got cancelled.)
Yes, that's correct. Although the `ApplyChanges` might get debounced first to wait for new incoming file change messages. It would be nice if this could all be in a single place, but the challenge I faced is that we need a single "input" on which the orchestrator thread can wait (suspend).
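The debouncing step could look roughly like this (a sketch using `std::sync::mpsc` and `recv_timeout` in place of crossbeam; `debounce_changes` is a hypothetical helper, not red-knot's actual code): after the first change arrives, keep draining until the channel stays quiet for a full window, then hand the whole batch off at once.

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

// Block for the first change, then keep absorbing further changes until the
// channel has been quiet for `window`; the accumulated batch is what gets
// forwarded to the main loop as a single ApplyChanges.
fn debounce_changes(receiver: &Receiver<String>, window: Duration) -> Vec<String> {
    let mut changes = Vec::new();
    if let Ok(first) = receiver.recv() {
        changes.push(first);
        loop {
            match receiver.recv_timeout(window) {
                Ok(change) => changes.push(change),
                // Quiet for a full window (or the sender is gone): batch done.
                Err(RecvTimeoutError::Timeout | RecvTimeoutError::Disconnected) => break,
            }
        }
    }
    changes
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    tx.send("a.rs".to_string()).unwrap();
    tx.send("b.rs".to_string()).unwrap();
    drop(tx);
    let batch = debounce_changes(&rx, Duration::from_millis(10));
    assert_eq!(batch, vec!["a.rs".to_string(), "b.rs".to_string()]);
}
```

Because the orchestrator only ever blocks on one channel (plus a timeout), this keeps the single suspendable "input" the comment above asks for.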
crates/red_knot/src/program/check.rs
}

#[tracing::instrument(level = "debug", skip(self, context))]
fn check_file_with_context(
This name seems confusing -- it doesn't really clarify how it is different from `check_file`. It's more like `actually_check_file`. The fact that it takes a context is clear from the signature and doesn't really add anything in the name. Maybe just `do_check`?

Right now it doesn't even really need the context (`CheckFileTask::run` could check for cancellation), but in the future when we do more complex and slower checks, we'll probably want to do more cancellation checks internally here, not just one at the start?
My idea is to make cancellation checks on every query, and expose an API so that we can perform checks even inside of queries.
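A minimal sketch of what per-query cancellation could look like, assuming a hypothetical shared-flag token (this is not salsa's actual cancellation API): the scheduler flips the flag when new changes invalidate an in-flight check, and every query polls it before doing work, so cancellation takes effect between queries rather than only once at the start of a file.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical cancellation token shared between the scheduler and the
// checking threads; cloning shares the same underlying flag.
#[derive(Clone, Default)]
struct CancellationToken(Arc<AtomicBool>);

impl CancellationToken {
    fn cancel(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
    fn is_cancelled(&self) -> bool {
        self.0.load(Ordering::SeqCst)
    }
}

// Each "query" checks the token before running, so a slow multi-query check
// notices cancellation mid-file instead of only at the start.
fn check_file(token: &CancellationToken, queries: &[&str]) -> Result<usize, ()> {
    let mut completed = 0;
    for _query in queries {
        if token.is_cancelled() {
            return Err(()); // abandon the stale check run
        }
        completed += 1; // stand-in for running the actual query
    }
    Ok(completed)
}

fn main() {
    let token = CancellationToken::default();
    assert_eq!(check_file(&token, &["parse", "bind", "infer"]), Ok(3));
    token.cancel();
    assert_eq!(check_file(&token, &["parse"]), Err(()));
}
```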
This PR refactors `main.rs` by:

- Extracting the `check` logic and moving it to `program.check`. There's some complication involved because we want the host environment to control how checks are scheduled (concurrently, or on the same thread). What I have now kind of works and isn't too much of a mess, but I feel like I manually implemented `Future`s. @BurntSushi, do you know if it's possible to use `Future`s without async? Yeah, it sounds stupid, hehe, but what I want is something that starts some work so that an orchestration thread can later poll the results without knowing if the computation runs on the same or another thread.
- Changing the `main` loop so that we don't create a new orchestration thread for every iteration. Instead, the orchestration thread is now started once, and it sends messages to the main thread telling it what the next operation to perform is (react to file changes, print diagnostics, check the program).
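The "manually implemented `Future`" above can be sketched without async roughly like this (hypothetical `TaskHandle`/`spawn_task` names, backed by a plain channel): work is started somewhere, here on a throwaway thread, and any thread can later poll or block on the result without knowing where the computation ran.

```rust
use std::sync::mpsc::{self, TryRecvError};
use std::thread;

// A poor man's future: the result travels over a one-shot channel, so the
// consumer neither knows nor cares which thread produced it.
struct TaskHandle<T> {
    receiver: mpsc::Receiver<T>,
}

impl<T> TaskHandle<T> {
    // Non-blocking poll, analogous to Future::poll but without a waker:
    // None means "not ready yet" (or the worker died without a result).
    fn poll(&self) -> Option<T> {
        match self.receiver.try_recv() {
            Ok(value) => Some(value),
            Err(TryRecvError::Empty | TryRecvError::Disconnected) => None,
        }
    }

    // Blocking wait for when the caller has nothing else to do.
    fn wait(self) -> T {
        self.receiver.recv().expect("worker dropped without a result")
    }
}

// Start `work` and hand back a pollable handle. Swapping the thread::spawn
// for a rayon task (or running `work` inline) would not change the caller.
fn spawn_task<T, F>(work: F) -> TaskHandle<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (sender, receiver) = mpsc::channel();
    thread::spawn(move || {
        let _ = sender.send(work());
    });
    TaskHandle { receiver }
}

fn main() {
    let handle = spawn_task(|| 2 + 2);
    // poll() may return None while the worker is still running;
    // wait() always resolves once it finishes.
    assert_eq!(handle.wait(), 4);
}
```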