Create a goroutine worker pool to send data from distributors to ingesters. #6406
Conversation
```diff
 for _, i := range instances {
-	go func(i instance) {
+	e.Submit(func() {
 		err := callback(i.desc, i.indexes)
 		tracker.record(i, err)
 		wg.Done()
-	}(i)
+	})
 }
```
The closure var should be fine now:
https://tip.golang.org/doc/go1.22
Previously, the variables declared by a “for” loop were created once and updated by each iteration. In Go 1.22, each iteration of the loop creates new variables, to avoid accidental sharing bugs. The transition support tooling described in the proposal continues to work in the same way it did in Go 1.21.
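A minimal sketch of what that release-note change means in practice (a standalone example, not code from this PR):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for _, v := range []string{"a", "b", "c"} {
		wg.Add(1)
		// Before Go 1.22, `v` was a single variable shared by every iteration,
		// so goroutines could all observe the last value unless it was passed
		// in as an argument (the `go func(i instance) { ... }(i)` pattern above).
		// Since Go 1.22, each iteration gets its own `v`, so capturing it
		// directly in the closure is safe.
		go func() {
			defer wg.Done()
			fmt.Println(v)
		}()
	}
	wg.Wait()
}
```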
Great work.
Is it worth a changelog entry? And should this be an experimental feature?
I forgot about the changelog. :P Yeah, lemme mark it as experimental.
What this PR does:
Implement an option to use a goroutine worker pool to handle requests from the distributor to ingesters, rather than spawning a new goroutine for each request.
This PR was inspired by grpc/grpc-go#3204.
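For context, here is a minimal sketch of the worker-pool idea. It is illustrative only, not the PR's actual implementation: the `Pool` type, its constructor parameters, and the non-blocking fallback to `go job()` are assumptions loosely modeled on the approach discussed in grpc/grpc-go#3204.

```go
package main

import "sync"

// Pool is a hypothetical fixed-size goroutine worker pool, shown only to
// illustrate the idea behind the option.
type Pool struct {
	jobs chan func()
}

// NewPool starts `workers` long-lived goroutines that execute submitted jobs,
// so the hot path no longer pays the goroutine creation/stack-growth cost
// per request.
func NewPool(workers, queueSize int) *Pool {
	p := &Pool{jobs: make(chan func(), queueSize)}
	for i := 0; i < workers; i++ {
		go func() {
			for job := range p.jobs {
				job()
			}
		}()
	}
	return p
}

// Submit hands the job to the pool if there is room in the queue; otherwise it
// falls back to spawning a goroutine so callers never block (an assumption,
// mirroring the fallback described in grpc/grpc-go#3204).
func (p *Pool) Submit(job func()) {
	select {
	case p.jobs <- job:
	default:
		go job()
	}
}

func main() {
	pool := NewPool(4, 16)
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		i := i // not needed on Go 1.22+, kept for clarity on older versions
		wg.Add(1)
		pool.Submit(func() {
			_ = i // send the i-th request to an ingester here
			wg.Done()
		})
	}
	wg.Wait()
}
```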
Looking at the distributor flame graph, we could see that a lot of CPU was being spent in the `runtime.newstack` call:

[flame graph screenshot]

This problem grows with the RemoteWrite TPS and the number of ingesters in the cluster, since we create a new goroutine for every request/ingester pair.
With this option enabled, we saw roughly a 20% CPU reduction on some Cortex clusters with hundreds of ingesters. We could no longer see `runtime.newstack` in the CPU flame graph after the option was enabled.

Which issue(s) this PR fixes:
Fixes #
Checklist
`CHANGELOG.md` updated - the order of entries should be `[CHANGE]`, `[FEATURE]`, `[ENHANCEMENT]`, `[BUGFIX]`