Broadcast channel leak? #337
Interesting, how many clients are you using in total? Or if you're using a pool, what's the pool size?
I'm using a single client to receive. My pool size is 10 in total.
Huh, that doesn't quite add up based on some back-of-the-napkin math, so something might be wrong here. I'll take a closer look over the next few days and let you know if anything jumps out.
Well, it builds up very slowly. dhat reports the allocations under the tokio broadcast channel, which in turn is created by fred.rs. In my project FuturesUnordered also took the crown, because I use for_each_concurrent over this: https://docs.rs/tokio-stream/latest/tokio_stream/wrappers/struct.UnboundedReceiverStream.html
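For context, the consumer side looks roughly like this. It's a minimal sketch: the message type, concurrency limit, and handler are placeholders, and only the `UnboundedReceiverStream` + `for_each_concurrent` combination mirrors what I actually do:

```rust
use futures::StreamExt;
use tokio_stream::wrappers::UnboundedReceiverStream;

#[tokio::main]
async fn main() {
    // Placeholder channel; in the real project this is fed from the pubsub receiver.
    let (tx, rx) = tokio::sync::mpsc::unbounded_channel::<String>();

    tokio::spawn(async move {
        for i in 0..100 {
            let _ = tx.send(format!("message {i}"));
        }
    });

    // Wrap the mpsc receiver as a Stream and handle up to 16 messages at a time.
    // Every in-flight future keeps its message (and captured state) alive until
    // it finishes, so this is one place allocations can pile up.
    UnboundedReceiverStream::new(rx)
        .for_each_concurrent(16, |msg| async move {
            println!("handled: {msg}");
        })
        .await;
}
```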
Interesting, thanks for looking into it with dhat already. Can you describe the use case a bit? Roughly how many messages are you receiving per second/minute/whatever, how big are the payloads, etc.? I'll try to get a repro as well. And roughly how long did you have to wait to see a noticeable build-up? I don't think fred is holding on to anything here, especially since the frame decoding uses a zero-copy implementation, but it's hard to say. The underlying payloads use …
I identified a possible issue within my project, so this could be a false positive on my side. Fred isn't causing any issues in any of the other 20 projects of mine it's integrated into.
I noticed fred uses a tokio broadcast channel for its pubsub. I only create a single RX, so it should only need to Clone the value for one receiver. Could fred's Notification struct also be involved? It shows over 300 MB of usage as well. Whenever I generate a flamegraph it reports that fred uses the most RAM and keeps it allocated. I use jemallocator; switching to other allocators doesn't help either.
In my opinion this can be reproduced with serde_json by deserializing an enum. The leak doesn't show up anywhere else: no other part of my program uses over 50 MB, while fred sits at 300 MB+.
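For reference, the heap numbers above come from a dhat setup along these lines (a minimal sketch; the workload here is just a stand-in for my actual pubsub loop):

```rust
// Cargo.toml: dhat = "0.3"
// Replace the global allocator so dhat records every allocation and writes a
// dhat-heap.json profile when the Profiler is dropped.
#[global_allocator]
static ALLOC: dhat::Alloc = dhat::Alloc;

fn main() {
    let _profiler = dhat::Profiler::new_heap();

    // Stand-in for the real workload (subscribing and processing messages).
    let data: Vec<Vec<u8>> = (0..1_000).map(|_| vec![0u8; 1024]).collect();
    println!("allocated {} buffers", data.len());
}
```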
I use a Redis 7.4.2 cluster with fred 10.0.4, a pool size of 10, and the channel capacity set to 10000.
The channel length does go back down to 0 after processing, and it keeps receiving: fast at first, but something in between throttles the spam, so eventually it reaches 0. Could you give me any pointers on why fred has to use so much RAM, and why it all leads back to the tokio sync broadcast channel when the pubsub RX is used?
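To make the capacity angle concrete, here's a self-contained sketch (not fred itself; the 32 KiB payload size is just an assumption to make the arithmetic land near 300 MB) of how much data a tokio broadcast channel alone can hold with this configuration:

```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // Same capacity as the reported config.
    let (tx, mut rx) = broadcast::channel::<Vec<u8>>(10_000);

    // If messages arrive faster than they are drained, up to `capacity` of them
    // sit in the channel's ring buffer: 10_000 x 32 KiB is roughly 320 MB held
    // by tokio::sync::broadcast, which is where a heap profiler will point.
    for _ in 0..10_000 {
        let _ = tx.send(vec![0u8; 32 * 1024]);
    }

    // The single receiver clones values out of the buffer; when the buffered
    // values themselves are freed (on receipt, or only when overwritten by
    // newer sends) is an implementation detail of the channel.
    let mut received = 0usize;
    while let Ok(msg) = rx.try_recv() {
        received += msg.len();
    }
    println!("drained {received} bytes");
}
```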