Some acknowledged messages not being deleted from stream with sources and WorkQueue retention #5148
Comments
Thanks for the report. Best for you to upgrade to the latest patch version, 2.10.11. If the issue persists, let us know.
Will leave open for now.
I was facing this with nats-server
Thanks for the reply.
We have updated all our NATS server environments to version 2.10.11. We have cleared the affected streams of any messages and recreated the consumers. The issue is still there, but different: now only one of the instances sees the messages being stuck.
I think #5270 fixes this and is available in 2.10.14.
Thanks for the update @zatlodan, that is a condition that we were able to reproduce and was addressed in the v2.10.14 release from last week.
Okay, thank you for the response, we will update to 2.10.14 and let you know.
Seems that the issue is no longer present after the update to 2.10.14. Thank you all for the help; I will now close this issue 👍
Observed behavior
A stream (STREAM_B_Q) with a single consumer and retention set to WorkQueue reports a non-zero message count after all messages are consumed and acknowledged by said consumer. This stream (STREAM_B_Q) sources from another stream (STREAM_A) whose retention is set to Limits. This behavior occurred after a large amount of data was inserted into the source stream (STREAM_A).
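For reference, the topology described above can be recreated with the JetStream manager API of the nats.js client used in this report. This is a minimal sketch, not the actual configs (those are in the Config sections below); the server URL and the `a.>` subject are assumptions:

```ts
import { connect, RetentionPolicy } from "nats";

const nc = await connect({ servers: "localhost:4222" }); // assumed server URL
const jsm = await nc.jetstreamManager();

// Source stream with Limits retention (subject filter is an assumption).
await jsm.streams.add({
  name: "STREAM_A",
  subjects: ["a.>"],
  retention: RetentionPolicy.Limits,
});

// WorkQueue stream that sources from STREAM_A; once a message is
// acknowledged, WorkQueue retention should delete it from this stream.
await jsm.streams.add({
  name: "STREAM_B_Q",
  retention: RetentionPolicy.Workqueue,
  sources: [{ name: "STREAM_A" }],
});
```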
Some more details:
STREAM_A
This is the source stream into which the data were published.
Config
State
STREAM_B_Q
This is the WorkQueue stream with the issue.
Config
State
Consumer
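For context, consumption presumably looks something like the following sketch, using the consumers API available in recent nats.js versions; the durable name "worker" is hypothetical, since the actual consumer config is in the collapsed Consumer section above:

```ts
import { connect } from "nats";

const nc = await connect({ servers: "localhost:4222" }); // assumed server URL
const js = nc.jetstream();

// "worker" is a hypothetical durable name for the single consumer.
const consumer = await js.consumers.get("STREAM_B_Q", "worker");
const messages = await consumer.consume();
for await (const m of messages) {
  // ...process the message...
  m.ack(); // with WorkQueue retention, this ack should delete the message
}
```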
View from metrics
This is the jetstream_stream_total_messages metric on the stream STREAM_B_Q at the time the issue arose. You can see 0 messages in the stream before the bulk publish and 1980 after.
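The same count can be cross-checked against the stream state itself, for example with `nats stream info STREAM_B_Q`, or as a sketch via the client:

```ts
import { connect } from "nats";

const nc = await connect({ servers: "localhost:4222" }); // assumed server URL
const jsm = await nc.jetstreamManager();

const si = await jsm.streams.info("STREAM_B_Q");
// Expected 0 once every message is acked; during the incident it stays at 1980.
console.log(si.state.messages);
```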
Cluster info
4 nodes, all same version and HW specs, same private network.
No leaf nodes connected.
Expected behavior
All messages are removed from the stream after acknowledgement.
Stream reporting 0 total messages.
Server and client version
Server:
Version: 2.10.5
Git Commit: 0883d32
Go Version: go1.21.4
Consuming JS client: https://www.npmjs.com/package/nats
Version: 2.15.1
NATS CLI used to check:
Version: 0.0.35
Host environment
No response
Steps to reproduce
The issue is quite flaky and occurs randomly throughout the month, but it seems to be triggered by sudden spikes of published data in the source stream; a reproduction sketch follows.
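A rough reproduction sketch, under the assumption that the trigger is a burst of publishes into STREAM_A; the subject, batch size, and payload size are all made up:

```ts
import { connect } from "nats";

const nc = await connect({ servers: "localhost:4222" }); // assumed server URL
const js = nc.jetstream();

// Simulate a sudden publish spike into the source stream.
// Subject, batch count, and payload size are assumptions.
const payload = new Uint8Array(1024);
for (let batch = 0; batch < 50; batch++) {
  await Promise.all(
    Array.from({ length: 1000 }, (_, i) =>
      js.publish(`a.bulk.${batch}.${i}`, payload),
    ),
  );
}

await nc.drain();
```

After the spike, consuming and acknowledging everything on STREAM_B_Q would be expected to bring jetstream_stream_total_messages (or streams.info state.messages) back to 0; in the reported failure it does not.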