
Introduce event segment to store multiple events under one key #25

Merged

merged 2 commits into subspace-v9 on Mar 10, 2025

Conversation

NingLin-P
Member

With StorageValue, Events is a single array in the state that keeps growing as more extrinsics are executed and more events are appended to it, which makes execution and storage root calculation slower and slower, since allocating/copying/hashing this array takes more and more time.

This PR introduces event segments to store multiple events under one key in a StorageMap. The size of a segment is defined by a new config item, EventSegmentSize. A larger segment trends toward the same issue as the StorageValue Events, while a smaller segment adds more items to the Trie and may also impact performance. From the benchmark results, 100 seems to be a good value for us.

Also, when EventSegmentSize is set to zero, the event segment is disabled, to avoid an unexpected breaking change.
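The segmented layout can be sketched as a toy model in plain Rust. This is not the actual FRAME storage code; `EventStore` and its fields are illustrative names. The map key is `event_index / segment_size`, and a segment size of zero collapses everything into one segment, mimicking the old StorageValue behaviour:

```rust
use std::collections::HashMap;

/// Toy model of the segmented event store: key = event_index / segment_size,
/// value = the events in that segment. (Illustrative names, not the pallet's
/// actual storage items.)
struct EventStore {
    segment_size: u32,
    event_count: u32,
    segments: HashMap<u32, Vec<String>>,
}

impl EventStore {
    fn new(segment_size: u32) -> Self {
        Self { segment_size, event_count: 0, segments: HashMap::new() }
    }

    /// Append one event. With segment_size == 0 every event lands in
    /// segment 0, i.e. the old single-array behaviour.
    fn deposit(&mut self, event: &str) {
        let key = if self.segment_size == 0 {
            0
        } else {
            self.event_count / self.segment_size
        };
        self.segments.entry(key).or_default().push(event.to_string());
        self.event_count += 1;
    }
}

fn main() {
    let mut store = EventStore::new(100);
    for i in 0..250 {
        store.deposit(&format!("event-{i}"));
    }
    // 250 events with segment size 100 -> 3 segments (100, 100, 50).
    assert_eq!(store.segments.len(), 3);
    assert_eq!(store.segments[&2].len(), 50);
}
```

Each deposit then only touches one bounded segment rather than reallocating and rehashing one ever-growing array.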

Member

@teor2345 teor2345 left a comment


This looks like a helpful solution to our performance problems.

What happens if there is a block with a large number of events? Are those events ever cleared, or do they take up storage forever?

We could, for example, clear one unused segment per block. That would minimise the performance cost, but still make sure we eventually return storage to average levels.

@NingLin-P
Member Author

> What happens if there is a block with a large number of events? Are those events ever cleared, or do they take up storage forever?

With the current implementation, they won't be cleared, but some of the events will be overwritten by the events of the next block. For example, if block #1 deposits 100 events and block #2 deposits 10 events, the first 10 events of block #1 will be overwritten by the 10 events of block #2, while the rest of the events will stay in storage.
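A minimal model of this overwrite behaviour (illustrative names and logic, not the pallet's actual code): resetting the per-block counter means new deposits replace segment keys starting from 0, leaving higher-numbered segments from a previous block untouched.

```rust
use std::collections::HashMap;

/// Toy model of cross-block overwriting (assumed names, not the real pallet).
struct Events {
    segment_size: u32,
    count: u32,
    segments: HashMap<u32, Vec<u32>>,
}

impl Events {
    fn new(segment_size: u32) -> Self {
        Self { segment_size, count: 0, segments: HashMap::new() }
    }

    /// Reset the per-block counter; stale segments are NOT removed.
    fn start_block(&mut self) {
        self.count = 0;
    }

    fn deposit(&mut self, e: u32) {
        let key = self.count / self.segment_size;
        if self.count % self.segment_size == 0 {
            // First event of this segment in the current block: replaces
            // whatever a previous block left under this key.
            self.segments.insert(key, vec![e]);
        } else {
            self.segments.get_mut(&key).unwrap().push(e);
        }
        self.count += 1;
    }
}

fn main() {
    let mut ev = Events::new(10);
    // Block #1 deposits 100 events -> segments 0..=9.
    for i in 0..100 { ev.deposit(i); }
    assert_eq!(ev.segments.len(), 10);

    // Block #2 deposits 10 events -> only segment 0 is overwritten.
    ev.start_block();
    for i in 1000..1010 { ev.deposit(i); }

    // Segment 0 now holds block #2's events...
    assert_eq!(ev.segments[&0], (1000..1010).collect::<Vec<u32>>());
    // ...while 90 stale events from block #1 remain in segments 1..=9.
    let total: usize = ev.segments.values().map(Vec::len).sum();
    assert_eq!(total, 100);
}
```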

> We could, for example, clear one unused segment per block. That would minimise the performance cost, but still make sure we eventually return storage to average levels.

Yeah, I did propose something similar upstream, to periodically clear the events or to introduce an extrinsic for clearing them manually, but haven't received a response yet. At the moment I have only added an UnclearedEventCount storage item to keep track of how many uncleared events there are. Let's see what they say during the review of the upstream PR; we can add this kind of mitigation at any time, since it is not a breaking change.
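A rough sketch of this bookkeeping. `uncleared_after` and the drain loop are illustrative assumptions, not the actual UnclearedEventCount implementation; they show the stale-event count from the scenario above, and how the clear-one-segment-per-block idea would drain a burst back to steady state:

```rust
/// Hypothetical helper: stale events left in storage after a block, given
/// how many events were stored before and how many the new block deposited.
fn uncleared_after(prev_stored: u32, this_block: u32) -> u32 {
    prev_stored.saturating_sub(this_block)
}

fn main() {
    // Scenario above: block #1 stores 100 events, block #2 deposits 10,
    // so 90 of block #1's events remain uncleared.
    assert_eq!(uncleared_after(100, 10), 90);

    // Sketch of the "clear one unused segment per block" mitigation:
    // with a segment size of 10 and a steady 10 events per block,
    // storage drains back to the steady-state level one segment at a time.
    let segment_size = 10u32;
    let mut stored = 100u32;
    let mut blocks = 0;
    while stored > 10 {
        stored = stored.saturating_sub(segment_size).max(10);
        blocks += 1;
    }
    assert_eq!(blocks, 9); // back to normal after 9 blocks
}
```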

Member

@teor2345 teor2345 left a comment


Thanks, looks good to me!

@NingLin-P NingLin-P merged commit e77327d into subspace-v9 Mar 10, 2025
60 of 138 checks passed