
Add storage bounds for pallet staking and clean up deprecated non paged exposure storages #6445

Closed (wants to merge 148 commits)

Conversation

@re-gius (Contributor) commented Nov 11, 2024

This is part of #6289 and necessary for the Asset Hub migration.

Building on the observations and suggestions from #255.

Changes

  • Add MaxInvulnerables to bound Invulnerables: Vec -> BoundedVec.
    • Set to the constant 20 in the pallet (must be >= 17 for backward compatibility with the westend runtime).
  • Add MaxDisabledValidators to bound DisabledValidators: Vec -> BoundedVec.
    • Set to the constant 100 in the pallet (it should be <= 1/3 * MaxValidatorsCount according to the current disabling strategy).
  • Remove ErasStakers and ErasStakersClipped (see #433, the tracking issue for cleaning up old non-paged exposure logic in the staking pallet).
  • Use MaxExposurePageSize to bound the exposure pages in the ErasStakersPaged mapping: each ExposurePage.others Vec is turned into a WeakBoundedVec to allow easy and quick changes to this bound.
  • Add MaxBondedEras to bound BondedEras: Vec -> BoundedVec.
    • Set to BondingDuration::get() + 1 everywhere to include both endpoints of the interval [current_era - BondingDuration::get(), current_era]. Note that this was done manually in every test and runtime, so I wonder if there is a better way to ensure that MaxBondedEras::get() == BondingDuration::get() + 1 everywhere (one option is sketched after this list).
  • Add MaxRewardPagesPerValidator to bound the ClaimedRewards pages: Vec -> WeakBoundedVec.
    • Set to the constant 20 in the pallet. The vector of pages is now a WeakBoundedVec to allow easy and quick changes to this parameter.
  • Remove the optional MaxValidatorsCount storage item and add a mandatory MaxValidatorsCount config parameter instead.
    • Use it to bound EraRewardPoints.individual: BTreeMap -> BoundedBTreeMap.
    • Set to a dynamic parameter in the westend runtime so that changing it does not require a migration.
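One way to address the `MaxBondedEras` question above is to derive the bound from `BondingDuration` once, instead of setting two constants by hand in every runtime and test. A minimal sketch (the concrete `BondingDuration` value is illustrative):

```rust
use frame_support::traits::Get;
use sp_staking::EraIndex;

// Stand-in for the runtime's existing BondingDuration parameter.
pub struct BondingDuration;
impl Get<EraIndex> for BondingDuration {
    fn get() -> EraIndex {
        28
    }
}

/// Derived bound: covers both endpoints of the interval
/// [current_era - BondingDuration::get(), current_era], so it equals
/// BondingDuration + 1 by construction and cannot drift out of sync.
pub struct MaxBondedEras;
impl Get<u32> for MaxBondedEras {
    fn get() -> u32 {
        BondingDuration::get().saturating_add(1)
    }
}
```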

TO DO
Slashing storage items will be bounded in another PR.

  • UnappliedSlashes
  • SlashingSpans

```diff
@@ -400,7 +401,7 @@ impl pallet_staking::Config for Runtime {
 	type DisablingStrategy = pallet_staking::UpToLimitWithReEnablingDisablingStrategy;
 	type MaxInvulnerables = ConstU32<20>;
 	type MaxRewardPagesPerValidator = ConstU32<20>;
-	type MaxValidatorsCount = ConstU32<300>;
+	type MaxValidatorsCount = MaxAuthorities;
```
Contributor:

This is a huge jump, is there a reason why the MaxAuthorities is so big in the test runtime?

re-gius (author):

Oh, right! Then it's probably better to revert the change in that case, also adding a comment: 6c9806c

@gui1117 (Contributor) left a review:

Overall looks good, but there are still some comments to resolve.

Comment on lines 763 to 766
```rust
pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
    let old_exposures = exposure.others.len();
    let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
    defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
```
Contributor:

This function is not used; we can make it better or remove it.

Suggested change
```diff
-pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
-    let old_exposures = exposure.others.len();
-    let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
-    defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
+pub fn try_from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
+    let others = WeakBoundedVec::try_from(exposure.others).map_err(|_| ())?;
```

re-gius (author):

Changed in 3b7bc95
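For reference, the revised constructor plausibly looks like the following (a sketch: the `page_total` handling is an assumption based on `Exposure`'s `total`/`own` split, and the exact code in 3b7bc95 may differ):

```rust
/// Fallible conversion of a clipped `Exposure` into a single exposure page.
pub fn try_from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
    // Fail instead of silently truncating when the page bound is exceeded.
    let others = WeakBoundedVec::try_from(exposure.others).map_err(|_| ())?;
    Ok(ExposurePage {
        // Assumption: the page total is the exposure minus the validator's own stake.
        page_total: exposure.total.saturating_sub(exposure.own),
        others,
    })
}
```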

```diff
-claimed_pages.push(page);
+// try to add page to claimed entries
+if claimed_pages.try_push(page).is_err() {
+    defensive!("Limit reached for maximum number of pages.");
+}
```
Contributor:

The proof should be more precise: in what circumstances can this limit be reached, and why is it impossible in practice?

re-gius (author):

What do you mean by "the proof"?
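For context: with `defensive!`-style assertions, the "proof" is the justification for why the failure branch is unreachable in practice. A more precise version might look like this (the wording is hypothetical):

```rust
if claimed_pages.try_push(page).is_err() {
    // Proof sketch: a validator has at most MaxRewardPagesPerValidator
    // pages per era and each page is pushed at most once, so this push
    // can only fail if the bound was lowered without migrating
    // existing entries.
    defensive!("ClaimedRewards page limit reached; bound lowered without migration?");
}
```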

Comment on lines +75 to +88
```rust
let mut eras_stakers_keys =
    v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
eras_stakers_keys.dedup();
for k in eras_stakers_keys {
    let mut removal_result =
        v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakers::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
```
Contributor:

This seems to try to remove all the keys in one go; if we don't need a multi-block migration and we are sure that is OK, then we can do:

Suggested change
```diff
-let mut eras_stakers_keys =
-    v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
-eras_stakers_keys.dedup();
-for k in eras_stakers_keys {
-    let mut removal_result =
-        v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
-    while let Some(next_cursor) = removal_result.maybe_cursor {
-        removal_result = v16::ErasStakers::<T>::clear_prefix(
-            k,
-            u32::max_value(),
-            Some(&next_cursor[..]),
-        );
-    }
-}
+v16::ErasStakers::<T>::clear(u32::max_value(), None);
```

re-gius (author):

We can be sure for Polkadot or Kusama, but how can we be sure for every other chain? Maybe there are some limits I'm not aware of.

Contributor:

These storage items are already empty on Polkadot and Kusama (you can verify this in a UI), and this pallet is not parachain-ready, so it should be fine.
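That assumption can also be backed by a check. A sketch of a `try-runtime` hook for the v17 migration (placement and wording are assumed):

```rust
#[cfg(feature = "try-runtime")]
fn pre_upgrade() -> Result<Vec<u8>, sp_runtime::TryRuntimeError> {
    use frame_support::ensure;

    // Verify against real state that the deprecated maps are already
    // empty before they are cleared without a bound.
    ensure!(
        v16::ErasStakers::<T>::iter().next().is_none(),
        "ErasStakers should be empty before the v17 migration"
    );
    ensure!(
        v16::ErasStakersClipped::<T>::iter().next().is_none(),
        "ErasStakersClipped should be empty before the v17 migration"
    );
    Ok(Vec::new())
}
```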

Comment on lines +90 to +104
```rust
let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
    .map(|(k1, _k2)| k1)
    .collect::<Vec<_>>();
eras_stakers_clipped_keys.dedup();
for k in eras_stakers_clipped_keys {
    let mut removal_result =
        v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
```
Contributor:

Suggested change
```diff
-let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
-    .map(|(k1, _k2)| k1)
-    .collect::<Vec<_>>();
-eras_stakers_clipped_keys.dedup();
-for k in eras_stakers_clipped_keys {
-    let mut removal_result =
-        v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
-    while let Some(next_cursor) = removal_result.maybe_cursor {
-        removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
-            k,
-            u32::max_value(),
-            Some(&next_cursor[..]),
-        );
-    }
-}
+v16::ErasStakersClipped::<T>::clear(u32::max_value(), None);
```

re-gius (author):

Same here: are we assuming that this is safe?

```rust
} else {
    log!(info, "v17 applied successfully.");
}
T::DbWeight::get().reads_writes(1, 1)
```
Contributor:

We can count the number of operations as we do them.

re-gius (author):

How can we do that?

Contributor:

As you iterate and remove/set any storage item, bump counters (e.g. `let mut x = 0;` and `let mut y = 0;`) in this code, and use them as the final `reads_writes(x, y)`.
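A minimal sketch of that pattern (the storage items and loop body are placeholders, not the actual migration code):

```rust
let mut reads = 0u64;
let mut writes = 0u64;

// Hypothetical example: drain an old map into a new one, counting
// one read and one removal per entry, plus one write per insert.
for (key, value) in v16::SomeOldMap::<T>::drain() {
    reads += 1;
    writes += 1; // the removal performed by drain()
    v17::SomeNewMap::<T>::insert(key, value);
    writes += 1;
}

T::DbWeight::get().reads_writes(reads, writes)
```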

@paritytech-review-bot paritytech-review-bot bot requested a review from a team January 21, 2025 16:22
@paritytech-workflow-stopper
All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/12891905705
Failed job name: fmt

@kianenigma (Contributor):

@re-gius can you please target #7282 and, once the pallet compiles including benchmarks, merge it over?

```rust
///
/// When this value is not set, no limits are enforced.
#[pallet::storage]
pub type MaxValidatorsCount<T> = StorageValue<_, u32, OptionQuery>;
```
Contributor:

Why is this removed? It should not be. This counter is still used, and it is a moving target for the maximum number of validators that the system can have.

In my branch, there is a static target declared as the maximum number of validators:

`type MaxValidatorSet`

And `MaxValidatorsCount` should still be dynamic, but always less than `MaxValidatorSet`.

The names suck, I know :(
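A sketch of the proposed split, using the names from the comment above (these are fragments of a `#[frame_support::pallet]` module, and both names come from the reviewer's branch rather than this PR):

```rust
#[pallet::config]
pub trait Config: frame_system::Config {
    /// Static, compile-time ceiling on the validator-set size,
    /// usable as a bound for storage types.
    type MaxValidatorSet: Get<u32>;
}

/// Dynamic, governance-adjustable target, kept in storage; the pallet
/// would enforce `MaxValidatorsCount <= T::MaxValidatorSet::get()`
/// wherever the value is updated.
#[pallet::storage]
pub type MaxValidatorsCount<T> = StorageValue<_, u32, OptionQuery>;
```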

@re-gius (author) commented Feb 5, 2025

This PR is in the process of being closed, to be re-built as #7483 on top of #7282.

kianenigma added a commit that referenced this pull request Feb 10, 2025
… exposures (#7483)

Building from #6445 on top of #7282 

**Changes**

- [x] Bound `Invulnerables`, vector of validators invulnerable to
slashing.
- Add `MaxInvulnerables` to bound `Invulnerables` Vec -> `BoundedVec`.
- Set to constant 20 in the pallet (must be >= 17 for backward
compatibility with runtime `westend`).
- [x] Bound `DisabledValidators`, the vector of validators that have
offended in a given era and have been disabled.
- Add `MaxDisabledValidators` to bound `DisabledValidators` Vec ->
`BoundedVec`.
- Set to constant 100 in the pallet (it should be <= 1/3 *
`MaxValidatorsCount` according to the current disabling strategy).
- [x] Remove `ErasStakers` and `ErasStakersClipped` (see #433), the
non-paged validator exposures.
- They were deprecated in v14 and could have been removed since staking
era 1504 (now it's > 1700).
    - They are already empty on Polkadot and Kusama.
    - Completing the task from #5986.

Migrating pallet `staking` storage to v17 to apply all changes.  

**TO DO** (in a follow-up PR)
- [ ] Bound `ErasStakersPaged`
    - this needs bounding `ExposurePage.others` vector
- [ ] Bound `BondedEras` vector
- [ ] Bound `ClaimedRewards` pages vector
- [ ] Bound `ErasRewardPoints`
    - this needs bounding `EraRewardPoints.individual` BTreeMap 
- [ ] Bound `UnappliedSlashes`
- [ ] Bound `SlashingSpans` 
    - this needs bounding `SlashingSpans.prior` vector

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: kianenigma <[email protected]>
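For context, FRAME's `VersionedMigration` wrapper is the usual way such a storage-version bump is wired up: it runs the inner migration only when the on-chain version matches, then bumps it. A minimal sketch for v16 -> v17 (the inner migration name is hypothetical):

```rust
pub type MigrateV16ToV17<T> = frame_support::migrations::VersionedMigration<
    16,                                      // expected on-chain version
    17,                                      // version written after success
    UncheckedMigrateToV17<T>,                // hypothetical inner migration
    crate::pallet::Pallet<T>,
    <T as frame_system::Config>::DbWeight,
>;
```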
@kianenigma closed this Feb 12, 2025
@re-gius deleted the re-gius/staking/add-storage-bounds branch February 17, 2025 12:08
Labels: T1-FRAME (this PR/Issue is related to core FRAME, the framework), T2-pallets (this PR/Issue is related to a particular pallet)