
JMcK sample tests for review #4147

Open

jamesmckenna77 wants to merge 25 commits into dev

Conversation


@jamesmckenna77 commented Feb 27, 2025

Adding 2 tests to the existing Capella test suite:

  1. In test_process_deposit.py:
    added "test_validator_invalid_high_funding_after_withdrawal", a negative scenario based on the previous test, to cover a user depositing 2x the MAX_EFFECTIVE_BALANCE; the post state is None as per the README

  2. Created test_process_attestation.py and added a more functional test, "test_validator_appears_only_once_in_attestation", to confirm that the validators in a slot committee are unique and do not contain duplicates

As for feedback on the repository itself, my feeling is that the overall structure could be amended to keep the test files for each fork closer to the actual config/spec files.
For example, in the eth2spec directory we have a series of folders for each upgrade: capella, bellatrix, altair. These directories contain the minimal.py and mainnet.py configs associated with that upgrade.
Alongside them is a test directory, and inside that we again have a full list of folders for each upgrade.
In my opinion, the repo would be more intuitive if these test and config files were consolidated closer together: one capella folder, containing a folder for tests and another for config/spec, as sketched below. This keeps them together and clearly links them.
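
For illustration, a sketch of that proposed layout (folder names indicative only):

    eth2spec/
        capella/
            config/    # minimal.py, mainnet.py and spec files for capella
            tests/     # capella test files
        bellatrix/
            config/
            tests/
        altair/
            config/
            tests/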

While the READMEs are mostly very good, clear, and helpful, the comments within the actual tests and functions are sometimes less than instructive about what a function actually does. More thorough documentation would make ramp-up and correct usage easier to achieve.

1. In Capella, test_process_deposit.py
test_validator_invalid_high_funding_after_withdrawal which is a negative scenario to cover a user depositing 2x the MAX_EFFECTIVE_BALANCE, post state is None
2. Added a more functional test in test_process_attestation to confirm that the validators in a slot committee are unique
Comment on lines 61 to 86
@with_capella_and_later
@spec_state_test
def test_validator_invalid_high_funding_after_withdrawal(spec, state):
    """
    Simple extension of the previous test_success_top_up_to_withdrawn_validator in order to add a negative scenario
    where a user deposits 2x MAX_EFFECTIVE_BALANCE.
    """
    validator_index = 5  # Example validator index

    # Fully withdraw the validator
    set_validator_fully_withdrawable(spec, state, validator_index)
    assert state.balances[validator_index] > 0

    next_epoch_via_block(spec, state)
    assert state.balances[validator_index] == 0
    assert state.validators[validator_index].effective_balance > 0

    next_epoch_via_block(spec, state)
    assert state.validators[validator_index].effective_balance == 0

    # Make a top-up balance to validator - attempt to deposit double the max deposit
    deposit = spec.MAX_EFFECTIVE_BALANCE * 2

    # Handle the negative test based on the readme - no post state
    if deposit > spec.MAX_EFFECTIVE_BALANCE:
        yield 'post', None
@jtraglia (Member) commented Feb 27, 2025

This test is flawed for a few reasons.

  • It's possible to deposit as much ETH as you want, but the validator's effective balance is limited.
  • If it were invalid, you should use run_deposit_processing(..., valid=False) instead of manually yielding.
  • It doesn't handle the electra case (spec.MAX_EFFECTIVE_BALANCE_ELECTRA); see the sketch after this list.
  • What readme are you referring to? The test format?
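
A minimal sketch of that electra branch, assuming the is_post_electra helper mentioned later in this thread (from eth2spec.test.helpers.forks):

    # Sketch: pick the fork-appropriate cap before building the deposit
    max_effective_balance = (
        spec.MAX_EFFECTIVE_BALANCE_ELECTRA if is_post_electra(spec)
        else spec.MAX_EFFECTIVE_BALANCE
    )
    amount = 2 * max_effective_balance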

@jamesmckenna77 (Author) commented

  • Ok, I got it. I need to call run_deposit_processing(..., valid=False) to get a proper rejection.
  • I can add an if statement to use the electra constant when the fork is electra.
  • I pulled that from /consensus-specs/tests/README.md. It mentions that this method is used to indicate a test that was designed to fail, but maybe I've misread it.

@jtraglia (Member) commented

Hey again. No worries, this stuff is pretty complicated. I would suggest updating this test to check that a top-up deposit greater than MAX_EFFECTIVE_BALANCE (before and after electra) still works, and that the validator's balance (not its effective balance) is not limited.
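
A minimal sketch of such a test, assuming the prepare_state_and_deposit, run_deposit_processing, and is_post_electra helpers quoted elsewhere in this thread; the test name and exact assertions are indicative only:

    @with_capella_and_later
    @spec_state_test
    def test_top_up_above_max_effective_balance(spec, state):
        # Hypothetical sketch; asserts on the pre-electra path only, since
        # post-electra deposits are queued as pending deposits (see discussion below)
        validator_index = 5
        amount = spec.MAX_EFFECTIVE_BALANCE * 2

        deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
        yield from run_deposit_processing(spec, state, deposit, validator_index)

        if not is_post_electra(spec):
            # The validator's balance is not limited by the cap...
            assert state.balances[validator_index] >= amount
            # ...while its effective balance stays bounded by MAX_EFFECTIVE_BALANCE
            assert state.validators[validator_index].effective_balance <= spec.MAX_EFFECTIVE_BALANCE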

Comment on lines 7 to 42
@with_capella_and_later
@spec_state_test
def test_validator_appears_only_once_in_attestation(spec, state):
    """
    A test to confirm that the validator_index of the validators in an attestation committee are unique and there
    are no duplicate validators
    """
    slot = state.slot
    epoch = slot // spec.SLOTS_PER_EPOCH

    # Get the number of committees assigned per slot in the current epoch
    slot_committee_count = spec.get_committee_count_per_slot(state, epoch)

    # Get the list of validators for each committee in the slot
    validators = []
    for committee in range(slot_committee_count):
        validator_index = spec.get_committee_assignment(state, epoch, committee)

        # There are tuples and individual validators in the list, so they need to be extracted
        if isinstance(validator_index, tuple):
            validators.extend(validator_index[0])
        elif isinstance(validator_index, list):
            validators.extend(validator_index)
        else:
            validators.append(validator_index)

    # Check that the assigned_validators list is not empty
    assert len(validators) > 0

    # Confirm that the same validator does not appear more than once in the list of validators
    validator_ids = set()
    for validator_id in validators:
        assert validator_id not in validator_ids
        validator_ids.add(validator_id)

    yield "committee_assignments", validator_ids
@jtraglia (Member) commented

This test is interesting. I have a few questions.

  • Why did you include this in the capella tests?
  • Is this test applicable to previous forks?
  • Is this the easiest way to check if validators only appear once? (see the note after this list)
  • Why do you yield committee assignments?
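
One common idiom for that uniqueness check, and the one the updated test later in this thread ends up adopting, is a length comparison against a set:

    assert len(validators) == len(set(validators))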

@jamesmckenna77 (Author) commented

  • It is in Capella purely because that was the directory I was working in at the time. Happy to move it to a more appropriate location; the phase0 directory looks like the right spot, so I'll move it there.
  • This test would indeed be applicable to pretty much every fork/version. Attestations are one of the pillars, so I guess I can change that to "@with_all_phases".
  • I yielded the committee_assignments (validator_ids) so that if an issue was encountered, the list containing evidence of any duplicate values would be available. It isn't essential to the test.

I'll submit an updated version for both tests.

@jtraglia (Member) commented

Hey James. Yes, it would go into the phase0 directory with @with_all_phases. And I see. Unfortunately, we cannot yield committee assignments here as there's a standard format that clients expect. Please refer to other test_process_attestation.py files to see the way things are done. For example:

@with_all_phases
@spec_state_test
@always_bls
def test_invalid_attestation_signature(spec, state):
    attestation = get_valid_attestation(spec, state)
    next_slots(spec, state, spec.MIN_ATTESTATION_INCLUSION_DELAY)
    yield from run_attestation_processing(spec, state, attestation, valid=False)

I believe this is the relevant format file.

jamesmckenna77 and others added 20 commits February 28, 2025 17:24
1. Removal of test_process_attestation.py from Capella
2. test_validator_appears_only_once_in_attestation has been updated to work with all phases and moved to phase0. Assertion simplified
3. Updates to test_validator_invalid_high_funding_after_withdrawal to handle electra deposits and to correctly handle processing of the deposit through run_deposit_processing
1. In Capella, test_process_deposit.py
test_validator_invalid_high_funding_after_withdrawal which is a negative scenario to cover a user depositing 2x the MAX_EFFECTIVE_BALANCE, post state is None
2. Added a more functional test in test_process_attestation to confirm that the validators in a slot committee are unique
Comment on lines 92 to 93
    # Process the invalid deposit, setting valid=False as we expect it to be rejected
    run_deposit_processing(spec, state, validator_index, deposit, valid=False)
@jtraglia (Member) commented

So like I said before, this is a valid situation.

The invalid usage of run_deposit_processing is throwing an exception for the wrong reason.

It should be:

Suggested change:

-     # Process the invalid deposit, setting valid=False as we expect it to be rejected
-     run_deposit_processing(spec, state, validator_index, deposit, valid=False)
+     # Process deposit with twice the max effective balance
+     yield from run_deposit_processing(spec, state, deposit, validator_index)

Comment on lines 95 to 96
    # Validate that the deposit was rejected by checking the effective balance
    assert state.validators[validator_index].effective_balance == 0
@jtraglia (Member) commented

Let's keep this check, but also add one that ensures the validator's balance is the deposit amount.
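
Illustratively, assuming amount is the deposited top-up amount, that pair of checks might read:

    # Effective balance remains 0 here; for a top-up it is only recalculated at the epoch boundary
    assert state.validators[validator_index].effective_balance == 0
    # The balance itself is credited with the full deposit amount
    assert state.balances[validator_index] == amount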

@jamesmckenna77 (Author) commented Mar 1, 2025

Hey Justin,
I might need some guidance from you here to make the invalid deposits test work for all forks. I'm using:

deposit = prepare_state_and_deposit(spec, state, validator_index, amount, signed=True)
yield from run_deposit_processing(spec, state, deposit, validator_index)

and below is the output from a test I'm running at the moment:

('Fork:', 'deneb', 'Actual Balance:', 64000000000, 'Expected Balance:', 64000000000, 'Actual Effective Balance:', 0, 'Expected Effective Balance:', 0)
('Fork:', 'capella', 'Actual Balance:', 64000000000, 'Expected Balance:', 64000000000, 'Actual Effective Balance:', 0, 'Expected Effective Balance:', 0)
('Fork:', 'fulu', 'Actual Balance:', 0, 'Expected Balance:', 64000000000, 'Actual Effective Balance:', 0, 'Expected Effective Balance:', 0)
('Fork:', 'electra', 'Actual Balance:', 0, 'Expected Balance:', 4096000000000, 'Actual Effective Balance:', 0, 'Expected Effective Balance:', 0)

2 observations:

  • I'm using @with_capella_and_later as the decorator, but fulu is not being processed correctly and is returning an actual balance of 0 when it should be 64000000000.

From what I can gather, if I run my test using is_post_electra(spec) and the current spec is fulu, it runs through the electra path, so perhaps it is also going through the wrong path in run_deposit_processing as well? I believe fulu is before electra?

  • Similar story for electra: it returns 0 instead of 4096000000000 as the current balance.
    run_deposit_processing appears to support electra, so I'm using it as the primary way to try to process the deposit.

But I also tried this version, as I noticed it had been used in some spots in the electra directory:

deposit_request = prepare_deposit_request(spec, validator_index, amount, signed=True)
yield from run_deposit_request_processing(spec, state, deposit_request, validator_index)

Neither has resulted in a successfully processed deposit.

Comment on lines 586 to 616
@with_all_phases
@spec_state_test
def test_validator_appears_only_once_in_attestation(spec, state):
    """
    A test to confirm that the validator_index of the validators in an attestation committee are unique and there
    are no duplicate validators
    """
    slot = state.slot
    epoch = slot // spec.SLOTS_PER_EPOCH

    # Get the number of committees assigned per slot in the current epoch
    slot_committee_count = spec.get_committee_count_per_slot(state, epoch)

    # Get the list of validators for each committee in the slot
    validators = []
    for committee in range(slot_committee_count):
        validator_index = spec.get_committee_assignment(state, epoch, committee)

        # There are tuples and individual validators in the list, so they need to be extracted
        if isinstance(validator_index, tuple):
            validators.extend(validator_index[0])
        elif isinstance(validator_index, list):
            validators.extend(validator_index)
        else:
            validators.append(validator_index)

    # Check that the list of validators is not empty
    assert len(validators) > 0

    # Confirm that the same amount of unique validators appear in the list
    assert len(validators) == len(set(validators))
@jtraglia (Member) commented

So this isn't yielding the required data for this test format. That's okay though. Let's move this to the unittests directory which doesn't require that. I think right here would be a decent spot for this.

@jamesmckenna77 (Author) commented Mar 1, 2025

Thanks for the feedback, Justin.
Yeah, it's probably more useful as a unittest. It isn't giving anything meaningful that a user can run with, but I think it adds some value.
I'll move it to the unittests directory.

…a to unitest folder

2. For the purposes of the interview process, I'm going to remove test_validator_invalid_high_funding_after_withdrawal for now.