
add coretime test using zombienet-sdk #4883

Merged · 63 commits · Sep 18, 2024
Conversation

pepoviola
Contributor

@pepoviola pepoviola commented Jun 26, 2024

Related to #4882
cc: @s0me0ne-unkn0wn

RUST_LOG=info,zombie=debug cargo test -p polkadot-zombienet-sdk-tests smoke::coretime_revenue::coretime_revenue_test --features zombie-metadata  -- --exact

Update: This PR is now ready for review. The failing warp-sync tests are not related.

@pepoviola pepoviola added R0-silent Changes should not be mentioned in any release notes T10-tests This PR/Issue is related to tests. labels Jun 26, 2024
@pepoviola pepoviola requested a review from a team as a code owner June 26, 2024 09:50
@bkchr
Member

bkchr commented Jun 26, 2024

We are clearly not going to add 100k lines of code 🙈

These files should be generated by subxt before building the project.

@pepoviola
Contributor Author

We are clearly not going to add 100k lines of code 🙈

These files should be generated by subxt before building the project.

@s0me0ne-unkn0wn, we can run subxt as part of a CI job to generate the needed code. Does this need to be on top of any particular branch?
Thx!

@s0me0ne-unkn0wn
Contributor

This needs to be on top of any particular branch?

Locally, I generated them from metadata scraped from a running zombienet with rococo-local and coretime-rococo-local, respectively. In principle, we may even use live networks for that if they're accessible from the CI environment. If I remember correctly, subxt derive macros allow for specifying metadata URL instead of path. Those huge .rs files are just so much more convenient for the local development

@bkchr
Member

bkchr commented Jun 26, 2024

In principle, we may even use live networks for that if they're accessible from the CI environment. If I remember correctly, subxt derive macros allow for specifying metadata URL instead of path. Those huge .rs files are just so much more convenient for the local development

This is exactly what I predicted would happen when we use zombienet in Rust... We clearly will not use any live network, because breaking changes in a branch are probably not yet deployed on the live network.

@jsdw we really need some improvements for the subxt macro. We should be able to specify a crate; subxt would then compile the crate and fetch the wasm file. The name of the wasm file should be configurable and detected by default.

@s0me0ne-unkn0wn
Contributor

This is exactly what I predicted on what would happen when we use zombienet in Rust... We clearly will not use any live network, because when there are breaking stuff in the branch, they are probably not yet on the live network.

That's a fair point, but we should take into account the worst and the best cases. Not every PR introduces breaking changes — not even most of them do. And if one doesn't, we can use a live network as a fallback and a straightforward development path. That gives us the advantage of testing in a real-world-like environment.

If it does introduce changes to the metadata, we can still spin up a dummy network from the PR, fetch the metadata from it, and use it to build the test. It costs some time, but it's a holistic approach that resembles real-world usage much more than "build-a-crate" stuff. In my opinion, trying to use knowledge about the underlying data structures and logic lowers the level of tests, pushing them back from integration to system or even unit tests.

P.S. Please take it not as an objection but as an invitation to a discussion from someone who had one hard day today :)

@pepoviola
Contributor Author

We can still use pjs from zombienet-sdk; we would need to rewrite the subxt calls into js, but that is doable. Another option is to fetch the metadata in the CI pipeline (in the build step), since we can make the build of polkadot/polkadot-parachain a requirement and use the artifacts from there. A third option is to just commit the metadata files, but the problem there is that we would need to update them whenever something changes, which is very error prone.

I think option 1 (embedded pjs) and option 2 (fetch the metadata in the build process) are worth exploring. Wdyt?

@bkchr
Member

bkchr commented Jun 27, 2024

If it does introduce changes to the metadata, we can still spin up a dummy network from the PR, fetch the metadata from it, and use it to build the test. It costs some time, but it's a holistic approach that resembles real-world usage much more than "build-a-crate" stuff. In my opinion, trying to use knowledge about the underlying data structures and logic lowers the level of tests, pushing them back from integration to system or even unit tests.

I just want to run cargo test and not jump through 1000 hoops to achieve this. People will just be confused about why stuff isn't working. One type changing will just result in some decoding error somewhere. People will have no clue what is going on or what they need to do.

@bkchr
Member

bkchr commented Jun 27, 2024

@pepoviola the commit you have pushed breaks this stuff locally. I don't get it. I mean I proposed a solution above.

@pepoviola
Contributor Author

@pepoviola the commit you have pushed breaks this stuff locally. I don't get it. I mean I proposed a solution above.

Yes, to run it locally you first need to fetch the metadata files. I can also include those in the repo (and fetch them at the build stage to catch any update). So, from the perspective of "run cargo test and it just works", I think we can switch the network interactions to js and wait for the improvements in the subxt macro. Wdyt?
Also, this test runs as a separate binary and will not be picked up by cargo test. I can change that as well so it runs as part of the test suite.

Thanks!

@bkchr
Member

bkchr commented Jun 27, 2024

Then at least put all the stuff into some script in the crate, and use the same script in CI as well. At least that makes it easy to run this stuff locally. I would like to avoid having the metadata files in the repo, but then stuff like cargo check would also require those metadata files just to check the source code. However, we can probably put the stuff behind some feature that is then activated by the script.

In the future I would like to see subxt become a little more flexible about fetching this stuff at compile time.

@pepoviola
Contributor Author

Sounds good, I will work on that approach. Thanks!!

@s0me0ne-unkn0wn
Contributor

I just want to run cargo test and not jump through 1000 hoops to achieve this.

You won't have to. To run zombienet tests locally you need prebuilt node binaries anyway. So the test just spins up a no-op network from those binaries, fetches the metadata with subxt, builds the actual test, and runs it. Yes, that involves some boilerplate code. But:

  1. We only have to write it once.
  2. It is only needed if the PR itself changes the metadata.
  3. We wouldn't ever have to store metadata or generated code in the repo.

Actually, there's an even more straightforward option: add a command to the Substrate node that exports SCALE-encoded metadata to a file. Then you don't even need a no-op network; you just do the export in the test's build.rs, and the subxt macro picks it up in the test code.

Anyway, you just run your cargo test without any additional steps.

@pepoviola
Contributor Author

@joepetrowski @seadanda please confirm that the test logic itself makes sense, i.e. that the total issuance should indeed change in the manner described in the test comments: https://github.com/paritytech/polkadot-sdk/pull/4883/files#diff-275b35fb5cb16898b64ab5ba6b7da61a5107b3941aee88303cc298e735acbaa7 (you need to "Load diff" for coretime_revenue.rs)

Hi @joepetrowski / @seadanda, did you have time to review the logic of this test?

Thanks!!

@seadanda
Contributor

Hey, sorry, I started but it's a big PR and I've been focusing on the coretime release. I'll review that part specifically now.

@tdimitrov tdimitrov mentioned this pull request Aug 26, 2024
8 tasks
Contributor

@seadanda seadanda left a comment


Test steps look correct in general, but I think the assertions only ever cover the Relay Chain issuance.


log::info!("Waiting for a full-length sale to begin");

// Skip the first sale completely as it may be a short one. Also, `request_code_count` requires
Contributor


What do you mean by a short sale?

Contributor


I mean that when the very first sale starts, it can start in the middle of a region (IIUC). Given that we want to keep everything as deterministic as possible, I'd rather wait for the beginning of the second sale. Thus, I will synchronize the block number where the actual test begins to the beginning of the second region, which is deterministic. I hope that makes sense.

log::info!("Waiting for a full-length sale to begin");

// Skip the first sale completely as it may be a short one. Also, `request_code_count` requires
// two session boundaries to propagate. Given that the `fast-runtime` session is 10 blocks and
Contributor


You should be able to wait for an event emitted at a session change on the relay chain. If you wait for two of them and then go to the sale after that, the relative timings become less important.

@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: test-linux-stable 2/3
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/7204135

@pepoviola
Contributor Author

ping @s0me0ne-unkn0wn, do you agree to merge this one?
Thx!

@s0me0ne-unkn0wn
Contributor

Yeah let's just merge it finally. Any improvements may be follow-ups. What we currently have just works, and it's already taken a ridiculous amount of time 🫠

@pepoviola pepoviola enabled auto-merge September 16, 2024 19:48
@pepoviola pepoviola added this pull request to the merge queue Sep 16, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Sep 16, 2024
@pepoviola pepoviola enabled auto-merge September 18, 2024 06:31
@pepoviola pepoviola added this pull request to the merge queue Sep 18, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Sep 18, 2024
@pepoviola pepoviola added this pull request to the merge queue Sep 18, 2024
Merged via the queue into master with commit ba38d31 Sep 18, 2024
203 of 209 checks passed
@pepoviola pepoviola deleted the jv-add-coretime-revenue-test branch September 18, 2024 10:27