add coretime test using zombienet-sdk #4883
Conversation
We are clearly not going to add 100k lines of code 🙈 These files should be generated by subxt before building the project.
@s0me0ne-unkn0wn, we can run subxt as part of the CI job to generate the needed code. Does this need to be on top of any particular branch?
Locally, I generated them from metadata scraped from a running zombienet with
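For context, a typical subxt-CLI flow for this kind of generation looks roughly like the following sketch; the node URL, file paths, and guard logic are illustrative assumptions, not the exact command from the PR:

```shell
#!/usr/bin/env bash
# Hypothetical local codegen flow: scrape metadata from a running zombienet
# node with the subxt CLI, then generate Rust types from it. URL and paths
# are illustrative assumptions.
set -u
URL="${NODE_URL:-ws://127.0.0.1:9944}"

if command -v subxt >/dev/null 2>&1; then
    subxt metadata --url "$URL" --format bytes > metadata.scale
    subxt codegen --file metadata.scale | rustfmt --edition 2021 > generated.rs
else
    echo "subxt CLI not found (cargo install subxt-cli); skipping codegen"
fi
```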
This is exactly what I predicted would happen when we use zombienet in Rust... We clearly will not use any live network, because when there is breaking stuff in the branch, it is probably not yet on the live network. @jsdw we really need some improvements for the
That's a fair point, but we should take into account the worst and the best cases. Not every PR introduces breaking changes — not even most of them do. And if one doesn't, we can use a live network as a fallback and a straightforward development path. That gives us the advantage of testing in a real-world-like environment. If it does introduce changes to the metadata, we can still spin up a dummy network from the PR, fetch the metadata from it, and use it to build the test. It costs some time, but it's a holistic approach that resembles real-world usage much more than "build-a-crate" stuff. In my opinion, trying to use knowledge about the underlying data structures and logic lowers the level of tests, pushing them back from integration to system or even unit tests. P.S. Please take it not as an objection but as an invitation to a discussion from someone who had one hard day today :)
We can still use
I think option 1 (embedded pjs) or 2 (fetch the metadata in the build process) are options worth exploring. Wdyt?
I just want to run |
@pepoviola the commit you have pushed breaks this stuff locally. I don't get it. I mean, I proposed a solution above.
Yes, for running locally you need to first fetch the
Thanks!
Then at least put all the stuff into some script in the crate, and use the same script in CI as well. At least this makes it easy to run this stuff locally. I would like to prevent having the metadata files in CI, but then also stuff like
In the future I would like to see that
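The single shared script suggested here could be sketched like this; the output path, default URL, and fallback behavior are assumptions, not the PR's actual values:

```shell
#!/usr/bin/env bash
# Sketch of one fetch-metadata script usable both locally and in CI.
# Output path and default node URL are illustrative assumptions.
set -u

OUT="${1:-metadata.scale}"
URL="${NODE_URL:-ws://127.0.0.1:9944}"

if command -v subxt >/dev/null 2>&1; then
    subxt metadata --url "$URL" --format bytes > "$OUT" \
        && echo "metadata written to $OUT"
else
    echo "subxt CLI not found; install it with: cargo install subxt-cli"
fi
```

CI would call the same script before the build step, so local and CI runs stay in sync.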
Sounds good, I will work on that approach. Thanks!!
You won't have to. To run zombienet tests locally you need prebuilt node binaries anyway. So the test just spins up a noop network from those binaries, fetches metadata with subxt, builds the actual test and runs it. Yes, that involves some boilerplate code. But
Actually, there's one even more straightforward option: add a command to the Substrate node to export SCALE-encoded metadata to a file. Then you don't even need a noop network. You just do that export in
Anyway, you just run your
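A build-step sketch of this export idea, assuming a hypothetical `export-metadata` subcommand (no such command exists in polkadot today; it stands in for whatever gets added), could look like:

```rust
use std::path::Path;
use std::process::Command;

// Sketch of the "export metadata in the build process" idea. The
// `export-metadata` subcommand is hypothetical and stands in for a
// to-be-added node command that writes SCALE-encoded metadata to a file.
fn try_export_metadata(node_bin: &str, out: &Path) -> std::io::Result<Option<()>> {
    match Command::new(node_bin).arg("export-metadata").arg(out).status() {
        Ok(status) if status.success() => Ok(Some(())),
        // The binary exists but refused the (hypothetical) subcommand.
        Ok(_) => Ok(None),
        // No such binary on this machine: skip rather than fail the build.
        Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() {
    let out = Path::new("metadata.scale");
    match try_export_metadata("polkadot", out) {
        Ok(Some(())) => println!("metadata exported to {}", out.display()),
        Ok(None) => println!("node binary or subcommand unavailable; skipping metadata export"),
        Err(e) => panic!("metadata export failed: {e}"),
    }
}
```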
Hi @joepetrowski / @seadanda, did you have time to review the logic of this test? Thanks!!
Hey, sorry I started but it's a big PR and I've been focusing on the coretime release. I'll review that part specifically now |
Test steps look correct in general, but I think the assertions only ever cover the Relay Chain issuance.
log::info!("Waiting for a full-length sale to begin");

// Skip the first sale completely as it may be a short one. Also, `request_code_count` requires
What do you mean by a short sale?
I mean, when the very first sale starts, it can start in the middle of the region (IIUC). Given that we want to keep everything as deterministic as possible, I better wait for the beginning of the second sale. Thus, I will synchronize the block number where we begin the actual test to the beginning of the second region, which is deterministic. I hope that makes sense.
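The determinism argument above boils down to a bit of block arithmetic: round the (possibly mid-region) first sale start up to the next region boundary and synchronize there. A minimal sketch, where the 80-block region length and the helper name are illustrative assumptions rather than the test's actual values:

```rust
// The first sale can start somewhere in the middle of a region, so the test
// waits for the next region boundary, which is deterministic. The 80-block
// region length below is an illustrative assumption.
fn next_region_boundary(region_len: u32, first_sale_start: u32) -> u32 {
    // Round the (possibly mid-region) first sale start up to a region boundary.
    first_sale_start.div_ceil(region_len) * region_len
}

fn main() {
    // A short first sale starting at block 35, with 80-block regions, means
    // the full-length second sale begins at block 80.
    println!("{}", next_region_boundary(80, 35));
}
```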
log::info!("Waiting for a full-length sale to begin");

// Skip the first sale completely as it may be a short one. Also, `request_code_count` requires
// two session boundaries to propagate. Given that the `fast-runtime` session is 10 blocks and
You should be able to wait for an event emitted at a session change on the relay. If you wait for two and then go to the sale after it, that means that the relative timings are less important.
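In block-number terms, the two-session wait looks like the sketch below, assuming the `fast-runtime` session length of 10 blocks mentioned in the comment; real code would instead subscribe to `NewSession` events on the relay chain, as suggested, rather than compute block heights:

```rust
// With a fixed session length, find the first block by which two session
// boundaries have passed since `current`. The 10-block session length is
// the `fast-runtime` value mentioned in the review thread.
fn after_two_session_boundaries(session_len: u32, current: u32) -> u32 {
    let next_boundary = (current / session_len + 1) * session_len;
    next_boundary + session_len
}

fn main() {
    // From block 23, the next session boundary is block 30, and the
    // second one is block 40.
    println!("{}", after_two_session_boundaries(10, 23));
}
```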
The CI pipeline was cancelled due to the failure of one of the required jobs.
ping @s0me0ne-unkn0wn, do you agree to merge this one?
Yeah let's just merge it finally. Any improvements may be follow-ups. What we currently have just works, and it's already taken a ridiculous amount of time 🫠 |
Related to #4882
cc: @s0me0ne-unkn0wn
`RUST_LOG=info,zombie=debug cargo test -p polkadot-zombienet-sdk-tests smoke::coretime_revenue::coretime_revenue_test --features zombie-metadata -- --exact`
Update: This PR is now ready for review.
The failing `warp-sync` test is not related.