[move-prover] Sharding feature + re-animate Prover.toml #15775

Merged 1 commit on Jan 21, 2025
4 changes: 2 additions & 2 deletions aptos-move/e2e-benchmark/data/calibration_values.tsv
@@ -12,14 +12,14 @@ VectorPicture { length: 30720 } 34 0.924 1.081 6900.0
VectorPictureRead { length: 30720 } 34 0.938 1.089 6900.0
SmartTablePicture { length: 30720, num_points_per_txn: 200 } 34 0.972 1.074 42970.1
SmartTablePicture { length: 1048576, num_points_per_txn: 300 } 34 0.960 1.066 73865.4
ResourceGroupsSenderWriteTag { string_length: 1024 } 34 0.898 1.062 16.2
ResourceGroupsSenderWriteTag { string_length: 1024 } 34 0.898 1.062 19.2
Contributor Author:

This PR does not touch anything in the execution stack, so this must be from previous PRs. (Perhaps permissioned signers? @igor-aptos)

Contributor:

Seems like signer has indeed added noise - here is the recalibration PR - #15778

generated by doing:

./testsuite/single_node_performance_calibration.py --time-interval=3d
./testsuite/single_node_performance_calibration.py --move-e2e --time-interval=3d

Note - this particular line didn't regress as much as you saw from the first run - your second run is much closer to the original value. But I assume all the changes create way too much noise anyway.

ResourceGroupsSenderMultiChange { string_length: 1024 } 34 0.923 1.101 33.4
TokenV1MintAndTransferFT 34 0.947 1.073 390.1
TokenV1MintAndTransferNFTSequential 34 0.925 1.063 600.3
TokenV2AmbassadorMint { numbered: true } 34 0.927 1.076 466.4
LiquidityPoolSwap { is_stable: true } 34 0.920 1.043 689.8
LiquidityPoolSwap { is_stable: false } 34 0.929 1.061 628.8
CoinInitAndMint 34 0.945 1.072 207.6
CoinInitAndMint 34 0.945 1.072 215.6
FungibleAssetMint 34 0.934 1.063 236.7
IncGlobalMilestoneAggV2 { milestone_every: 1 } 34 0.913 1.040 33.7
IncGlobalMilestoneAggV2 { milestone_every: 2 } 34 0.915 1.050 19.1
2 changes: 2 additions & 0 deletions aptos-move/framework/aptos-framework/Prover.toml
@@ -0,0 +1,2 @@
[backend]
shards = 5
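
Since `convert_options` now loads a package-local Prover.toml as the base configuration, other backend fields referenced in this diff could in principle be pinned the same way. A hypothetical extended file, for illustration only: of these keys, only `shards` is confirmed by this PR; the others are assumptions based on the `BoogieOptions` fields the diff reads (`vc_timeout`, `proc_cores`).

```toml
# Hypothetical Prover.toml; only `shards` appears in this PR's diff.
[backend]
shards = 5        # split verification into 5 sequentially processed shards
vc_timeout = 40   # soft per-verification-condition solver timeout, in seconds
proc_cores = 4    # cores for parallel processing of verification conditions
```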
116 changes: 57 additions & 59 deletions aptos-move/framework/src/prover.rs
@@ -22,7 +22,7 @@ use std::{
};
use tempfile::TempDir;

#[derive(Debug, Clone, clap::Parser, serde::Serialize, serde::Deserialize)]
#[derive(Debug, Clone, Default, clap::Parser, serde::Serialize, serde::Deserialize)]
pub struct ProverOptions {
/// Verbosity level
#[clap(long, short)]
@@ -50,20 +50,30 @@ pub struct ProverOptions {
pub cvc5: bool,

/// The depth until which stratified functions are expanded.
#[clap(long, default_value_t = 6)]
pub stratification_depth: usize,
#[clap(long)]
pub stratification_depth: Option<usize>,

/// A seed for the prover.
#[clap(long, default_value_t = 0)]
pub random_seed: usize,
#[clap(long)]
pub random_seed: Option<usize>,

/// The number of cores to use for parallel processing of verification conditions.
#[clap(long, default_value_t = 4)]
pub proc_cores: usize,
#[clap(long)]
pub proc_cores: Option<usize>,

/// The number of shards to split the verification problem into. Shards are
/// processed sequentially. This can be used to ease memory pressure for verification
/// of large packages.
#[clap(long)]
pub shards: Option<usize>,

/// If there are multiple shards, the shard to which verification shall be narrowed.
Contributor:
From looking at other related code, it looks like the shards are numbered 1.. and not 0..; we should document this, or perhaps make them 0-based?

#[clap(long)]
pub only_shard: Option<usize>,

/// A (soft) timeout for the solver, per verification condition, in seconds.
#[clap(long, default_value_t = 40)]
pub vc_timeout: usize,
#[clap(long)]
pub vc_timeout: Option<usize>,

/// Whether to disable global timeout overwrite.
/// With this flag set to true, the value set by "--vc-timeout" will be used globally
@@ -118,32 +128,6 @@ pub struct ProverOptions {
pub for_test: bool,
}

impl Default for ProverOptions {
fn default() -> Self {
Self {
verbosity: None,
filter: None,
only: None,
trace: false,
cvc5: false,
stratification_depth: 6,
random_seed: 0,
proc_cores: 4,
vc_timeout: 40,
disallow_global_timeout_to_be_overwritten: false,
check_inconsistency: false,
unconditional_abort_as_inconsistency: false,
keep_loops: false,
loop_unroll: None,
stable_test_output: false,
dump: false,
benchmark: false,
for_test: false,
skip_instance_check: false,
}
}
}

impl ProverOptions {
/// Runs the move prover on the package.
pub fn prove(
@@ -173,7 +157,7 @@ impl ProverOptions {
known_attributes.clone(),
experiments.to_vec(),
)?;
let mut options = self.convert_options();
let mut options = self.convert_options(package_path)?;
options.language_version = language_version;
options.model_builder.language_version = language_version.unwrap_or_default();
if compiler_version.unwrap_or_default() >= CompilerVersion::V2_0
@@ -225,65 +209,79 @@ impl ProverOptions {
Ok(())
}

fn convert_options(self) -> move_prover::cli::Options {
fn convert_options(self, package_path: &Path) -> anyhow::Result<Options> {
let prover_toml = package_path.join("Prover.toml");
let base_opts = if prover_toml.exists() {
Options::create_from_toml_file(prover_toml.to_string_lossy().as_ref())?
} else {
Options::default()
};
let verbosity_level = if let Some(level) = self.verbosity {
level
} else if self.for_test {
LevelFilter::Warn
} else {
LevelFilter::Info
base_opts.verbosity_level
};
let opts = move_prover::cli::Options {
let opts = Options {
output_path: "".to_string(),
verbosity_level,
prover: move_prover_bytecode_pipeline::options::ProverOptions {
verify_scope: if let Some(name) = self.only {
VerificationScope::Only(name)
} else {
VerificationScope::All
base_opts.prover.verify_scope.clone()
},
stable_test_output: self.stable_test_output,
stable_test_output: self.stable_test_output || base_opts.prover.stable_test_output,
auto_trace_level: if self.trace {
move_prover_bytecode_pipeline::options::AutoTraceLevel::VerifiedFunction
} else {
move_prover_bytecode_pipeline::options::AutoTraceLevel::Off
base_opts.prover.auto_trace_level
},
report_severity: Severity::Warning,
dump_bytecode: self.dump,
dump_bytecode: self.dump || base_opts.prover.dump_bytecode,
dump_cfg: false,
check_inconsistency: self.check_inconsistency,
unconditional_abort_as_inconsistency: self.unconditional_abort_as_inconsistency,
skip_loop_analysis: self.keep_loops,
..Default::default()
check_inconsistency: self.check_inconsistency
|| base_opts.prover.check_inconsistency,
unconditional_abort_as_inconsistency: self.unconditional_abort_as_inconsistency
|| base_opts.prover.unconditional_abort_as_inconsistency,
skip_loop_analysis: self.keep_loops
|| base_opts.prover.skip_loop_analysis,
..base_opts.prover.clone()
},
backend: move_prover_boogie_backend::options::BoogieOptions {
use_cvc5: self.cvc5,
use_cvc5: self.cvc5 || base_opts.backend.use_cvc5,
boogie_flags: vec![],
generate_smt: self.dump,
stratification_depth: self.stratification_depth,
proc_cores: self.proc_cores,
vc_timeout: self.vc_timeout,
generate_smt: self.dump || base_opts.backend.generate_smt,
stratification_depth: self
.stratification_depth
.unwrap_or(base_opts.backend.stratification_depth),
proc_cores: self.proc_cores.unwrap_or(base_opts.backend.proc_cores),
shards: self.shards.unwrap_or(base_opts.backend.shards),
only_shard: self.only_shard.or(base_opts.backend.only_shard),
vc_timeout: self.vc_timeout.unwrap_or(base_opts.backend.vc_timeout),
global_timeout_overwrite: !self.disallow_global_timeout_to_be_overwritten,
keep_artifacts: self.dump,
stable_test_output: self.stable_test_output,
keep_artifacts: self.dump || base_opts.backend.keep_artifacts,
stable_test_output: self.stable_test_output || base_opts.backend.stable_test_output,
z3_trace_file: if self.dump {
Some("z3.trace".to_string())
} else {
None
},
custom_natives: None,
loop_unroll: self.loop_unroll,
skip_instance_check: self.skip_instance_check,
..Default::default()
loop_unroll: self.loop_unroll.or(base_opts.backend.loop_unroll),
skip_instance_check: self.skip_instance_check
|| base_opts.backend.skip_instance_check,
..base_opts.backend
},
..Default::default()
..base_opts
};
if self.for_test {
opts.setup_logging_for_test();
} else {
opts.setup_logging()
}
opts
Ok(opts)
}
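
The layering in `convert_options` follows one pattern throughout: Prover.toml supplies the base values, and CLI arguments override only when explicitly set (`Option` fields via `unwrap_or`/`or`, boolean flags via `||`). A minimal self-contained sketch of that precedence rule follows; the struct and field names are illustrative stand-ins, not the actual `Options` types.

```rust
// Illustrative sketch of CLI-over-TOML option layering (hypothetical types).
#[derive(Clone, Debug, PartialEq)]
struct BackendBase {
    proc_cores: usize,
    shards: usize,
    only_shard: Option<usize>,
    use_cvc5: bool,
}

#[derive(Default)]
struct CliArgs {
    proc_cores: Option<usize>, // None means "not given on the command line"
    shards: Option<usize>,
    only_shard: Option<usize>,
    cvc5: bool, // a flag: false means "not given"
}

fn merge(cli: CliArgs, base: BackendBase) -> BackendBase {
    BackendBase {
        // Some(n) from the CLI wins; otherwise fall back to the TOML base.
        proc_cores: cli.proc_cores.unwrap_or(base.proc_cores),
        shards: cli.shards.unwrap_or(base.shards),
        // For Option-valued settings, `or` keeps the base when the CLI is silent.
        only_shard: cli.only_shard.or(base.only_shard),
        // Boolean flags can only turn a feature on, never off.
        use_cvc5: cli.cvc5 || base.use_cvc5,
    }
}

fn main() {
    let base = BackendBase { proc_cores: 4, shards: 5, only_shard: None, use_cvc5: false };
    let merged = merge(CliArgs { proc_cores: Some(8), ..Default::default() }, base);
    assert_eq!(merged.proc_cores, 8); // CLI override wins
    assert_eq!(merged.shards, 5); // TOML base preserved
    println!("{:?}", merged);
}
```

One consequence of this scheme, visible in the diff, is that a boolean set to true in Prover.toml cannot be switched back off from the command line; only the `Option`-typed fields allow a full two-way override.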

pub fn default_for_test() -> Self {
41 changes: 34 additions & 7 deletions aptos-move/framework/tests/move_prover_tests.rs
@@ -29,7 +29,11 @@ pub fn read_env_var(v: &str) -> String {
std::env::var(v).unwrap_or_else(|_| String::new())
}

pub fn run_prover_for_pkg(path_to_pkg: impl Into<String>) {
pub fn run_prover_for_pkg(
path_to_pkg: impl Into<String>,
shards: usize,
only_shard: Option<usize>,
) {
let pkg_path = path_in_crate(path_to_pkg);
let mut options = ProverOptions::default_for_test();
let no_tools = read_env_var("BOOGIE_EXE").is_empty()
@@ -47,12 +51,15 @@ pub fn run_prover_for_pkg(path_to_pkg: impl Into<String>) {
let unconditional_abort_inconsistency_flag =
read_env_var(ENV_TEST_UNCONDITIONAL_ABORT_AS_INCONSISTENCY) == "1";
let disallow_timeout_overwrite = read_env_var(ENV_TEST_DISALLOW_TIMEOUT_OVERWRITE) == "1";
options.shards = Some(shards);
options.only_shard = only_shard;
options.check_inconsistency = inconsistency_flag;
options.unconditional_abort_as_inconsistency = unconditional_abort_inconsistency_flag;
options.disallow_global_timeout_to_be_overwritten = disallow_timeout_overwrite;
options.vc_timeout = read_env_var(ENV_TEST_VC_TIMEOUT)
.parse::<usize>()
.unwrap_or(options.vc_timeout);
.ok()
.or(options.vc_timeout);
let skip_attribute_checks = false;
options
.prove(
@@ -71,21 +78,41 @@
}

#[test]
fn move_framework_prover_tests() {
run_prover_for_pkg("aptos-framework");
fn move_framework_prover_tests_shard1() {
run_prover_for_pkg("aptos-framework", 5, Some(1));
}

#[test]
fn move_framework_prover_tests_shard2() {
run_prover_for_pkg("aptos-framework", 5, Some(2));
}

#[test]
fn move_framework_prover_tests_shard3() {
run_prover_for_pkg("aptos-framework", 5, Some(3));
}

#[test]
fn move_framework_prover_tests_shard4() {
run_prover_for_pkg("aptos-framework", 5, Some(4));
}

#[test]
fn move_framework_prover_tests_shard5() {
run_prover_for_pkg("aptos-framework", 5, Some(5));
}

#[test]
fn move_token_prover_tests() {
run_prover_for_pkg("aptos-token");
run_prover_for_pkg("aptos-token", 1, None);
}

#[test]
fn move_aptos_stdlib_prover_tests() {
run_prover_for_pkg("aptos-stdlib");
run_prover_for_pkg("aptos-stdlib", 1, None);
}

#[test]
fn move_stdlib_prover_tests() {
run_prover_for_pkg("move-stdlib");
run_prover_for_pkg("move-stdlib", 1, None);
}
@@ -59,11 +59,15 @@ use move_stackless_bytecode::{
Operation, PropKind,
},
};
use std::collections::{BTreeMap, BTreeSet};
use std::{
collections::{BTreeMap, BTreeSet},
hash::{DefaultHasher, Hash, Hasher},
};

pub struct BoogieTranslator<'env> {
env: &'env GlobalEnv,
options: &'env BoogieOptions,
for_shard: Option<usize>,
writer: &'env CodeWriter,
spec_translator: SpecTranslator<'env>,
targets: &'env FunctionTargetsHolder,
@@ -85,12 +89,14 @@ impl<'env> BoogieTranslator<'env> {
pub fn new(
env: &'env GlobalEnv,
options: &'env BoogieOptions,
for_shard: Option<usize>,
targets: &'env FunctionTargetsHolder,
writer: &'env CodeWriter,
) -> Self {
Self {
env,
options,
for_shard,
targets,
writer,
spec_translator: SpecTranslator::new(writer, env, options),
@@ -113,7 +119,24 @@ impl<'env> BoogieTranslator<'env> {
.unwrap_or(default_timeout)
}

pub fn is_not_verified_timeout(&self, fun_target: &FunctionTarget) -> bool {
/// Checks whether the given function is a verification target.
fn is_verified(&self, fun_variant: &FunctionVariant, fun_target: &FunctionTarget) -> bool {
if !fun_variant.is_verified() {
return false;
}
if let Some(shard) = self.for_shard {
// Check whether the shard is included.
if self.options.only_shard.is_some() && self.options.only_shard != Some(shard + 1) {
Contributor:
nit: could use is_some_and() instead.

return false;
}
// Check whether it is part of the shard.
let mut hasher = DefaultHasher::new();
Contributor:
I think this hasher is non-deterministic? If so, when sharding one shard at a time, a target might fall into a different shard on each run. We may want to use a deterministic hasher.

Contributor:
@wrwg Checking in to see if you missed this before merging.

fun_target.func_env.get_full_name_str().hash(&mut hasher);
if (hasher.finish() as usize) % self.options.shards != shard {
return false;
}
}
// Check whether the estimated duration is too large for configured timeout
let options = self.options;
let estimate_timeout_opt = fun_target
.func_env
@@ -123,9 +146,9 @@ impl<'env> BoogieTranslator<'env> {
.func_env
.get_num_pragma(TIMEOUT_PRAGMA)
.unwrap_or(options.vc_timeout);
estimate_timeout > timeout
estimate_timeout <= timeout
} else {
false
true
}
}
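
The shard assignment above distributes verification targets by hashing each function's full name modulo the shard count. A standalone sketch of the same partitioning idea follows; the function names are made up for illustration, and it uses `DefaultHasher`, whose output is only guaranteed stable within a single Rust std version (the review comment's point), so a pinned hash algorithm may be preferable.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Assign a function name to one of `shards` buckets (0-based), mirroring
/// the hashing scheme in BoogieTranslator::is_verified.
fn shard_of(full_name: &str, shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    full_name.hash(&mut hasher);
    (hasher.finish() as usize) % shards
}

fn main() {
    // Hypothetical Move function names, used only to exercise the sketch.
    let funs = ["0x1::coin::transfer", "0x1::account::create", "0x1::vector_demo::push"];
    let shards = 5;
    for f in funs {
        let s = shard_of(f, shards);
        // Every function lands in exactly one bucket, so the shards partition the work.
        assert!(s < shards);
        println!("{f} -> shard #{}", s + 1); // user-facing shards are numbered 1..
    }
    // The assignment is stable within one build: same input, same shard.
    assert_eq!(shard_of("0x1::coin::transfer", shards), shard_of("0x1::coin::transfer", shards));
}
```

Because membership is decided per function by `hash % shards`, running the prover once per shard from 1 to `shards` covers every verification target exactly once, which is what the five sharded framework tests in this PR rely on.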

@@ -279,7 +302,7 @@ impl<'env> BoogieTranslator<'env> {
continue;
}
for (variant, ref fun_target) in self.targets.get_targets(fun_env) {
if variant.is_verified() && !self.is_not_verified_timeout(fun_target) {
if self.is_verified(&variant, fun_target) {
verified_functions_count += 1;
debug!(
"will verify primary function `{}`",
@@ -363,7 +386,15 @@ impl<'env> BoogieTranslator<'env> {
}
// Emit any finalization items required by spec translation.
self.spec_translator.finalize();
info!("{} verification conditions", verified_functions_count);
let shard_info = if let Some(shard) = self.for_shard {
format!(" (for shard #{} of {})", shard + 1, self.options.shards)
} else {
"".to_string()
};
info!(
"{} verification conditions{}",
verified_functions_count, shard_info
);
}
}

@@ -1106,7 +1137,7 @@ impl<'env> FunctionTranslator<'env> {
}

// Initial assumptions
if variant.is_verified() && !self.parent.is_not_verified_timeout(fun_target) {
if self.parent.is_verified(variant, fun_target) {
self.translate_verify_entry_assumptions(fun_target);
}
