Add support for command execution after successful execution #445
Conversation
The "-run_command_after_success" flag is nearly identical to the "-run_command_after" flag, but instead of operating unconditionally, it only executes if the previous command was successful. This can provide value for exeucting a command which operates on the output of a bazel build, and which should only execute if there is new output to act upon.
If I'm holding it wrong, happy to update my approach! 😁 Just figured I'd provide this flag - I'm currently building a tool with ibazel which "builds a package, and then exports that package to another machine upon successful build". Being able to track when things should be rebuilt and executing follow-up commands is super useful, but it only really makes sense if the build hasn't failed. This PR is my attempt to pull out that "only do something if the build hasn't failed" ability. |
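For illustration, a minimal Go sketch of the conditional dispatch this flag implies. The flag name comes from this PR; the variable names, the buildOK signal, and the shell-out are assumptions for the sketch, not ibazel's actual plumbing:

package main

import (
	"flag"
	"log"
	"os/exec"
)

// Only the flag name comes from this PR; the rest is illustrative.
var runCommandAfterSuccess = flag.String("run_command_after_success", "",
	"Command to run after each build, only if the build succeeded")

// runAfterSuccessHook fires the hook only when the build succeeded,
// unlike -run_command_after, which fires unconditionally.
func runAfterSuccessHook(buildOK bool) {
	if !buildOK || *runCommandAfterSuccess == "" {
		return
	}
	if err := exec.Command("sh", "-c", *runCommandAfterSuccess).Run(); err != nil {
		log.Printf("after-success hook failed: %v", err)
	}
}

func main() {
	flag.Parse()
	// Pretend a build just finished successfully.
	runAfterSuccessHook(true)
}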
Would you be willing to add an e2e test to cover this? It should be as simple as adding 2 lines here.
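Something like these two lines, as a rough guess - RunWithAdditionalArgs exists in the e2e helpers (it shows up in this PR's diff), but the ExpectOutput-style assertion and the exact strings here are assumptions:

ibazel.RunWithAdditionalArgs("//:test", []string{"-run_command_after_success=echo hi-after-success"})
ibazel.ExpectOutput("hi-after-success") // assert the hook's output actually appeared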
And since I forgot to say it... HURRAY! New features are great. Thanks for doing this!
Also, would you be willing to add a "-run_command_after_error" flag as well?
Is there some sort of setup I need before running the tests? I'm hitting the following, even on upstream master:
~/repos/bazel-watcher $ bazel test //e2e/lifecycle_hooks:*
WARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_proto/archive/84ba6ec814eebbf5312b2cc029256097ae0042c3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
INFO: Analyzed 3 targets (0 packages loaded, 0 targets configured).
INFO: Found 2 targets and 1 test target...
INFO: From Testing //e2e/lifecycle_hooks:go_default_test (run 1 of 5):
==================== Test output for //e2e/lifecycle_hooks:go_default_test (run 1 of 5):
=== RUN TestLifecycleHooks
TestLifecycleHooks: lifecycle_hooks_test.go:56: ibazel invoked as: /home/smklein/.cache/bazel/_bazel_smklein/4a4fed321b175fac7f10dd14d11b2eb8/execroot/__main__/bazel-out/k8-fastbuild/bin/e2e/lifecycle_hooks/linux_amd64_stripped/go_default_test.runfiles/__main__/ibazel/linux_amd64_pure_stripped/ibazel --bazel_path=/usr/bin/bazel -run_command_before=echo hi-before -run_command_after=echo hi-after run //:test
external/bazel_tools/tools/test/test-setup.sh: line 274: warning: wait_for: recursively setting old_sigint_handler to wait_sigint_handler: running_trap = 16
-- Test timed out at 2020-12-08 21:50:36 UTC --
================================================================================
If it helps, I'm using bazel 3.7.1, go version go1.15.5 linux/amd64.
Added, plus tests (my local error was resolved by boosting the timeout to 30+ seconds; these are long-running tests!) - recommendations welcome for how to best include a proper test for "run_command_after_error" too.
I'm glad you were able to figure that out. Where did you increase the timeout to 30 sec? Maybe I should increase it by default. Yes, the tests are very large but they actually launch ibazel (which launches real Bazel) inside each test. It's extremely helpful for validating that things actually work.
At the top of the test, there is this section: bazel-watcher/e2e/lifecycle_hooks/lifecycle_hooks_test.go, lines 14 to 22 (at 221e065). You could add a target that fails to build into the BUILD contents defined there, along with a new file for it, and then make a new test case that's a dupe of the existing one, pointed at the failing target.
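As a hypothetical sketch of the shape that takes - assuming these e2e tests define their scratch workspace as a txtar-style string (as rules_go's bazel_testing harness does); the mainFiles name, targets, and contents below are illustrative, not the repo's actual fixture:

const mainFiles = `
-- BUILD --
sh_binary(
    name = "test",
    srcs = ["test.sh"],
)

# Deliberately broken: missing.sh is never written, so building
# //:fail always fails, giving -run_command_after_error something
# to react to.
sh_binary(
    name = "fail",
    srcs = ["missing.sh"],
)
-- test.sh --
printf "hello\n"
`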
So the "failure" target is defined here as an Re-creating this manually, |
Okay, I think I have a solution that can patch this, but it'll need to slightly modify the ibazel e2e helpers. Uploading in just a sec...
This most recent revision makes the target fail by omitting its source file - as a result, the target does not "build", and the test utilities are modified to skip that check explicitly. With this revision, I'm seeing "hi-after-error" explicitly, without "hi-after-success", for the intended negative case.
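In test form, that negative case might look roughly like this - RunUnverifiedWithAdditionalArgs is from this PR's diff below, while the //:fail target name and the ExpectOutput-style assertion are assumptions:

ibazel.RunUnverifiedWithAdditionalArgs("//:fail", []string{
	"-run_command_after_success=echo hi-after-success",
	"-run_command_after_error=echo hi-after-error",
})
// Only the error hook should fire for a broken build; asserting that
// "hi-after-success" never appears would take an ExpectNoOutput-style helper.
ibazel.ExpectOutput("hi-after-error")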
@@ -92,6 +92,12 @@ func (i *IBazelTester) RunWithAdditionalArgs(target string, additionalArgs []str
	i.run(target, []string{}, additionalArgs)
}

func (i *IBazelTester) RunUnverifiedWithAdditionalArgs(target string, additionalArgs []string) {
What does it mean to be unverified? Is there maybe a different word or can we add a description for what this is?
Really it's "execute bazel run without bazel build". I suppose "RunUnbuilt" might be a more accurate name?
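One way to answer the naming question is a doc comment on the new helper. This is just a hypothetical sketch - the body is copied from the diff above, and where exactly the skipped verification lives in the shared run path is not shown here:

// RunUnverifiedWithAdditionalArgs launches the target under ibazel
// without first verifying that it builds ("bazel run without bazel
// build"), so tests can exercise deliberately-broken targets.
// Per the review discussion, RunUnbuilt might be a clearer name.
func (i *IBazelTester) RunUnverifiedWithAdditionalArgs(target string, additionalArgs []string) {
	i.run(target, []string{}, additionalArgs)
}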
Looks great to me! Thanks so much for your contribution. This should be included in the next release. Is there anything I could have done to make it easier for you to contribute in the future?
It was pretty straightforward! As you're probably aware, I think I spent 5% of my time adding the feature and 95% of my time figuring out the steps to get a test in place. I'll see if I can brainstorm ways to simplify the test code for newbies - #446 seems like a start?
The "-run_command_after_success" flag is nearly identical to
the "-run_command_after" flag, but instead of operating
unconditionally, it only executes if the previous command
was successful.
This can provide value for exeucting a command which operates
on the output of a bazel build, and which should only
execute if there is new output to act upon.