
Add support for command execution after successful execution #445

Merged: 4 commits merged into bazelbuild:master on Dec 9, 2020

Conversation

@smklein (Contributor) commented Dec 8, 2020

The "-run_command_after_success" flag is nearly identical to
the "-run_command_after" flag, but instead of operating
unconditionally, it only executes if the previous command
was successful.

This can provide value for executing a command which operates
on the output of a bazel build, and which should only
execute if there is new output to act upon.

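As a sketch of the dispatch behavior this flag adds (illustrative Go, not ibazel's actual internals; the function and parameter names here are invented):

```go
package main

import "fmt"

// hooksToRun sketches the dispatch this PR describes: the unconditional
// -run_command_after hook always fires, then exactly one of the
// success/error hooks fires depending on the build result.
func hooksToRun(buildOK bool, after, afterSuccess, afterError string) []string {
	cmds := []string{after} // -run_command_after: runs unconditionally
	if buildOK {
		cmds = append(cmds, afterSuccess) // -run_command_after_success
	} else {
		cmds = append(cmds, afterError) // -run_command_after_error
	}
	return cmds
}

func main() {
	fmt.Println(hooksToRun(true, "echo hi-after", "echo hi-after-success", "echo hi-after-error"))
	// [echo hi-after echo hi-after-success]
	fmt.Println(hooksToRun(false, "echo hi-after", "echo hi-after-success", "echo hi-after-error"))
	// [echo hi-after echo hi-after-error]
}
```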
@smklein smklein requested a review from achew22 as a code owner December 8, 2020 20:53
@google-cla google-cla bot added the cla: yes label Dec 8, 2020
@smklein (Contributor, author) commented Dec 8, 2020

If I'm holding it wrong, happy to update my approach! 😁

Just figured I'd provide this flag - I'm currently building a tool with ibazel which "builds a package, and then exports that package to another machine upon successful build". Being able to track when things should be rebuilt and executing follow-up commands is super useful, but it only really makes sense if the build hasn't failed.

This PR is my attempt to pull out that "only do something if the build hasn't failed" ability.

@achew22 (Member) commented Dec 8, 2020

Would you be willing to add an e2e test to cover this? It should be as simple as adding 2 lines here

https://github.com/bazelbuild/bazel-watcher/blob/master/e2e/lifecycle_hooks/lifecycle_hooks_test.go#L57

@achew22 (Member) commented Dec 8, 2020

And since I forgot to say it.... HURRAY! New features are great. Thanks for doing this!

@achew22 (Member) commented Dec 8, 2020

Also, would you be willing to add a --run_command_after_error and an else statement? I don't know when I'd use that, but it feels like it'll be a fast-followed feature request that's easy to add here.

@smklein (Contributor, author) commented Dec 8, 2020

Is there some sort of setup I need before running the tests? I'm hitting the following, even on upstream master:

~/repos/bazel-watcher $ bazel test //e2e/lifecycle_hooks:*
WARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_proto/archive/84ba6ec814eebbf5312b2cc029256097ae0042c3.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
INFO: Analyzed 3 targets (0 packages loaded, 0 targets configured).
INFO: Found 2 targets and 1 test target...
INFO: From Testing //e2e/lifecycle_hooks:go_default_test (run 1 of 5):
==================== Test output for //e2e/lifecycle_hooks:go_default_test (run 1 of 5):
=== RUN   TestLifecycleHooks
    TestLifecycleHooks: lifecycle_hooks_test.go:56: ibazel invoked as: /home/smklein/.cache/bazel/_bazel_smklein/4a4fed321b175fac7f10dd14d11b2eb8/execroot/__main__/bazel-out/k8-fastbuild/bin/e2e/lifecycle_hooks/linux_amd64_stripped/go_default_test.runfiles/__main__/ibazel/linux_amd64_pure_stripped/ibazel --bazel_path=/usr/bin/bazel -run_command_before=echo hi-before -run_command_after=echo hi-after run //:test
external/bazel_tools/tools/test/test-setup.sh: line 274: warning: wait_for: recursively setting old_sigint_handler to wait_sigint_handler: running_trap = 16
-- Test timed out at 2020-12-08 21:50:36 UTC --
================================================================================

If it helps, I'm using bazel 3.7.1, go version go1.15.5 linux/amd64

@smklein (Contributor, author) commented Dec 8, 2020

> Also, would you be willing to add a --run_command_after_error and an else statement? I don't know when I'd use that, but it feels like it'll be a fast-followed feature request that's easy to add here.

Added, plus tests (my local error was resolved by boosting the timeout to 30+ seconds; these are long-running tests!) - recommendations welcome for how to best include a proper test for "run_command_after_error" too.
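For reference, one place such a bump can live (an assumption, since the PR does not show exactly where the timeout was raised) is the `timeout` attribute on the test target, or Bazel's `--test_timeout` flag:

```
# BUILD.bazel fragment (illustrative; attribute values are Bazel's
# standard tiers: short = 60s, moderate = 300s, long = 900s):
go_test(
    name = "go_default_test",
    srcs = ["lifecycle_hooks_test.go"],
    timeout = "moderate",
)

# Or, one-off from the command line:
#   bazel test --test_timeout=60 //e2e/lifecycle_hooks:go_default_test
```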

@achew22 (Member) commented Dec 8, 2020

I'm glad you were able to figure that out. Where did you increase the timeout to 30 sec? Maybe I should increase it by default. Yes, the tests are very large but they actually launch ibazel (which launches real Bazel) inside each test. It's extremely helpful for validating that things actually work.

> Added, plus tests (my local error was resolved by boosting the timeout to 30+ seconds; these are long-running tests!) - recommendations welcome for how to best include a proper test for "run_command_after_error" too.

At the top of the test, there is this section

const mainFiles = `
-- BUILD.bazel --
sh_binary(
    name = "test",
    srcs = ["test.sh"],
)
-- test.sh --
printf "action"
`

You could add into the -- BUILD.bazel -- section a new target

sh_binary(
    name = "failure",
    srcs = ["failure.sh"],
)

and a new file called failure.sh with a new entry, just like -- test.sh --

-- failure.sh --
printf "Failing"
exit 1

Then you could make a new test case that's a dupe of TestLifecycleHooks called TestLifecycleHooks_failure and a body of

	ibazel := e2e.SetUp(t)
	defer ibazel.Kill()

	ibazel.RunWithAdditionalArgs("//:failure", []string{
		"-run_command_before=echo hi-before",
		"-run_command_after=echo hi-after",
		"-run_command_after_success=echo hi-after-success",
		"-run_command_after_error=echo hi-after-error",
	})
	ibazel.ExpectOutput("hi-before")
	ibazel.ExpectOutput("hi-after")
	ibazel.ExpectOutput("hi-after-error")

@smklein (Contributor, author) commented Dec 8, 2020


So the "failure" target is defined here as an sh_binary, not an sh_test - will ibazel.RunWithAdditionalArgs("//:failure", ...) actually trigger a failure, or will it just check that the target has been built?

Re-creating this manually, ibazel run //e2e/lifecycle_hooks:failure (with that same bash file and target, as a sh_binary) succeeds, rather than returning an error. This means that the "run_command_after_error" values never get printed, since it isn't interpreted as a failure.
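For background on the distinction being probed here: in Go, a subprocess that starts but exits nonzero surfaces as an `*exec.ExitError`, which is the kind of signal a wrapper like ibazel can branch on. A minimal, self-contained sketch (not ibazel's actual detection code, which also has to account for `bazel run` wrapping the binary):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a shell snippet and reports its exit status:
// 0 for success, the process's exit code for a run-and-failed
// command, and -1 if the command could not be started at all.
func exitCode(cmdline string) int {
	err := exec.Command("sh", "-c", cmdline).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // could not start the command
	}
	return 0
}

func main() {
	fmt.Println(exitCode("exit 0")) // 0
	fmt.Println(exitCode("exit 1")) // 1
}
```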

@smklein (Contributor, author) commented Dec 8, 2020


Okay, I think I have a solution that can patch this, but it'll need to slightly modify the ibazel e2e helpers. Uploading in just a sec...

@smklein (Contributor, author) commented Dec 8, 2020

This most recent revision forces a failure by omitting the target's source file - as a result, the target does not build, and the test utilities are modified to skip that check explicitly.

With this revision, I'm seeing "hi-after-error" explicitly, without "hi-after-success", for the intended negative case.

@@ -92,6 +92,12 @@ func (i *IBazelTester) RunWithAdditionalArgs(target string, additionalArgs []str
 	i.run(target, []string{}, additionalArgs)
 }
 
+func (i *IBazelTester) RunUnverifiedWithAdditionalArgs(target string, additionalArgs []string) {
Member (reviewer) commented:

What does it mean to be unverified? Is there maybe a different word or can we add a description for what this is?

Contributor (author) replied:

Really it's "execute bazel run without bazel build". I suppose "RunUnbuilt" might be a more accurate name?

@achew22 (Member) commented Dec 9, 2020

Looks great to me! Thanks so much for your contribution. This should be included in the next release.

Is there anything I could have done to make it easier for you to contribute in the future?

@achew22 achew22 merged commit a924d85 into bazelbuild:master Dec 9, 2020
@smklein (Contributor, author) commented Dec 9, 2020

> Looks great to me! Thanks so much for your contribution. This should be included in the next release.
>
> Is there anything I could have done to make it easier for you to contribute in the future?

It was pretty straightforward! As you're probably aware, I think I spent 5% of my time adding the feature and 95% of my time figuring out the steps to get a test in place.

I'll see if I can brainstorm ways to simplify the test code for newbies - #446 seems like a start?
