Set numpy and scipy as direct dependencies #1120
Conversation
Good idea to add these dependencies explicitly. Part of the post-merge checklist is to update the Bioconda recipe accordingly. I'd also vote for ignoring these warnings in the Augur functional tests instead of waiting for Richard to make the fix in TreeTime and cut a new release. The functional tests act as smoke tests (does …
I think it's beneficial to have tests that check for changes in stderr/stdout, to catch these kinds of external warnings (e.g. deprecation warnings) in CI. Although this one came from usage in TreeTime (and arguably should be caught by TreeTime's CI), there's still a chance for warnings to surface from usage in Augur, which would prompt changes in Augur.

We could create separate tests whose explicit goal is to check output, but I don't think it hurts to overload the Cram tests like this for the convenience of test authoring. It'd be tedious to maintain separate tests for (1) smoke testing, (2) functional testing, and (3) console output testing, especially when commands such as …

Ignoring these warnings also means ignoring other Augur warnings – see example 4694133.
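For illustration, here's a minimal sketch of a test that overloads a smoke test to also assert on console output. This assumes a plain Python test harness rather than the Cram suite Augur actually uses, and the command and matched warning text are illustrative assumptions, not real test content:

```python
# Sketch: a smoke test that also checks console output.
# The invoked command and the matched warning text are hypothetical.
import subprocess

def test_augur_runs_without_deprecation_warnings():
    result = subprocess.run(
        ["augur", "--version"],
        capture_output=True,
        text=True,
    )
    # (a) does the command run?
    assert result.returncode == 0
    # (b) overload the same test to catch unexpected third-party warnings
    assert "DeprecationWarning" not in result.stderr
```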
I see what you're saying, @victorlin. My perspective might be biased in that I think of these functional tests as purely focused on a) does the augur command run and b) is the output correct. In that sense, I would only write tests that explicitly check output in contexts where the correctness of the code is reflected by stdout/stderr. The fact that tests fail on unexpected third-party warnings is a side effect of how most of these functional tests were written (they didn't also ignore stderr in addition to stdout).

In this case, the scipy warning hurts in the sense that it's a false positive; the related augur and TreeTime code is still correct, and the test otherwise passes as expected. If we didn't have ownership over TreeTime to change the affected code, we'd be forced to either pin a different version, submit a PR and wait for its release, or ignore the warning. An example of this was BioPython's deprecation warning about their sequence alphabets interface, which was generated by their own code not using the alphabets correctly. In that case, I had to ignore the warning in the pytest config, because there was no way to address the warning short of submitting a BioPython PR and waiting for a new release.

In other cases, those warnings can be canaries for deeper issues with the code or for future issues like the deprecation warnings you mention. Pandas deprecation warnings have been critical in the past for keeping our usage of their API up to date. So, in those cases it doesn't hurt and actually helps to have the warnings surface through functional tests.

Maybe there's not a single satisfactory approach and we should keep the status quo of not ignoring stderr, handling each new warning as an independent event. In that case, we just need a plan for how we handle these types of warnings when they happen again. It seems like pinning versions is the only solution we have complete control over, but maybe there are other approaches I'm not seeing?
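As a point of reference, suppressing a third-party warning like that can also be done programmatically with Python's standard warnings module. This is only a sketch of the general technique; the category and module pattern below are illustrative assumptions, not the actual filter that was used for the BioPython case:

```python
# Sketch: suppressing a third-party warning we cannot fix upstream.
# The category and module regex are hypothetical, for illustration only.
import warnings

warnings.filterwarnings(
    "ignore",
    category=DeprecationWarning,
    module=r"Bio\..*",  # match warnings raised from BioPython modules
)
```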
Ignores spurious scipy warnings that cause CI to fail. See PR 1120 [1] for a long-term solution to this issue. This short-term solution allows us to run CI on new code and ensure tests pass. [1] #1120
I created and merged #1123, so this PR should be scoped to just adding direct dependencies. But since there is already discussion on the stderr issue, I'll summarize the plan here:
I think the approach should work fine for other libraries as well. We'd just need to make sure this is followed through if/when new versions are released. The downside is that the new minimum version requirement might be too strict for some users – these cases can be evaluated case by case to determine whether allowing older versions outweighs the benefits of capturing stderr of affected tests in CI.
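Concretely, the plan amounts to declaring each such library as a direct dependency with a version floor that includes the warning fix. A minimal sketch of what that could look like in setup.py, with illustrative version floors (not the ones actually merged):

```python
# setup.py (sketch). The version floors are hypothetical; the real floors
# would be the first releases that fix the warnings in question.
from setuptools import setup

setup(
    name="augur",
    install_requires=[
        "numpy >=1.20",  # hypothetical floor
        "scipy >=1.8",   # hypothetical floor
    ],
)
```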
Codecov Report
Base: 68.03% // Head: 68.04% // Increases project coverage by +0.01%.

Additional details and impacted files:

@@           Coverage Diff            @@
##           master    #1120   +/-   ##
========================================
  Coverage   68.03%   68.04%
========================================
  Files          62       62
  Lines        6679     6681        +2
  Branches     1639     1640        +1
========================================
+ Hits         4544     4546        +2
  Misses       1829     1829
  Partials      306      306
☔ View full report at Codecov.
Description of proposed changes
These are currently installed as dependencies of dependencies. Since they are directly imported in this codebase, they should be direct dependencies with pinned versions.
Testing
Release tasks
Checklist