pytest v6.2.0 causing test_optim_with_value to fail #1219

Closed · matthewfeickert opened this issue Dec 13, 2020 · 0 comments · Fixed by #1220
Labels: bug, tests, pytest

Description

The v0.5.4 bump2version changes were swept into master on 2020-12-12 with f824afe, and the CI on master succeeded. Later that day pytest v6.2.0 was released, and the nightly scheduled CI failed with

_______________________ test_optim_with_value[jax-mu=1] ________________________

backend = (<pyhf.tensor.jax_backend.jax_backend object at 0x7f6bf92def50>, None)
source = {'bindata': {'bkg': [100.0, 150.0], 'bkgsys_dn': [98, 100], 'bkgsys_up': [102, 190], 'data': [120.0, 180.0], ...}, 'binning': [2, -0.5, 1.5]}
spec = {'channels': [{'name': 'singlechannel', 'samples': [{'data': [30.0, 95.0], 'modifiers': [{...}], 'name': 'signal'}, {'data': [100.0, 150.0], 'modifiers': [{...}], 'name': 'background'}]}]}
mu = 1.0

    @pytest.mark.parametrize('mu', [1.0], ids=['mu=1'])
    def test_optim_with_value(backend, source, spec, mu):
        pdf = pyhf.Model(spec)
        data = source['bindata']['data'] + pdf.config.auxdata
    
        init_pars = pdf.config.suggested_init()
        par_bounds = pdf.config.suggested_bounds()
    
        optim = pyhf.optimizer
    
        result = optim.minimize(pyhf.infer.mle.twice_nll, data, pdf, init_pars, par_bounds)
        assert pyhf.tensorlib.tolist(result)
    
        result, fitted_val = optim.minimize(
            pyhf.infer.mle.twice_nll,
            data,
            pdf,
            init_pars,
            par_bounds,
            fixed_vals=[(pdf.config.poi_index, mu)],
            return_fitted_val=True,
        )
        assert pyhf.tensorlib.tolist(result)
        assert pyhf.tensorlib.shape(fitted_val) == ()
>       assert pytest.approx(17.52954975, rel=1e-5) == fitted_val
E       assert 17.52954975 ± 1.8e-04 == DeviceArray(17.52954975, dtype=float64)
E        +  where 17.52954975 ± 1.8e-04 = <function approx at 0x7f6cc1747f80>(17.52954975, rel=1e-05)
E        +    where <function approx at 0x7f6cc1747f80> = pytest.approx

tests/test_optim.py:383: AssertionError

Diffing the libraries installed in the two CI runs (saved in f824afe_install.txt and failing_install.txt) shows that the relevant change is the pytest version:

$ diff f824afe_install.txt failing_install.txt 
33a34
> importlib-metadata     3.1.1
83c84
< py                     1.9.0
---
> py                     1.10.0
96c97
< pytest                 6.1.2
---
> pytest                 6.2.0
143a145
> zipp                   3.4.0

This is confirmed by pinning the pytest version: with the following patch to setup.py

--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
         + extras_require['contrib']
         + extras_require['shellcomplete']
         + [
-            'pytest~=6.0',
+            'pytest~=6.1.0',
             'pytest-cov>=2.5.1',
             'pytest-mock',
             'pytest-benchmark[histogram]',

applied, the CI installs pytest v6.1.2 and the tests pass.
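
The failing assertion is the pytest.approx comparison against the 0-d jax DeviceArray returned as fitted_val. A minimal sketch of that comparison in isolation (hypothetical, not taken from the pyhf test suite, and assuming jax is installed alongside pytest v6.2.0) that would be expected to reproduce the failure:

import jax.numpy as jnp
import pytest

expected = 17.52954975
# 0-d DeviceArray standing in for the optimizer's fitted value: within the
# rel=1e-5 tolerance of expected, but not bit-identical to it.
fitted_val = jnp.asarray(17.52954)

# Passes under pytest v6.1.2, but is expected to evaluate to False under
# v6.2.0, mirroring the CI failure above.
assert pytest.approx(expected, rel=1e-5) == fitted_val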

This behavior is confusing, as the only mention of pytest.approx in the v6.2.0 release notes is under "Improvements":

#7710: Use strict equality comparison for non-numeric types in pytest.approx instead of raising TypeError.

This was the undocumented behavior prior to 5.4, but is now officially a supported feature.
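
If the 0-d DeviceArray is being treated as a non-numeric type under that new strict-equality behavior, one possible test-side workaround (a sketch only, not necessarily the approach taken in #1220) would be to cast the fitted value to a plain Python float before handing it to pytest.approx:

import jax.numpy as jnp
import pytest

fitted_val = jnp.asarray(17.52954)  # stand-in for the optimizer's 0-d return value
# float() on a 0-d array yields a plain Python scalar, which keeps pytest.approx
# on its numeric-comparison path.
assert pytest.approx(17.52954975, rel=1e-5) == float(fitted_val)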
