Bug Report: Hello World with different backends gives different results #266
Comments
The issue is most likely the approximation of the Poisson (`poisson_from_normal`). In the tests we set up the NumPy backend using that flag to make the results comparable.
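To see why the `poisson_from_normal` flag matters, here is a minimal numerical sketch (not pyhf's implementation) comparing the exact Poisson log-pmf with the normal approximation N(λ, √λ) that the flag enables; the example values are illustrative, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.stats import norm, poisson

# Illustrative count and rate, similar in scale to the example model's bins
n, lam = 50.0, 51.0

# Exact Poisson log-pmf: n*log(lam) - lam - log(n!)
exact = poisson.logpmf(n, lam)

# Normal approximation with mean lam and standard deviation sqrt(lam)
approx = norm.logpdf(n, loc=lam, scale=np.sqrt(lam))

# Near n ~ lam the two closely agree, but they diverge in the tails,
# which is enough to shift CLs values between backends
print(exact, approx)
```

The small per-bin differences accumulate through the likelihood, which is why the backends only agree to within a tolerance unless they use the same Poisson treatment.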
@lukasheinrich Yeah, that definitely contributes (thanks, I always forget this). Things look to be within tolerance given what we allow in the tests.

```python
import pyhf
import pyhf.simplemodels
import pyhf.utils

print('NumPy backend')
pyhf.set_backend(pyhf.tensor.numpy_backend(poisson_from_normal=True))
pdf = pyhf.simplemodels.hepdata_like(
    signal_data=[12., 11.], bkg_data=[50., 52.], bkg_uncerts=[3., 7.])
*_, CLs_obs, CLs_exp = pyhf.utils.runOnePoint(1.0, [51., 48.] + pdf.config.auxdata, pdf)
print('Observed: {} Expected: {}'.format(CLs_obs, CLs_exp[2]))

print('\nTensorFlow backend')
import tensorflow as tf
sess = tf.Session()
pyhf.set_backend(pyhf.tensor.tensorflow_backend(session=sess))
*_, CLs_obs, CLs_exp = pyhf.utils.runOnePoint(1.0, [51., 48.] + pdf.config.auxdata, pdf)
print('Observed: {} Expected: {}'.format(sess.run(CLs_obs), sess.run(CLs_exp[2])))

print('\nPyTorch backend')
pyhf.set_backend(pyhf.tensor.pytorch_backend())
*_, CLs_obs, CLs_exp = pyhf.utils.runOnePoint(1.0, [51., 48.] + pdf.config.auxdata, pdf)
print('Observed: {} Expected: {}'.format(CLs_obs[0], CLs_exp[2]))
```

now produces
Is it possible for us to set the backend from the CLI? Sorry, I haven't been fully following all of what you and @kratsg have done there.
@matthewfeickert I agree it's confusing; I'm not sure if we should add a warning for non-NumPy backends explaining the issue. Both Torch and TensorFlow probably support a better approximation than the normal one out of the box, so that would be a good improvement (in NumPy we use a gamma-function based one). Currently we cannot set backend options from the CLI; maybe we should be able to.
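The "gamma-function based" approach mentioned above extends the Poisson to non-integer counts by replacing log(n!) with lgamma(n+1). A hedged sketch of that idea (an illustration, not the backend's actual code) assuming SciPy:

```python
import numpy as np
from scipy.special import gammaln

def poisson_logpdf(n, lam):
    """Continuous extension of the Poisson log-pmf.

    Replaces log(n!) with lgamma(n + 1), so n need not be an integer;
    for integer n this reproduces the exact Poisson log-pmf.
    """
    return n * np.log(lam) - lam - gammaln(n + 1.0)
```

For integer counts this matches `scipy.stats.poisson.logpmf` exactly, which is why the NumPy backend can stay exact without needing the normal approximation.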
@matthewfeickert definitely a good FAQ item :-p
Thanks @lukasheinrich (I apparently need a lot more ☕ today). I'll go ahead and close this issue and open up some that can be acted upon.
Description
While making a notebook for a talk I noticed that our Hello World example gives different results for different backends.
Expected Behavior
When running
for the NumPy, TensorFlow, and PyTorch backends the results should be consistent.
Actual Behavior
The three backends give different results, with the NumPy backend differing the most. This may be due to its `divide by zero encountered in log` problem(?).
Steps to Reproduce
produces
Checklist
Run `git fetch` (on a recent branch) to get the most up to date version of `master`