
Fix include_self for scatter_reduce #2090

Open
wants to merge 8 commits into main

Conversation

xadupre
Member

@xadupre commented Mar 7, 2025

No description provided.


codecov bot commented Mar 7, 2025

❌ 19 Tests Failed:

Tests completed: 6663 · Failed: 19 · Passed: 6644 · Skipped: 1922
View the top 3 failed test(s) by shortest run time
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0266_test_compress_0
Stack Traces | 0.004s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_compress_0'

The above exception was the direct cause of the following exception:
.nox\test_onnx_weekly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_compress_0' (e=No module named 'tests.onnx_backend_test_code.test_compress_0') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_compress_0.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_compress_0.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import BOOL, FLOAT
E   from onnxscript.onnx_opset import opset11
E   
E   @script()
E   def bck_test_compress_0(input: FLOAT[3,2], condition: BOOL[3]) -> (FLOAT[2,2]):
E       output = opset11.Compress(input, condition, axis=0)
E       return output
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0365_test_erf
Stack Traces | 0.004s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.10.11\x64\lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_erf'

The above exception was the direct cause of the following exception:
.nox\test\lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_erf' (e=No module named 'tests.onnx_backend_test_code.test_erf') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_erf.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_erf.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import FLOAT
E   from onnxscript.onnx_opset import opset13
E   
E   @script()
E   def bck_test_erf(x: FLOAT[1,3,32,32]) -> (FLOAT[1,3,32,32]):
E       y = opset13.Erf(x)
E       return y
onnxscript.backend.onnx_export_test.TestOnnxBackEnd::test_export2python_produces_correct_onnx_script_model_0530_test_layer_normalization_3d_axis_negative_3_epsilon
Stack Traces | 0.004s run time
onnxscript\backend\onnx_export_test.py:137: in extract_functions
    mod = importlib.import_module(import_name)
C:\hostedtoolcache\windows\Python\3.11.9\x64\Lib\importlib\__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
E   ModuleNotFoundError: No module named 'tests.onnx_backend_test_code.test_layer_normalization_3d_axis_negative_3_epsilon'

The above exception was the direct cause of the following exception:
.nox\test_onnx_weekly\Lib\site-packages\parameterized\parameterized.py:620: in standalone_func
    return func(*(a + p.args), **p.kwargs, **kw)
onnxscript\backend\onnx_export_test.py:271: in test_export2python_produces_correct_onnx_script_model
    functions = extract_functions(backend_test.name, code, self.test_folder)
onnxscript\backend\onnx_export_test.py:139: in extract_functions
    raise AssertionError(
E   AssertionError: Unable to import 'tests.onnx_backend_test_code.test_layer_normalization_3d_axis_negative_3_epsilon' (e=No module named 'tests.onnx_backend_test_code.test_layer_normalization_3d_axis_negative_3_epsilon') (file: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_layer_normalization_3d_axis_negative_3_epsilon.py', absolute path: 'D:\\a\\onnxscript\\onnxscript\\tests\\onnx_backend_test_code\\test_layer_normalization_3d_axis_negative_3_epsilon.py', current folder: D:\a\onnxscript\onnxscript
E   ---- CONTENT --
E   import numpy
E   from onnx import TensorProto
E   from onnx.helper import make_tensor
E   from onnxscript import script, external_tensor
E   from onnxscript.values import Opset
E   from onnxscript.onnx_types import FLOAT
E   from onnxscript.onnx_opset import opset17
E   
E   @script()
E   def bck_test_layer_normalization_3d_axis_negative_3_epsilon(X: FLOAT[2,3,5], W: FLOAT[2,3,5], B: FLOAT[2,3,5]) -> (FLOAT[2,3,5], FLOAT[1,1,1], FLOAT[1,1,1]):
E       Y, Mean, InvStdDev = opset17.LayerNormalization(X, W, B, axis=-3, epsilon=0.10000000149011612)
E       return Y, Mean, InvStdDev

To view more test analytics, go to the Test Analytics Dashboard

@justinchuby self-assigned this Mar 7, 2025
@justinchuby self-requested a review Mar 7, 2025 16:54
if not include_self:
    if onnx_reduce == "max":
        value = onh.from_array(
            np.array([np.finfo(src.dtype.numpy()).min], dtype=src.dtype.numpy())
Collaborator

For example, could you add a comment on why we needed to use np.finfo min/max for this reduction type, for future readers?

# ONNX has no include_self parameter; its default behavior is include_self=True
matcher=lambda sample: sample.kwargs.get("include_self") is False,
reason="ONNX doesn't support the include_self=False option",
)
.xfail(
variant_name="amax",
Collaborator

I wonder if we should set dtypes=(torch.float16,) as well.

Member Author

I tried:

value = onh.from_array(
    np.array([np.finfo(src.dtype.numpy()).max], dtype=src.dtype.numpy())
)
value = ir.tensor([np.finfo(src.dtype.numpy()).max], dtype=src.dtype)
Collaborator

Does this work for bfloat16 or integer types?

Member Author

If ml-dtypes is used, that should work as well. I can also switch to PyTorch to find the minimum value for each type.

Collaborator

I just tested it; it seems it doesn't work:

>>> import ml_dtypes
>>> import numpy as np
>>> np.finfo(ml_dtypes.bfloat16)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.12/site-packages/numpy/_core/getlimits.py", line 525, in __new__
    raise ValueError("data type %r not inexact" % (dtype))
ValueError: data type <class 'ml_dtypes.bfloat16'> not inexact
>>> np.finfo(np.int32)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.12/site-packages/numpy/_core/getlimits.py", line 525, in __new__
    raise ValueError("data type %r not inexact" % (dtype))
ValueError: data type <class 'numpy.int32'> not inexact
>>> np.finfo(np.float32)
finfo(resolution=1e-06, min=-3.4028235e+38, max=3.4028235e+38, dtype=float32)

Would it be possible to use ir.tensor(np.inf, dtype=src.dtype) instead? It seems like it would work:

>>> ir.tensor(np.inf, dtype=ir.DataType.BFLOAT16)
Tensor<BFLOAT16,[]>(array(inf, dtype=bfloat16), name=None)

Collaborator

For int types I think we need some special handling. Maybe we should store max and min values in the ir.DataType class?
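One possible shape for that special handling, sketched with plain numpy. Note that `reduction_identity` is a hypothetical helper name, not part of this PR, and bfloat16 would still need ml_dtypes or stored per-type constants, as discussed above:

```python
import numpy as np

def reduction_identity(dtype, onnx_reduce):
    # Hypothetical helper (not in this PR): the identity element used to
    # seed an include_self=False emulation. Covers integer dtypes too,
    # since np.finfo raises ValueError on non-float dtypes.
    dt = np.dtype(dtype)
    if onnx_reduce == "add":
        return dt.type(0)
    if onnx_reduce == "mul":
        return dt.type(1)
    info = np.finfo(dt) if dt.kind == "f" else np.iinfo(dt)
    if onnx_reduce == "max":
        return info.min
    if onnx_reduce == "min":
        return info.max
    raise NotImplementedError(f"no identity for reduction {onnx_reduce!r}")

print(reduction_identity(np.int32, "max"))    # -2147483648
print(reduction_identity(np.float32, "min"))  # 3.4028235e+38
```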

# whether or not it takes part in it.
# It is -inf if the reduction is max, inf for min, 0 for add, 1 for mul.
# mean is not supported.
dtype = src.dtype or cst.dtype

Check failure

Code scanning / CodeQL

Potentially uninitialized local variable Error

Local variable 'cst' may be used before it is initialized.
Check failure

Code scanning / lintrunner

RUFF/F821 Error
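The identity-value approach described in the comment above (-inf for max, inf for min, 0 for add, 1 for mul) can be illustrated in plain numpy. `scatter_reduce_amax` below is a hypothetical 1-D sketch of the behavior being fixed, not the PR's actual converter code:

```python
import numpy as np

def scatter_reduce_amax(dst, index, src, include_self=True):
    # Hypothetical 1-D illustration of emulating torch.scatter_reduce
    # (reduce="amax") with an ONNX-style scatter that always folds the
    # existing destination value into the reduction.
    out = dst.copy()
    if not include_self:
        # Overwrite the scattered positions with the identity of max
        # (the dtype's minimum) so the original values drop out of the
        # reduction result.
        out[index] = np.finfo(out.dtype).min
    np.maximum.at(out, index, src)
    return out

dst = np.array([10.0, 10.0, 10.0], dtype=np.float32)
index = np.array([0, 1])
src = np.array([5.0, 7.0], dtype=np.float32)
print(scatter_reduce_amax(dst, index, src, include_self=True))   # [10. 10. 10.]
print(scatter_reduce_amax(dst, index, src, include_self=False))  # [ 5.  7. 10.]
```

With include_self=True the untouched destination values win the max; with include_self=False only the scattered source values remain at the indexed positions.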
