
Huygens MTF #55

Merged
merged 27 commits into MREYE-LUMC:main on Dec 19, 2023

Conversation

noahrbn
Contributor

@noahrbn noahrbn commented Dec 10, 2023

Proposed change

Type of change

  • Example (a notebook demonstrating how to use ZOSPy for a specific application)
  • Bugfix (non-breaking change which fixes an issue)
  • New analysis (a wrapper around an OpticStudio analysis)
  • New feature (other than an analysis)
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code or addition of tests
  • Documentation (improvement of either the docstrings or the documentation website)

Additional information

  • Python version: 3.11.5
  • OpticStudio version: 23.2.2

Related issues

Checklist

  • I have followed the contribution guidelines
  • The code has been linted, formatted and tested locally using tox.
  • Local tests pass. Please fix any problems before opening a PR. If this is not possible, specify what doesn't work and why you can't fix it.
  • I added new tests for any features contributed, or updated existing tests.
  • I updated CHANGELOG.md with my changes (except for refactorings and changes in the documentation).

If you contributed an analysis:

  • I did not use AttrDicts for the analysis result data (please use dataclasses instead).

If you contributed an example:

  • I contributed my example as a Jupyter notebook.

@crnh
Collaborator

crnh commented Dec 11, 2023

Hi @noahrbn,

Thank you for contributing to ZOSPy! As it's quite busy on our side, it may take some time before we can review your contribution. Meanwhile, can you lint, format and test your code as indicated in the contribution guidelines? The output of the pull request checks suggests there are some issues related to our coding conventions that can be easily resolved by following the workflow explained in the contribution guidelines. If you are new to this and don't know how to get started, feel free to drop us a message and we're happy to help you!

@noahrbn
Contributor Author

noahrbn commented Dec 11, 2023

The lint checks on my end are now passing - I will write unit tests (I am still familiarizing myself with your unit testing conventions) in another pull request. Please feel free to point out anything obvious - I am somewhat unfamiliar with this.

I see that when I run tox, black automatically reformats the code so the lint passes, but ruff does not. Is there a flag I can pass to tox to automatically fix any errors?

@crnh
Collaborator

crnh commented Dec 11, 2023

Nice to hear you got tox running! Black does indeed automatically reformat the code. For ruff, you can pass the --fix argument through tox (note the extra double dashes -- that separate the arguments for ruff from the arguments for tox):

tox run -e lint -- --fix

Note that not all errors can be fixed automatically (those that can be fixed are marked with [*]). Looking at the output from the GitHub Actions, most of them should already be solved by running black.

As for the unit tests, we'll wait for your 2nd PR. As the huygens_mtf analysis is largely similar to fft_through_focus_mtf, the unit tests will probably be similar as well. Please also take a look at the documentation for generating reference data, as you need some data to test against ;).

@noahrbn
Contributor Author

noahrbn commented Dec 11, 2023

I decided to include the unit testing as a part of this PR. I have committed it now and the new tests under test_mtf.py seem to pass on my machine. Let me know if anything else is needed to approve the PR.

I have a few questions:

  1. Could you clarify exactly how this unit testing is set up? I don't understand it from your README documents. Basically, what I understand is that the tests.yaml file specifies all the tests that are to be performed, and the generate_test_data_reference script goes through this file and creates .zmx files specific to each test (for most of them it seems to be simple_system, i.e., just a singlet lens). The analysis is run on this file. Then, when pytest is run, we see if the analysis performed with zospy matches the results from the file. Maybe I'm missing something, but isn't this a bit circular? What exactly is being tested here?
  2. How does the communication between pytest/each test_*.py file and the files in tests/data/reference work? For example, in each test_*_returns_correct_results method, it is unclear to me how expected_data is passed in.
  3. Why is it necessary to have both a test_*_returns_correct_results and test_*_matches_reference_data method? If I look at how fft_through_focus_mtf was implemented in tests.yaml, it seems that only the correct_results method is tested against reference data.

@crnh
Collaborator

crnh commented Dec 12, 2023

Thanks a lot for contributing the unit tests as well, nice work!

Answers to your questions:

  1. The generate_reference_data script is separate from the unit tests. It is only used to generate the data files that are used by the unit tests to check the output of the analyses. tests.yaml is only used by the generate_reference_data script. It specifies for each analysis on which optical system it should be run, and which arguments should be supplied to the analysis (often there are multiple sets of parameters for a single analysis). For each specified analysis and parameter set, generate_reference_data builds the system in OpticStudio, performs the analysis and saves the analysis output using zospy.analyses.base.AnalysisResult.to_json(). It also saves the system in .zmx format. Only the json files are used; the .zmx files are only saved for reference purposes. (A hypothetical sketch of a tests.yaml entry is shown after this list.)

    Isn't this a bit circular? Yes, it is: we use ZOSPy to generate the reference analysis output, and then test ZOSPy against this output. This is not really a "unit test", unlike for example test_to_json, which clearly tests a single piece of functionality (the ability to reconstruct an AnalysisResult from its own json serialization). Isn't that useless? No, it isn't: these tests should be seen as regression tests. Although we would like to, we cannot really test the validity of OpticStudio's output, but at least we can check that the output stays the same. If we change some functionality used by one or more analyses, these tests help us make sure the change didn't break the analyses or change their output.

  2. To pass the expected data to the unit tests, we make use of PyTest's fixtures. These fixtures are functions that can perform some setup steps and then return a value; this helps to keep the unit tests modular. As you can see, tests/analyses/conftest.py defines a fixture named expected_data. When PyTest encounters the expected_data parameter in a test definition, it will call the expected_data fixture and pass its return value as an argument to the unit test. If you check this fixture, you will see that it determines the name of the appropriate reference data file based on the test name and its parameters. If this file is not present, the test will be skipped. (A simplified sketch of this mechanism is shown at the end of this comment.)

    Note that this also means you need to run the generate_reference_data script when you add new tests; otherwise these tests will still be skipped, because the necessary reference data is missing.

  3. We do not only want to ensure consistent results for a single version of OpticStudio, but also check if results are consistent between different versions. To do this, we defined a reference version (currently OpticStudio 23 R1). If you run the unit tests with a different OpticStudio version (say, 23 R2), test_*_returns_correct_results will check the analysis output against the reference data for that version (OpticStudio 23 R2). test_*_matches_reference_data will check the analysis output against the reference data for the reference version (OpticStudio 23 R1). For this check, the reference data files for OpticStudio 23 R1 are used, so it is not necessary to separately specify these test cases for the test_*_matches_reference_data tests (you can check this if you take a look at the reference_data fixture).
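
For illustration, an entry in tests.yaml could look roughly like this. This is a hypothetical sketch based on the description in point 1; the field names and parameter values are assumptions, not the actual schema:

    # Hypothetical sketch; the real tests.yaml schema may differ.
    huygens_mtf:
      - system: simple_system        # optical system to build in OpticStudio
        parameters:                  # one analysis run per parameter set
          sampling: 64x64
          maximum_frequency: 150
      - system: simple_system
        parameters:
          sampling: 128x128
          maximum_frequency: 450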

I hope this answers your questions, and let me know if something is still unclear!
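
For completeness, here is a minimal sketch of the fixture mechanism described in point 2, assuming a reference directory of tests/data/reference. It is a simplified stand-in: the real expected_data fixture derives the file name from the test name and parameters in a more elaborate way.

    # conftest.py - simplified stand-in for the real expected_data fixture
    import json
    from pathlib import Path

    import pytest

    REFERENCE_DIR = Path("tests/data/reference")  # assumed location

    @pytest.fixture
    def expected_data(request):
        # request.node.name encodes the test name and its parametrization
        reference_file = REFERENCE_DIR / f"{request.node.name}.json"
        if not reference_file.exists():
            pytest.skip("Reference data is missing for this test")
        return json.loads(reference_file.read_text())

    # A test that declares the fixture as a parameter receives its return value:
    def test_huygens_mtf_returns_correct_result(expected_data):
        ...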

@crnh crnh self-requested a review December 12, 2023 14:30
@noahrbn
Contributor Author

noahrbn commented Dec 13, 2023

Thanks for the clarification - that makes a lot of sense.

Please let me know if any more information is needed to approve the PR.

@LucVV
Contributor

LucVV commented Dec 13, 2023

I see that test_huygens_mtf_returns_correct_result uses assert np.allclose to compare the result and the expected data. We have just released version 1.1.2, and one of the updates is that we changed to pandas.testing.assert_frame_equal for these comparisons, as it also checks the columns and indices of the DataFrames (see #56). I think that should also be the way forward in this pull request.
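
To illustrate the difference between the two comparisons, here is a minimal, self-contained sketch (the DataFrames are made up, not actual analysis output):

    import numpy as np
    import pandas as pd
    from pandas.testing import assert_frame_equal

    result = pd.DataFrame({"MTF": [1.0, 0.8]}, index=[0.0, 10.0])
    expected = pd.DataFrame({"MTF": [1.0, 0.8]}, index=[0.0, 10.0])

    # np.allclose compares only the values...
    assert np.allclose(result.to_numpy(), expected.to_numpy())

    # ...while assert_frame_equal also verifies index, columns, and dtypes
    assert_frame_equal(result, expected)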

@noahrbn
Contributor Author

noahrbn commented Dec 13, 2023

@LucVV I have accepted and committed your suggestions. I will test this branch locally myself to make sure the tests still pass, but other than that I think this is good to go.

Collaborator

@crnh crnh left a comment

Very nice work, thanks a lot! There are some minor issues that need to be addressed before we can merge this PR.

@noahrbn
Contributor Author

noahrbn commented Dec 15, 2023

I have addressed the above comments with the most recent commits. I have linted the code and all unit tests (including new ones) pass on my machine.

Collaborator

@crnh crnh left a comment

Thanks for addressing my comments! I think this one is ready to be merged if you address these final comments and all tests pass on our systems.

Collaborator

@crnh crnh left a comment

All tests pass on my system! Thanks for, and congratulations on, your first contribution, @noahrbn! If you want, you may add the reference data files for your OpticStudio version to this PR, but we do not require this.

@LucVV Do you still want to run the unit tests on your system?

@crnh crnh requested a review from LucVV December 18, 2023 08:36
@LucVV
Contributor

LucVV commented Dec 18, 2023

Great to hear! And congratulations indeed @noahrbn!

I'll run the test and add reference data for OpticStudio 20.3.2 today!

@LucVV
Contributor

LucVV commented Dec 18, 2023

The new tests run on my system and the test results of OpticStudio 20.3.2 match those of 23.2.1! I have added the reference data for 20.3.2.

I noticed that only test_huygens_mtf_returns_correct_result was implemented. I have also implemented test_huygens_mtf_matches_reference_data to compare different OpticStudio versions. @crnh, can you add the reference files for 23.1.0 (the current reference version)?

@noahrbn noahrbn changed the title Huygensmtf Huygens MTF Dec 18, 2023
Contributor

@LucVV LucVV left a comment

All looks good! @crnh should we still add reference data for 23.1.0?

@crnh
Collaborator

crnh commented Dec 19, 2023

Yes, I'll add the reference data today!

@crnh crnh merged commit fd7f19b into MREYE-LUMC:main Dec 19, 2023