
Add integration test for DeepL feature #2136

Closed
Tracked by #765 ...
timobrembeck opened this issue Mar 22, 2023 · 8 comments · Fixed by #2278 or #2400
Comments

@timobrembeck (Member)

Motivation

In order to provide a stable machine translation feature, we should add tests for DeepL, similar to what we did for SUMM.AI.

Proposed Solution

  1. Make the DeepL API URL configurable
  2. Pass the URL from the settings to the DeepL client (see docs and the sketch after this list)
  3. Add a mocked server which returns the same response format as DeepL
  4. Change the config in tests to use the mocked server and test all DeepL features:
    • Checkbox in the page form
    • Bulk actions in the event & location lists
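
A minimal sketch of what steps 1 and 2 could look like (the setting names are assumptions for illustration, not necessarily the ones used in integreat-cms); the official deepl Python client accepts a custom base URL via its server_url parameter, so the test configuration can later point this setting at the mocked server:

import deepl
from django.conf import settings

# DEEPL_API_URL is a hypothetical setting, e.g. defined in settings.py as
#   DEEPL_API_URL = os.environ.get("DEEPL_API_URL", "https://api.deepl.com")
# In tests it would be overridden with the mocked server's URL.
translator = deepl.Translator(
    settings.DEEPL_AUTH_KEY,            # assumed name of the existing auth key setting
    server_url=settings.DEEPL_API_URL,  # the official client accepts a custom base URL here
)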

Alternatives

Fingers crossed 🤞

timobrembeck added the feature (New feature or request), prio: medium (Should be scheduled in the foreseeable future) and effort: medium (Should be doable in <12h) labels on Mar 22, 2023
timobrembeck added this to the 23Q2 milestone on Mar 22, 2023
This was referenced on Mar 22, 2023
seluianova self-assigned this on Apr 5, 2023
@seluianova (Contributor)

I faced 2 issues:

  1. The supported source and target languages for DeepL are not loaded in tests because of this condition, which is not met (see the sketch after this list):
    if "runserver" in sys.argv or "APACHE_PID_FILE" in os.environ

  2. The CMS loads the supported languages at start-up, which means the mocked server must already be running before the CMS starts.
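
For context, a heavily simplified and partly hypothetical reconstruction of that gating (only the quoted condition and the supported_source_languages attribute mentioned later in this thread come from the project; the class and helper names are assumptions). Since neither "runserver" nor APACHE_PID_FILE is present under pytest, the language lists simply stay empty:

import os
import sys

from django.apps import AppConfig


class DeepLApiConfig(AppConfig):  # class name is an assumption
    name = "integreat_cms.deepl_api"
    supported_source_languages = []

    def ready(self):
        # Only query the DeepL API when the CMS is actually being served
        # (runserver or Apache), not when pytest merely imports the apps.
        if "runserver" in sys.argv or "APACHE_PID_FILE" in os.environ:
            self.supported_source_languages = self.fetch_supported_languages()  # hypothetical helper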

@timobrembeck (Member, Author) commented Apr 25, 2023

I faced 2 issues:

  1. The supported source and target languages for DeepL are not loaded in tests because of this condition, which is not met:
    if "runserver" in sys.argv or "APACHE_PID_FILE" in os.environ

  2. The CMS loads the supported languages at start-up, which means the mocked server must already be running before the CMS starts.

What do you think about patching the ready() function of the deepl app in tests, similar to e.g. here?
I think we can set the supported target languages to a fixed list in tests - we don't want to interact with the real DeepL API...

@seluianova (Contributor)

we don't want to interact with the real DeepL API

I assumed we want to do that through the mocked DeepL API. But to implement it that way, the mocked server would have to be started before the CMS.

What do you think about patching the ready() function of the deepl app

In this case we won't cover the loading of the supported languages.
I guess I can just set them directly then:
apps.get_app_config("deepl_api").supported_source_languages = ["de"]
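
Spelled out as a pytest fixture, that could look roughly like this (only the app label and supported_source_languages appear in this thread; the supported_target_languages attribute, the language codes and the fixture name are assumptions):

# conftest.py -- rough sketch with partly assumed names
import pytest
from django.apps import apps


@pytest.fixture
def deepl_supported_languages():
    """Set the DeepL language lists to fixed values instead of querying the API."""
    app_config = apps.get_app_config("deepl_api")
    app_config.supported_source_languages = ["de"]
    app_config.supported_target_languages = ["en-us", "en-gb"]  # assumed attribute name
    return app_config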

@seluianova (Contributor) commented May 8, 2023

I discovered another strange problem:

If I use AsyncClient(), tests are failing on this request:

from django.urls import reverse

# Resolve the event list URL for the target language and request it
event_tree = reverse(
    "events",
    kwargs={
        "region_slug": region_slug,
        "language_slug": target_language_slug,
    },
)
response = await client.get(event_tree)

The response code is 200, and the response content itself is fine.

Error:
/home/tory/Integreat/integreat-cms/.venv/lib/python3.9/site-packages/django/db/models/fields/__init__.py:1358:DateTimeField Event.end received a naive datetime (2023-05-08 00:00:00) while time zone support is active.
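
For reference, Django emits this warning whenever a naive datetime reaches a DateTimeField (in an assignment or a queryset filter) while USE_TZ is active; the snippet below is a generic illustration of the difference, not the actual integreat-cms code path:

from datetime import datetime

from django.utils import timezone

naive = datetime(2023, 5, 8)        # midnight, no tzinfo: triggers the warning once it hits a DateTimeField
aware = timezone.make_aware(naive)  # attaches the current time zone
now = timezone.now()                # always aware while USE_TZ = True

# e.g. a lookup like Event.objects.filter(end__gte=naive) would emit
# "received a naive datetime ... while time zone support is active",
# whereas filtering with `aware` or `now` would not.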


What is particularly odd:

  1. The same code works fine (for the event tree) if I use Client() instead of AsyncClient()
  2. I have no idea where the value 2023-05-08 for Event.end even comes from, because the end date of the event in the response is 01.01.2030.
  3. AsyncClient() works fine for the page tree and the location tree.

@timoludwig can you see any leads here? 🙈

@timobrembeck (Member, Author)

@seluianova very strange indeed 🤔
Unfortunately, I have no idea what could cause this... 😒

I also don't know where the date is coming from; in theory, the bulk action should only modify the translation objects and not the content objects. I don't see a place where Event.end could be reset to a naive local datetime during DeepL translation...
Maybe, if events are the only problem, we can start with tests for pages and POIs and think about events later?

@seluianova (Contributor) commented May 8, 2023

I don't see a place where Event.end could be reset to a naive local datetime during DeepL translation...

Actually, this error occurs even before the DeepL translation.

Maybe, if events are the only problem, we can start with tests for pages and POIs and think about events later?

I believe we can :)

@seluianova (Contributor) commented May 8, 2023

So, assuming that the problem is in AsyncClient(), I could make these tests synchronous by replacing the async aiohttp_raw_server with the synchronous pytest-httpserver for the DeepL mock (https://pypi.org/project/pytest-httpserver/#description).
With that change, all three tests (pages, events, POIs) pass.
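
For illustration, such a mock could look roughly like this with pytest-httpserver (the JSON body follows the documented DeepL v2 response format; the setting name and the fixture wiring are assumptions, and the settings fixture comes from pytest-django):

import pytest
from pytest_httpserver import HTTPServer


@pytest.fixture
def mock_deepl(httpserver: HTTPServer, settings):
    # Answer every translation request with a fixed DeepL-style payload
    httpserver.expect_request("/v2/translate", method="POST").respond_with_json(
        {"translations": [{"detected_source_language": "DE", "text": "Translated text"}]}
    )
    # Point the CMS at the mock instead of the real API (setting name assumed)
    settings.DEEPL_API_URL = httpserver.url_for("/")
    return httpserver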

@timoludwig do you have any objections if I make the summ_ai tests synchronous in the same way, for consistency?

@timobrembeck (Member, Author)

So, assuming that the problem is in AsyncClient(), I could make these tests synchronous by replacing the async aiohttp_raw_server with the synchronous pytest-httpserver for the DeepL mock (https://pypi.org/project/pytest-httpserver/#description). With that change, all three tests (pages, events, POIs) pass.

Ok, seems reasonable.

@timoludwig do you have any objections if I make the summ_ai tests synchronous in the same way, for consistency?

The problem is that the SUMM.AI client only works in an async context, so I'm not sure whether this would work. But maybe I'm missing something... However, since there is a major PR open for SUMM.AI (#1980), I'd suggest waiting until that PR is merged before refactoring the SUMM.AI tests.

It's not ideal to have multiple dependencies for the same task, but I guess it's better than investing hours and hours in debugging this problem 😅
And it's only a dev dependency...

seluianova linked a pull request on May 16, 2023 that will close this issue
timobrembeck modified the milestones: 23Q2, 23Q3 on Jul 2, 2023
seluianova reopened this on Jul 7, 2023
seluianova linked a pull request on Aug 16, 2023 that will close this issue