Motivation

In addition to #2091, we could also measure the API response time.
We probably don't want to generate load, so the simplest option would be to capture the current timestamp before sending the request and again after receiving the response.
Questions to think about:
The response time may depend on the system, so perhaps we want to introduce a separate parameter to enable/disable this measurement?
Perhaps it could even be a separate set of tests and a separate CI job that does not cause the build to fail, but instead adds an informational comment, like Code Climate does?
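For illustration, a minimal sketch of that simplest option (timestamps around a single request), assuming a pytest-style test with an HTTP test client; the endpoint path and the `client` fixture are placeholders, not the project's actual API:

```python
import time

def test_list_endpoint_response_time(client):
    start = time.perf_counter()
    response = client.get("/api/items/")  # hypothetical endpoint
    elapsed = time.perf_counter() - start

    assert response.status_code == 200
    # Report rather than fail: a hard threshold would make CI flaky.
    print(f"GET /api/items/ took {elapsed * 1000:.1f} ms")
```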
To gain at least a little bit of determinism, I think the CircleCI servers should be used as the baseline for execution times. Maybe a first step would be to gather some statistics, e.g. for 20 CircleCI runs, what are the min/max/avg execution times for the API endpoints?
I still think there are probably edge cases where the timing will be completely off, e.g. because some task gets scheduled later than usual.
However, I think this really has a low priority, because we don't know whether it adds much benefit on top of the SQL query counter, whereas the other tickets in #765 are immediately very useful.
As an option, though, we could use these tests not to check performance against some golden numbers, but against the master branch. That way we could potentially catch performance degradation.
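To make that concrete, one possible shape for such a check (the file names, JSON format, and 25% tolerance are all made up for illustration): a CI run on master stores its measured timings as an artifact, and the branch build compares its own numbers against them and reports endpoints that got noticeably slower:

```python
import json

# Hypothetical comparison step: master_timings.json would be produced by a CI
# run on the master branch, branch_timings.json by the current branch.
TOLERANCE = 1.25  # allow 25% slack to absorb CI noise

def find_regressions(master_file="master_timings.json", branch_file="branch_timings.json"):
    with open(master_file) as f:
        master = json.load(f)
    with open(branch_file) as f:
        branch = json.load(f)
    regressions = []
    for endpoint, master_ms in master.items():
        branch_ms = branch.get(endpoint)
        if branch_ms is not None and branch_ms > master_ms * TOLERANCE:
            regressions.append((endpoint, master_ms, branch_ms))
    return regressions

if __name__ == "__main__":
    for endpoint, old_ms, new_ms in find_regressions():
        print(f"{endpoint}: {old_ms:.1f} ms on master -> {new_ms:.1f} ms on this branch")
```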
Sounds like an interesting idea; it could be worth investigating.
I'm not completely familiar with CircleCI's internal load balancing; it could also be the case that they assign resources to containers quite dynamically, which would make the whole thing less predictable and could cause some test cases to have different execution times even if nothing else changed.
I think a good first step would be to write some code to measure the execution times and then let these tests run a few times with the same code base, to see whether the results are reasonably consistent or whether the variance is high.
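A rough sketch of what that measurement code could look like, assuming the same hypothetical test client as above: repeat each request a number of times and report min/max/mean plus the standard deviation, so the noise across CircleCI runs (including the 20-run statistics suggested earlier) becomes visible. The repeat count and endpoint paths are arbitrary:

```python
import statistics
import time

def summarize_response_times(client, path, repeats=20):
    """Time `repeats` requests to `path` and summarize the spread (hypothetical helper)."""
    samples_ms = []
    for _ in range(repeats):
        start = time.perf_counter()
        client.get(path)
        samples_ms.append((time.perf_counter() - start) * 1000)
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "mean": statistics.mean(samples_ms),
        "stdev": statistics.stdev(samples_ms),
    }

# Example usage: print the summary for a few endpoints, run the job several
# times on CI with the same code base, and compare the numbers by eye before
# automating anything.
# for path in ("/api/items/", "/api/users/"):
#     print(path, summarize_response_times(client, path))
```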