Currently, performance measurements are performed at the end of some unit tests. These slow down the testing cycle and could be pulled out into their own subroutines. Doing so would also help identify common setup code that can be refactored out.
We also need:

- a way to tag tests so a user can run only the unit or only the performance tests (see the CTest sketch below), and
- a way of regularly running the performance tests to check for performance regressions.
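One possible way to tag tests is with CTest labels. A minimal sketch, assuming a CMake-based build and using hypothetical test names:

```cmake
# Hypothetical CMakeLists.txt fragment: attach labels to existing tests so the
# fast unit cycle and the slower performance checks can be selected separately.
set_tests_properties(test_foo PROPERTIES LABELS "unit")              # placeholder test name
set_tests_properties(test_foo_perf PROPERTIES LABELS "performance")  # placeholder test name
```

Tests can then be filtered at run time: `ctest -L unit` for the quick development cycle, and `ctest -L performance` on a schedule (e.g. nightly CI) to watch for regressions.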
How about running the same set of test programs with an optional argument such as --perf, so that when the argument is present the test program calls the subroutine being tested in a loop (say 100 times) and reports the performance?
I'm not sure how to enable this in CTest, or whether it is possible at all, but if we add this optional argument and loop over the call when it is present, we can at least run the performance tests by hand.
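It should be possible to wire this into CTest, since `add_test` accepts arbitrary command-line arguments. A sketch under that assumption, with hypothetical names, registering the same executable twice:

```cmake
# Sketch: register one hypothetical test program twice, once as the plain unit
# test and once with the proposed --perf flag for the timed, looped run.
add_test(NAME solver_unit COMMAND test_solver)
add_test(NAME solver_perf COMMAND test_solver --perf)
set_tests_properties(solver_unit PROPERTIES LABELS "unit")
set_tests_properties(solver_perf PROPERTIES LABELS "performance")
```

The looping and timing would live inside the test program behind the --perf check; CTest only needs to know which arguments to pass and which label to attach.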