1. Store project information in the database: name, commit, `mapping.db` size, full test-suite size.
2. Remove a single random line from the source.
3. Check the resulting changes (diff).
4. Find the tests selected at line-level granularity.
5. Find the tests selected at file-level granularity.
6. Run the line-level, file-level, and full test suites, capturing the pytest exit code of each.
7. Store the results in the database: all exit codes, the diff, and the test-suite sizes.
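The per-iteration steps above could be sketched roughly as follows. This is a minimal illustration, not the project's actual code: `remove_random_line` and `run_pytest` are hypothetical helper names, and storing the results is left out.

```python
import random
import subprocess
from pathlib import Path


def remove_random_line(path: Path, rng=random) -> tuple[int, str]:
    """Drop one random non-blank line from a source file.

    Returns the removed line's index and content so the change
    can be recorded alongside the test results.
    """
    lines = path.read_text().splitlines(keepends=True)
    candidates = [i for i, line in enumerate(lines) if line.strip()]
    idx = rng.choice(candidates)
    removed = lines.pop(idx)
    path.write_text("".join(lines))
    return idx, removed


def run_pytest(selected=None) -> int:
    """Run pytest on a selected subset (or everything); return its exit code.

    pytest exit codes: 0 = all passed, 1 = some tests failed,
    5 = no tests collected. Comparing the codes of the line-level,
    file-level, and full runs shows whether selection caught the fault.
    """
    cmd = ["python", "-m", "pytest", "-q", *(selected or [])]
    return subprocess.run(cmd).returncode
```

One run would then remove a line, run all three suites, and store the three exit codes plus the diff for later comparison.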
Build the mapping by iterating through the commits of a project.
This could perhaps also be used for evaluation.
Known issue: dependencies and project structure change between commits. There is currently no solution, and none is planned.
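Iterating the commits could be sketched like this, assuming the project is a local git checkout (`git rev-list` and `git checkout` are standard git commands; the coverage/mapping step is a placeholder):

```python
import subprocess


def project_commits(repo: str, n: int = 50) -> list[str]:
    """Return the last n commit hashes of `repo`, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", "--reverse", f"--max-count={n}", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()


def build_mapping(repo: str) -> None:
    """Walk the commits, checking each one out in turn (detached HEAD)."""
    for sha in project_commits(repo):
        subprocess.run(["git", "-C", repo, "checkout", "-q", sha], check=True)
        # Placeholder: run the test suite under coverage here to
        # update the line -> test mapping for this commit.
```

The dependency problem mentioned above shows up exactly here: an old commit may need a different environment than the current one, which this sketch does not handle.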
Future
Remove more than one random line at a time (configurable count) and compare the results with single-line removal.
Is replacing a code line with a bare newline (`\n`) a valid line-removal strategy? It produces many faults that prevent the code from running at all.
Should individual test failures be collected? Currently the random-removal test compares pytest exit codes (i.e. whether the full suite found a fault that the selected tests missed) and the sizes of the test sets. Would it be useful to know how many individual tests failed in the cases where test selection fails to find any tests?
Collect timing data, for example how much time is saved by running only the selected tests instead of the full suite.
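On the newline-replacement question above: one way to compare the two strategies, and to filter out mutants that do not even parse, could look like the sketch below. The helper names are hypothetical, and `compile()` only catches syntax and indentation errors, not runtime faults.

```python
import random


def mutate(source: str, strategy: str = "delete", seed=None) -> str:
    """Remove one random non-blank line from `source`.

    strategy="delete" drops the line (shifts all later line numbers);
    strategy="blank" replaces it with an empty line (keeps numbering,
    but easily breaks indentation-sensitive Python code).
    """
    rng = random.Random(seed)
    lines = source.splitlines()
    candidates = [i for i, line in enumerate(lines) if line.strip()]
    i = rng.choice(candidates)
    if strategy == "delete":
        del lines[i]
    else:
        lines[i] = ""
    return "\n".join(lines) + "\n"


def still_compiles(source: str) -> bool:
    """True if the mutant at least parses (IndentationError is a SyntaxError)."""
    try:
        compile(source, "<mutant>", "exec")
        return True
    except SyntaxError:
        return False
```

Counting how often each strategy yields a non-compiling mutant would give a concrete answer to whether newline replacement is a useful removal strategy or mostly generates trivially broken code.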
@guotin I'm not sure this is still relevant. If you have plans, please describe them here; if not and it is in good shape, just close the issue.