[Automated-testing] Automatic test result interpretation

Richard Palethorpe rpalethorpe at suse.de
Thu Jul 18 06:05:51 PDT 2019


Hello,

This mainly relates to tagging failing tests with bug references (say a
Bugzilla ID or a mailing list item) and propagating those throughout the
community, regardless of the bug trackers, test suites, test runners or
test result databases being used.
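
To give a rough idea of what I mean by tag propagation, here is a
minimal sketch in Julia. The types and field names are made up for
illustration; they are not the actual JDP API:

    # Hypothetical types; the real data model is richer than this.
    struct BugRef
        tracker::String   # e.g. "bsc" for the SUSE Bugzilla
        id::Int
    end

    struct TestResult
        name::String
        status::String          # "passed", "failed", ...
        tags::Vector{BugRef}
    end

    # Copy tags from previously reviewed failures onto new, untagged
    # failures of the same test.
    function propagate!(new_results, old_results)
        known = Dict(r.name => r.tags for r in old_results if !isempty(r.tags))
        for r in new_results
            if r.status == "failed" && isempty(r.tags) && haskey(known, r.name)
                append!(r.tags, known[r.name])
            end
        end
        new_results
    end

In practice the interesting part is deciding when a tag should carry
over (same product, same architecture, bug still open, and so on).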

It also covers automatically detecting anomalies in test results,
regardless of the test runner used and with or without nicely formatted
metadata, as well as other similar problems.
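
As a trivial example of the kind of anomaly check I mean (the real
reports do more than this), you could flag tests whose latest result
differs from what they usually do. Again the names are illustrative
only:

    # Hypothetical sketch: flag tests whose latest status differs from
    # their most common historical status.
    function anomalies(history::Dict{String,Vector{String}},
                       latest::Dict{String,String})
        flagged = String[]
        for (name, status) in latest
            past = get(history, name, String[])
            isempty(past) && continue
            usual = first(sort(unique(past); by = s -> count(==(s), past), rev = true))
            status == usual || push!(flagged, name)
        end
        flagged
    end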

I have, roughly speaking, created a data-analysis framework which can
help to solve these problems, and I have used it to create a few
reports/scripts which handle "bug tag propagation" and some form of
anomaly detection.

It is kind of difficult to explain this in writing, so please see the
following video:

https://youtu.be/Nzha4itchg8
https://palethorpe.gitlab.io/jdp/reports/ (link to the reports mentioned)

I designed/evolved the framework to accept whatever input is available
from many disparate sources, sidestepping the issue of what test result
format or test metadata format to use. However, it still requires some
effort to integrate a new format or to accept results from a new test
runner.
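
Concretely, the idea is that each source only needs a small conversion
step into one common record type, after which everything else is
shared. A very rough sketch, with field names that are only
illustrative and not the real JDP or openQA schema:

    # One common record type for all sources (hypothetical).
    struct Result
        suite::String
        name::String
        status::String
    end

    # One small parser per input format; the rest of the pipeline only
    # ever sees Vector{Result}.
    parse_openqa(job::Dict) =
        Result(job["group"], job["test"], job["result"])

    parse_junit(case::Dict) =
        Result(case["classname"], case["name"],
               haskey(case, "failure") ? "failed" : "passed")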

There is more information available here:
https://palethorpe.gitlab.io/jdp/

I must stress that the methods we are using right now are quite simple,
but we could easily incorporate something like MLJ[1]. I hypothesize
that this would allow us to automate the vast majority of test result
review, as most test failures contain some common pattern which can be
used to identify them as a known failure. This might even be achievable
purely by using some DSL[2] to specify heuristics.
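
Purely to illustrate the flavour of such heuristics (this is not the
proposed DSL, and the patterns and bug IDs below are invented),
something as simple as matching known log patterns already covers a lot
of ground:

    # Invented rules for illustration; real rules would live in data,
    # not code, and carry more context than a single regex.
    const rules = [
        r"BUG: soft lockup"           => "bsc#0000001",
        r"Kernel panic - not syncing" => "bsc#0000002",
    ]

    # Return the bug tags whose pattern occurs in a failure's log.
    match_known_failures(log::AbstractString) =
        [bug for (pat, bug) in rules if occursin(pat, log)]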

However, we first need enough data from enough sources to make such
efforts worthwhile. Currently it mostly works for our internal uses in
the SUSE kernel QA team, but we have not had much serious interest from
outside our team, so I will put this to one side and work on something
else for a while to avoid tunnel vision. If you are interested, please
let me know, so that I can justify the time to make it more accessible.

[1] https://julialang.org/blog/2019/05/beyond-ml-pipelines-with-mlj
[2] https://palethorpe.gitlab.io/jdp/bug-tagging/#Advanced-Tagging-(Not-implemented-yet)-1

--
Thank you,
Richard.

