[Automated-testing] Modularization project - parser

Tim.Bird at sony.com
Thu Nov 15 18:05:57 PST 2018


Hey everyone,

One thing I think Fuego does reasonably well, at least conceptually, is
its parser.  Our architecture allows for
simple regular-expression-based parsers, as well as
arbitrarily complex parsing in the form of a Python program.

The parser is used to transform output from a test program
into a set of testcase results and test measurement results (for
benchmarks).  It can also optionally split the output into chunks so
that additional information (e.g. diagnostic data) can be displayed
by our visualization layer (currently Jenkins) on a
testcase-by-testcase basis.
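To illustrate the regular-expression style described above, here is a
minimal sketch of a parser that turns testlog lines into a set of
testcase results.  The log format, pattern, and function names are all
illustrative assumptions, not Fuego's actual parser API:

```python
import re

# Hypothetical testlog format, for illustration only.
TESTLOG = """\
TEST-1: mkdir_basic ... PASS
TEST-2: rmdir_basic ... FAIL
TEST-3: symlink_loop ... PASS
"""

# One regular expression maps each output line to a (testcase, result) pair.
RESULT_RE = re.compile(r"^TEST-\d+: (?P<name>\S+) \.\.\. (?P<result>PASS|FAIL)$")

def parse_testlog(text):
    """Transform raw test output into a dict of testcase -> result."""
    results = {}
    for line in text.splitlines():
        m = RESULT_RE.match(line)
        if m:
            results[m.group("name")] = m.group("result")
    return results

print(parse_testlog(TESTLOG))
```

For tests whose output doesn't fit a single pattern, the same entry
point could be implemented by an arbitrary Python function instead.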

It has multiple outputs, including a JSON format that is usable
with KernelCI, charting outputs suitable for use with a JavaScript
plotting library (as well as HTML tables), and a Fuego-specific
flat text file used for aggregated results.

This is all currently integrated into the core of Fuego.  However,
we have been discussing breaking it out and making a standalone
testlog parser project.

I envision something that takes a test's testlog, and the run meta-data,
and spits out results in multiple formats (junit, xunit, kernelci json, etc.)
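As a sketch of one such output backend, the following emits JUnit-style
XML from a parsed internal model.  The input shape (`name`/`status`
dicts) and the suite name are assumptions for illustration; a real tool
would need writers for each target format:

```python
import xml.etree.ElementTree as ET

# Hypothetical internal results model produced by the parsing step.
RESULTS = [
    {"name": "mkdir_basic", "status": "PASS"},
    {"name": "rmdir_basic", "status": "FAIL"},
]

def to_junit_xml(results, suite_name="fuego.testsuite"):
    """Write the internal model out as minimal JUnit-style XML."""
    suite = ET.Element("testsuite", name=suite_name, tests=str(len(results)))
    for r in results:
        case = ET.SubElement(suite, "testcase", name=r["name"])
        # JUnit convention: a <failure> child marks a failing testcase.
        if r["status"] != "PASS":
            ET.SubElement(case, "failure", message=r["status"])
    return ET.tostring(suite, encoding="unicode")

print(to_junit_xml(RESULTS))
```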

From the survey, I noted that some systems prefer it if the tests
instrument their output with special tokens.  I'd like to gather up
all the different token sets, and if possible have the parser autodetect
the type of output it's processing, so this is all seamless and requires
little or no configuration on the part of the test framework or end user.
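Autodetection could be as simple as matching each known token set's
signature against the log.  The registry below is a sketch under that
assumption; the TAP, gtest, and LTP patterns are rough approximations
of those formats' tokens, not a definitive list:

```python
import re

# Each known output style gets a signature regex; the first style whose
# signature appears anywhere in the log wins.  Illustrative only.
STYLE_SIGNATURES = {
    "TAP": re.compile(r"^(ok|not ok) \d+", re.M),
    "gtest": re.compile(r"^\[ *(OK|FAILED) *\]", re.M),
    "ltp": re.compile(r"\b(TPASS|TFAIL)\b"),
}

def detect_style(log_text):
    """Guess which token set a testlog uses, or None if unrecognized."""
    for style, sig in STYLE_SIGNATURES.items():
        if sig.search(log_text):
            return style
    return None

print(detect_style("ok 1 - boot\nnot ok 2 - network\n"))  # → TAP
```

A real implementation would need tie-breaking rules for logs that
contain tokens from more than one set.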

Also, from the test survey, I noted that some systems use a declarative
style for their parser (particularly Phoronix Test Suite).  I think it would
be great to support both declarative and imperative models.
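The two models could share one result schema, with the declarative
parser expressed as data interpreted by a generic engine, and the
imperative parser as ordinary code.  The benchmark log line and spec
format below are made-up examples of that idea:

```python
import re

LOG = "bench dhrystone: 123.4 DMIPS\n"

# Declarative model: the parser is data (a pattern with named fields),
# interpreted by a generic engine.  The spec format is hypothetical.
DECLARATIVE_SPEC = {
    "regex": r"bench (?P<name>\w+): (?P<value>[\d.]+) (?P<units>\S+)",
}

def run_declarative(spec, log):
    m = re.search(spec["regex"], log)
    return m.groupdict() if m else None

# Imperative model: the parser is arbitrary code producing the same schema.
def run_imperative(log):
    m = re.search(r"bench (\w+): ([\d.]+) (\S+)", log)
    return {"name": m.group(1), "value": m.group(2), "units": m.group(3)}

# Both models yield the same measurement record.
assert run_declarative(DECLARATIVE_SPEC, LOG) == run_imperative(LOG)
```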

It would be good, IMHO, if the parser could also read in results from
any known results formats, and output in another format, doing
the transformation between formats.
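One way to structure such a transformation engine is a registry of
readers (format to internal model) and writers (internal model to
format), so any reader can be paired with any writer.  The format
names, flat-file layout, and JSON shape below are assumptions, not the
real Fuego or KernelCI schemas:

```python
import json

def read_fuego_flat(text):
    # Hypothetical flat format: "testcase STATUS" per line.
    return [{"name": name, "status": status}
            for name, status in
            (line.split() for line in text.splitlines() if line)]

def write_kernelci_json(results):
    # Minimal JSON shape for illustration, not the real KernelCI schema.
    return json.dumps({"test_cases": results})

READERS = {"fuego-flat": read_fuego_flat}
WRITERS = {"kernelci-json": write_kernelci_json}

def transform(text, from_fmt, to_fmt):
    """Read results in one known format and write them out in another."""
    return WRITERS[to_fmt](READERS[from_fmt](text))

print(transform("mkdir_basic PASS\nrmdir_basic FAIL\n",
                "fuego-flat", "kernelci-json"))
```

Adding a new format then means writing one reader and/or one writer,
rather than a converter for every format pair.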

Let me know if I'm duplicating something that's already been done.
If not, I'll try to start a wiki page, and some discussions around the
design and implementation of this parser (results transformation engine)
on this list.

In particular, I'd like to hear people's requirements for such a tool.

Regards,
 -- Tim


