[Automated-testing] Test stack survey - LKFT

Tim.Bird at sony.com
Mon Oct 8 16:05:37 PDT 2018



> -----Original Message-----
> From: Mark Brown
> 
> On Mon, Oct 08, 2018 at 09:19:43PM +0000, Tim.Bird at sony.com wrote:
> 
> > OK - this is interesting.  Fuego tries to address this issue with something
> > we call the 'fail_ok_list', that lists expected failures.  However we leave
> > them as failures, but prevent them from bubbling up the
> testcase/test_set/test_suite
> > hierarchy to indicate overall test failure.
> 
> FWIW the XFAIL thing is something GCC do as well (or used to last time I
> paid much attention to GCC development which was "some" years ago).

I'll have to check it out.  Thanks.
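
Just to make the fail_ok_list idea concrete, here is a rough sketch of the
concept in Python.  The names and structure are illustrative only, not
Fuego's actual code:

  # Hypothetical sketch of an expected-failure ("fail_ok") list.
  # Testcases listed here still get recorded as failures, but they
  # don't bubble up to make the test_set/test_suite result a failure.
  FAIL_OK_LIST = {"testcase_known_broken", "testcase_flaky_on_this_board"}

  def summarize(results):
      # results: dict mapping testcase name -> "PASS" or "FAIL"
      unexpected = [name for name, status in results.items()
                    if status == "FAIL" and name not in FAIL_OK_LIST]
      overall = "FAIL" if unexpected else "PASS"
      return overall, unexpected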

gcc is interesting when viewed from the perspective of being a test tool
(which it is often used as - lots of 'build' tests consist of just executing the
compiler).  It's usually viewed as a compilation tool, with testing as a
side effect, which is correct.  But if you look at how one uses it to test
things, there are a number of notable features:
1) the reporting space is sparse - only failures and warnings are printed - not
every testcase that is executed.  This is because there can be thousands of
'tests' done on every line of code that is compiled, and the testcase space is
just too big.
2) users can control which things are considered errors (so, similar to a
fail_ok_list, or the XFAIL mechanism we were just discussing), as well as
which things are considered warnings (using -W flags).  Sometimes in-source
mechanisms (pragmas) are used to mark things to be ignored.
3) they have a fairly well-regularized, line-based output format.  Each testcase
reported consists of an error or warning line with consistent strings, and any
additional diagnostic data always appears on subsequent lines (if I'm not mistaken).
They've clearly thought about machine parsing of their output (see the sketch
after this list).
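
As a rough illustration of how parseable that format is, here's a small
Python sketch (hypothetical, not part of any existing tool) that picks the
"file:line:column: severity: message" lines out of gcc output and ignores
the extra context lines that follow each one:

  import re

  # Matches gcc's one-line diagnostics, e.g.
  #   foo.c:12:5: warning: unused variable 'x' [-Wunused-variable]
  DIAG_RE = re.compile(r'^(?P<file>[^:]+):(?P<line>\d+):(?P<col>\d+): '
                       r'(?P<sev>error|warning|note): (?P<msg>.*)$')

  def parse_gcc_output(text):
      diags = []
      for line in text.splitlines():
          m = DIAG_RE.match(line)
          if m:
              diags.append(m.groupdict())
      return diags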

 -- Tim





