[Automated-testing] Test stack survey - LKFT

Michal Simek michal.simek at xilinx.com
Tue Oct 9 00:32:30 PDT 2018


On 9.10.2018 01:05, Tim.Bird at sony.com wrote:
> 
> 
>> -----Original Message-----
>> From: Mark Brown
>>
>> On Mon, Oct 08, 2018 at 09:19:43PM +0000, Tim.Bird at sony.com wrote:
>>
>>> OK - this is interesting.  Fuego tries to address this issue with something
>>> we call the 'fail_ok_list', which lists expected failures.  We leave them
>>> recorded as failures, but prevent them from bubbling up the
>>> testcase/test_set/test_suite hierarchy to indicate overall test failure.
>>
>> FWIW the XFAIL thing is something GCC does as well (or used to, the last time
>> I paid much attention to GCC development, which was "some" years ago).
> 
> I'll have to check it out.  Thanks.
> 
> gcc overall is interesting when viewed as a test tool (which it often is -
> lots of 'build' tests consist of just executing the compiler).  It's usually
> viewed as a compilation tool, with testing as a side effect - which is correct.
> But if you look at how one uses it to test things, there are a number of
> features:
> 1) the reporting space is sparse - only failures and warnings are printed - not
> every testcase that is executed.  This is because there can be thousands of
> 'tests' done on every line of code that is compiled, and the testcase space is
> just too big.
> 2) users can control what things are considered errors (so, similar to a 
> fail_ok_list, or the xfail thing we were just discussing), as well as
> what things are considered warnings (using -W flags).  Sometimes, in-source
> mechanisms are used to mark things to be ignored.
> 3) They have a fairly well-regularized, line-based output format.  Each reported
> testcase consists of an error or warning line, with consistent strings, and any
> additional diagnostic data on subsequent lines (if I'm not mistaken).  They've
> thought about machine parsing of their output.
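
For point 3, the diagnostic lines are indeed regular enough to pick apart with
a small script. A minimal sketch in Python (the pattern only covers the usual
"file:line:column: severity: message" shape, so treat it as an illustration
rather than a complete parser):

import re

# Matches the usual gcc/clang diagnostic shape, e.g.
#   foo.c:12:34: warning: unused variable 'x' [-Wunused-variable]
DIAG_RE = re.compile(
    r'^(?P<file>[^:\s]+):(?P<line>\d+):(?:(?P<col>\d+):)?\s*'
    r'(?P<severity>error|warning|note): (?P<message>.*)$'
)

def parse_build_log(text):
    """Yield one dict per diagnostic line; everything else is skipped."""
    for line in text.splitlines():
        m = DIAG_RE.match(line)
        if m:
            yield m.groupdict()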

pytest also has an xfail marker:
https://docs.pytest.org/en/latest/skipping.html
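
The marker itself is a one-liner. A minimal sketch (the test body is made up
purely for illustration):

import pytest

# Expected failure: a failing run is reported as 'xfail', a passing run as
# 'xpass'; only with strict=True does an unexpected pass fail the suite.
@pytest.mark.xfail(reason="known rounding bug")
def test_division():
    assert 1 / 3 == 0.3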

In U-Boot we use the test/py/ pytest-based testing framework, and I have
marked some internal tests as xfail because we run the same tests on HW
and on QEMU, where not everything is modeled.
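
For the HW-vs-QEMU split the conditional form of the marker works well. A
sketch only - running_on_qemu() is a made-up stand-in for however the target
type is detected in a given setup, not something from the U-Boot tests:

import os
import pytest

def running_on_qemu():
    # Hypothetical target detection; a real setup would take this from the
    # test configuration rather than an environment variable.
    return os.environ.get("TEST_TARGET", "") == "qemu"

# Only expected to fail on QEMU, where the device is not modeled.
@pytest.mark.xfail(running_on_qemu(), reason="device not modeled in QEMU")
def test_device_present():
    # Stand-in for the real hardware probe.
    assert not running_on_qemu()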

Thanks,
Michal


