[Automated-testing] Test stack survey - LKFT

Tim.Bird at sony.com Tim.Bird at sony.com
Mon Oct 8 14:19:43 PDT 2018


> -----Original Message-----
> From: Milosz Wasilewski 
> 
> On Mon, 8 Oct 2018 at 18:57, <Tim.Bird at sony.com> wrote:
> >
> > > -----Original Message-----
> > > From: Milosz Wasilewski
> >
...
> > > * do you have a common set of result names: (e.g. pass, fail, skip, etc.)
> > > "yes"
> > > ** if so, what are they?
> > > "pass, fail, skip, xfail (for ignored failed test)"
> > xfail is interesting.  I haven't seen that one called out (unless
> > I'm not understanding it correctly).
> >
> > What category would you assign to a testcase that failed to
> > run due to:
> > 1) user abort of the test run?
> > 2) DUT is not available?
> > 3) test program faulted?
> >
> > Fuego assigns these to 'error', which means something went wrong
> > besides the testcase itself.  Does LKFT have a similar result category?
> > (Well, sometimes we just leave a testcase result as SKIP if we don't
> > have a result for it and can't determine what happened.)
> 
> It depends on the cause, I think. 'xfail' is assigned to a test case
> that was executed properly and produced a 'fail' result. Then, during
> result postprocessing, we check whether such a failure is marked as a
> 'known issue' (in the database). If it is, the result is changed to
> xfail. I know this isn't the correct usage of this state. IIRC, xfail
> should simply be equal to 'pass', because we deliberately trigger the
> condition that is considered 'failed' for some reason. Anyway, LKFT
> uses xfail to mark failures that we know about but don't want to deal
> with right now. It is meant to make a distinction between a 'known'
> failure and a 'regression', which is an 'unexpected failure'.
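
(For illustration, the post-processing described above amounts to roughly
the following sketch. The names are hypothetical and known_issues stands in
for the known-issue database lookup; this is not LKFT's actual code.)

# Hypothetical sketch of the post-processing step described above; the
# names (postprocess, known_issues, testcase_A/B) are illustrative only.

def postprocess(results, known_issues):
    """Reclassify known failures as 'xfail'; leave everything else alone."""
    for name, result in results.items():
        if result == 'fail' and name in known_issues:
            results[name] = 'xfail'
    return results

print(postprocess({'testcase_A': 'fail', 'testcase_B': 'pass'},
                  known_issues={'testcase_A'}))
# -> {'testcase_A': 'xfail', 'testcase_B': 'pass'}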

OK - this is interesting.  Fuego tries to address this issue with something
we call the 'fail_ok_list', which lists expected failures.  However, we leave
them as failures, but prevent them from bubbling up the testcase/test_set/test_suite
hierarchy to indicate overall test failure.

So a test_set with failing individual testcases can pass if the only failing
testcases are in the fail_ok_list.  I like the idea of marking them visually to
distinguish them.
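
For illustration, that roll-up amounts to roughly the following sketch. The
names are hypothetical and FAIL_OK_LIST stands in for the per-test list of
expected failures; this is not Fuego's actual code.

# Hypothetical sketch of the fail_ok_list roll-up described above; the
# names are illustrative, not Fuego's actual implementation.

FAIL_OK_LIST = {'testcase_known_broken'}

def test_set_result(testcase_results):
    """A test_set fails only if some failure is outside the fail_ok_list."""
    for name, result in testcase_results.items():
        if result == 'fail' and name not in FAIL_OK_LIST:
            return 'fail'
    return 'pass'

print(test_set_result({'testcase_known_broken': 'fail',
                       'testcase_other': 'pass'}))
# -> 'pass', because the only failure is in the fail_ok_list
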
 -- Tim


