[Automated-testing] Working Group Pages

Tim.Bird at sony.com
Mon Nov 19 16:31:29 PST 2018



> -----Original Message-----
> From: Neil Williams 
> 
> On Sat, 17 Nov 2018 at 02:26, <Tim.Bird at sony.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Carlos Hernandez on Thursday, November 15, 2018 7:06 AM
> > >
> > > I think we should create a page called working groups, milestones or
> > > something else along those lines under the top page
> > > https://elinux.org/Automated_Testing
> > >
> > > The working groups page should then point to individual pages focused on
> > > solving a specific problem. People can follow (i.e. add to their watchlist)
> > > working group pages that they are interested in.
> > >
> > > A couple of working group (WG) pages to start with could be 'test results
> > > definition WG' and 'test case definition WG'.
> > >
> > > I can set up the first couple of pages to get this rolling.
> >
> > This sounds great to me.  I have been gathering information related
> > to some of this on sub-pages off of https://elinux.org/Test_Standards
> >
> > For example, I've been gathering dependency information on:
> > https://elinux.org/Test_Dependencies
> 
> With LAVA, we have been trying to get away from the test framework
> knowing anything about dependencies of any kind because that breaks
> portability of the test operation.
> 
> For example, the test writer needs to do the work of installing
> packages or ensuring that the support is built into deployments which
> don't support package managers. This allows the same unchanged test
> operation to run on Debian or in OE. The test can then be reproduced
> easily on Red Hat or Ubuntu without needing to investigate how to
> unpick the dependency format for their own system.
I don't think this is the right approach, but maybe I'm not understanding.

There are multiple types of dependencies, including ones which
prohibit a test from running, and ones which require some change
on the target before a test can be run.

And some dependencies might be one or the other depending on local
configuration (such as whether elevated permissions are available, or
whether the DUT must not be disturbed).

However, in the case of package management, I think it's asking too much
for the test author to deal with package dependencies on multiple different
distributions.  Given that Phoronix Test Suite has come up with a system that
that can install required packages on BSD, Windows or Linux, I think
it should be possible to come up with something that can deal with package
dependencies on multiple Linux distributions.  Now, where package boundaries
are weird, or there's no package management provided by the system, maybe
this must be left as an exercise for the user.  But I think the goal would be to
support automation of this as much as possible.
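
To make that concrete, here is the kind of thing I'm imagining (only a
sketch; the package list and the particular package managers below are
placeholder examples, not a proposal):

#!/bin/sh
# Sketch: install generic dependencies with whatever package manager
# is present.  The package list (iperf3, bc) is only an example.
deps="iperf3 bc"

if command -v apt-get >/dev/null 2>&1; then
    apt-get install -y $deps        # Debian, Ubuntu
elif command -v dnf >/dev/null 2>&1; then
    dnf install -y $deps            # Fedora, Red Hat
elif command -v opkg >/dev/null 2>&1; then
    opkg install $deps              # OpenEmbedded images with opkg
else
    echo "no known package manager; assuming dependencies are preinstalled"
fi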

> 
> Test writers then create a script which checks to see if any setup
> work is required and does whatever steps are required. That script
> then also analyses metadata and parameters to work out what it should
> do. The same script gets re-used for multiple test operations with
> different dependencies and can work inside the test framework and
> outside it. e.g. by checking for lava-test-case in $PATH, it is easy
> to call lava-test-case if it exists or use echo | print | log to
> declare what happened when running on a developer machine outside of
> any test framework.
OK - Reading this I think I've misinterpreted what you said in your first
paragraph.  But I'm now a bit lost.  Are you saying lava-test-case
does dependency checking, or that it *is* a dependency that is checked
for (with a check of $PATH)?
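
In other words, is the pattern something like the following sketch?
(Here mytest.sh is just a hypothetical stand-in for the actual test.)

#!/bin/sh
# Sketch of the wrapper pattern as I understand it.
if ./mytest.sh; then
    result=pass
else
    result=fail
fi

if command -v lava-test-case >/dev/null 2>&1; then
    # Running inside LAVA: report through the framework.
    lava-test-case mytest --result "$result"
else
    # Running on a developer machine: just print what happened.
    echo "mytest: $result"
fi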

> 
> Such portability is an important aid in getting problems found in CI
> fixed in upstream tools because developers who are not invested in the
> test framework need to be able to reproduce the failure on their own
> desks without going through the hoops to set up another instance of an
> unfamiliar test framework.
Agreed.  The ability to reproduce test results on their own desks
should be made as easy as possible with whatever framework the
user chooses (including no framework).

> 
> It would be much better to seek for portable test definitions which
> know about their own dependencies than to prescribe a method of
> exporting those dependencies. This reduces the number of assumptions
> in the test operation and makes it easier to get value out of the CI
> by getting bugs fixed outside the narrow scope of the CI itself.
> 
> > and information about result codes on:
> > https://elinux.org/Test_Result_Codes
> 
> Test results are not the only indicator of a test operation.
??  I'm not sure what you mean.  By definition the indicator
of the test operation is the test result.

> A simple
> boot test, like KernelCI, does not care about pass, fail, skip or
> unknown.
Sure it does.  A successful boot is a pass, and an unsuccessful boot would
be a fail, and maybe finding the board unavailable would be a skip
or an error (depending on the result code scheme used).
Maybe we're meaning different things when we talk about 
"test results".

> A boot test cares about was the entire test job Complete or
> Incomplete. If Incomplete, what was the error type and error message.
> If the error type was InfrastructureError, then resubmit (LAVA will
> have already submitted a specialised test job called a health check
> which has the authority to take that one device offline if the device
> repeats an InfrastructureError exception) - a different device will
> then pick up the retry.
> 
> Test results only apply to "functional" test jobs
Every kind of test has a result.  Benchmark tests have numeric results.
Functional tests have pass/fail results.
> - there is a whole
> class of boot-only test jobs where test results are completely absent
> and only the test job completion matters.
Test job completion would be the test result.  Maybe you're
referring to test output?
 -- Tim


