[Automated-testing] Test stack survey - LKFT

Tim.Bird at sony.com
Mon Oct 8 10:57:12 PDT 2018



> -----Original Message-----
> From: Milosz Wasilewski 

OK - thanks for the response.  I have created a wiki page for this here:
https://elinux.org/LKFT_survey_response

One or two questions or comments below...

> 
> ==== Test scheduling/management ====
> Does your test system:
> * check that dependencies are met before a test is run?
> "yes - as a part of test job setup"
> * schedule the test for the DUT?
> "yes"
> ** select an appropriate individual DUT based on SUT or test attributes?
> "yes"
> ** reserve the DUT?
> "yes"
> ** release the DUT?
> "yes"
> * install the software under test to the DUT?
> "yes"
> * install required packages before a test is run?
> "leave to user - dependency installation can be part of a test execution"
> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.)
> "no"
> * deploy the test program to the DUT?
> "yes"
> * prepare the test environment on the DUT?
> "yes"
> * start a monitor (another process to collect data) on the DUT?
> "no - test monitoring happens over serial connection to DUT"

OK - is there any support for something like capturing a trace during
execution, or would this be left to the user to configure as part of
the test definition?
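
(To make the question concrete, here is a rough sketch of the kind of
wrapper I have in mind - illustrative only, not LKFT or LAVA syntax;
the trace-cmd invocation and the test command are placeholders:

    import subprocess

    def run_with_trace(test_cmd, trace_file="trace.dat"):
        # trace-cmd records the selected kernel events for the lifetime
        # of the wrapped command, writing them to trace_file on exit.
        # (Typically needs root to access tracefs.)
        record = ["trace-cmd", "record", "-e", "sched", "-o", trace_file]
        return subprocess.run(record + test_cmd).returncode

    # e.g.:  rc = run_with_trace(["./my_benchmark"])

Something like this is easy for a user to embed in their own test
definition; I'm asking whether the framework gives any help with it.)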

> * start a monitor on external equipment?
> "yes"
> * initiate the test on the DUT?
> "yes"
> * clean up the test environment on the DUT?
> "no"

I presume this is because LKFT follows the LAVA model
of rebooting between tests, so the slate is always clean?
If not, let me know.

> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts
> "yes"
> ** in what format?
> "leave to user - uploaded files are generated during the test, which
> is controlled by user"
> * put the run meta-data in a database?
> "yes"
> ** if so, which database?
> "postgresql"
> * parse the test logs for results?
> "yes"
> * convert data from test logs into a unified format?
> "no"
> ** if so, what is the format?
> * evaluate pass criteria for a test (e.g. ignored results, counts or
> thresholds)?
> "yes"
> * do you have a common set of result names: (e.g. pass, fail, skip, etc.)
> "yes"
> ** if so, what are they?
> "pass, fail, skip, xfail (for ignored failed test)"
xfail is interesting.  I haven't seen that one called out before
(unless I'm misunderstanding it).

What category would you assign to a testcase that failed to run due to:
1) a user abort of the test run?
2) the DUT being unavailable?
3) the test program faulting?

Fuego assigns these the result 'error', which means something went wrong
outside the testcase itself.  Does LKFT have a similar result category?
(Well, sometimes we just leave a testcase result as SKIP if we don't
have a result for it and can't determine what happened.)
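
(In code terms, the distinction we draw is roughly the following - a
simplified sketch, not actual Fuego code, and the cause names are
illustrative:

    # Sketch only: 'error' marks problems in the harness or lab,
    # as distinct from the outcome of the testcase itself.
    ERROR_CAUSES = {"user_abort", "dut_unavailable", "test_program_fault"}

    def classify(outcome):
        if outcome in ERROR_CAUSES:
            return "error"          # infrastructure failed, not the test
        if outcome in {"pass", "fail", "skip"}:
            return outcome
        return "skip"               # no result, and cause unknown
)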

> ==== User interface ====
> Does your test system:
> * have a visualization system?
> "yes"
> * show build artifacts to users?
> "no"
> * show run artifacts to users?
> "yes"
> * do you have a common set of result colors?
> "yes"
> ** if so, what are they?
> "red - fail
> green - pass
> yellow - skip
> blue - xfail
> grey - total"
The separate color for aggregate result counts is also interesting.
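(As an aside, the whole mapping fits in a few lines - here in Python,
taken straight from your answer above:

    # LKFT's result colors, per the survey response.
    RESULT_COLORS = {
        "fail":  "red",
        "pass":  "green",
        "skip":  "yellow",
        "xfail": "blue",
        "total": "grey",  # aggregate count, not an individual result
    }
)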

> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT?
> "yes"
> * require minimum hardware on the DUT (e.g. memory)
> "yes"
> ** If so, what? (e.g. POSIX shell or some other interpreter, specific
> libraries, command line tools, etc.)
> "software - POSIX shell
> hardware:
>  - boot on power

Good catch!  It's surprising that there are dev boards in this day and age
that don't support this simple attribute.  It might be good to call this
out as a recommended practice in any documents we produce.
I assume by this you mean a board that comes all the way up to the
functioning state of the SUT without requiring button presses or any
other user action.  Is that right?

>  - serial line with unique ID
I'm not sure what you mean by this.  Does this mean something
that can have a unique ID in /dev/serial/by-id?
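
(That is, something a host could resolve like the sketch below, which
assumes udev's stable by-id symlinks; the function name and the example
ID are made up:

    import glob

    def find_dut_serial(id_fragment):
        # udev keeps stable per-device symlinks under /dev/serial/by-id,
        # so the same physical DUT resolves to the same path across
        # reconnects and reboots.
        for path in sorted(glob.glob("/dev/serial/by-id/*")):
            if id_fragment in path:
                return path
        return None

    # e.g. find_dut_serial("FTDI") might return
    # "/dev/serial/by-id/usb-FTDI_TTL232R-3V3_FTXYZ123-if00-port0"
)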

>  - 'not-crazy' bootloader"

Can you be more specific (either by naming a 'crazy' bootloader, or a
bootloader feature (or anti-feature) that would make it 'crazy')?  :-)

Thanks very much!
 -- Tim



