[Automated-testing] Farming together - areas of collaboration

William Mills wmills at ti.com
Mon Feb 19 10:13:00 PST 2018


Neil,

On 02/19/2018 03:39 AM, Neil Williams wrote:
> From a LAVA perspective, we set the abstraction at:
> 
> * Jinja2 for device configuration
> * portable shell scripts to run on the DUT or in a container to control
> the DUT
> * Keep reboot and serial connection support *out* of the test operations
> * Retain only generic / relatively basic result presentation /
> notification but provide a series of APIs which can be used to present
> the results in ways which are suitable to the consumers of the CI loop.
> 
> Reboot is a particular problem. Many DUTs require complex steps to get
> back into a test environment after a reboot. (In the case of TFTP, it
> means a whole new deployment and that's often best done as a separate
> test job.) Many existing test frameworks are written to run on a single
> developer - single device on my desk model. That then leads to the
> automation of the test itself including commands to control the device.
> It is much more flexible to isolate the DUT control from the test framework.

There are actually multiple concerns here:

1) Isolation

A test should not be able to interfere with devices not given to it as a
resource.  (Ex: A test on DUT1 should not be able to reset DUT2 unless
DUT2 is also part of the same test).

2) Abstraction

Tests that do not care how the device gets booted should not have code
in them that does that.  This makes tests portable and maintainable.

3) Boot Control

A test should be able to control the details of boot and deployment if
it needs to.

--------

I actually think both the LAVA dispatcher and VATF (the equivalent in TI
OpenTest) have this wrong today.  LAVA achieves 1 & 2.  VATF achieves 2 & 3.

VATF today fails at #1 because it has equipment drivers that directly
control the PDU, relays, etc.  This is a problem for isolation, as it
allows a badly written test case to mess with other, independent test cases.

It is not a failure of abstraction, because power & relay control (among
others) are abstracted in the code base, so test benches can have
different methods of power control without changing the test case logic.

New equipment drivers could be written for VATF to talk to labgrid or
another daemon that would allow it to control only the resources (PDU
sockets, serial ports, relays, Digital Power Meter) that were assigned
by the scheduler.

This would fix #1-Isolation for VATF, and very few, if any, test cases
would need to be rewritten, thanks to the existing power, relay, & serial
port abstraction layers.
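
To make this concrete, here is a rough sketch of what such a driver could
look like.  It is in Python rather than VATF's Ruby, the class and method
names are made up, and the labgrid-client invocations reflect only my
understanding of that CLI, so treat it as an illustration and not a patch:

import subprocess

class LabgridPowerDriver:
    """Power control limited to the labgrid place assigned to this run.

    Because the driver can only name its own place, a badly written test
    case has no path to some other DUT's PDU socket (#1-Isolation), while
    test cases keep calling the same power_on()/power_off() abstraction
    they use today (#2-Abstraction).
    """

    def __init__(self, place):
        self.place = place  # labgrid place name handed out by the scheduler

    def _client(self, *args):
        subprocess.run(["labgrid-client", "-p", self.place, *args],
                       check=True)

    def power_on(self):
        self._client("power", "on")

    def power_off(self):
        self._client("power", "off")

    def power_cycle(self):
        self._client("power", "cycle")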

We agree that #2-Abstraction is very important.  However, having the
*capability* to control the low-level details does not cause a violation
of this principle.  The vast majority of VATF test cases do not have any
logic on how to boot the board.  Instead they inherit the default "boot"
method.  The default boot method will look at the board type and the
capabilities & policies of the test bench and do the right thing.  This
can be enhanced via test run arguments if you wish (for example, to run
the same test via different deployment models).
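
In code terms the shape is roughly the following (a Python sketch with
invented names, not VATF's actual Ruby API): most test cases inherit
boot() untouched, and only the test run arguments decide whether a
non-default deployment model is used.

class DefaultTest:
    """Base class: the vast majority of test cases just inherit boot()."""

    def __init__(self, bench, params=None):
        self.bench = bench          # test bench capabilities & policies
        self.params = params or {}  # test run arguments

    def boot(self):
        # Use the deployment model from the test run arguments if given,
        # otherwise let the bench pick one from board type and policy.
        model = self.params.get("deploy", self.bench.default_deploy())
        self.bench.deploy_and_boot(model)

    def run(self):
        raise NotImplementedError


class FilesystemStressTest(DefaultTest):
    """A typical #2-style test case: no boot logic of its own at all."""

    def run(self):
        self.boot()  # inherited default "do the right thing"
        self.bench.dut_shell("stress-ng --hdd 4 --timeout 60s")

Running the same test via a different deployment model is then just a
matter of passing, say, {"deploy": "nfs"} as a test run argument, with no
change to the test case logic.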

#3-Control is important to us and others.  With low-level control we can
test things like the following (a sketch of the first one is below):
* Do 1000 boot cycles with pseudo-random off times between 0.5 and 2
seconds and categorize the results
* Test the SoC ROM's serial boot method
* Test the SoC ROM's TFTP boot method
* Test U-Boot's TFTP boot method
* Test U-Boot's USB boot method
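
As promised, here is what the first item could look like as a test case,
assuming hypothetical dut.power_off()/dut.power_on()/dut.wait_for_boot()
primitives exposed by the framework; the point is only that the test owns
the power sequencing, not the exact names:

import collections
import random
import time

def boot_cycle_test(dut, cycles=1000, off_min=0.5, off_max=2.0, timeout=60):
    """Power cycle the DUT many times with pseudo-random off times and
    categorize the outcomes."""
    results = collections.Counter()
    for _ in range(cycles):
        dut.power_off()
        time.sleep(random.uniform(off_min, off_max))  # 0.5 to 2.0 seconds
        dut.power_on()
        try:
            dut.wait_for_boot(timeout=timeout)
            results["booted"] += 1
        except TimeoutError:
            results["no boot"] += 1
    return results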

These tests are indeed less portable than the ones for case #2.  The SoC
ROM serial boot test cases would be TI-specific and would probably need a
tweak for each new TI SoC.  However, they are test-bench portable.  They
just need a DUT that has power control, a serial port connection, and
bootmode pin control.  The test case does not care if the bootmode is
controlled via 1 relay or 8 relays, or which manufacturer those relays
come from.

The SoC ROM test case would declare that it needs bootmode control (or
perhaps it declares which bootmodes it needs).  The scheduler would only
run that test on boards that have the needed bootmode control.  (Bootmode
control would not be needed on all boards, just a subset.)
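
A minimal sketch of that matching step, with invented capability names
just to show the idea:

# The test declares what it needs; the scheduler only offers boards whose
# declared capabilities cover it.
TEST_REQUIRES = {"bootmode_control": {"uart", "ethernet"}}

BENCH_BOARDS = [
    {"name": "evm-1", "bootmode_control": {"uart", "ethernet", "mmc"}},
    {"name": "evm-2", "bootmode_control": set()},  # no bootmode relays wired
]

def eligible_boards(requires, boards):
    return [b for b in boards
            if requires["bootmode_control"] <= b["bootmode_control"]]

print([b["name"] for b in eligible_boards(TEST_REQUIRES, BENCH_BOARDS)])
# -> ['evm-1']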

VATF today (as I understand it, Carlos is the guru) maintains these
independent layers:
	VATF framework
	VATF test cases
	Per test bench capabilities and properties
	Test database

In a more broadly deployed system you could perhaps imagine more layers:
---- Framework
	Generic framework
	Equipment driver plug-ins
	Deployment model libraries
	Board type knowledge plug-ins

---- Test Logic
	Test cases
	(many tests can use the same test logic with
	different test run arguments)

---- Lab knowledge
	Test bench capabilities and properties

---- Test database
	What tests need to be run for release X
	The parameters for each test
	What tests have passed/failed for release X

BTW: I have talked about VATF a lot above.
It is a TODO to write up / update our OpenTest info.  Here is some quick info:

http://arago-project.org/wiki/index.php/Opentest

OpenTest was created by TI in 2009 and is used primarily by TI.
It uses existing open source projects (TestLink and STAF) and adds new
content from TI. The TI framework and test case logic are open source
(BSD) but the test database is closed (for now anyway).

It was made open source largely so our customers could duplicate our
test environment. So far very few have done so, but we will continue to
work on it as open source.

We have a very small team that works on it, so we are not scaled for
widespread deployment and community adoption.  We think there are a lot
of good principles in it.  Please borrow good ideas from it at will, and
please don't copy the mistakes (like the use of Ruby :) ).

We plan to continue to use OpenTest as our main test framework and VATF
as a major "Test Execution Engine" (TEE).  However, OpenTest allows
multiple TEEs to be used on the same board for different tests, so we
have integrated the LAVA dispatcher as a TEE also.

Bill

