[Automated-testing] Farming together - areas of collaboration

Neil Williams neil.williams at linaro.org
Mon Feb 19 00:11:59 PST 2018


On 12 February 2018 at 21:51, <Tim.Bird at sony.com> wrote:

>
>
> > -----Original Message-----
> > From: Steve McIntyre on Monday, February 12, 2018 10:33 AM
> > On Tue, Nov 14, 2017 at 09:16:01PM +0000, Bird, Timothy wrote:
> >
> > >I think one of the most difficult things will be building the ecosystem
> > >to support a solution here.  There are issues with finding a home for
> > >this stuff, in that someone will have to do maintenance work and support
> > >use cases that don't apply to themselves.  In other words, I'm worried
> > >about the economic incentives to help concentrate the collaborative
> > >effort required for this.
> > >
> > >Also, there will be difficulty getting people with existing systems to
> > >switch over to something new. That's going to be a lot of pain for not
> > >much near-term gain.  Most people just cobble together their own system,
> > >get to a fixed level of automation, then move on to testing or board
> > >usage within that framework.
> > >
> > >To be a bit more blunt about it, as a concrete example: would LAVA adopt
> > >a new system for low-level board control, if it presented itself?  I
> > >kind of doubt it.  Who would do this work?
> >
> > That's the big question, yes. In the LAVA team, we've already solved a
> > lot of the issues that we've seen, in our own ways. We want to be good
> > Open Source citizens (of course!), but it's difficult to justify
> > spending much engineering time on ripping things out and moving to
> > different underpinnings unless we can see concrete benefit. I'd expect
> > most teams to be the same. Chicken and egg, as you said earlier. The
> > key thing to make it all worthwhile will be to show the value of doing
> > it, while minimising the cost. Let's see where we can take that.
>
> Steve - good to see your response.
>
> I agree with the 'difficult to justify... without concrete benefit' idea.
>
> I'm struggling with trying to decide what system to put my own efforts
> into, and to use with Fuego.  My understanding of the different systems
> is somewhat limited, but I'm aware of at least the following:
>  - r4d - linutronix
>  - ttc - Sony (well, mostly me and my lab)
>  - labgrid - pengutronix
>  - pduclient/LAVA - Linaro
>

There are more elements involved here than just remote power. Device
control in LAVA extends to a range of other services. The package
dependencies "recommend" pduclient because it's the earliest tool we
used, it is relatively easy to configure, and it copes with a reasonably
wide range of DUTs. However, pduclient is no longer used in the Linaro
lab where we host instances like validation.linaro.org, LKFT and our own
staging instance. Remote power control for these uses either SNMP or
Cambrionix PP15s switchable USB hubs. There are also commands required
to connect to serial consoles (including support for multiple serial
ports) and to interface with static and dynamic hardware through udev.

As far as LAVA is concerned, it breaks down into two areas:

* Admins create scripts which can be called with simple arguments like the
address of the hardware switching the power and the number of the port on
that hardware for the DUT.
* Device configuration is created from a templating system based on Jinja2.

Jinja2 is an incredibly powerful mechanism and really needs to be
considered for any situation where the instructions to control a device
are to be standardised across different systems. LAVA uses YAML output
from Jinja2, but Jinja2 can produce any ASCII-based output whilst
supporting logic, conditionals, loops and overrides. This is a somewhat
hidden area of LAVA, an extra layer which isn't immediately obvious
until users start running their own LAVA labs.
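
To make that concrete, here is a minimal sketch of the extend-and-override
pattern using the standard jinja2 Python API. The base template, device
file, variable names and commands in this sketch are invented for
illustration; they are not the real templates shipped with lava-server.

# Minimal sketch using the standard jinja2 package. The base template,
# the per-device file and the values are invented for illustration.
from jinja2 import Environment, DictLoader

templates = {
    # hypothetical base template: provides defaults and emits YAML
    'base-board.jinja2': (
        "device_type: {{ device_type | default('base-board') }}\n"
        "commands:\n"
        "  power_on: {{ power_on_command | default('/bin/true') }}\n"
        "  power_off: {{ power_off_command | default('/bin/true') }}\n"
    ),
    # hypothetical per-device file: a handful of overrides, in the same
    # spirit as the 7 lines for staging-black01 below
    'board01.jinja2': (
        "{% extends 'base-board.jinja2' %}\n"
        "{% set power_on_command = '/usr/local/lab-scripts/pdu_on --port 15' %}\n"
        "{% set power_off_command = '/usr/local/lab-scripts/pdu_off --port 15' %}\n"
    ),
}

env = Environment(loader=DictLoader(templates))
print(env.get_template('board01.jinja2').render())

The top-level {% set %} assignments in the per-device file override the
defaults used by the base template, which is the same idea that lets a
few lines of per-device overrides expand into a large rendered
configuration.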

This allows us to generate a complex 250-line YAML device configuration
from just 7 lines of template:
https://git.linaro.org/lava/lava-lab.git/tree/staging.validation.linaro.org/master-configs/staging-master.lavalab/lava-server/dispatcher-config/devices/staging-black01.jinja2

{% extends 'beaglebone-black.jinja2' %}
{% set connection_list = ['uart0'] %}
{% set connection_tags = {'uart0': ['primary', 'telnet']} %}
{% set connection_commands = {'uart0': 'telnet localhost 7105'} %}
{% set hard_reset_command = '/usr/local/lab-scripts/snmp_pdu_control --hostname pdu15 --command reboot --port 15' %}
{% set power_off_command = '/usr/local/lab-scripts/snmp_pdu_control --hostname pdu15 --command off --port 15' %}
{% set power_on_command = '/usr/local/lab-scripts/snmp_pdu_control --hostname pdu15 --command on --port 15' %}

The rendered configuration (only really useful for the lava-dispatcher
codebase) can be downloaded:
https://staging.validation.linaro.org/scheduler/device/staging-black01/devicedict/plain

Adding support for external hardware only needs a small amount extra:
https://git.linaro.org/lava/lava-lab.git/tree/staging.validation.linaro.org/master-configs/staging-master.lavalab/lava-server/dispatcher-config/devices/staging-hi6220-hikey-05.jinja2
https://staging.validation.linaro.org/scheduler/device/staging-hi6220-hikey-03/devicedict/plain

The constraints on the commands are:
0: must always exit zero on success
1: must always exit non-zero on any failure
2: must not require pipes or redirects in the command line (create a
wrapper script instead)
3: must be available to be called on the worker.
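
For illustration, a wrapper satisfying those constraints could be as
small as the sketch below; the "relayctl" tool it calls is invented
here, so substitute whatever actually drives the switching hardware in
a given lab.

#!/usr/bin/env python3
# Hypothetical power-control wrapper: exits 0 on success, non-zero on
# any failure, and keeps redirects inside the script so the command
# line called on the worker stays a plain argument list. "relayctl" is
# an invented stand-in for the real lab tooling.
import argparse
import subprocess
import sys


def main():
    parser = argparse.ArgumentParser(description='Switch one PDU port')
    parser.add_argument('--hostname', required=True)
    parser.add_argument('--port', type=int, required=True)
    parser.add_argument('--command', choices=['on', 'off', 'reboot'],
                        required=True)
    args = parser.parse_args()
    try:
        # any output redirection stays inside the wrapper
        subprocess.run(
            ['relayctl', args.hostname, str(args.port), args.command],
            check=True,
            stdout=subprocess.DEVNULL,
        )
    except (OSError, subprocess.CalledProcessError) as exc:
        print('power control failed: %s' % exc, file=sys.stderr)
        return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())

The device template then only needs to reference something like
'/usr/local/lab-scripts/power_wrapper --hostname pdu15 --command reboot
--port 15', in the same way as the snmp_pdu_control lines above.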

Templating can solve many of the problems arising from the different
levels of abstraction.

Linaro QA teams use templating to create the test job submissions as well.
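
As a rough sketch (the job fields and URLs below are invented, not
taken from the Linaro QA templates), the same render step can stamp out
a submission for each new build:

# Sketch only: field names mimic a LAVA job definition, but the
# template and values are invented for illustration.
from jinja2 import Template

job_template = Template(
    'device_type: {{ device_type }}\n'
    'job_name: {{ build_id }} boot test\n'
    'actions:\n'
    '- deploy:\n'
    '    to: tftp\n'
    '    kernel:\n'
    '      url: {{ kernel_url }}\n'
)

print(job_template.render(
    device_type='beaglebone-black',
    build_id='build-1234',
    kernel_url='https://example.com/artifacts/zImage',
))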


>
> One thing I've observed is that there are substantial differences
> in the level of abstraction and the board management architecture
> for these different systems.  I previously had hoped we could do a simple
> survey of the different systems, and come up with a set of verbs
> that everyone could use - making it possible for, say, tests written
> in LAVA to run in a farm that used labgrid or r4d.  I would certainly like
> to easily run Fuego tests in LAVA-based farms.  Actually, I'd like to make
> Fuego independent of the board farm management software, so the
> user can choose.
>
> However, it has turned out to not be that simple.  Different systems
> impose different hardware requirements, or make assumptions about
> the nature of system deployment, that seem incompatible with
> other systems.  Or, at least that's my understanding.
>
> I'm not sure where to start, but maybe it would be good to have a
> discussion about the basic requirements and operation of each
> system, to see where the compatibilities and incompatibilities are.
> (Or, maybe start with the requirements that the upper-level
> software or user have, that are using the different systems).
>
> Would it be worth putting together some kind of "board farm summit"
> at the next plumbers?  I would suggest ELC, but it's already programmed,
> and coming up soon, and Linaro has Connect going on the same month, so
> I expect travel would be an issue.
>
> Let me know what you think.
>  -- Tim
>
>
> --
> _______________________________________________
> automated-testing mailing list
> automated-testing at yoctoproject.org
> https://lists.yoctoproject.org/listinfo/automated-testing
>



-- 

Neil Williams
=============
neil.williams at linaro.org
http://www.linux.codehelp.co.uk/

