[Automated-testing] board control verbs (was RE: Farming together - areas of collaboration)

Bird, Timothy Tim.Bird at sony.com
Thu Nov 16 10:48:21 PST 2017


> -----Original Message-----
> From: Jan Lübbe on Wednesday, November 15, 2017 1:37 AM
> On Tue, 2017-11-14 at 21:37 +0000, Bird, Timothy wrote:
> > > -----Original Message-----
> > > From: Kieran Bingham on Saturday, November 11, 2017 7:37 AM
> [...]
> > > My design goals here are that an individual should be able to access targets
> > > with name based resolution such as:
> > >
> > >   lab beaglebone serial                  # Load up a shared serial console over tmux/screen
> > >   lab beaglebone on
> > >   lab beaglebone reboot
> > >   lab beaglebone upload Image board.dtb  # Upload boot files using rsync/scp
> > >
> >
> > Just by way of comparison, ttc supports similar things:
> > Assuming a board named 'bbb' for "beagleboneblack".  I use commands like the following
> > on a daily basis:
> >  $ ttc bbb console - get serial console
> >  $ ttc bbb login - get network login (usually ssh these days)
> >  $ ttc bbb reboot - reboot board
> >  $ ttc bbb kinstall - install kernel
> >  $ ttc bbb fsinstall - install file system (or image, as appropriate)
> >  $ ttc run <command> - execute command on board
> >
> > See https://elinux.org/Ttc_Program_Usage_Guide for a list of verbs.
> > (Again - I'm not promoting ttc, just providing data for comparison with other systems).
> > Having tried to push ttc in the past as an industry standard, and gotten nowhere,
> > I am perhaps more sensitive to the issues of how hard it is to build an ecosystem
> > around this stuff.
> 
> One difficulty with these verbs is that they apply to different levels:
> - Simple HW (console, reboot)
> - USB protocols (fastboot, mass storage, SD, bootloader upload)
> - Bootloader control (kernel install/upload, variables, boot device
> selection)
> - Linux shell control (login, running tests, ...)
> - Board state control (switch from linux back to bootloader, boot to a
> different redundant image, ...)

I've gotten so accustomed to my current verb set (having used
them for over a decade) that I'm not sure I follow parts of this.
The verbs match my mental model of what I, as a human, want to
accomplish as a sequence of steps when interacting with the board.

I do see your point that I have verbs about building software, 
verbs about installing software, and verbs about low-level board
control and access all in one set.  There's no layering here at all.

But I'm not sure that I care whether the verbs match some notion of
physical connector or software layer.  Abstracting those differences is,
quite frankly, what I have this software for.

Maybe I could refactor the operations I do into another set of verbs,
but I would want to see whether that would end up increasing the overhead
for my operations.  (I don't mind a one-time switching cost, but I don't
want to be typing 10 commands where I used to type 4.)  This is not
so much an issue for automation, where a few extra cycles for extra
command invocations are not a big deal.
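
Purely as a hypothetical illustration of the overhead concern (the layered
sub-verbs below are invented for the sake of argument, not anybody's actual
interface):

  # today, one task-level verb:
  $ ttc bbb kinstall

  # a hypothetical strictly-layered equivalent:
  $ lab bbb bootloader enter
  $ lab bbb upload Image board.dtb
  $ lab bbb bootloader select-boot-device sdcard
  $ lab bbb reboot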

I think this is where a survey of existing operations, verbs, and their
use cases would be good.  I'd like to see how other people organize
their "layers", and why they do it that way.

> 
> The Bootloader/Linux levels need a lot more knowledge about how each
> target works.
I don't understand this.  I've worked on the Linux kernel for quite a number
of years, and the abstractions provided by ttc have been adequate for my
use cases.  And I've had boards in my farms over the years with a much
wider variety of bootloaders than are prevalent today.

But I haven't worked on bootloaders, so maybe our use cases are different?

> For our use-cases, this is often highly project
> specific. So it needs to be relatively simple to customize the
> behavior, while still making the common cases easy.
I agree with this.

> 
> > Maybe I should try to convert my lab over to labgrid, and see if it handles
> > the things I want to do (and if not send some patches upstream).
> > For some reason, I could never muster the energy to do this with LAVA,
> > but maybe converting to labgrid wouldn't involve as much perceived pain.
> >
> > Is anyone on this list using labgrid (or especially if they've converted over
> > from something else), and can tell me how painful it was?
> 
> I suspect there aren't many people who have tried it already. Also the
> docs don't really have a step-by-step guide on how to set up the remote
> infrastructure. There are probably also some traps during setup which
> are not obvious to me. ;)
Well, ttc doesn't really have remote infrastructure, other than ssh-ing
to a host that the board is connected to (and that the control hardware
is also accessible from).

The primary use case that inspired ttc was that, as satellite workers in the U.S.,
our team did not have physical access to hardware.  There were a variety
of reasons: Sony didn't want to expose the pre-release hardware by shipping
it overseas, sometimes there were only 2 or 3 prototype boards in existence
and they had to be shared with dev. teams in Japan, etc.

In any event, the model that ttc uses is that (as implied above) I can SSH
to a remote host, where the board is physically connected.  DUT control
hardware (things like power switches, network connections, button
control relays, etc.) is either also physically connected to that host, or
is accessible from it (via the local network at the host site).  We would
reserve a board in Japan, operate on it from the command line, and release
it when we were done.

Tests that we wrote executed the same commands a human
would, and thus were written to execute on that host machine in Japan.
That is, we would write a test script in the U.S., put it on the host machine
in Japan, and execute it there.
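
A stripped-down sketch of that kind of driving script (illustrative only,
not a real test from our farm; the exact ttc invocations, in particular
whether 'run' takes the target name, may need adjusting):

  #!/bin/sh
  # runs on the host the board is attached to; 'bbb' is the ttc target name
  ttc bbb reboot
  sleep 60                        # crude wait for the board to come back up
  ttc bbb run "uname -r"          # execute a command on the board
  ttc bbb run "dmesg | tail -20"  # and collect some output for the test log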

We had a similar setup in the U.S.  Our board farm had a dedicated server
that connected to all our boards, and developers from Japan could access
our boards using the same methods.

Fuego follows this model of having a "driving" script that executes
on a host, with some kind of transport/control connection to the physical
board.  In Fuego's case, the host is by default expected to be your local machine,
which makes it more complicated for the board farm case, if the farm is remote,
but we're working on that.

> 
> Tim, if you could describe your test setup (which boards, how they are
> connected/powered/…, where they are connected to, what your first use-
> case is), I'd write that guide for your case. Later I could generalize
> it and move it to the docs as a guide.

OK - my current lab is a shadow of its former self.  Over the years I've
had Android devices (requiring fastboot/adb), boards set up with
tftp/nfs root (my preference for early development), boards with
other kinds of bootloaders, and custom hardware and relays for
button control.

But here is my current setup:
== beaglebone black
 * power provided by USB, mediated by Sony Debug Board
 * network via an Ethernet hub, and also via USB networking to the host
 * USB connection to host (providing network and a mass storage device from
   the beaglebone to the host, and power from the host)
 * serial console mediated by Sony Debug Board
https://elinux.org/Sony_Debug_Assist_board

The Sony Debug Board controls the USB connection (it can toggle it on and off),
which in turn controls the power, and it converts the serial UART to USB serial.
The Sony Debug Board is connected to the host by its own USB cable,
and by another cable which acts as the pass-through USB from the beaglebone
to the host.

 * The firmware is U-Boot.
 * firmware, kernel and root filesystem are on an SD card.

Control of the Sony Debug Board is via another interface over the SDB control
USB cable, using character sequences to issue commands, and reading
the interface as a character device to examine status (power draw, USB
connection status, button connection status, etc.).
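
In shell terms this amounts to something like the following (the device node
and the command characters are made up for illustration; the real sequences
live in the SDB documentation and in my ttc config mini-scripts):

  # hypothetical device node and command characters -- illustration only
  SDB=/dev/ttyUSB1
  printf 'P0' > $SDB       # a sequence that switches the target USB/power off
  printf 'P1' > $SDB       # and one that switches it back on
  cat $SDB                 # read back status (power draw, USB/button state)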

== minnowboard 
 * power control via digital loggers Web Power Switch
https://www.digital-loggers.com/lpc.html
 * network via local Ethernet hub
 * USB connection - not connected to anything
 * serial console - not used.  If it were used, it would be via a dedicated FTDI serial-to-USB cable.

 * The firmware is some Intel UEFI thing, with GRUB as the bootloader.
 * GRUB, kernel and root filesystem are on an SD card.

I have my own custom Python program, called powerswitch-set, for managing
power ports on the web power switch.  To configure which board is
connected to which port, I add the board name to the powerswitch-set command,
and also modify the ttc configuration which drives that command.
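
Roughly, the control path looks like this (the powerswitch-set argument form
is just illustrative, and the switch address, credentials and outlet number
are placeholders; the exact CGI syntax depends on the switch firmware):

  # hypothetical powerswitch-set usage:
  powerswitch-set minnowboard off
  powerswitch-set minnowboard on

  # or, going straight at the switch's HTTP interface:
  curl -s "http://admin:password@192.168.1.100/outlet?3=OFF"
  sleep 2
  curl -s "http://admin:password@192.168.1.100/outlet?3=ON"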

== renesas RCar
 * power via digital loggers Web Power Switch
 * network via an Ethernet hub
 * USB not connected to host
 * serial console via on-board UART to USB-serial converter, then to host via USB
 * button control via Sony Debug Board (USB with character-based SDB control interface)

 * The firmware is U-Boot, on flash (I think)
 * kernel loaded via tftp
 * root filesystem via NFS (nfsroot)
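
The host side of the tftp/nfsroot arrangement is roughly the following
(paths, export options and the bootargs line are examples, not my exact
configuration):

  # example paths only -- adjust to the lab's tftp/NFS layout (run as root)
  cp arch/arm64/boot/Image /tftpboot/rcar/
  cp arch/arm64/boot/dts/renesas/board.dtb /tftpboot/rcar/

  # root filesystem exported over NFS, via a line like this in /etc/exports:
  #   /export/rcar-rootfs  *(rw,no_root_squash,no_subtree_check)
  exportfs -ra

  # the kernel command line then points root at the export, e.g.:
  #   root=/dev/nfs nfsroot=<server>:/export/rcar-rootfs ip=dhcp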

== raspberry pi 3
 * power provided by USB, mediated by Sony Debug Board 
 * network via onboard wifi
 * USB not connected to host (used for peripherals)
 * serial console through Sony Debug Board
 * The firmware is U-Boot.
 * firmware, kernel and root filesystem are on an SD card.

board control software:
 - ttc
     - (with mini-scripts built into the ttc config for performing Sony Debug Board operations)
 - powerswitch-set, powerswitch-cycle
 - minicom
 - ssh, ssh_exec (a custom ssh wrapper), scp

I used to use the following:
 - adb
 - fastboot
 - telnet_exec
 - switch-target-fs (custom program to swap nfsrootfs for different users)
 - wrclient.py (custom python program to control a web relay)

I suppose I should capture this on an elinux page, and be the first
to subject myself to a board farm "survey".
 -- Tim



