[Automated-testing] Test Stack Survey

Matt Hart matthew.hart at linaro.org
Mon Oct 1 10:18:42 PDT 2018


Answers for LAVA,

Thanks

On Wed, 19 Sep 2018 at 07:39, <Tim.Bird at sony.com> wrote:
>
> The format of the survey is mediawiki markup, so * = bullet, ** = indented bullet.
>
> == Diagrams ==
> Attached is a diagram for the high level CI loop:
>
> The boxes represent different processes, hardware, or storage locations.  Lines between boxes indicate APIs or control flow,
> and are labeled with letters.  The intent of this is to facilitate discussion at the summit.
>
> [[File:<see attachment>|high level CI loop]]
> --------
>
> If you have an element that is not featured in this diagram, please let us know.
>
>
> == Survey Questions ==
> * What is the name of your test framework?

LAVA

>
> Which of the aspects below of the CI loop does your test framework perform?

Lab/Board Farm, Scheduler, DUT Control (Deploy, Provision, Test,
Collect Results)

>
> The answers can be: "yes", "no", or "provided by the user".
> Where the answer is not simply yes or no, an explanation is appreciated.
>
> For example, in Fuego, Jenkins is used for trigger detection (that is, to
> detect new SUT versions), but the user must install the Jenkins module and
> configure this themselves.  So Fuego supports triggers, but does not provide
> them pre-configured for the user.
>
> If the feature is provided by a named component in your system (or by an
> external module), please provide the name of that module.
>
> Does your test framework:
> ==== source code access ====
> * access source code repositories for the software under test?

No

> * access source code repositories for the test software?

Yes, all the common version control systems

> * include the source for the test software?

No

> * provide interfaces for developers to perform code reviews?

No

> * detect that the software under test has a new version?

No; most people would use Jenkins or similar

> ** if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.)
> * detect that the test software has a new version?
>
> ==== test definitions ====
> Does your test system:
> * have a test definition repository?

LAVA does not come with tests; however, Linaro does maintain a set of
job definitions

> ** if so, what data format or language is used (e.g. yaml, json, shell script)

YAML
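
For illustration, a minimal Lava-Test test definition is a small YAML file
whose run steps are shell commands; the names and commands below are just a
sketch, not one of the Linaro-maintained definitions:

    metadata:
      format: Lava-Test Test Definition 1.0
      name: smoke-tests
      description: "Example smoke tests"
    run:
      steps:
        - lava-test-case uname --shell uname -a
        - lava-test-case proc-mounted --shell test -d /proc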

>
> Does your test definition include:
> * source code (or source code location)?

Yes

> * dependency information?

Yes

> * execution instructions?

Yes

> * command line variants?

Yes

> * environment variants?

Yes

> * setup instructions?

Yes

> * cleanup instructions?

Yes

> ** if anything else, please describe:
>
> Does your test system:
> * provide a set of existing tests?

No

> ** if so, how many?
>
> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)?

No

> * build the test software?

Sometimes; many LAVA users build the test software on the device under
test before executing it

> * build other software (such as the distro, libraries, firmware)?

No

> * support cross-compilation?

No

> * require a toolchain or build system for the SUT?

Yes

> * require a toolchain or build system for the test software?

No; it can be built on the device, though pre-built test software is obviously faster

> * come with pre-built toolchains?

No

> * store the build artifacts for generated software?

No; however, pushing to external storage (e.g. Artifactorial) is supported

> ** in what format is the build metadata stored (e.g. json)?
> ** are the build artifacts stored as raw files or in a database?
> *** if a database, what database?
>
> ==== Test scheduling/management ====
> Does your test system:
> * check that dependencies are met before a test is run?

Yes

> * schedule the test for the DUT?

Yes

> ** select an appropriate individual DUT based on SUT or test attributes?

Yes

> ** reserve the DUT?

Yes

> ** release the DUT?

Yes

> * install the software under test to the DUT?

Yes

> * install required packages before a test is run?

Yes

> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.)

Yes

> * deploy the test program to the DUT?

Yes

> * prepare the test environment on the DUT?

Yes

> * start a monitor (another process to collect data) on the DUT?

No

> * start a monitor on external equipment?

No

> * initiate the test on the DUT?

Yes

> * clean up the test environment on the DUT?

Yes

>
> ==== DUT control ====
> Does your test system:
> * store board configuration data?

Yes

> ** in what format?

YAML, rendered from Jinja2
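
As a rough sketch, a per-board device dictionary is a short Jinja2 file that
extends a device-type template and fills in board-specific values; the
template name, hostnames, ports and commands below are made up:

    {% extends 'beaglebone-black.jinja2' %}
    {% set connection_command = 'telnet ser2net-host 7001' %}
    {% set power_on_command = 'pduclient --hostname pdu01 --port 4 --command on' %}
    {% set power_off_command = 'pduclient --hostname pdu01 --port 4 --command off' %}
    {% set hard_reset_command = 'pduclient --hostname pdu01 --port 4 --command reboot' %}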

> * store external equipment configuration data?

Yes

> ** in what format?

YAML, rendered from Jinja2

> * power cycle the DUT?

Yes

> * monitor the power usage during a run?

Possibly; there is basic support for ARM Energy Probes

> * gather a kernel trace during a run?

Yes

> * claim other hardware resources or machines (other than the DUT) for use during a test?

Yes, and it can claim other DUTs in a multi-node job

> * reserve a board for interactive use (ie remove it from automated testing)?

Not directly, but users can use LAVA to provision a device and then hand over SSH access

> * provide a web-based control interface for the lab?

Yes

> * provide a CLI control interface for the lab?

Yes

>
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts

No

> ** in what format?
> * put the run meta-data in a database?

Yes

> ** if so, which database?

The LAVA results database (PostgreSQL)

> * parse the test logs for results?

Yes, during the run

> * convert data from test logs into a unified format?

Yes

> ** if so, what is the format?

Results are stored in the database and can be fetched as YAML

> * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)?

Yes

> * do you have a common set of result names: (e.g. pass, fail, skip, etc.)
> ** if so, what are they?

Pass, Fail, Skip, Unknown

>
> * How is run data collected from the DUT?
> ** e.g. by pushing from the DUT, or pulling from a server?

Parsed from the DUT serial output on the fly
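
To sketch the mechanism: the test shell helpers print marker lines on the
serial console, which the dispatcher matches as the output streams past; a
result line looks roughly like:

    <LAVA_SIGNAL_TESTCASE TEST_CASE_ID=uname RESULT=pass>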

> * How is run data collected from external equipment?

External equipment is considered another DUT

> * Is external equipment data parsed?

Same as for any other DUT

>
> ==== User interface ====
> Does your test system:
> * have a visualization system?

Yes

> * show build artifacts to users?

No

> * show run artifacts to users?

Yes; if artifacts are pushed to external storage, a link can be recorded in the results

> * do you have a common set of result colors?

No

> ** if so, what are they?
> * generate reports for test runs?

No

> * notify users of test results by e-mail?

Only job status

>
> * can you query (aggregate and filter) the build meta-data?

No

> * can you query (aggregate and filter) the run meta-data?

Yes, if stored as a result

>
> * what language or data format is used for online results presentation? (e.g. HTML, Javascript, xml, etc.)

Javascript

> * what language or data format is used for reports? (e.g. PDF, excel, etc.)

N/A

>
> * does your test system have a CLI control tool?

Yes

> ** what is it called?

lavacli
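
Typical usage, assuming an identity has been configured first (the server
URL, token, job file and job ID below are placeholders):

    lavacli identities add --uri https://lava.example.com/RPC2 --username me --token <token> default
    lavacli jobs submit job.yaml
    lavacli jobs logs 12345
    lavacli devices list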

>
> ==== Languages: ====
> Examples: json, python, yaml, C, javascript, etc.
> * what is the base language of your test framework core?

Python

>
> What languages or data formats is the user required to learn?
> (as opposed to those used internally)

YAML for writing a job definition and device description
Shell script for writing a test definition
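
For example, a minimal QEMU job definition might look roughly like this
(image and repository URLs are placeholders):

    device_type: qemu
    job_name: example smoke run
    timeouts:
      job:
        minutes: 30
    priority: medium
    visibility: public
    actions:
    - deploy:
        to: tmpfs
        images:
          rootfs:
            image_arg: -drive format=raw,file={rootfs}
            url: https://example.com/images/rootfs.img.gz
            compression: gz
    - boot:
        method: qemu
        media: tmpfs
        prompts:
        - 'root@'
    - test:
        definitions:
        - repository: https://example.com/tests.git
          from: git
          path: smoke-tests.yaml
          name: smoke-tests

The test action points at a test definition like the one shown earlier, and
the shell steps inside it are what the user writes.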

>
> ==== Can a user do the following with your test framework: ====
> * manually request that a test be executed (independent of a CI trigger)?

Yes

> * see the results of recent tests?

Yes

> * set the pass criteria for a test?

Yes, would require editing the job

> ** set the threshold value for a benchmark test?

Yes, would require editing the job

> ** set the list of testcase results to ignore?

Yes, would require editing the job

> * provide a rating for a test? (e.g. give it 4 stars out of 5)

No

> * customize a test?

Yes, would require editing the job

> ** alter the command line for the test program?

Yes, would require editing the job

> ** alter the environment of the test program?

Yes, would require editing the job

> ** specify to skip a testcase?

Yes, would require editing the job

> ** set a new expected value for a test?

Yes, would require editing the job

> ** edit the test program source?

Yes, would require editing the job

> * customize the notification criteria?

Yes

> ** customize the notification mechanism (eg. e-mail, text)

Yes

> * generate a custom report for a set of runs?

Yes

> * save the report parameters to generate the same report in the future?

Yes

>
> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT?

Bootloader

> * require minimum hardware on the DUT (e.g. memory)

Serial port

> ** If so, what? (e.g. POSIX shell or some other interpreter, specific libraries, command line tools, etc.)

POSIX shell for most DUTs; however, IoT devices are supported without a shell

> * require agent software on the DUT? (e.g. extra software besides production software)

No

> ** If so, what agent?
> * is there optional agent software or libraries for the DUT?

No

> * require external hardware in your labs?

Power control is required for most DUT types

>
> ==== APIS ====
> Does your test framework:
> * use existing APIs or data formats to interact within itself, or with 3rd-party modules?

ZMQ

> * have a published API for any of its sub-module interactions (any of the lines in the diagram)?
> ** Please provide a link or links to the APIs?

No

>
> Sorry - this is kind of open-ended...
> * What is the nature of the APIs you currently use?
> Are they:
> ** RPCs?
> ** Unix-style? (command line invocation, while grabbing sub-tool output)
> ** compiled libraries?
> ** interpreter modules or libraries?
> ** web-based APIs?
> ** something else?

ZMQ is used between LAVA workers (dispatchers) and the master to schedule jobs.
XML-RPC is used by users to submit jobs, access results, and control the lab.
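
As a sketch of the user-facing side (the hostname, credentials and the
particular methods shown are illustrative only):

    # Python, using the standard library XML-RPC client
    import xmlrpc.client

    # Authenticate with a username and API token embedded in the URL
    server = xmlrpc.client.ServerProxy("https://myuser:mytoken@lava.example.com/RPC2")

    # Submit a job definition and check its status
    job_id = server.scheduler.submit_job(open("job.yaml").read())
    print(server.scheduler.job_status(job_id))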

>
> ==== Relationship to other software: ====
> * what major components does your test framework use (e.g. Jenkins, Mondo DB, Squad, Lava, etc.)

Jenkins, Squad

> * does your test framework interoperate with other test frameworks or software?
> ** which ones?

A common LAVA setup is Jenkins to create the builds, LAVA to execute
the tests, and Squad/KernelCI to consume the results

>
> == Overview ==
> Please list the major components of your test system.
>
> {{{Just as an example, Fuego can probably be divided into 3 main parts, with somewhat overlapping roles:
> * Jenkins - job triggers, test scheduling, visualization, notification
> * Core - test management (test build, test deploy, test execution, log retrieval)
> * Parser - log conversion to unified format, artifact storage, results analysis
> There are lots details omitted, but you get the idea.
> }}}
>
> Please list your major components here:
> *

* LAVA Server (DUT scheduler) - UI, results storage, device
configuration files, job scheduling, user interaction
* LAVA Dispatcher (DUT controller) - device interaction (deploy,
boot, execute tests), device power control, test result parsing

>
> == Glossary ==
> Here is a glossary of terms.  Please indicate if your system uses different terms for these concepts.
> Also, please suggest any terms or concepts that are missing.

PDU - Power Distribution Unit; the term has become standard for
automation power control.

>
> * Bisection - automatic testing of SUT variations to find the source of a problem
> * Boot - to start the DUT from an off state
> * Build artifact - item created during build of the software under test
> * Build manager (build server) - a machine that performs builds of the software under test
> * Dependency - indicates a pre-requisite that must be filled in order for a test to run (e.g. must have root access, must have 100 meg of memory, some program must be installed, etc.)
> * Device under test (DUT) - the hardware or product being tested (consists of hardware under test and software under test) (also 'board', 'target')
> * Deploy - put the test program or SUT on the DUT
> ** this one is ambiguous - some people use this to refer to SUT installation, and others to test installation
> * Device under Test (DUT) - a product, board or device that is being tested
> * DUT controller - program and hardware for controlling a DUT (reboot, provision, etc.)
> * DUT scheduler - program for managing access to a DUT (take online/offline, make available for interactive use)
> ** This is not shown in the CI Loop diagram - it could be the same as the Test Scheduler
> * Lab - a collection of resources for testing one or more DUTs (also 'board farm')
> * Log - one of the run artifacts - output from the test program or test framework
> * Log Parsing - extracting information from a log into a machine-processable format (possibly into a common format)
> * Monitor - a program or process to watch some attribute (e.g. power) while the test is running
> ** This can be on or off the DUT.
> * Notification - communication based on results of test (triggered by results and including results)
> * Pass criteria - set of constraints indicating pass/fail conditions for a test
> * Provision (verb) - arrange the DUT and the lab environment (including other external hardware) for a test
> ** This may include installing the SUT to the device under test and booting the DUT.
> * Report generation - generation of run data into a formatted output
> * Request (noun) - a request to execute a test
> * Result - the status indicated by a test - pass/fail (or something else) for a Run
> * Results query - Selection and filtering of data from runs, to find patterns
> * Run (noun) - an execution instance of a test (in Jenkins, a build)
> * Run artifact - item created during a run of the test program
> * Serial console - the Linux console connected over a serial connection
> * Software under test (SUT) - the software being tested
> * Test agent - software running on the DUT that assists in test operations (e.g. test deployment, execution, log gathering, debugging
> ** One example would be 'adb', for Android-based systems)
> * Test definition - meta-data and software that comprise a particular test
> * Test program - a script or binary on the DUT that performs the test
> * Test scheduler - program for scheduling tests (selecting a DUT for a test, reserving it, releasing it)
> * Test software - source and/or binary that implements the test
> * Transport (noun) - the method of communicating and transferring data between the test system and the DUT
> * Trigger (noun) - an event that causes the CI loop to start
> * Variant - arguments or data that affect the execution and output of a test (e.g. test program command line; Fuego calls this a 'spec')
> * Visualization - allowing the viewing of test artifacts, in aggregated form (e.g. multiple runs plotted in a single diagram)
>
> Thank you so much for your assistance in answering this survey!
>
> Regards,
>  -- Tim
>
> --
> _______________________________________________
> automated-testing mailing list
> automated-testing at yoctoproject.org
> https://lists.yoctoproject.org/listinfo/automated-testing

