[Automated-testing] Automated Testing Summit - Test Stack survey

Tim.Bird at sony.com
Tue Oct 2 12:29:22 PDT 2018


Well, I don't want to miss my own deadline :-), so ...

Here is the survey response for Fuego.

> [[File:<see attachment>|high level CI loop]]
> --------
> 
> If you have an element that is not featured in this diagram, please let us
> know.

There is a box missing for Fuego, which would be "Test runner".
Fuego tests do not reside on the DUT, but are executed on a separate
machine (in our case, the test host, which encapsulates almost everything
in the diagram that is not on the DUT).

A test in Fuego consists of statements that are executed off-DUT.
These statements almost always include commands to execute things
on the DUT, but they always include at least some operations that
are performed off the DUT as well.

For example, log parsing is always done off-DUT.  Dependency checking is done
off-DUT, but it includes commands sent to the DUT to gather the required information.
Lab environment setup is also done off-DUT, and so on.
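
To make this concrete, here is a rough sketch of a trivial test script
(illustrative only; the function and helper names follow the Fuego test
script API, but this is not a complete, real test):

    # fuego_test.sh - executes on the test host, not on the DUT
    function test_deploy {
        put my_test_script.sh $BOARD_TESTDIR/fuego.$TESTDIR/   # copy to the DUT
    }
    function test_run {
        # run the script on the DUT and capture its output in the test log
        report "cd $BOARD_TESTDIR/fuego.$TESTDIR; sh my_test_script.sh"
    }
    function test_processing {
        # parse the retrieved log, off-DUT
        log_compare "$TESTDIR" "1" "SUCCESS" "p"
    }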

> == Survey Questions ==
> * What is the name of your test framework? Fuego
>
> Does your test framework:
> ==== source code access ====
> * access source code repositories for the software under test? not usually. We have a kernel_build test that builds the kernel for the DUT when we are doing kernel testing.  For most tests, the kernel build and distro build are outside the scope of Fuego.
> * access source code repositories for the test software? yes
> * include the source for the test software? yes
> * provide interfaces for developers to perform code reviews? no
> * detect that the software under test has a new version? Yes, using Jenkins.  Fuego does not specifically include pre-configured trigger mechanisms for the kernel or SUT.  Instead, users can use a Jenkins SCM module to set up a trigger to start a test.
> ** if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.) - depends on Jenkins module used
> * detect that the test software has a new version? yes
> 
> ==== test definitions ====
> Does your test system:
> * have a test definition repository? yes
> ** if so, what data format or language is used (e.g. yaml, json, shell script)
Shell script and yaml; some data is also in json.  A test's log parser is written
in python, but many tests use a default log parser already defined in the system.

> Does your test definition include:
> * source code (or source code location)? yes
> * dependency information? yes
> * execution instructions? yes
> * command line variants? yes
> * environment variants? yes
> * setup instructions? yes
> * cleanup instructions? yes
> ** if anything else, please describe:
pass criteria, benchmark reference units (these are stored separately from the parsed results and pass criteria),
log parser, chart configuration file (visualization configuration information), per-testcase documentation
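
As a rough sketch, the materials for a single test usually look something like this
(illustrative; not every test has all of these files):

    tests/Functional.mytest/
        fuego_test.sh      - the test script (pre_check/build/deploy/run/processing phases)
        spec.json          - command line and environment variants ("specs")
        criteria.json      - pass criteria (ignored results, counts, thresholds)
        reference.json     - benchmark reference units
        parser.py          - log parser (optional; many tests use the default parser)
        chart_config.json  - visualization configuration
        (plus per-testcase documentation, and usually a tarball of the test program source)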

> 
> Does your test system:
> * provide a set of existing tests? yes
> ** if so, how many? about 140, including 15 self-tests
> 
> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)? not usually.  However, it can be configured to do so.
> * build the test software? yes
> * build other software (such as the distro, libraries, firmware)? not usually
> * support cross-compilation? yes
> * require a toolchain or build system for the SUT? yes, if SUT-building is done.
> * require a toolchain or build system for the test software? yes
> * come with pre-built toolchains? no, but it includes tools to install pre-built toolchains
> * store the build artifacts for generated software? yes
> ** in what format is the build metadata stored (e.g. json)? as raw images and files
We have a tool to construct bundles from generated software.
> ** are the build artifacts stored as raw files or in a database? raw files
> *** if a database, what database? not applicable
> 
> ==== Test scheduling/management ====
> Does your test system:
> * check that dependencies are met before a test is run? yes
> * schedule the test for the DUT? yes, DUT concurrency is managed by Jenkins
> ** select an appropriate individual DUT based on SUT or test attributes? no
> ** reserve the DUT? no
> ** release the DUT? no
> * install the software under test to the DUT? not usually
> * install required packages before a test is run? no.  But a user can add instructions to their test to do so.
> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.) no
> * deploy the test program to the DUT? yes
> * prepare the test environment on the DUT? yes
> * start a monitor (another process to collect data) on the DUT? no
> * start a monitor on external equipment? no
> * initiate the test on the DUT? yes
> * clean up the test environment on the DUT? yes
> 
> ==== DUT control ====
> Does your test system:
> * store board configuration data? yes
> ** in what format? json
> * store external equipment configuration data? no
> ** in what format? n/a
> * power cycle the DUT? yes - via a software reboot, or, for a hardware reboot, via an external, user-supplied command
> * monitor the power usage during a run? no
> * gather a kernel trace during a run? no
> * claim other hardware resources or machines (other than the DUT) for use
> during a test? no
> * reserve a board for interactive use (ie remove it from automated testing)? no
> * provide a web-based control interface for the lab? no
> * provide a CLI control interface for the lab? no
> 
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts yes
> ** in what format? json, and raw log files
> * put the run meta-data in a database? no
> ** if so, which database? n/a
> * parse the test logs for results? yes
> * convert data from test logs into a unified format? yes
> ** if so, what is the format? json (based heavily on kernelci run data format)
> * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)? yes
> * do you have a common set of result names: (e.g. pass, fail, skip, etc.) yes
> ** if so, what are they? PASS, FAIL, ERROR, SKIP
> 
> * How is run data collected from the DUT? by file retrieval from the DUT, initiated by the test runner (Fuego host)
> ** e.g. by pushing from the DUT, or pulling from a server?
> * How is run data collected from external equipment? If the user specifies it, the test runner (Fuego host) initiates file retrieval
> * Is external equipment data parsed? no
> 
> ==== User interface ====
> Does your test system:
> * have a visualization system? yes - Jenkins and flot
> * show build artifacts to users? no
> * show run artifacts to users? yes
> * do you have a common set of result colors? kind-of
> ** if so, what are they? Most sites run the greenballs Jenkins plugin, which provides 3 colors: green=pass, red=fail, and grey=not yet run or test error.
> * generate reports for test runs? yes
> * notify users of test results by e-mail? no, but it's available using a Jenkins plugin
> 
> * can you query (aggregate and filter) the build meta-data? no
> * can you query (aggregate and filter) the run meta-data? yes
> 
> * what language or data format is used for online results presentation? (e.g.
> HTML, Javascript, xml, etc.) HTML and Javascript
> * what language or data format is used for reports? (e.g. PDF, excel, etc.) text, PDF, CSV, excel, rst
> 
> * does your test system have a CLI control tool? yes
> ** what is it called? ftc
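
For example (these sub-commands are illustrative of common usage; ftc's
built-in help lists the full set):

    ftc list-boards       # show the boards configured in this lab
    ftc list-tests        # show the tests installed in the system
    ftc run-test -b myboard -t Functional.hello_world   # run a test manually
    ftc gen-report        # generate a report from stored run data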
> 
> ==== Languages: ====
> Examples: json, python, yaml, C, javascript, etc.
> * what is the base language of your test framework core? bash shell script and python
> 
> What languages or data formats is the user required to learn?
> (as opposed to those used internally)
POSIX shell script, python
> 
> ==== Can a user do the following with your test framework: ====
> * manually request that a test be executed (independent of a CI trigger)? yes
> * see the results of recent tests? yes
> * set the pass criteria for a test? yes
> ** set the threshold value for a benchmark test? yes
> ** set the list of testcase results to ignore? yes
> * provide a rating for a test? (e.g. give it 4 stars out of 5) no
> * customize a test? yes
> ** alter the command line for the test program? yes
> ** alter the environment of the test program? yes
> ** specify to skip a testcase? for LTP, yes, but for most tests, no.
> ** set a new expected value for a test? no
> ** edit the test program source? You can provide a patch for test program source, that Fuego will incorporate into its build of the test program.
> * customize the notification criteria? no
> ** customize the notification mechanism (eg. e-mail, text) no
> * generate a custom report for a set of runs? yes
> * save the report parameters to generate the same report in the future? no (well, reports are generated using a command-line tool, so scripting can be used to generate the same reports again)
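
For example, a small wrapper script (hypothetical) is enough to reproduce the same report later:

    #!/bin/sh
    # weekly_report.sh - re-run the same report command on demand
    # (substitute whatever 'ftc gen-report' options you normally use)
    ftc gen-report "$@" > weekly_report.txt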
> 
> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT? yes - posix shell, and a few command-line utilities
> * require minimum hardware on the DUT (e.g. memory) - no
> ** If so, what? (e.g. POSIX shell or some other interpreter, specific libraries,
> command line tools, etc.)
POSIX shell, cat, df, find, grep, free, head, mkdir, mount, ps, rm, rmdir, sync, tee, touch, true, umount, uname, uptime, xargs
The /proc filesystem is also required.
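
A test can check for anything extra it needs in its pre_check phase; for example
(illustrative):

    function test_pre_check {
        assert_has_program bc    # fail early if 'bc' is not present on the DUT
    }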

> * require agent software on the DUT? (e.g. extra software besides
> production software) no. Almost all labs use sshd, but a lab can use a pure serial console, with no agent.
> ** If so, what agent? n/a
> * is there optional agent software or libraries for the DUT? Only a very small shell script helper library.  It is not used by many tests.
> * require external hardware in your labs? no
> 
> ==== APIS ====
> Does your test framework:
> * use existing APIs or data formats to interact within itself, or with 3rd-party
> modules? yes - we can publish our results to kernelci or squad.
> * have a published API for any of its sub-module interactions (any of the lines
> in the diagram)? 
> ** Please provide a link or links to the APIs?
> 
> Sorry - this is kind of open-ended...
> * What is the nature of the APIs you currently use?  Unix-style, for test execution.
shell script for test execution, with a library of functions available (see http://fuegotest.org/wiki/Test_Script_APIs)
python module for the parser system (see http://fuegotest.org/wiki/Parser_module_API)
For remote execution of jobs in other labs, we use web-based APIs that are basically REST-style RPCs.
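
A few of the shell API functions a test typically uses (see the Test_Script_APIs
page above for the full list):

    cmd "<command>"      # execute a command on the DUT
    put <src> <dest>     # copy a file to the DUT
    get <src> <dest>     # copy a file from the DUT
    report "<command>"   # execute a command on the DUT, capturing its output in the test log
    log_compare ...      # check the test log for the expected number of matching results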

> Are they:
> ** RPCs?
> ** Unix-style? (command line invocation, while grabbing sub-tool output) 
> ** compiled libraries?
> ** interpreter modules or libraries?
> ** web-based APIs?
> ** something else?
> 
> ==== Relationship to other software: ====
> * what major components does your test framework use (e.g. Jenkins,
> MongoDB, Squad, Lava, etc.) Jenkins
> * does your test framework interoperate with other test frameworks or
> software? yes  
> ** which ones? Squad, kernelci, LAVA
> 
> == Overview ==
> Please list the major components of your test system.
> 
> Please list your major components here:
Fuego can be divided into 3 main parts, with somewhat overlapping roles:
* Jenkins - job triggers, test scheduling, visualization, notification
* Core - test management (test build, test deploy, test execution, log retrieval)
* Parser - log conversion to unified format, artifact storage, results analysis
There are lots of details omitted, but you get the idea.
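
In terms of flow, a single run looks roughly like this (simplified; the Jenkins
job typically just invokes ftc):

    Jenkins job (trigger, scheduling)
      -> core: ftc run-test -b <board> -t <test>
           (build, deploy, execute, retrieve logs - via the fuego_test.sh phases)
      -> parser: convert log to unified format, check pass criteria, store run artifacts
      -> Jenkins: visualization (flot charts) and notification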
> 
> *
> 
> == Glossary ==
> Here is a glossary of terms.  Please indicate if your system uses different
> terms for these concepts.
> Also, please suggest any terms or concepts that are missing.
> 
> * Deploy - put the test program or SUT on the DUT
> ** Fuego uses this term for test program installation, and "provision" for SUT installation.

Test runner - a machine or process that executes the instructions for a test

> * Variant - arguments or data that affect the execution and output of a test
>  Fuego calls this a 'spec'

= Architecture =
Here is the Fuego architecture description and diagram: http://fuegotest.org/wiki/Architecture



