[Automated-testing] Glossary words unfamiliar to enterprise testers

Cyril Hrubis chrubis at suse.cz
Thu Nov 1 07:19:28 PDT 2018


Hi!
> In the recent summit, you mentioned that some of the terms in the glossary
> that I sent with the survey were unfamiliar to you, and that you had to 
> map them onto aspects of LTP that had different names (IIRC).
> 
> This may not be easy to do now that you know them, but could
> you put a hash sign (#) before the terms that were originally unfamiliar
> to you, from the list below?  The reason for this request is that I would like to
> identify the items that may not be obvious to non-embedded testers, and
> either select a new term, or mark them, or put more information in the
> description for these terms.

Here you go. I've tried to explain why some of the terms were unfamiliar,
or why they do not even make sense in our setting. Mostly that is
because we do not care about the actual hardware most of the time.

> There was a good discussion at the summit about the "device under test"
> terminology, and how different people interpreted this in different ways.
> 
> Thanks,
>  -- Tim
> 
> P.S. Anyone else on the list who would like to mark terms that were originally
> unfamiliar to them, can do so as well.  Just reply-all to this e-mail, and mark
> the unfamiliar or problematic items with a hash sign.
> 
> Here is the originally-posted glossary:
> 
> * Bisection - automatic testing of SUT variations to find the source of a problem
> * Boot - to start the DUT from an off state  (addition: point of time when a test can be started)
# * Build artifact - item created during build of the software under test

Basically we call artifacts assets, and we do not differentiate between
build assets, run assets and logs; everything is an asset, including the
virtual machine disk images.

# * Build manager (build server) - a machine that performs builds of the software under test

We do product testing, hence we do not build the product ourselves; we
are handed an installation ISO. Even for our kernel-of-the-day tests we
are handed an RPM kernel package.

> * Dependency - indicates a pre-requisite that must be filled in order for a test to run (e.g. must have root access, must have 100 meg of memory, some program must be installed, etc.)
# * Device under test (DUT) - the hardware or product being tested (consists of hardware under test and software under test) (also 'board', 'target')

We call this more or less the SUT, i.e. system under test, since we do
system-wide testing and we mostly do not care about the hardware.

# * Deploy - put the test program or SUT on the DUT
> ** this one is ambiguous - some people use this to refer to SUT installation, and others to test program installation

This is complicated to describe correctly, but basically we do not call
our OS installation "deployment", because the installation is itself a
test. So instead of deploying anything for the kernel tests, those tests
depend on the installation tests. We are handed a qemu disk image at the
end of the installation tests, which we boot, and then we install our
tests such as LTP. So maybe we could call the act of installing LTP
"deployment", but we do not call it that.

# * Device under Test (DUT) - a product, board or device that is being tested
# * DUT controller - program and hardware for controlling a DUT (reboot, provision, etc.)
# * DUT scheduler - program for managing access to a DUT (take online/offline, make available for interactive use)

This does not even make sense in our automated testing; most of the
time we do not care about hardware. We spawn virtual machines on demand.

> ** This is not shown in the CI Loop diagram - it could be the same as the Test Scheduler
# * Lab - a collection of resources for testing one or more DUTs (also 'board farm')

For us a Lab is a beefy server that can run a reasonable number of
virtual machines at one time.

> * Log - one of the run artifacts - output from the test program or test framework
> * Log Parsing - extracting information from a log into a machine-processable format (possibly into a common format)
> * Monitor - a program or process to watch some attribute (e.g. power) while the test is running
> ** This can be on or off the DUT.
> * Notification - communication based on results of test (triggered by results and including results)
> * Pass criteria - set of constraints indicating pass/fail conditions for a test
# * Provision (verb) - arrange the DUT and the lab environment (including other external hardware) for a test

Not sure if this makes sense in our setting. We certainly do not mess
with any hardware most of the time.

We do create virtual switches that connect our VMs on demand for
certain tests, though.

> ** This may include installing the SUT to the device under test and booting the DUT.
> * Report generation - collecting run data and putting it into a formatted output

# * Request (noun) - a request to execute a test

Not sure where this one actually goes; I suppose that we request a test
to be executed and this is handled by the test scheduler.
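For what it's worth, the request-goes-to-a-scheduler idea can be sketched roughly as below. This is a purely hypothetical toy (the names and the "pass" result are invented, and a real scheduler would spawn a VM and run the actual test); it only illustrates the Request vs. Run distinction from the glossary.

```python
import queue

# Toy test scheduler: Requests (test names) go into a queue and a
# worker loop pops them off and "executes" them one by one.
requests = queue.Queue()

def request_test(name):
    # A Request: ask for a test to be executed, nothing runs yet.
    requests.put(name)

def run_pending():
    # Each pop is a Run: one execution instance of a requested test.
    results = {}
    while not requests.empty():
        name = requests.get()
        results[name] = "pass"  # a real scheduler would boot a VM here
        requests.task_done()
    return results

request_test("install-dvd")
request_test("ltp-syscalls")
print(run_pending())
```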

> * Result - the status indicated by a test - pass/fail (or something else) for a Run
> * Results query - Selection and filtering of data from runs, to find patterns
> * Run (noun) - an execution instance of a test (in Jenkins, a build)
# * Run artifact - item created during a run of the test program
> * Serial console - the Linux console connected over a serial connection
> * Software under test (SUT) - the software being tested
# * Test agent - software running on the DUT that assists in test operations (e.g. test deployment, execution, log gathering, debugging
> ** One example would be 'adb', for Android-based systems)

We just type commands into the virtual serial console, so I guess we
can say that our test agent is bash.
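A minimal sketch of that idea, purely illustrative: here a local bash is spawned per command, whereas the real setup writes the commands over the VM's virtual serial console, but the principle is the same — send a command, collect its output and exit status.

```python
import subprocess

def run_via_agent(command):
    # Sketch: treat bash as the "test agent". On a real serial console
    # we would write the command to the console device and parse the
    # echoed output instead of spawning a local shell.
    proc = subprocess.run(
        ["bash", "-c", command],
        capture_output=True, text=True,
    )
    return proc.returncode, proc.stdout.strip()

status, out = run_via_agent("echo hello from the DUT")
print(status, out)  # 0 hello from the DUT
```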

> * Test definition - meta-data and software that comprise a particular test
> * Test program - a script or binary on the DUT that performs the test
> * Test scheduler - program for scheduling tests (selecting a DUT for a test, reserving it, releasing it)
> * Test software - source and/or binary that implements the test
# * Transport (noun) - the method of communicating and transferring data between the test system and the DUT

Again, we use the virtual serial console, but we do not call it a
transport.

> * Trigger (noun) - an event that causes the CI loop to start
# * Variant - arguments or data that affect the execution and output of a test (e.g. test program command line; Fuego calls this a 'spec')
> * Visualization - allowing the viewing of test artifacts, in aggregated form (e.g. multiple runs plotted in a single diagram)

-- 
Cyril Hrubis
chrubis at suse.cz
