[Automated-testing] Automated Testing Summit - Test Stack survey - 0-Day CI

Li, Philip philip.li at intel.com
Wed Oct 3 20:31:49 PDT 2018


== Survey Questions ==
* What is the name of your test framework?
"0-day CI"

Which of the aspects below of the CI loop does your test framework perform?

The answers can be: "yes", "no", or "provided by the user".
Where the answer is not simply yes or no, an explanation is appreciated.

For example, in Fuego, Jenkins is used for trigger detection (that is, to
detect new SUT versions), but the user must install the Jenkins module and
configure it themselves.  So Fuego supports triggers, but does not provide
them pre-configured for the user.

If the feature is provided by a named component in your system (or by an
external module), please provide the name of that module.

Does your test framework:
==== source code access ====
* access source code repositories for the software under test?
"yes"
* access source code repositories for the test software?
"yes"
* include the source for the test software?
"yes"
* provide interfaces for developers to perform code reviews?
"no"
* detect that the software under test has a new version?
** if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.)
"yes, by regularly polling the repositories and scanning mailing lists"
* detect that the test software has a new version?
"yes, the test software is packaged regularly"

==== test definitions ====
Does your test system:
* have a test definition repository?
** if so, what data format or language is used (e.g. yaml, json, shell script)
"yes, yaml and shell"
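
For reference, a job definition in this style looks roughly like the following yaml (loosely modeled on the public lkp-tests job files; the exact fields and values here are illustrative):
{{{
suite: hackbench
category: benchmark

# parameter matrix: each combination expands to one atomic job
nr_threads:
- 50%
- 100%
mode: process
ipc: pipe
}}}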

Does your test definition include:
* source code (or source code location)?
"yes"
* dependency information?
"yes"
* execution instructions?
"yes"
* command line variants?
"yes"
* environment variants?
"yes"
* setup instructions?
"yes"
* cleanup instructions?
"no"
** if anything else, please describe:

Does your test system:
* provide a set of existing tests?
** if so, how many?
"yes, 70+"

==== build management ====
Does your test system:
* build the software under test (e.g. the kernel)?
"yes"
* build the test software?
"yes"
* build other software (such as the distro, libraries, firmware)?
"yes, for certain distro we use makepkg to build lib if not existed"
* support cross-compilation?
"yes"
* require a toolchain or build system for the SUT?
"yes"
* require a toolchain or build system for the test software?
"yes"
* come with pre-built toolchains?
"yes"
* store the build artifacts for generated software?
"yes"
** in what format is the build metadata stored (e.g. json)?
"yaml"
** are the build artifacts stored as raw files or in a database?
"raw file"
*** if a database, what database?
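
As a hypothetical illustration of the yaml build metadata (the real 0-day schema is internal; every field name below is an assumption):
{{{
# Hypothetical build record; field names are illustrative.
arch: x86_64
commit: 0f1e2d3c4b5a            # SUT commit that was built
kconfig: x86_64-rhel
compiler: gcc-7
kernel: /kbuild/x86_64/vmlinuz  # path to the stored raw artifact
}}}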

==== Test scheduling/management ====
Does your test system:
* check that dependencies are met before a test is run?
"It checks kernel kconfig dependency now"
* schedule the test for the DUT?
** select an appropriate individual DUT based on SUT or test attributes?
"yes"
** reserve the DUT?
"yes"
** release the DUT?
"yes"
* install the software under test to the DUT?
"yes"
* install required packages before a test is run?
"yes"
* require a particular bootloader on the DUT? (e.g. grub, uboot, etc.)
"no"
* deploy the test program to the DUT?
"yes"
* prepare the test environment on the DUT?
"yes"
* start a monitor (another process to collect data) on the DUT?
"yes"
* start a monitor on external equipment?
"yes, like pmeter"
* initiate the test on the DUT?
"yes"
* clean up the test environment on the DUT?
"no, the environment (tmp, overlay) will be cleaned up during reboot/kexec to next test"

==== DUT control ====
Does your test system:
* store board configuration data?
"yes"
** in what format?
"yaml"
* store external equipment configuration data?
"yes"
** in what format?
"yaml"
* power cycle the DUT?
"yes"
* monitor the power usage during a run?
"yes"
* gather a kernel trace during a run?
"yes"
* claim other hardware resources or machines (other than the DUT) for use during a test?
"yes"
* reserve a board for interactive use (ie remove it from automated testing)?
"yes"
* provide a web-based control interface for the lab?
"not yet, the web UI is just started this year but focusing on test status query firstly"
* provide a CLI control interface for the lab?
"yes"

==== Run artifact handling ====
Does your test system:
* store run artifacts?
"yes"
** in what format?
"raw file"
* put the run meta-data in a database?
"no"
** if so, which database?
* parse the test logs for results?
"yes"
* convert data from test logs into a unified format?
"yes"
** if so, what is the format?
"json"
* evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)?
"yes"
* do you have a common set of result names (e.g. pass, fail, skip, etc.)?
"no, we use the result names of each integrated test suite"
** if so, what are they?
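
The unified json format mentioned above might look like the following flattened key/value map (keys are hypothetical, loosely modeled on lkp-tests stats output, with one array entry per run):
{{{
{
  "hackbench.throughput": [141521],
  "hackbench.time.elapsed_time": [32.4],
  "dmesg.warnings": [0]
}
}}}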

* How is run data collected from the DUT?
** e.g. by pushing from the DUT, or pulling from a server?
* How is run data collected from external equipment?
* Is external equipment data parsed?

==== User interface ====
Does your test system:
* have a visualization system?
"no"
* show build artifacts to users?
"yes"
* show run artifacts to users?
"yes"
* do you have a common set of result colors?
"no"
** if so, what are they?
* generate reports for test runs?
* notify users of test results by e-mail?
"yes, but only for kernel build status or regression report"
* can you query (aggregate and filter) the build meta-data?
"yes"
* can you query (aggregate and filter) the run meta-data?
"yes"
* what language or data format is used for online results presentation? (e.g. HTML, Javascript, xml, etc.)
"N/A"
* what language or data format is used for reports? (e.g. PDF, excel, etc.)
"N/A"
* does your test system have a CLI control tool?
** what is it called?
"lkp"
==== Languages: ====
Examples: json, python, yaml, C, javascript, etc.
* what is the base language of your test framework core?
"shell, ruby"
* what languages or data formats is the user required to learn?
(as opposed to those used internally)
"shell"
==== Can a user do the following with your test framework: ====
* manually request that a test be executed (independent of a CI trigger)?
"yes"
* see the results of recent tests?
"yes"
* set the pass criteria for a test?
"yes"
** set the threshold value for a benchmark test?
"no"
** set the list of testcase results to ignore?
"yes"
* provide a rating for a test? (e.g. give it 4 stars out of 5)
"no"
* customize a test?
** alter the command line for the test program?
"yes"
** alter the environment of the test program?
"yes"
** specify to skip a testcase?
"no"
** set a new expected value for a test?
"no"
** edit the test program source?
"yes"
* customize the notification criteria?
** customize the notification mechanism (eg. e-mail, text)
"no"
* generate a custom report for a set of runs?
"no"
* save the report parameters to generate the same report in the future?
"yes"
==== Requirements ====
Does your test framework:
* require minimum software on the DUT?
"yes"
** If so, what? (e.g. POSIX shell or some other interpreter, specific libraries, command line tools, etc.)
"POSIX shell, PXE boot"
* require minimum hardware on the DUT (e.g. memory)?
"yes"
* require agent software on the DUT? (e.g. extra software besides production software)
"yes"
** If so, what agent?
"lkp init scripts installed during system boot"
* is there optional agent software or libraries for the DUT?
"no"
* require external hardware in your labs?
"yes, power control, serial cable"

==== APIs ====
Does your test framework:
* use existing APIs or data formats to interact within itself, or with 3rd-party modules?
"yes"
* have a published API for any of its sub-module interactions (any of the lines in the diagram)?
"no"
** Please provide a link or links to the APIs?

Sorry - this is kind of open-ended...
* What is the nature of the APIs you currently use?
Are they:
** RPCs?
** Unix-style? (command line invocation, while grabbing sub-tool output)
"yes, part of"
** compiled libraries?
** interpreter modules or libraries?
** web-based APIs?
"yes, part of"
** something else?

==== Relationship to other software: ====
* what major components does your test framework use (e.g. Jenkins, MongoDB, Squad, Lava, etc.)?
"Jenkins, MongoDB"
* does your test framework interoperate with other test frameworks or software?
** which ones?
"we integrate a lot of industry test suites to execute test"

== Overview ==
Please list the major components of your test system.
* KBuild - git polling, mailing list fetching, kernel compiling, static analysis, notification
* LKP - test management, scheduling, execution, result analysis, cyclic testing
* Jenkins - UI for manually configuring/scheduling required tests (calls into the LKP component)
* Bisection - bisects regressions to identify the bad commit

{{{Just as an example, Fuego can probably be divided into 3 main parts, with somewhat overlapping roles:
* Jenkins - job triggers, test scheduling, visualization, notification
* Core - test management (test build, test deploy, test execution, log retrieval)
* Parser - log conversion to unified format, artifact storage, results analysis
Lots of details are omitted, but you get the idea.
}}}

Thanks

