[Automated-testing] Automated Testing Summit - Test Stack Survey

Dmitry Vyukov dvyukov at google.com
Tue Sep 25 05:37:18 PDT 2018


Hi Tim, Kevin,

Here are survey answers for syzbot.
syzbot is probably somewhat different from most other test systems
(it automatically generates test cases, only monitors kernel crashes,
and currently uses only VMs), so I found some questions hard to answer,
but I did my best. I hope you will find this new perspective useful too.
I am ready to extend the answers if necessary (and if you point out
things that need extension).
I've tried to follow the markup style; I hope I did not mess things up too much.

Where can I see other answers? I've found only one:
https://elinux.org/Opentest_survey_response

Thanks


== Survey Questions ==
* What is the name of your test framework?

[https://github.com/google/syzkaller/blob/master/docs/syzbot.md syzbot]

Which of the aspects below of the CI loop does your test framework perform?

Does your test framework:

==== source code access ====
* access source code repositories for the software under test? '''yes'''
* access source code repositories for the test software? '''yes'''
* include the source for the test software? '''yes'''
* provide interfaces for developers to perform code reviews? '''no'''
* detect that the software under test has a new version? '''yes'''
** if so, how? '''polling git trees every N hours'''
* detect that the test software has a new version? '''yes, polling git
tree every N hours'''
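
To illustrate what the version detection amounts to: periodically ask the
tracked git trees for the branch head and compare it with the last built
commit. A minimal sketch of the idea in Go (the repo URL, branch and
interval below are made up for illustration; this is not the actual
syz-ci code):

package main

import (
    "bytes"
    "log"
    "os/exec"
    "time"
)

// remoteHead returns the commit hash of the branch head in the given
// remote repository (equivalent to `git ls-remote <repo> <branch>`).
func remoteHead(repo, branch string) (string, error) {
    out, err := exec.Command("git", "ls-remote", repo, branch).Output()
    if err != nil {
        return "", err
    }
    fields := bytes.Fields(out)
    if len(fields) == 0 {
        return "", nil
    }
    return string(fields[0]), nil
}

func main() {
    const (
        repo   = "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"
        branch = "master"
    )
    last := ""
    for {
        head, err := remoteHead(repo, branch)
        if err != nil {
            log.Printf("poll failed: %v", err)
        } else if head != last {
            log.Printf("new version detected: %v", head)
            last = head
            // here syz-ci would rebuild the SUT and restart the managers
        }
        time.Sleep(6 * time.Hour)
    }
}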

==== test definitions ====
Does your test system:
* have a test definition repository? '''yes'''
** if so, what data format or language is used '''own declarative
format for kernel interfaces'''
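
For flavor, here is a condensed fragment in the style of the description
language (full syntax is linked in the Languages section below); this is a
simplified example, not a complete description file, and fd is a resource
type declared elsewhere:

open(file ptr[in, filename], flags flags[open_flags], mode flags[open_mode]) fd
read(fd fd, buf buffer[out], count len[buf])
close(fd fd)
open_mode = S_IRUSR, S_IWUSR

The fuzzer generates random programs (sequences of such calls with
concrete argument values) from these descriptions.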

Does your test definition include:
* source code (or source code location)? '''no'''
* dependency information? '''no, but most dependencies are inferred, e.g.
a syscall/socket proto is not implemented, or a required /dev file is not
present'''
* execution instructions? '''no, all tests are equivalent'''
* command line variants? '''no'''
* environment variants? '''no'''
* setup instructions? '''no'''
* cleanup instructions? '''no'''
** if anything else, please describe: '''tests are generated randomly
from kernel interface descriptions'''

Does your test system:
* provide a set of existing tests? '''interesting tests are
automatically detected and persisted across runs'''
** if so, how many? '''37000'''

==== build management ====
Does your test system:
* build the software under test (e.g. the kernel)? '''yes'''
* build the test software? '''yes'''
* build other software (such as the distro, libraries, firmware)?
'''no, uses prepackaged images/compilers/etc'''
* support cross-compilation? '''yes'''
* require a toolchain or build system for the SUT? '''yes'''
* require a toolchain or build system for the test software? '''yes'''
* come with pre-built toolchains? '''no'''
* store the build artifacts for generated software? '''no'''
** in what format is the build metadata stored (e.g. json)? '''database table'''
** are the build artifacts stored as raw files or in a database?
'''build artifacts are not stored, metadata in database'''
*** if a database, what database? '''Google Cloud Datastore'''

==== Test scheduling/management ====
Does your test system:
* check that dependencies are met before a test is run? '''mostly yes,
but it does not really matter as we only care about kernel crashes'''
* schedule the test for the DUT? '''yes, but DUTs are cloud VMs'''
** select an appropriate individual DUT based on SUT or test
attributes? '''no'''
** reserve the DUT? '''yes, but these are just cloud VMs'''
** release the DUT? '''yes, but these are just cloud VMs'''
* install the software under test to the DUT? '''yes'''
* install required packages before a test is run? '''no, no dependencies'''
* require particular bootloader on the DUT? (e.g. grub, uboot, etc.) '''no'''
* deploy the test program to the DUT? '''yes'''
* prepare the test environment on the DUT? '''yes'''
* start a monitor (another process to collect data) on the DUT? '''no'''
* start a monitor on external equipment? '''yes, using console output'''
* initiate the test on the DUT? '''yes'''
* clean up the test environment on the DUT? '''yes'''

==== DUT control ====
Does your test system:
* store board configuration data? '''no'''
** in what format?
* store external equipment configuration data? '''no'''
** in what format?
* power cycle the DUT? '''yes'''
* monitor the power usage during a run? '''no'''
* gather a kernel trace during a run? '''no'''
* claim other hardware resources or machines (other than the DUT) for
use during a test? '''no'''
* reserve a board for interactive use (ie remove it from automated
testing)? '''no'''
* provide a web-based control interface for the lab? '''no'''
* provide a CLI control interface for the lab? '''no'''

==== Run artifact handling ====
Does your test system:
* store run artifacts '''no'''
** in what format?
* put the run meta-data in a database? '''no'''
** if so, which database?
* parse the test logs for results? '''yes, console output is parsed for
crashes (see the sketch after this list)'''
* convert data from test logs into a unified format? '''yes'''
** if so, what is the format? '''database: crash title, text, console
output, maintainer emails'''
* evaluate pass criteria for a test (e.g. ignored results, counts or
thresholds)? '''no'''
* do you have a common set of result names: (e.g. pass, fail, skip,
etc.) '''no'''
** if so, what are they?
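
To illustrate the log parsing mentioned above: the console output is
scanned for well-known kernel crash markers and the matching line becomes
the crash title. A minimal sketch of the idea in Go (the marker list is a
small illustrative subset; the real parsing in syzkaller is considerably
more elaborate):

package main

import (
    "bufio"
    "fmt"
    "io/ioutil"
    "os"
    "strings"
)

// crashMarkers is an illustrative subset of kernel oops prefixes.
var crashMarkers = []string{
    "BUG:",
    "WARNING:",
    "INFO: rcu detected stall",
    "Kernel panic - not syncing",
    "general protection fault",
}

// findCrash scans console output line by line and returns the first
// matching line as the crash title.
func findCrash(console string) (title string, ok bool) {
    scanner := bufio.NewScanner(strings.NewReader(console))
    for scanner.Scan() {
        line := scanner.Text()
        for _, marker := range crashMarkers {
            if strings.Contains(line, marker) {
                return strings.TrimSpace(line), true
            }
        }
    }
    return "", false
}

func main() {
    data, err := ioutil.ReadFile("console.log") // hypothetical captured console log
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    if title, ok := findCrash(string(data)); ok {
        fmt.Println("crash detected:", title)
    } else {
        fmt.Println("no crash found")
    }
}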

* How is run data collected from the DUT? '''capturing console
output, plus test driver output via ssh'''
** e.g. by pushing from the DUT, or pulling from a server?
* How is run data collected from external equipment? '''no'''
* Is external equipment data parsed? '''no'''

==== User interface ====
Does your test system:
* have a visualization system? '''yes'''
* show build artifacts to users? '''no'''
* show run artifacts to users? '''yes'''
* do you have a common set of result colors? '''no'''
** if so, what are they?
* generate reports for test runs? '''yes'''
* notify users of test results by e-mail? '''yes, plus integration with
bug tracking systems'''

* can you query (aggregate and filter) the build meta-data? '''no'''
* can you query (aggregate and filter) the run meta-data? '''no'''

* what language or data format is used for online results
presentation? '''HTML'''
* what language or data format is used for reports? '''text/plain'''

* does your test system have a CLI control tool? '''no, but it has
email-based controls'''
** what is it called? '''custom'''

==== Languages: ====
* what is the base language of your test framework core? '''Go'''

What languages or data formats is the user required to learn
(as opposed to those used internally)?
[https://github.com/google/syzkaller/blob/master/docs/syscall_descriptions_syntax.md
in-house declarative language]

==== Can a user do the following with your test framework: ====
* manually request that a test be executed (independent of a CI
trigger)? '''yes, we support patch testing for reported bugs'''
* see the results of recent tests? '''not applicable'''
* set the pass criteria for a test? '''no'''
** set the threshold value for a benchmark test? '''not applicable'''
** set the list of testcase results to ignore? '''no'''
* provide a rating for a test? (e.g. give it 4 stars out of 5) '''no'''
* customize a test? '''no'''
** alter the command line for the test program? '''not applicable'''
** alter the environment of the test program? '''not applicable'''
** specify to skip a testcase? '''no'''
** set a new expected value for a test? '''no'''
** edit the test program source? '''yes'''
* customize the notification criteria? '''no'''
** customize the notification mechanism (eg. e-mail, text) '''maybe,
in general syzbot can be integrated with any bug tracking system'''
* generate a custom report for a set of runs? '''not applicable'''
* save the report parameters to generate the same report in the
future? '''not applicable'''

==== Requirements ====
Does your test framework:
* require minimum software on the DUT? '''yes'''
** If so, what? (e.g. POSIX shell or some other interpreter, specific
libraries, command line tools, etc.) '''sshd'''
* require minimum hardware on the DUT (e.g. memory)? '''no'''
* require agent software on the DUT? (e.g. extra software besides
production software) '''yes, but it's copied over automatically'''
** If so, what agent? '''custom agent: generates tests, sets up the test
env, talks to the master machine'''
* is there optional agent software or libraries for the DUT? '''no'''
* require external hardware in your labs? '''no'''

==== APIS ====
Does your test framework:
* use existing APIs or data formats to interact within itself, or with
3rd-party modules? '''yes'''
* have a published API for any of its sub-module interactions (any of
the lines in the diagram)? '''yes'''
** Please provide a link or links to the APIs?

syzbot extensively uses Google Cloud APIs, in particular:
[https://cloud.google.com/compute/docs/reference/rest/v1/instances/insert
GCE machine management]
[https://cloud.google.com/appengine/docs/standard/go/datastore/reference
Appengine Datastore APIs]
[https://godoc.org/cloud.google.com/go/storage Cloud Storage]

Within the test farm syzbot uses custom RPC APIs to exchange test
programs and metadata:
https://github.com/google/syzkaller/blob/master/pkg/rpctype/rpctype.go

The web/API application uses HTTPS/JSON APIs to upload crash info and build
metadata, request patch testing, and talk to bug tracking systems:
https://github.com/google/syzkaller/blob/master/dashboard/dashapi/dashapi.go

Sorry - this is kind of open-ended...
* What is the nature of the APIs you currently use?
Are they:
** RPCs?
** Unix-style? (command line invocation, while grabbing sub-tool output)
** compiled libraries?
** interpreter modules or libraries?
** web-based APIs?
** something else?

'''Binary RPCs over TCP and JSON RPCs over HTTPS'''
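
As a sketch of the JSON-over-HTTPS side: components POST JSON-encoded
records to dashboard endpoints and get JSON replies back. The endpoint
and the struct below are hypothetical simplifications, not the actual
dashapi types:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

// BuildRecord is a made-up, simplified version of the build metadata
// that a syz-ci-like component could upload.
type BuildRecord struct {
    Manager      string `json:"manager"`
    KernelRepo   string `json:"kernel_repo"`
    KernelCommit string `json:"kernel_commit"`
    CompilerID   string `json:"compiler_id"`
}

func uploadBuild(endpoint string, rec *BuildRecord) error {
    body, err := json.Marshal(rec)
    if err != nil {
        return err
    }
    resp, err := http.Post(endpoint, "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("upload failed: %v", resp.Status)
    }
    return nil
}

func main() {
    rec := &BuildRecord{
        Manager:      "ci-example-manager", // made-up name
        KernelRepo:   "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
        KernelCommit: "0123abcd",
        CompilerID:   "gcc-8",
    }
    // Hypothetical endpoint, for illustration only.
    if err := uploadBuild("https://example.appspot.com/api/upload_build", rec); err != nil {
        fmt.Println(err)
    }
}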

==== Relationship to other software: ====
* what major components does your test framework use (e.g. Jenkins,
MongoDB, Squad, Lava, etc.) '''Google Cloud Platform'''
* does your test framework interoperate with other test frameworks or
software? '''no'''
** which ones?

== Overview ==
Please list the major components of your test system.

Please list your major components here:

* [https://syzkaller.appspot.com Dashboard]: AppEngine application;
renders the web UI, sends/receives emails, talks to bug tracking systems,
manages the persistent database, and serves API requests from other parts
of the system (uploading of crashes, build metadata, etc.).
* [https://github.com/google/syzkaller/tree/master/syz-ci syz-ci]:
continuously polls, builds, and updates the SUT (kernel) and the test
software (syzkaller); uploads build metadata to the dashboard; starts
syz-manager instances; there are several syz-ci instances.
* [https://github.com/google/syzkaller/tree/master/syz-manager
syz-manager]: manages test machines; starts syz-fuzzer on test
machines; monitors console output of test machines; uploads crash info
to dashboard; talks to syz-hub.
* [https://github.com/google/syzkaller/tree/master/syz-hub syz-hub]:
allows several syz-manager instances to exchange interesting test cases.
* [https://github.com/google/syzkaller/tree/master/syz-fuzzer
syz-fuzzer]: runs on the test machine; generates and executes test
cases; sends new interesting test cases to syz-manager.
* [https://github.com/google/syzkaller/tree/master/executor
syz-executor]: executes/interprets test cases (the only part of the
system written in C++).
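
For completeness, a tiny sketch of the in-farm RPC pattern between a
syz-manager-like process and a syz-fuzzer-like process using Go's net/rpc
over TCP. The service/method names and the Input struct are invented for
illustration; the real definitions live in pkg/rpctype (linked above):

package main

import (
    "log"
    "net"
    "net/rpc"
)

// Input is a made-up, simplified "interesting test case" record that a
// fuzzer process could report to its manager.
type Input struct {
    Prog   []byte   // serialized test program
    Signal []uint32 // coverage signal that made it interesting
}

// Manager is an illustrative RPC service exposed by a manager-like process.
type Manager struct{}

// NewInput receives a new interesting test case from a fuzzer.
func (m *Manager) NewInput(in *Input, ok *bool) error {
    log.Printf("received input of %v bytes", len(in.Prog))
    *ok = true
    return nil
}

func main() {
    // Manager side: serve RPCs over TCP.
    srv := rpc.NewServer()
    if err := srv.Register(new(Manager)); err != nil {
        log.Fatal(err)
    }
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        log.Fatal(err)
    }
    go srv.Accept(ln)

    // Fuzzer side: connect and report an input.
    client, err := rpc.Dial("tcp", ln.Addr().String())
    if err != nil {
        log.Fatal(err)
    }
    var ok bool
    err = client.Call("Manager.NewInput", &Input{Prog: []byte("r0 = open(...)")}, &ok)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("reported: %v", ok)
}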

