[Automated-testing] Automated Testing Summit - Test Stack Survey

Tim.Bird at sony.com
Tue Oct 2 19:37:51 PDT 2018



> -----Original Message-----

> Here are survey answers for syzbot.
> syzbot is probably somewhat different from most other test systems
> (automatically generates test cases, only monitors kernel crashes,
> currently uses only VMs), so I found some questions hard to answer,
> but I did my best. I hope you find this new perspective useful too.
Indeed - it's quite interesting.

> I am ready to extend answers if necessary (and if you point to things
> that need extension).
> I've tried to follow the markup style; I hope I didn't mess things up too much.
Thanks - this saved me a lot of time entering  your data on the wiki. :-)

> 
> 
> == Survey Questions ==
> * What is the name of your test framework?
> 
> syzbot
> 
> Which of the aspects below of the CI loop does your test framework
> perform?
> 
> Does your test framework:
> 
> ==== source code access ====
> * access source code repositories for the software under test? '''yes'''
> * access source code repositories for the test software? '''yes'''
> * include the source for the test software? '''yes'''
> * provide interfaces for developers to perform code reviews? '''no'''
> * detect that the software under test has a new version? '''yes'''
> ** if so, how? '''polling git trees every N hours'''
> * detect that the test software has a new version? '''yes, polling git
> tree every N hours'''
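
For anyone who hasn't built this kind of watcher: "polling git trees
every N hours" basically boils down to a loop around git ls-remote.
Here is a minimal Go sketch of the idea (the repo URL, ref and interval
are placeholders, not syzbot's actual configuration):

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// headCommit returns the commit hash that ref points to in the remote repo.
func headCommit(repo, ref string) (string, error) {
    out, err := exec.Command("git", "ls-remote", repo, ref).Output()
    if err != nil {
        return "", err
    }
    fields := strings.Fields(string(out))
    if len(fields) == 0 {
        return "", fmt.Errorf("ref %q not found in %s", ref, repo)
    }
    return fields[0], nil
}

func main() {
    const repo = "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git"
    const ref = "refs/heads/master"
    last := ""
    for {
        hash, err := headCommit(repo, ref)
        if err == nil && hash != last {
            // A real system would kick off a build/test cycle here.
            fmt.Println("new version detected:", hash)
            last = hash
        }
        time.Sleep(6 * time.Hour) // "every N hours"
    }
}

A real system would also persist the last-seen hash across restarts, of
course; this only shows the shape of the check.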
> 
> ==== test definitions ====
> Does your test system:
> * have a test definition repository? '''yes'''
> ** if so, what data format or language is used '''own declarative
> format for kernel interfaces'''
> 
> Does your test definition include:
> * source code (or source code location)? '''no'''
> * dependency information? '''no, but most are inferred, e.g. a
> syscall/socket proto is not implemented, or a required /dev file is
> not present'''
> * execution instructions? '''no, all tests are equivalent'''
> * command line variants? '''no'''
> * environment variants? '''no'''
> * setup instructions? '''no'''
> * cleanup instructions? '''no'''
> ** if anything else, please describe: '''tests are generated randomly
> from kernel interface descriptions'''
> 
> Does your test system:
> * provide a set of existing tests? '''interesting tests are
> automatically detected and persisted across runs'''
> ** if so, how many? '''37000'''

Wow.  I guess automatically generating them helps with the count.
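
Since the tests are randomly generated programs over declarative
interface descriptions (per the answers above), here is a toy Go sketch
of the idea.  The description format and types below are invented for
illustration and are far simpler than the real thing:

package main

import (
    "fmt"
    "math/rand"
    "strings"
)

// ArgKind says how to generate one argument of a call.
type ArgKind int

const (
    IntArg  ArgKind = iota // a random integer
    FlagArg                // one of a small set of legal flag values
    FDArg                  // a file descriptor from an earlier call
)

// Syscall is a (drastically simplified) interface description.
type Syscall struct {
    Name string
    Args []ArgKind
}

var descriptions = []Syscall{
    {"open", []ArgKind{FlagArg, IntArg}},
    {"read", []ArgKind{FDArg, IntArg, IntArg}},
    {"close", []ArgKind{FDArg}},
}

// genCall picks a random syscall and fills in random arguments.
func genCall(r *rand.Rand) string {
    c := descriptions[r.Intn(len(descriptions))]
    args := make([]string, len(c.Args))
    for i, k := range c.Args {
        switch k {
        case IntArg:
            args[i] = fmt.Sprint(r.Int63())
        case FlagArg:
            args[i] = fmt.Sprintf("flag%d", r.Intn(4))
        case FDArg:
            args[i] = "fd0"
        }
    }
    return fmt.Sprintf("%s(%s)", c.Name, strings.Join(args, ", "))
}

func main() {
    r := rand.New(rand.NewSource(1))
    for i := 0; i < 5; i++ {
        fmt.Println(genCall(r))
    }
}

A real generator is of course much smarter about which programs it
keeps; per the answer above, the interesting ones are detected
automatically and persisted across runs, which is where the 37000
figure comes from.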

> 
> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)? '''yes'''
> * build the test software? '''yes'''
> * build other software (such as the distro, libraries, firmware)?
> '''no, uses prepackaged images/compilers/etc'''
> * support cross-compilation? '''yes'''
> * require a toolchain or build system for the SUT? '''yes'''
> * require a toolchain or build system for the test software? '''yes'''
> * come with pre-built toolchains? '''no'''
> * store the build artifacts for generated software? '''no'''
> ** in what format is the build metadata stored (e.g. json)? '''database table'''
> ** are the build artifacts stored as raw files or in a database?
> '''build artifacts are not stored, metadata in database'''
> *** if a database, what database? '''Google Cloud Datastore'''
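
Since only build metadata is kept (in Cloud Datastore) and not the
artifacts themselves, recording a build might look roughly like the
sketch below.  The entity name and fields are invented for the example:

package main

import (
    "context"
    "log"
    "time"

    "cloud.google.com/go/datastore"
)

// Build is a hypothetical metadata record for one kernel build.
type Build struct {
    KernelRepo   string
    KernelCommit string
    Compiler     string
    Time         time.Time
    OK           bool
}

func main() {
    ctx := context.Background()
    client, err := datastore.NewClient(ctx, "my-gcp-project") // placeholder project ID
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    b := &Build{
        KernelRepo:   "git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git",
        KernelCommit: "deadbeef",
        Compiler:     "gcc 8.2",
        Time:         time.Now(),
        OK:           true,
    }
    // Only the metadata is stored; the build artifacts themselves are not.
    if _, err := client.Put(ctx, datastore.IncompleteKey("Build", nil), b); err != nil {
        log.Fatal(err)
    }
}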
> 
> ==== Test scheduling/management ====
> Does your test system:
> * check that dependencies are met before a test is run? '''mostly yes,
> but it does not really matter, as we only care about crashing the
> kernel'''
> * schedule the test for the DUT? '''yes, but DUTs are cloud VMs'''
> ** select an appropriate individual DUT based on SUT or test
> attributes? '''no'''
> ** reserve the DUT? '''yes, but these are just cloud VMs'''
> ** release the DUT? '''yes, but these are just cloud VMs'''
> * install the software under test to the DUT? '''yes'''
> * install required packages before a test is run? '''no, no dependencies'''
> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.) '''no'''
> * deploy the test program to the DUT? '''yes'''
> * prepare the test environment on the DUT? '''yes'''
> * start a monitor (another process to collect data) on the DUT? '''no'''
> * start a monitor on external equipment? '''yes, using console output'''
> * initiate the test on the DUT? '''yes'''
> * clean up the test environment on the DUT? '''yes'''
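
Because the DUTs are cloud VMs, the scheduling story above collapses
into a fairly simple lifecycle: reserve a VM, install the kernel under
test, deploy the test program, run it while watching the console, then
release the VM.  A rough Go sketch, with every type and helper invented
for illustration (this is not syzbot's real API):

package main

import (
    "fmt"
    "log"
)

// VM abstracts one cloud instance acting as the "DUT".
type VM interface {
    CopyKernel(image string) error                     // install the software under test
    Copy(path string) error                            // deploy the test program
    Run(cmd string) (console <-chan string, err error) // initiate the test
    Destroy()                                          // release the DUT
}

// runOneTest walks through the lifecycle described in the answers above.
func runOneTest(reserve func() (VM, error), kernelImage, fuzzerBin string) error {
    vm, err := reserve() // schedule/reserve a fresh VM
    if err != nil {
        return err
    }
    defer vm.Destroy()

    if err := vm.CopyKernel(kernelImage); err != nil {
        return err
    }
    if err := vm.Copy(fuzzerBin); err != nil {
        return err
    }
    console, err := vm.Run(fuzzerBin)
    if err != nil {
        return err
    }
    for line := range console { // monitor the console output for crashes
        fmt.Println("console:", line)
    }
    return nil
}

func main() {
    // A real implementation would plug in a GCE-backed VM pool here.
    log.Println("runOneTest only shows the intended call sequence")
}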
> 
> ==== DUT control ====
> Does your test system:
> * store board configuration data? '''no'''
> ** in what format?
> * store external equipment configuration data? '''no'''
> ** in what format?
> * power cycle the DUT? '''yes'''
> * monitor the power usage during a run? '''no'''
> * gather a kernel trace during a run? '''no'''
> * claim other hardware resources or machines (other than the DUT) for
> use during a test? '''no'''
> * reserve a board for interactive use (ie remove it from automated
> testing)? '''no'''
> * provide a web-based control interface for the lab? '''no'''
> * provide a CLI control interface for the lab? '''no'''
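
Worth noting that "power cycle the DUT" for a cloud VM is just an
instance reset.  If the VMs are on GCE (which the Cloud Datastore
answer hints at), that could look like the sketch below; the project,
zone and instance names are placeholders:

package main

import (
    "context"
    "log"

    compute "google.golang.org/api/compute/v1"
)

func main() {
    ctx := context.Background()
    // Uses Application Default Credentials for auth.
    svc, err := compute.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }
    // Reset is the cloud equivalent of pulling the power on the DUT.
    op, err := svc.Instances.Reset("my-project", "us-central1-a", "fuzz-vm-0").Context(ctx).Do()
    if err != nil {
        log.Fatal(err)
    }
    log.Println("reset requested, operation:", op.Name)
}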
> 
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts '''no'''
> ** in what format?
> * put the run meta-data in a database? '''no'''
> ** if so, which database?
> * parse the test logs for results? '''yes, console output for crashes'''
> * convert data from test logs into a unified format? '''yes'''
> ** if so, what is the format? '''database: crash title, text, console
> output, maintainer emails'''
> * evaluate pass criteria for a test (e.g. ignored results, counts or
> thresholds)? '''no'''
> * do you have a common set of result names: (e.g. pass, fail, skip,
> etc.) '''no'''
> ** if so, what are they?
> 
> * How is run data collected from the DUT? '''capturing of console
> output + test driver output via ssh'''
> ** e.g. by pushing from the DUT, or pulling from a server?
> * How is run data collected from external equipment? '''no'''
> * Is external equipment data parsed? '''no'''
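
The "parse console output for crashes" step is conceptually just
scanning each console line for well-known kernel crash markers and then
turning the match into a crash record.  A minimal Go sketch (the marker
list is abbreviated and illustrative):

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// A few of the usual kernel crash markers; a real list is much longer.
var crashMarkers = []string{
    "BUG:",
    "WARNING:",
    "Kernel panic",
    "general protection fault",
    "UBSAN:",
}

func main() {
    sc := bufio.NewScanner(os.Stdin) // console log piped on stdin
    for sc.Scan() {
        line := sc.Text()
        for _, m := range crashMarkers {
            if strings.Contains(line, m) {
                // A real system would capture the whole report and derive
                // the fields mentioned above (crash title, text, console
                // output, maintainer emails); here we just flag the line.
                fmt.Println("crash detected:", line)
            }
        }
    }
}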
> 
> ==== User interface ====
> Does your test system:
> * have a visualization system? '''yes'''
> * show build artifacts to users? '''no'''
> * show run artifacts to users? '''yes'''
> * do you have a common set of result colors? '''no'''
> ** if so, what are they?
> * generate reports for test runs? '''yes'''
> * notify users of test results by e-mail? '''yes, +integration with
> bug tracking systems'''
> 
> * can you query (aggregate and filter) the build meta-data? '''no'''
> * can you query (aggregate and filter) the run meta-data? '''no'''
> 
> * what language or data format is used for online results
> presentation? '''HTML'''
> * what language or data format is used for reports? '''text/plain'''
> 
> * does your test system have a CLI control tool? '''no, but it has
> email-based controls'''
> ** what is it called? '''custom'''
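
Presumably the email-based controls amount to scanning replies for
command lines and acting on them.  A rough Go sketch of just the
parsing side; the "#syz" command syntax shown is only an example:

package main

import (
    "bufio"
    "fmt"
    "strings"
)

// parseCommands returns any control commands found in an email body.
func parseCommands(body string) []string {
    var cmds []string
    sc := bufio.NewScanner(strings.NewReader(body))
    for sc.Scan() {
        line := strings.TrimSpace(sc.Text())
        if strings.HasPrefix(line, "#syz ") { // e.g. "#syz fix: <commit title>"
            cmds = append(cmds, strings.TrimPrefix(line, "#syz "))
        }
    }
    return cmds
}

func main() {
    body := "Thanks for the report.\n#syz fix: net: fix refcount leak\n"
    fmt.Println(parseCommands(body)) // prints: [fix: net: fix refcount leak]
}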
> 
> ==== Languages: ====
> * what is the base language of your test framework core? '''Go'''
> 
...

OK - I created a page for your responses at:
https://elinux.org/Syzbot_survery_response

I think you had a presentation at Plumbers a few years ago (if memory
serves), with an overview of syzbot operation.  Would you mind posting
a link to this slide deck in the "Additional Data" portion of that wiki page?

Thanks,
 -- Tim


