[Automated-testing] Automated Testing Summit - Test Stack survey

Punnaiah Choudary Kalluri punnaia at xilinx.com
Mon Oct 1 17:29:44 PDT 2018


Hi Tim,

Thanks for the initiative and creating the survey.


> -----Original Message-----
> From: Tim.Bird at sony.com [mailto:Tim.Bird at sony.com]
> Sent: Tuesday, September 18, 2018 5:08 PM
> To: ajhernandex at ti.com; anders.roxell at linaro.org; 
> andrewstephenmurray at gmail.com; Bikram_Bhola at mentor.com; ceh at ti.com; 
> cfi at pengutronix.de; chrubis at suse.cz; dan.rue at linaro.org; 
> daniel.sangorrin at toshiba.co.jp; geert at linux-m68k.org; 
> gregkh at linuxfoundation.org; groeck at google.com; 
> Motai.Hirotaka at aj.mitsubishielectric.co.jp; jlu at pengutronix.de; 
> jsmoeller at linuxfoundation.org; khilman at baylibre.com; 
> khiem.nguyen.xt at renesas.com; yoshitake.kobayashi at toshiba.co.jp;
> manual.traut at linutronix.de; broonie at kernel.org; 
> matthew.hart at linaro.org; michael at phoronix.com; Michal Simek 
> <michals at xilinx.com>; milosz.wasilewski at linaro.org; 
> nobuhiro.iwamatsu at miraclelinux.com;
> p.wieczorek2 at samsung.com; philip.li at intel.com; Punnaiah Choudary 
> Kalluri <punnaia at xilinx.com>; shuahkh at osg.samsung.com; 
> sjored.simons at collabora.co.uk; rostedt at goodmis.org; Tim.Bird at sony.com; 
> timothy.t.orling at linux.intel.com; tshibata at ab.jp.nec.com; 
> yuichi.usakabe at denso-ten.com
> Subject: Automated Testing Summit - Test Stack survey
> 
> Hello invitee to the Automated Testing Summit,
> 
> Executive Summary:  Here's a survey. If you "own" one of the test 
> systems mentioned below, please fill it out before the summit (by Oct 2).
> 
> This e-mail is to notify you that plans are proceeding with the 
> Automated Testing Summit, and Kevin and I look forward to your 
> participation at the event.
> 
> In preparation for our discussions, we have prepared a diagram of the 
> Continuous Integration Loop, and a survey to ask you questions about 
> your respective test systems.
> 
> We'd like to conduct the survey in public, but you can respond in 
> private if you'd like.
> We'd like to get just one survey response per test system.  Some test 
> systems have more than one representative coming to the summit.  So, 
> if you see someone else working on your test system in the To: list 
> above, please coordinate with that person to have one of you answer the survey.
> 
> I may be missing something (if so I apologize), but here are the 
> different test systems I expect represented at the event:
>   KernelCI, Fuego, Lava, SLAV, LKFT, labgrid, kerneltests, zero-day, kselftest,
>   Phoronix-test-suites, LTP, r4d, ktest, Gentoo testing, Yocto testing,
>   TI testing, Xilinx testing, Mentor testing, opentest, tbot.
> 
> Please send the survey response to Kevin and me, and
> CC: <automated-testing at yoctoproject.org>
> The reasons for posting to the automated-testing list is so that we 
> can archive the answer and any ensuing discussion clarifying how your 
> test system works.  Please note we will also be sending the survey to 
> that list, for public responses on any test systems not represented at the summit.
> 
> Just to put your mind not-at-ease about speaking at the event...
> We are still working on the exact format of the event, but I expect 
> that for many of the test systems listed above we will want a VERY 
> short (5 minute) lightning talk on
> 1) the status of your project,
> 2) the differences between our reference CI loop and how your system 
> works,
> 3) any special features your system has that others don't.
> 
> We want to leave lots of time for open discussions.
> 
> Please complete the survey no later than October 2 (two weeks from now).
> 
> OK - finally, here is the survey.  Sorry it's so long, but many 
> questions should be easy to answer.
> Please refer to the attached diagram, or see the page:
> https://elinux.org/Test_Stack_Survey.
> 
> The format of the survey is mediawiki markup, so * = bullet, ** = 
> indented bullet.
> 
> == Diagrams ==
> Attached is a diagram for the high level CI loop:
> 
> The boxes represent different processes, hardware, or storage locations.
> Lines between boxes indicate APIs or control flow, and are labeled 
> with letters.  The intent of this is to facilitate discussion at the summit.
> 
> [[File:<see attachment>|high level CI loop]]
> --------
> 
> If you have an element that is not featured in this diagram, please 
> let us know.
> 
> == Cover text ==
> Hello Test Framework developer or user,
> 
> The purpose of this survey is to try to understand how different Test 
> Frameworks and Automated Test components in the Linux Test ecosystem 
> work - what features they have, what terminology they use, and so forth.
> The reason to characterize these different pieces of software (and
> hardware) is to try to come up with definitions for a Test Stack, and 
> possibly API definitions, that will allow different elements to 
> communicate and interact.  We are interested in seeing the 
> commonalities and differences between stack elements.
> 
> This information will be used, to start, to prepare for discussions 
> about test stack standards at the Automated Testing Summit 2018.
> 
> Please see the Glossary below for the meaning of words used in this survey.
> If you use different words in your framework for the same concept, 
> please let us know.  If you think there are other words that should be 
> in the Glossary, please let us know.
> 
> == Survey Questions ==
> * What is the name of your test framework?
 
"regression_xlnx", it's a command line based tool. Our In house  CI framework will utilize this tool for building the software And executing the test cases.

> Which of the aspects below of the CI loop does your test framework 
> perform?
Build/deploy/test/analyze/publish the results
 

> The answers can be: "yes", "no", or "provided by the user".
> Where the answer is not simply yes or no, an explanation is appreciated.
> 
> For example, in Fuego, Jenkins is used for trigger detection (that is, 
> to detect new SUT versions), but the user must install the Jenkins 
> module and configure this themselves.  So Fuego supports triggers, but 
> does not provide them pre-configured for the user.
> 
> If the feature is provided by a named component in your system (or by 
> an external module), please provide the name of that module.
> 
> Does your test framework:
> ==== source code access ====
> * access source code repositories for the software under test?
Yes

> * access source code repositories for the test software?
Yes

> * include the source for the test software?
Yes

> * provide interfaces for developers to perform code reviews?
No, not from the test framework.

> * detect that the software under test has a new version?
> ** if so, how? (e.g. polling a repository, a git hook, scanning a mail 
> list, etc.)
> * detect that the test software has a new version?
No

> ==== test definitions ====
> Does your test system:
> * have a test definition repository?
> ** if so, what data format or language is used (e.g. yaml, json, shell
> script)
> 
> Does your test definition include:
> * source code (or source code location)?
Yes

> * dependency information?
Not now

> * execution instructions?
Yes

> * command line variants?
Yes

> * environment variants?
Yes

> * setup instructions?
Yes

> * cleanup instructions?
Not now

> ** if anything else, please describe:
> 
> Does your test system:
> * provide a set of existing tests?
> ** if so, how many?
> 
Around 600 tests
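
As a rough illustration of the test-definition fields answered above (the key names here are invented, not the framework's actual schema), a single definition might carry something like:

# Hypothetical test-definition structure; the key names are illustrative
# only and are not the framework's actual schema.
test_definition = {
    "name": "dma_loopback",                          # example test name
    "source": "git://example.com/tests.git",         # source code location
    "execute": "./run_dma_loopback.sh",              # execution instructions
    "command_line_variants": ["-len 64", "-len 4096"],
    "environment_variants": {"TRANSFER_MODE": ["simple", "sg"]},
    "setup": ["load bitstream", "insmod dma_test.ko"],
}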

> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)?
Yes

> * build the test software?
Yes

> * build other software (such as the distro, libraries, firmware)?
Yes

> * support cross-compilation?
Yes

> * require a toolchain or build system for the SUT?
> * require a toolchain or build system for the test software?
Yes

> * come with pre-built toolchains?
Yes

> * store the build artifacts for generated software?
Yes

> ** in what format is the build metadata stored (e.g. json)?
YAML, CSV, and HTML

> ** are the build artifacts stored as raw files or in a database?
Raw files

> *** if a database, what database?
> 
> ==== Test scheduling/management ====
> Does your test system:
> * check that dependencies are met before a test is run?
No

> * schedule the test for the DUT?
Yes

> ** select an appropriate individual DUT based on SUT or test attributes?
Yes

> ** reserve the DUT?
Yes

> ** release the DUT?
Yes

> * install the software under test to the DUT?
Yes

> * install required packages before a test is run?
Yes

> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.)
Yes, but there is no requirement that the bootloader prompt is shown after power-up.

> * deploy the test program to the DUT?
Yes

> * prepare the test environment on the DUT?
Yes

> * start a monitor (another process to collect data) on the DUT?
No

> * start a monitor on external equipment?
Yes

> * initiate the test on the DUT?
Yes

> * clean up the test environment on the DUT?
Yes
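
Purely as a sketch of the deploy/prepare/run/clean-up flow answered above (the DUT host name, paths, and scripts are placeholders for the example), a single run roughly amounts to:

# Illustrative only: the DUT host name, paths, and scripts are placeholders.
import subprocess

DUT = "root@dut-zcu102-01"

def ssh(command):
    """Run a command on the DUT over SSH and return its output."""
    return subprocess.run(["ssh", DUT, command],
                          capture_output=True, text=True).stdout

# Deploy the test, prepare the environment, run it, then clean up.
subprocess.run(["scp", "-r", "dma_loopback", f"{DUT}:/tmp/"], check=True)
ssh("cd /tmp/dma_loopback && ./setup.sh")          # prepare the test environment
log = ssh("cd /tmp/dma_loopback && ./run.sh")      # initiate the test
ssh("rm -rf /tmp/dma_loopback")                    # clean up the DUT

with open("results/dma_loopback.txt", "w") as f:   # results stay off the DUT
    f.write(log)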

> 
> ==== DUT control ====
> Does your test system:
> * store board configuration data?

Yes

> ** in what format?
Shell scripts 

> * store external equipment configuration data?
Yes

> ** in what format?
Shell scripts
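
The real configuration files are internal shell scripts; as a rough illustration (the variable names below are invented), a loader could pick up simple KEY=value assignments like this:

# Illustrative only: the real configuration files are internal shell
# scripts; the variable names below are invented.
SAMPLE_CONFIG = """
BOARD_NAME=zcu102_01
SERIAL_PORT=/dev/ttyUSB3
POWER_SWITCH_OUTLET=7
JTAG_CABLE_ID=210308A1B2C3
"""

def load_config(text):
    """Collect simple KEY=value assignments into a dictionary."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        config[key] = value
    return config

print(load_config(SAMPLE_CONFIG))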

> * power cycle the DUT?
Yes

> * monitor the power usage during a run?
Yes

> * gather a kernel trace during a run?
Yes

> * claim other hardware resources or machines (other than the DUT) for 
> use during a test?
Yes

> * reserve a board for interactive use (ie remove it from automated testing)?
The test framework is just another user of the board-farm management software, which runs commands automatically. Interactive use is supported by default.

> * provide a web-based control interface for the lab?
No

> * provide a CLI control interface for the lab?
Yes

> 
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts
> ** in what format?
Plain text (.txt) files

> * put the run meta-data in a database?
> ** if so, which database?
No database; it is just a test-case folder.

> * parse the test logs for results?
Yes

> * convert data from test logs into a unified format?
Yes

> ** if so, what is the format?
YAML, CSV, and HTML

> * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)?
Yes

> * do you have a common set of result names: (e.g. pass, fail, skip,
> etc.)
Yes

> ** if so, what are they?
PASS, FAIL, UNANALYZED
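
As a minimal sketch of the parse/convert step described above (the "<testcase>: <status>" log line format is an assumption, not the framework's real one), converting a text log into YAML results could look like:

# Sketch: map raw test-log lines to PASS/FAIL/UNANALYZED and emit YAML.
# The "<testcase>: <status>" log format is an assumption; requires PyYAML.
import yaml

def parse_log(path):
    results = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            name, status = [part.strip() for part in line.split(":", 1)]
            status = status.upper()
            if status not in ("PASS", "FAIL"):
                status = "UNANALYZED"
            results[name] = status
    return results

results = parse_log("results/dma_loopback.txt")
with open("results/dma_loopback.yaml", "w") as f:
    yaml.safe_dump(results, f)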

> 
> * How is run data collected from the DUT?
Results are not stored on the DUT; they are placed in the results folder.

> ** e.g. by pushing from the DUT, or pulling from a server?
> * How is run data collected from external equipment?
Mostly via SSH.

> * Is external equipment data parsed?
Yes
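
For the external-equipment path, a minimal sketch of pulling and parsing a measurement over SSH (the host name, file path, and "timestamp,milliwatts" CSV format are assumptions):

# Sketch: pull a power-monitor log from external equipment over SSH and
# parse it. The host, path, and "timestamp,milliwatts" format are assumptions.
import subprocess

output = subprocess.run(
    ["ssh", "powermon@lab-rack-01", "cat /var/log/power/dut_zcu102_01.csv"],
    capture_output=True, text=True, check=True).stdout

samples = []
for line in output.splitlines():
    fields = line.split(",")
    if len(fields) != 2:
        continue
    try:
        samples.append(float(fields[1]))
    except ValueError:
        continue                      # skip the header or malformed lines

print("peak power (mW):", max(samples))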


> 
> ==== User interface ====
> Does your test system:
> * have a visualization system?
No

> * show build artifacts to users?
Yes

> * show run artifacts to users?
Yes

> * do you have a common set of result colors?
> ** if so, what are they?
Yes. Pass is green, fail is red, and unanalyzed is yellow.

> * generate reports for test runs?
Yes

> * notify users of test results by e-mail?
Yes

> * can you query (aggregate and filter) the build meta-data?
No

> * can you query (aggregate and filter) the run meta-data?
No

> 
> * what language or data format is used for online results presentation? (e.g.
> HTML, Javascript, xml, etc.)
HTML and JavaScript

> * what language or data format is used for reports? (e.g. PDF, excel,
> etc.)
> 
Reports are generated and viewed through a web interface.

> * does your test system have a CLI control tool?
> ** what is it called?
> 
It is not part of the test framework, but we have a custom CLI tool that calls the test framework for build and run.

> ==== Languages: ====
> Examples: json, python, yaml, C, javascript, etc.
> * what is the base language of your test framework core?
Shell scripts, Python, and YAML


> 
> What languages or data formats is the user required to learn?
> (as opposed to those used internally)
Shell scripting and Python


> 
> ==== Can a user do the following with your test framework: ====
> * manually request that a test be executed (independent of a CI trigger)?
Yes

> * see the results of recent tests?
Yes

> * set the pass criteria for a test?
Yes

> ** set the threshold value for a benchmark test?
Yes

> ** set the list of testcase results to ignore?
Yes
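
A minimal sketch of how a benchmark threshold and an ignore list could be applied (the criteria structure below is invented for the example, not the framework's real schema):

# Sketch: apply user-set pass criteria; the criteria fields are illustrative
# and not the framework's real schema.
criteria = {
    "ignore": ["known_flaky_case"],                 # testcase results to ignore
    "thresholds": {"dma_throughput_mbps": 900.0},   # benchmark minimums
}

def evaluate(results, measurements, criteria):
    for name, status in results.items():
        if name in criteria["ignore"]:
            continue
        if status != "PASS":
            return "FAIL"
    for metric, minimum in criteria["thresholds"].items():
        if measurements.get(metric, 0.0) < minimum:
            return "FAIL"
    return "PASS"

print(evaluate({"dma_loopback": "PASS", "known_flaky_case": "FAIL"},
               {"dma_throughput_mbps": 950.0}, criteria))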

> * provide a rating for a test? (e.g. give it 4 stars out of 5)
No

> * customize a test?
Yes

> ** alter the command line for the test program?
Yes

> ** alter the environment of the test program?
Yes

> ** specify to skip a testcase?
No one has requested this feature yet, but it can be done.


> ** set a new expected value for a test?
Yes

> ** edit the test program source?
Yes

> * customize the notification criteria?
> ** customize the notification mechanism (eg. e-mail, text)
No

> * generate a custom report for a set of runs?
Yes

> * save the report parameters to generate the same report in the future?
Yes

> 
> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT?
No

> * require minimum hardware on the DUT (e.g. memory)
> ** If so, what? (e.g. POSIX shell or some other interpreter, specific 
> libraries, command line tools, etc.)
Yes. JTAG, a serial console, and an SoC or FPGA are the minimum requirements.
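
Since the serial console is one of the few hard requirements, a minimal capture sketch (the port name and baud rate are assumptions; requires the pyserial package):

# Sketch: capture boot output from the DUT's serial console.
# /dev/ttyUSB3 and 115200 baud are assumptions; requires pyserial.
import serial

with serial.Serial("/dev/ttyUSB3", baudrate=115200, timeout=5) as console:
    with open("results/boot.log", "wb") as log:
        while True:
            data = console.read(4096)    # returns b"" once the timeout expires
            if not data:
                break
            log.write(data)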

> * require agent software on the DUT? (e.g. extra software besides 
> production software)
No

> ** If so, what agent?
> * is there optional agent software or libraries for the DUT?
No

> * require external hardware in your labs?
Yes

> ==== APIS ====
> Does your test framework:
> * use existing APIs or data formats to interact within itself, or with 
> 3rd-party modules?
Yes, we use data formats

> * have a published API for any of its sub-module interactions (any of 
> the lines in the diagram)?
No

> ** Please provide a link or links to the APIs?
> 
> Sorry - this is kind of open-ended...
> * What is the nature of the APIs you currently use?
> Are they:
> ** RPCs?
> ** Unix-style? (command line invocation, while grabbing sub-tool
> output)
> ** compiled libraries?
> ** interpreter modules or libraries?
> ** web-based APIs?
> ** something else?
> 
> ==== Relationship to other software: ====
> * what major components does your test framework use (e.g. Jenkins, 
> Mondo DB, Squad, Lava, etc.)
Jenkins, a custom board-farm management/control tool, and a custom report-generation tool.

> * does your test framework interoperate with other test frameworks or 
> software?
Yes

> ** which ones?
We are doing a proof of concept using the U-Boot Python test framework on top of our current board-farm management/control tool.
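
For reference, U-Boot's test/py framework is pytest based; a trivial test in that framework looks roughly like the following (the hook-up to our board-farm tool is the part still under proof of concept):

# Minimal U-Boot test/py style test: the u_boot_console fixture is provided
# by U-Boot's test/py framework and runs a command at the U-Boot prompt.
def test_version(u_boot_console):
    response = u_boot_console.run_command("version")
    assert "U-Boot" in response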

Regards,
Punnaiah
> 
> == Overview ==
> Please list the major components of your test system.
> 
> {{{Just as an example, Fuego can probably be divided into 3 main 
> parts, with somewhat overlapping roles:
> * Jenkins - job triggers, test scheduling, visualization, notification
> * Core - test management (test build, test deploy, test execution, log
> retrieval)
> * Parser - log conversion to unified format, artifact storage, results 
> analysis There are lots details omitted, but you get the idea.
> }}}
> 
> Please list your major components here:
> *
> 
> == Glossary ==
> Here is a glossary of terms.  Please indicate if your system uses 
> different terms for these concepts.
> Also, please suggest any terms or concepts that are missing.
> 
> * Bisection - automatic testing of SUT variations to find the source 
> of a problem
> * Boot - to start the DUT from an off state
> * Build artifact - item created during build of the software under 
> test
> * Build manager (build server) - a machine that performs builds of the 
> software under test
> * Dependency - indicates a pre-requisite that must be filled in order 
> for a test to run (e.g. must have root access, must have 100 meg of 
> memory, some program must be installed, etc.)
> * Device under test (DUT) - the hardware or product being tested 
> (consists of hardware under test and software under test) (also 
> 'board', 'target')
> * Deploy - put the test program or SUT on the DUT
> ** this one is ambiguous - some people use this to refer to SUT 
> installation, and others to test installation
> * Device under Test (DUT) - a product, board or device that is being 
> tested
> * DUT controller - program and hardware for controlling a DUT (reboot, 
> provision, etc.)
> * DUT scheduler - program for managing access to a DUT (take 
> online/offline, make available for interactive use)
> ** This is not shown in the CI Loop diagram - it could be the same as 
> the Test Scheduler
> * Lab - a collection of resources for testing one or more DUTs (also 
> 'board
> farm')
> * Log - one of the run artifacts - output from the test program or 
> test framework
> * Log Parsing - extracting information from a log into a 
> machine-processable format (possibly into a common format)
> * Monitor - a program or process to watch some attribute (e.g. power) 
> while the test is running
> ** This can be on or off the DUT.
> * Notification - communication based on results of test (triggered by 
> results and including results)
> * Pass criteria - set of constraints indicating pass/fail conditions 
> for a test
> * Provision (verb) - arrange the DUT and the lab environment 
> (including other external hardware) for a test
> ** This may include installing the SUT to the device under test and 
> booting the DUT.
> * Report generation - generation of run data into a formatted output
> * Request (noun) - a request to execute a test
> * Result - the status indicated by a test - pass/fail (or something
> else) for a Run
> * Results query - Selection and filtering of data from runs, to find 
> patterns
> * Run (noun) - an execution instance of a test (in Jenkins, a build)
> * Run artifact - item created during a run of the test program
> * Serial console - the Linux console connected over a serial 
> connection
> * Software under test (SUT) - the software being tested
> * Test agent - software running on the DUT that assists in test 
> operations (e.g. test deployment, execution, log gathering, debugging
> ** One example would be 'adb', for Android-based systems)
> * Test definition - meta-data and software that comprise a particular 
> test
> * Test program - a script or binary on the DUT that performs the test
> * Test scheduler - program for scheduling tests (selecting a DUT for a 
> test, reserving it, releasing it)
> * Test software - source and/or binary that implements the test
> * Transport (noun) - the method of communicating and transferring data 
> between the test system and the DUT
> * Trigger (noun) - an event that causes the CI loop to start
> * Variant - arguments or data that affect the execution and output of 
> a test (e.g. test program command line; Fuego calls this a 'spec')
> * Visualization - allowing the viewing of test artifacts, in 
> aggregated form (e.g. multiple runs plotted in a single diagram)
> 
> Thank you so much for your assistance in answering this survey!
> 
> Regards,
>  -- Tim


