[Automated-testing] Test Stack Survey

Tim.Bird at sony.com Tim.Bird at sony.com
Thu Oct 11 15:52:44 PDT 2018



> -----Original Message-----
> From: Richard Purdie 
>
> Sorry this is late, I've completed this as best I can for the Yocto
> Project. The system is fairly open ended and configurable and I've
> tried to be as clear as I can be. I tried to put wiki markup in ready
> for adding to the wiki but it probably needs tweaking.

Thanks for the response.  I can't tell you how happy I was to see
it in MediaWiki format!  Thanks for that.

I put your survey response at:
https://elinux.org/Yocto_project_survey_response

> 
> I'm happy to answer any questions or elaborate anywhere needed. I've
> tried to put links in too.

I haven't had time to digest it all, but there's much more here than I was
expecting.  There is some interesting material that will be useful
to consider moving forward.  If I have additional questions, I'll ask.

Thanks,
 -- Tim

> 
> = Yocto Project Testing survey response =
> Yocto project survey response provided by Richard Purdie and Tim Orling
> 
> = Overview of Yocto Project Testing Structure/Terminology =
> 
> Tests are performed on several levels:
> * '''oe-selftest''' tests the inner workings of the Yocto Project and
> OpenEmbedded build environment
> * '''bitbake-selftest''' tests the inner workings of the BitBake environment
> * Build time testing (are generated binaries the right architecture?)
> * '''imagetest''' or '''oeqa/runtime''' tests that images boot and have the
> expected functionality (if python is present, does it work? does the toolchain
> build applications?). Targets can be virtual or real hardware, either a static
> target or under the control of e.g. LAVA (a sketch of such a test case follows this list)
> * '''oeqa/sdktest''' and '''oeqa/esdktest''' test the SDK and eSDK toolchains (do
> the compilers work?)
> * '''ptest''' (package test) enables test suites from individual pieces of
> software to run
> * Build time performance testing
> * '''Yocto Project autobuilder''' - plugin for buildbot which runs our test matrix
> (build and runtime, all of the above)
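> 
> As a rough illustration of the '''imagetest'''/'''oeqa/runtime''' level, a test case is
> an ordinary python unittest class run against the booted target. A minimal sketch
> (module paths and helper names here may differ between releases, so treat it as
> indicative rather than exact):
> 
>     from oeqa.runtime.case import OERuntimeTestCase
> 
>     class PythonSanityTest(OERuntimeTestCase):
>         """Check python on the image actually runs, not just that it is installed."""
> 
>         def test_python_prints(self):
>             # self.target wraps the DUT (QEMU or real hardware)
>             status, output = self.target.run('python3 -c "print(21 * 2)"')
>             self.assertEqual(status, 0, msg='python3 failed: %s' % output)
>             self.assertEqual(output.strip(), '42', msg='unexpected output: %s' % output)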
> 
> == Survey Questions ==
> * What is the name of your test framework? '''Yocto Project or OEQA'''
> 
> * Which of the aspects of the CI loop below does your test framework
> perform? '''All except “Code Review” and “Lab/Board Farm”'''
> 
> Does your test framework:
> ==== source code access ====
> * access source code repositories for the software under test? '''Yes (bitbake
> fetcher)'''
> 
> * access source code repositories for the test software? '''Yes'''
> 
> * include the source for the test software? '''Yes (although ptest source is
> often brought in from upstream)'''
> 
> * provide interfaces for developers to perform code reviews? '''No, this is
> done via patches sent to mailing lists'''
> 
> * detect that the software under test has a new version? '''Partially. Auto
> Upgrade Helper (AUH) is available to test for new upstream versions. Can be
> configured to pull the latest source, e.g. from a git repo (AUTOREV).'''
> 
> ** if so, how? (e.g. polling a repository, a git hook, scanning a mail list, etc.)
> '''Checks an upstream URL for a regex pattern (AUH) or scans source control
> for latest code at build time (AUTOREV).'''
> 
> * detect that the test software has a new version? '''Partially. Test software
> comes from the upstream metadata repos just like other source, or from the
> software itself (ptest)'''
> 
> ==== test definitions ====
> Does your test system:
> * have a test definition repository? '''Yes'''
> 
> ** if so, what data format or language is used (e.g. yaml, json, shell script)
> '''Mostly written in python unittest'''
> '''ptest varies by the package under test, often shell script with an interface
> script to convert to our standardised ptest output format'''
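> 
> For reference, the standardised ptest output is a simple line-per-result text
> format ('''PASS:''', '''FAIL:''' or '''SKIP:''' followed by the test name, as described on
> the Ptest wiki page linked further down). The interface scripts are usually shell,
> but a hypothetical python wrapper showing the same idea (the names here are made up):
> 
>     import subprocess
> 
>     def run_as_ptest(name, cmd):
>         # Run one upstream test and translate its exit status into the
>         # standardised ptest line format.
>         result = subprocess.run(cmd)
>         print('%s: %s' % ('PASS' if result.returncode == 0 else 'FAIL', name))
> 
>     run_as_ptest('example-suite', ['./run-example-suite'])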
> 
> Does your test definition include:
> * source code (or source code location)? '''in the original build metadata'''
> 
> * dependency information? '''Yes. ptest packages are normal rpm/deb/ipk
> packages with dependencies.'''
> 
> * execution instructions? '''Yes'''
> 
> * command line variants? '''Yes (in the execution instructions)'''
> 
> * environment variants? '''Yes (in the execution instructions)'''
> 
> * setup instructions? '''Yes (in the execution instructions or as unittest setup
> methods)'''
> 
> * cleanup instructions? '''Yes  (in the execution instructions or as unittest
> cleanup  methods)'''
> 
> ** if anything else, please describe:
> 
> Does your test system:
> * provide a set of existing tests? '''Yes'''
> 
> ** if so, how many?
> 
> '''bitbake-selftest - 353 testcases'''
> '''oe-selftest - 332 testcases'''
> '''Imagetest - 49 testcases'''
> '''SDK tests - 9 testcases'''
> '''eSDK tests - 16 testcases'''
> '''ptest - 64 recipes have ptest packages in OE-Core (Note one ptest may
> encapsulate all of LTP)'''
> 
> ==== build management ====
> Does your test system:
> * build the software under test (e.g. the kernel)? '''It can, but build and test are
> two separate phases, each independent of the other.'''
> 
> * build the test software?
> 
> '''oe-selftest and oeqa are mostly python (interpreted) so N/A'''
> '''ptest involves building the test runner script and tests into packages and
> then potentially including them in an image for testing'''
> 
> * build other software (such as the distro, libraries, firmware)? '''Yes. Yocto
> Project is a complete build system/environment.'''
> 
> * support cross-compilation? '''Yes'''
> 
> * require a toolchain or build system for the SUT? '''No.'''
> 
> * require a toolchain or build system for the test software? '''No.'''
> 
> * come with pre-built toolchains? '''It can be configured to use prebuilt
> toolchains or prebuilt objects from sstate.'''
> 
> * store the build artifacts for generated software? '''Yes, as packages, images
> or as “sstate” (the Yocto Project's shared state format of pre-built objects)'''
> 
> ** in what format is the build metadata stored (e.g. json)? '''bitbake recipes
> (.bb files)'''
> 
> ** are the build artifacts stored as raw files or in a database? '''Either raw
> files, packages or sstate (tarball of files)'''
> 
> *** if a database, what database?
> 
> ==== Test scheduling/management ====
> 
> '''OEQA will either bring up a virtual QEMU machine for testing (in which case
> it handles everything), assume that it is free to use a machine at a given IP
> address (with custom hooks for provisioning/control), or rely on a third party
> system (e.g. LAVA) for provisioning/control.'''
> 
> Does your test system:
> * check that dependencies are met before a test is run? '''Yes'''
> 
> * schedule the test for the DUT? '''Only through a third party system'''
> 
> ** select an appropriate individual DUT based on SUT or test attributes?
> '''Can select mechanism based on target machine type (QEMU or real
> board)'''
> 
> ** reserve the DUT? '''Only through a third party system'''
> 
> ** release the DUT? '''Only through a third party system'''
> 
> * install the software under test to the DUT? '''Only through a third party
> system'''
> 
> * install required packages before a test is run? '''Yes'''
> 
> * require particular bootloader on the DUT? (e.g. grub, uboot, etc.) '''Only
> through a third party system'''
> 
> * deploy the test program to the DUT? '''Yes'''
> 
> * prepare the test environment on the DUT? '''Yes'''
> 
> * start a monitor (another process to collect data) on the DUT? '''It could'''
> 
> * start a monitor on external equipment? '''Only through a third party
> system'''
> 
> * initiate the test on the DUT? '''Yes'''
> 
> * clean up the test environment on the DUT? '''Yes'''
> 
> ==== DUT control ====
> 
> '''Handled through any third party system'''
> 
> ==== Run artifact handling ====
> Does your test system:
> * store run artifacts '''Yes'''
> 
> ** in what format? '''Text log files'''
> 
> * put the run meta-data in a database? '''No'''
> 
> ** if so, which database?
> * parse the test logs for results? '''Yes'''
> 
> * convert data from test logs into a unified format?
> ** if so, what is the format?
> 
> '''Aiming for json files. Currently test results are logged into Testopia but it is
> being replaced by a simpler mechanism using a git repository.'''
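> 
> The json schema is still being settled, so purely as an illustration (not the actual
> format), a per-run results file written by the test runner could look something
> like this hypothetical sketch:
> 
>     import json
> 
>     # Hypothetical structure only - the real schema is still being defined.
>     results = {
>         'configuration': {'MACHINE': 'qemux86-64', 'DISTRO': 'poky'},
>         'result': {
>             'python.PythonSanityTest.test_python_prints': {'status': 'PASS'},
>             'parselogs.ParseLogsTest.test_parselogs': {'status': 'FAIL'},
>         },
>     }
> 
>     with open('testresults.json', 'w') as f:
>         json.dump(results, f, indent=4, sort_keys=True)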
> 
> * evaluate pass criteria for a test (e.g. ignored results, counts or thresholds)?
> '''Yes'''
> 
> * do you have a common set of result names: (e.g. pass, fail, skip, etc.)
> '''Yes'''
> ** if so, what are they? '''Pass, Fail, Skip and Error (error means the testcase
> broke somehow)'''
> 
> * How is run data collected from the DUT?
> ** e.g. by pushing from the DUT, or pulling from a server?
> 
> '''Tests are run via ssh and the output logged, or log files transferred off the
> device using scp.'''
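> 
> Illustratively (the real code lives in oeqa and also handles timeouts, retries and
> serial fallbacks; the address below is made up), collection amounts to something
> like:
> 
>     import subprocess
> 
>     DUT = 'root@192.168.7.2'   # hypothetical address of the device under test
> 
>     def run_on_dut(cmd, logfile):
>         # Run a command over ssh and append its output to a local log file
>         proc = subprocess.run(['ssh', DUT, cmd],
>                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
>                               universal_newlines=True)
>         with open(logfile, 'a') as log:
>             log.write(proc.stdout)
>         return proc.returncode
> 
>     def fetch_log(remote_path, local_path):
>         # Pull a log file off the device with scp
>         subprocess.run(['scp', '%s:%s' % (DUT, remote_path), local_path], check=True)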
> 
> * How is run data collected from external equipment? '''N/A'''
> 
> * Is external equipment data parsed? '''N/A'''
> 
> ==== User interface ====
> Does your test system:
> * have a visualization system?
> 
> '''Buildbot provides our high level build/test status
> (https://autobuilder.yoctoproject.org/typhoon/#/console)'''
> '''We have graphical HTML emails of our build performance tests'''
> 
> * show build artifacts to users?
> 
> '''Yes'''
> 
> * show run artifacts to users?
> 
> '''Some of them'''
> 
> * do you have a common set of result colors?
> ** if so, what are they?
> 
> '''Green - All ok'''
> '''Orange - Ok, but there were warnings'''
> '''Red - There was some kind of failure/error'''
> '''Yellow - In progress'''
> 
> * generate reports for test runs? '''Yes'''
> 
> * notify users of test results by e-mail? '''It can.'''
> 
> * can you query (aggregate and filter) the build meta-data? '''No'''
> 
> * can you query (aggregate and filter) the run meta-data? '''No, but you can
> query failures (http://errors.yoctoproject.org - our own error database
> system)'''
> 
> * what language or data format is used for online results presentation? (e.g.
> HTML, Javascript, xml, etc.) '''HTML'''
> 
> * what language or data format is used for reports? (e.g. PDF, excel, etc.)
> '''Aiming for HTML emails and json'''
> 
> * does your test system have a CLI control tool? '''Yes'''
> 
> ** what is it called?
> 
> '''bitbake, oe-test, oe-selftest, bitbake-selftest'''
> 
> ==== Languages: ====
> Examples: json, python, yaml, C, javascript, etc.
> * what is the base language of your test framework core? '''python'''
> 
> What languages or data formats is the user required to learn?
> (as opposed to those used internally) '''Python, json'''
> 
> ==== Can a user do the following with your test framework: ====
> * manually request that a test be executed (independent of a CI trigger)?
> '''Yes'''
> 
> * see the results of recent tests? '''Yes'''
> 
> * set the pass criteria for a test? '''No'''
> 
> ** set the threshold value for a benchmark test? '''No'''
> ** set the list of testcase results to ignore? '''No'''
> * provide a rating for a test? (e.g. give it 4 stars out of 5) '''No'''
> * customize a test? '''Yes'''
> ** alter the command line for the test program? '''Yes'''
> ** alter the environment of the test program? '''Yes'''
> ** specify to skip a testcase? '''Yes'''
> ** set a new expected value for a test? '''Yes'''
> ** edit the test program source? '''Yes'''
> * customize the notification criteria? '''Yes'''
> ** customize the notification mechanism (eg. e-mail, text) '''Yes'''
> * generate a custom report for a set of runs? '''No'''
> * save the report parameters to generate the same report in the future?
> '''Planned through the json output files'''
> 
> ==== Requirements ====
> Does your test framework:
> * require minimum software on the DUT?
> * require minimum hardware on the DUT (e.g. memory)
> 
> '''Network + ssh for many tests, but some only need serial console access; this is
> really defined by the testcases and the way hardware is connected (e.g.
> LAVA).'''
> 
> ** If so, what? (e.g. POSIX shell or some other interpreter, specific libraries,
> command line tools, etc.)
> 
> '''Entirely test case dependent. A basic Linux system is assumed for many tests
> (busybox shell and C library) but the system has tested RTOS images over a serial
> connection before.'''
> 
> * require agent software on the DUT? (e.g. extra software besides
> production software)
> ** If so, what agent? '''No agent required'''
> 
> * is there optional agent software or libraries for the DUT? '''Eclipse plugin
> uses tcf-agent for development'''
> 
> * require external hardware in your labs? '''Dependent on the hardware
> interface used (e.g. LAVA)'''
> 
> ==== APIs ====
> Does your test framework:
> * use existing APIs or data formats to interact within itself, or with 3rd-party
> modules?
> 
> '''Yes, python unittest with extensions for internal and external use'''
> '''Yocto Project defined ptest results format.'''
> 
> * have a published API for any of its sub-module interactions (any of the lines
> in the diagram)?
> ** Please provide a link or links to the APIs?
> 
> '''https://wiki.yoctoproject.org/wiki/Ptest - See “What constitutes a
> ptest?” for standardised output definition'''
> 
> Sorry - this is kind of open-ended...
> * What is the nature of the APIs you currently use?
> Are they:
> ** RPCs?
> ** Unix-style? (command line invocation, while grabbing sub-tool output)
> ** compiled libraries?
> ** interpreter modules or libraries?
> ** web-based APIs?
> ** something else?
> 
> '''Python modules based around unittest along with standardised formats for
> output/logs (e.g. ptest output)'''
> 
> ==== Relationship to other software: ====
> * what major components does your test framework use (e.g. Jenkins,
> MongoDB, Squad, Lava, etc.) '''Buildbot'''
> 
> * does your test framework interoperate with other test frameworks or
> software?
> ** which ones? '''Could use any other framework to control the hardware
> (e.g. LAVA)'''
> 
> == Overview ==
> Please list the major components of your test system.
> 
> Please list your major components here:
> 
> An overview of the testing that happens within the Yocto Project follows.
> 
> Our testing is orchestrated by a custom plugin to Buildbot:
> yocto-autobuilder2 - http://git.yoctoproject.org/cgit.cgi/yocto-autobuilder2
> 
> This loads the test matrix configuration and some helper scripts from
> yocto-autobuilder-helper:
> yocto-autobuilder-helper - http://git.yoctoproject.org/cgit.cgi/yocto-autobuilder-helper
> which stores the test matrix configuration in a json file.
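> 
> The exact layout of that json file is defined by yocto-autobuilder-helper itself;
> purely as a hypothetical illustration of the idea (target names mapping to build
> and test steps - not the real schema), it is along the lines of:
> 
>     # Hypothetical sketch only - see yocto-autobuilder-helper for the real schema.
>     test_matrix = {
>         'nightly-arm': {
>             'MACHINE': 'qemuarm',
>             'steps': ['build-image', 'run-oeqa-runtime', 'run-ptests'],
>         },
>         'oe-selftest': {
>             'steps': ['run-oe-selftest'],
>         },
>     }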
> 
> The web console interface can be seen at
> https://autobuilder.yoctoproject.org/typhoon/#/console
> 
> There are 35 different ‘targets’ such as nightly-arm, eclipse-plugin-neon, and
> oe-selftest. These:
> * Build images, then run the oeqa ‘runtime’ image tests under qemu
> (including any ptests installed in the image)
> * Build SDK/eSDKs and then run the corresponding tests
> * Trigger bitbake-selftest and oe-selftest to execute
> * Build the eclipse plugins
> * Cover many different architecture and configuration settings (init systems,
> kernel version, C library etc.)
> 
> Builds can be marked as release builds and if they are, artefacts are published
> on a webserver and an email is sent to interested parties who can perform
> further QA. This may be done with a further buildbot instance which
> interfaces to real hardware through a LAVA plugin (Intel does this). There are
> some tests we haven't automated yet which are run manually by QA; we
> recently agreed to document these in a custom json format in tree alongside
> our other tests. All the tests can be seen at:
> http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa (see runtime/cases or
> manual)
> 
> In parallel we have dedicated machines which perform build time
> performance analysis and email the results to a mailing list:
> https://lists.yoctoproject.org/pipermail/yocto-perf/
> 
> 


