[yocto] Discussion: Package testing

Frans Meulenbroeks fransmeulenbroeks at gmail.com
Thu Jun 14 12:54:58 PDT 2012


Bjorn, these are some interesting ideas.
Some feedback below.
This is from the perspective of someone who develops for embedded system
boards.

2012/6/14 Björn Stenberg <bjst at enea.com>

> Hi.
>
> Many source packages include their own package test suites. We are looking
> at running these tests on target in a structured way, and I would like a
> discussion about how to best fit this into the Yocto framework.
>
> The actions that need to be performed for a package test are roughly:
>
> 1) Build the test suite
> 2) Make the test suite appear on target
> 3) Run the test suite
> 4) Parse the results
>
> Each action can be done in several ways, and there are different
> considerations for each solution.
>
> 1) Build the test suite
> -----------------------
> Many package tests are simply bash scripts that run the packaged binaries
> in various ways, but often the test suites also include binary tools that
> are not part of the normal package build. These tools have to be built for
> package testing to work.
>
> Additionally, many packages build and run the test in a single command,
> such as "make check", which is obviously unsuitable for cross-compiled
> packages.
>
> We can solve this in different ways:
>
> a) Run the test build+run jobs on target. This avoids the need to modify
> packages, but building code on target can get quite expensive in terms of
> disk space. This in turn means many tests would require a hard disk or
> network disk to run.
>
Not only storage space can be an issue: some targets also have limited
RAM and are low on CPU power.


> b) Patch the makefiles to split test building and test running. Patching
> makefiles mean we get an additional maintenance and/or upstreaming burden,
> but we should be able to do this in ways that are acceptable to upstream.
> This is our suggestion.
>
This seems the most flexible approach. I'd say this could generate
additional -test packages.
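
For example, a makefile patch could add a build-only target next to the
usual "make check", so cross-building the tests and running them become
separate steps (the "buildtest" target and the paths below are purely
illustrative, not an existing convention):

    # on the build host: cross-compile the test binaries without
    # running them
    make buildtest CC=arm-poky-linux-gnueabi-gcc

    # on the target: the run half is then just executing whatever the
    # -test package installed
    cd /opt/ptest/foo-1.0 && ./run-tests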

>
>
> 2) Make the test suite appear on target
> ---------------------------------------
> The test suite and utilities obviously have to be executable by the target
> system in order to run. There are a few options for this:
>
> a) Copy all test files to the target at test run time, from the build dir,
> using whatever means available (scp, ftp etc). This limits testing to
> targets/images with easy automatic file transfer abilities installed.
>
It also somewhat couples the building and the copying (or an additional
command or step would be needed).

>
> b) NFS mount the build dir to access the full work dir and hence the test
> code. This limits testing to targets (and images) with network+nfs support.
> Plus it blends the build env and runtime env in an unpleasant way.
>
> c) Create ${PN}-ptest packages (in the model of -dev and -dbg) that
> contain all test files and add those to the image and hence the rootfs.
> This is our suggestion.
>

Agreed; then one can decide whether to add them to an image, install
them from a package feed, or use whatever other means are available.
In our case some of our systems do not have network support.
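
With such -ptest packages, getting the tests onto a networked target
could be as simple as (package name illustrative):

    opkg update
    opkg install foo-ptest   # test scripts plus any helper binaries

while targets without network can have the same package added to the
image at build time.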

>
>
> 3) Run the test suite
> ---------------------
> Depending on how the test files are presented to the target, the way we
> run them can take different shapes:
>
> a) (scp) A top-level run-all-tests.sh runs on the build host, copies all
> test files from build dir to target, logs in and runs each test.
>
> b) (nfs) run-all-tests.sh is executed in the nfs-mounted build dir on
> target and build-runs each test in its work dir.
>
> c) (-ptest) Install all test files to /opt/ptest/${PN}-${PV} (for
> example). Make a package "ptest-runner" that has a script
> /opt/ptest/run-all-tests to iterate over all installed tests and run them.
> This is our suggestion.
>

Seems good to me. A run-all-tests.sh script is somewhat problematic, as
different systems have different packages installed and therefore
different tests. E.g. it is not too interesting to run C++ tests if your
target system does not have/need C++.
That makes creating a run-all-tests.sh script somewhat nastier (of
course it could be conditional, e.g. test -f ./testA && ./testA).
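
Iterating over whatever is actually installed would avoid hardcoding
such conditionals. A minimal sketch of the proposed
/opt/ptest/run-all-tests (the per-package "run-ptest" entry point name
is just an assumption here):

    #!/bin/sh
    # run every test suite that is actually present on this image
    for dir in /opt/ptest/*/; do
        [ -x "${dir}run-ptest" ] || continue
        echo "BEGIN: $dir"
        ( cd "$dir" && ./run-ptest )
        echo "END: $dir (exit $?)"
    done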

>
>
> 4) Parse the test results
> -------------------------
> Just running the tests doesn't give us much. We have to be able to look at
> the results and make them meaningful to us on a system-global level.
> Packages present their test results in very different ways, and we need to
> convert that to a generic format:
>
> a) Patch each package test to produce a generic ptest output format. This
> is likely difficult to get accepted upstream.
>
> b) Patch the test code minimally and instead use a simple per-package
> translate script that converts test suite output from the package-specific
> format to a generic ptest format. This is our suggestion.
>
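
A translate script like that can often be very small; e.g. for a suite
with TAP-style "ok"/"not ok" output (output format assumed here for
illustration):

    #!/bin/sh
    # rewrite the suite's native output into generic PASS:/FAIL: lines
    sed -e 's/^ok \(.*\)/PASS: \1/' \
        -e 's/^not ok \(.*\)/FAIL: \1/'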

I suspect most tests would indicate success/failure by their return
code. Maybe the test runner could use that return code. Optionally the
recipe could add a way to specify how to measure success (e.g. by means
of a TEST_SUCCESS variable in the recipe that contains a command or
shell script which returns true on success and false otherwise).
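
A sketch of how the runner could combine the two (the TEST_SUCCESS
handling and all names here are illustrative):

    #!/bin/sh
    # run one package's tests and emit a generic PASS/FAIL line
    pkg=$(basename "$PWD")
    ./run-tests > test.log 2>&1
    status=$?
    if [ -n "$TEST_SUCCESS" ]; then
        # recipe-provided success check, e.g. a grep over the log
        sh -c "$TEST_SUCCESS" && status=0 || status=1
    fi
    if [ "$status" -eq 0 ]; then
        echo "PASS: $pkg"
    else
        echo "FAIL: $pkg (see $PWD/test.log)"
    fi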

I think testers are mainly interested in overall success: if all tests
report ok, that is good. If a test fails, it would be fine with me if I
had to go to a package-specific place to find the log files etc. (or
they could be written to a common log file).

>
>
> Opinions?
>

Good proposal!
The only specific point I can think of is cleaning up after a test is run.
If you are low on resources such as storage space, and a test creates
large output files and does not delete them, subsequent tests might run
out of storage.
So (maybe as an option) a test should contain (or be supplemented with)
code to remove all its output (maybe only on success).
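
For instance, a wrapper around each suite could keep output only when
something went wrong (paths illustrative):

    #!/bin/sh
    # clean up a suite's scratch output on success, keep it around
    # for inspection on failure
    ./run-ptest
    status=$?
    [ "$status" -eq 0 ] && rm -rf ./output ./*.log
    exit $status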

Hope this helps.
Frans

>
> Note that this mail is about the test suites for/in/by specific packages.
> Standalone test suites such as LTP are a slightly different topic, since
> they are separate packages (${BPN} == ${PN}) rather than package companion
> suites.
>
> --
> Björn