[Automated-testing] ELC followup / Search for a testing framework for small distribution tests on device

Tim.Bird at sony.com Tim.Bird at sony.com
Wed Nov 13 06:02:11 PST 2019



> -----Original Message-----
> From: Westermann, Oliver
> Hey everyone,
> 
> I recently joined the ELC in Lyon and had a good chat with a few people who
> may read this list. Only during ELC did I learn about the Automated Testing
> Summit, and if I had not already planned the following days, I would probably
> have joined. During the interesting talks and some networking I also learned
> about this list, so I now have to ask a bunch of questions in the hope of
> answers ;-)
> 
> We (as in the company I work for) moved to Linux for our embedded devices
> quite some time ago, but our testing hasn't. We have a test system for our
> full product, but due to the nature of the product ("cameras with
> intelligence"), there are a lot of implied requirements on the state of the
> device, which makes testing expensive (in terms of device and setup time)
> and results in only one, but very thorough, test loop per day.
> 
> We're now trying to check our options for setting up a smaller on-device
> test stage before this setup. We would like to test features we would consider
> "distribution features": network connectivity, IO features, updates and
> more. Most devs have their own small scripts to do one thing or another, but
> we would prefer to distribute something that allows us to share all this and
> run it as part of the CI chain.
> 
> So I've looked into some of the pages linked from the Automated_Testing page
> of the elinux wiki and dove into a few of the frameworks, but actually I'm
> even more confused by LAVA, Fuego and all the other tools than before. We
> already have a build system, and we already have a CI system controlling that
> build system. So I'm writing to you in an attempt to get a feeling for what's
> there and where we can contribute, and I will try to list some of our goals
> and ideas so maybe someone can throw me some good links (and hopefully, we can
> so maybe someone can throw me some good links (and hopefully, we can

Let me describe what Fuego does, and where it might apply here (and where not).

Fuego is designed to be a host/target embedded test system.  It comes with
a suite of existing tests.  It also comes with a Jenkins system for controlling
everything.  However, tests can also be invoked from the command line,
independently of Jenkins.  This was done specifically to allow integration with
other CI frameworks.
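As a rough illustration of what that looks like, an external CI step can call
Fuego's 'ftc' command-line tool directly.  The Python sketch below just shells
out to ftc; the board name and test name are placeholders for whatever is
defined in your Fuego installation:

    # Rough sketch: driving a single Fuego test from an external CI step via
    # the 'ftc' command-line tool.  "myboard" and the test name are
    # placeholders; adjust them to match your Fuego setup.
    import subprocess
    import sys

    def run_fuego_test(board, test):
        """Run one Fuego test on the given board; return True on success."""
        result = subprocess.run(
            ["ftc", "run-test", "-b", board, "-t", test],
            capture_output=True, text=True)
        print(result.stdout)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        ok = run_fuego_test("myboard", "Functional.hello_world")
        sys.exit(0 if ok else 1)

Jenkins is still handy for visualizing results, but nothing in the test
execution itself requires it.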

Fuego does not currently do board provisioning or boot-time tests,
although some users of Fuego have their own Jenkins jobs to perform
those steps.

Fuego has about 150 tests, covering things like the filesystem, networking,
and individual Linux tools and libraries.  These are often built from
source, so a Fuego user has to provide a toolchain (SDK or build environment)
that works with their board.

> What we would like:
> We would like to run simple tests (execute, e.g., Python & bash scripts) and
> validate the outputs
Fuego runs tests from the host (putting all test materials on the target prior
to each test, and removing them when the test completes).
Each test in Fuego has a built-in parser that evaluates the output and converts
the results into a standardized format (e.g. pass/fail).  Benchmark tests
have a criteria file that is used to evaluate the pass/fail status of the
benchmark results.  Functional tests can have criteria files that allow
failures of specific testcases to be ignored.
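To give a feel for the mechanism (this is just a conceptual sketch, not
Fuego's actual parser or criteria file format): the parser turns raw test
output into per-testcase results, and the criteria then decide which failures
count against the overall verdict.

    # Conceptual sketch only -- not Fuego's real parser or criteria format.
    # It shows the general idea: convert raw test output into per-testcase
    # pass/fail results, then apply a list of failures that may be ignored.
    import re

    RESULT_RE = re.compile(r"^TEST\s+(\S+)\s+(PASS|FAIL)$")

    def parse_output(raw_output):
        """Map testcase name -> 'PASS'/'FAIL' from lines like 'TEST foo PASS'."""
        results = {}
        for line in raw_output.splitlines():
            m = RESULT_RE.match(line.strip())
            if m:
                results[m.group(1)] = m.group(2)
        return results

    def evaluate(results, ignored_failures=()):
        """Overall pass if every non-ignored testcase passed."""
        return all(status == "PASS" or name in ignored_failures
                   for name, status in results.items())

    raw = """TEST ping_gateway PASS
    TEST dhcp_renew FAIL
    TEST dns_lookup PASS"""
    results = parse_output(raw)
    # criteria-style exception: dhcp_renew is known-flaky on this board
    print(evaluate(results, ignored_failures={"dhcp_renew"}))  # True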

> We would like to bring devices into a certain state, e.g. by an upgrade (we
> use something with swupdate beneath)
Fuego does not have support or examples for this currently.
This sounds like something labgrid or LAVA could do.
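I'm not the right person to speak for labgrid, but for reference, labgrid
checks are usually written as pytest tests against its fixtures.  Something
roughly like the sketch below (details are assumptions on my part; getting the
device into the desired state, e.g. running swupdate, would be a separate step
before a check like this runs):

    # Rough sketch of a labgrid-style pytest check
    # (run with: pytest --lg-env env.yaml).
    # Assumes the labgrid environment config gives the test a shell on the
    # target board via a driver implementing CommandProtocol.
    from labgrid.protocol import CommandProtocol

    def test_expected_image_version(target):
        # 'target' is a fixture provided by labgrid's pytest plugin
        command = target.get_driver(CommandProtocol)
        stdout = command.run_check("cat /etc/os-release")
        assert any(line.startswith("VERSION_ID=") for line in stdout)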

IMHO, Fuego's strengths are in generic distro testing and its existing test suite.

I hope this is helpful.

Regards,
 -- Tim
