[Automated-testing] test definitions shared library

Tim.Bird at sony.com
Tue Jul 16 21:10:36 PDT 2019


> -----Original Message-----
> From: Dan Rue
> 
> Hi Daniel!
> 
> On Fri, Jul 12, 2019 at 05:05:10AM +0000, daniel.sangorrin at toshiba.co.jp
> wrote:
> > Finally, an alternative would be to use Go static binaries for the
> > parsing phase. Go would work in Fuego (on the host side) and on Linaro
> > (on the target side) unless you are using an OS or architecture not
> > supported by Go [12].
> 
> I just wanted to comment on this suggestion, because it is an experiment
> we have performed.
> 
> You may have noticed a binary named 'skipgen' checked into
> test-definitions. This was a compromise that allows us to generate skip
> lists based on a yaml definition. The 'skipgen' binaries parse yaml and,
> based on board, branch, and environment, generate a flat skipfile that
> can be fed into ltp, kselftest, etc. (well actually we don't have a
> solution for implementing skips in the new 5.2+ version of kselftest yet
> :( )

This concept of generated skiplists is interesting.
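
For readers who haven't seen it, I believe the yaml looks something like
the sketch below (the field names and values here are illustrative, reusing
the board/branch/environment from the command example further down; the
authoritative format is whatever skipfile-lkft.yaml in test-definitions
actually uses):

    skiplist:
      - reason: timer tests hang on this board
        url: https://bugs.example.org/12345
        environments: production
        boards:
          - juno-r2
        branches:
          - 5.1
        tests:
          - timer_create01

skipgen then picks out the entries whose board/branch/environment match
the parameters it was given, and emits just the test names.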

> 
> So during a typical LTP LAVA run in LKFT, LAVA will download
> test-definitions and overlay it onto the filesystem, which makes
> skipgen available. Then at runtime, skipgen will be called with, for
> example:
> 
>     ../../bin/arm64/skipgen --board juno-r2 --branch 5.1 --environment production /lava-823218/0/tests/2_ltp-timers-tests/automated/linux/ltp/skipfile-lkft.yaml
> 
> This will generate the list of 'skips' that match those parameters.
> 
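
(Just to make the 'flat skipfile' part concrete: I assume the generated
output is simply one test name per line, something like

    timer_create01
    timer_getoverrun01

which a runner like LTP can then consume directly as its skip list.)
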
> To back up, we originally had flat skip files, but we were getting killed
> by the inflexibility - keeping them in sync, copy-pasting, and so on -
> because it's a multi-dimensional problem.

That's good experience to hear about.  There are a couple of places
in testing where multi-dimensional problems occur.  We have
the same type of issue with our pass-criteria files.

> 
> We've since softened our reliance on skip lists, and now mostly manage
> 'known issues' in SQUAD instead (which are also generated from yaml
> files). At some point we may even be able to rip out the skipgen
> stuff rather than maintain it.

Well, skiplists are handy for tests that actually hang the machine and
mess up the rest of the test pipeline.  But they have a downside: you
usually want to check periodically whether the underlying problem still
exists.  My recommendation, when these types of problems are encountered,
is to put the skipped tests into their own batch, and check them
individually on a separate schedule from the main "these should all
work" tests.
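
For example (purely illustrative - the runner command and batch names
here are made up, not actual Fuego or LAVA syntax), the separate schedule
could be as simple as two cron entries:

    # nightly: the "these should all work" batch
    0 2 * * *   run-test-batch main
    # weekly: the known-hang/known-fail batch, to see if anything got fixed
    0 3 * * 0   run-test-batch known-issues

That way the problematic tests still get exercised, but a hang only costs
you the weekly run instead of blocking every nightly pipeline.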

> 
> Anyway, my impressions of using Go as suggested:
> - It does solve the problem of being able to run things on target
>   without any filesystem requirements!
> - The go binaries aren't tiny, but we use upx to compress them further
>   before checking them into test-definitions, making them ~750KB each.
> - Having a separate repo and build process is kind of expensive in terms
>   of cognitive overhead. Developers have to learn Go a bit to
>   contribute. Setting up a go development environment is its own whole
>   thing.
> - We have hit bugs in go and upx. I would expect issues with more exotic
>   architectures or combinations.
> - Checking binaries into test-definitions is obviously not great,
>   especially since we need one-per-architecture. It would be better to
>   find some better way to do this in LAVA directly rather than abuse the
>   test-definitions repo.

The way they are described, it sounds like the Fuego architecture would place
these on the host, not on the target (that's where we generate skiplists now).
So size and language shouldn't be a problem for us (unless someone is running
Fuego natively on the target).
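
For reference, I assume the per-architecture build-and-compress step Dan
describes is roughly standard Go cross-compilation plus upx, something
like this (the package path and output directories are just illustrative):

    # build a static binary for each target architecture
    GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o bin/arm64/skipgen ./skipgen
    GOOS=linux GOARCH=arm   CGO_ENABLED=0 go build -o bin/arm/skipgen   ./skipgen
    GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o bin/x86_64/skipgen ./skipgen

    # shrink them before checking them into the test-definitions repo
    upx bin/*/skipgen

In Fuego's case, since this would run on the host, building the parser on
demand might sidestep the "binaries checked into the repo" issue Dan
mentions.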

 -- Tim


