[Automated-testing] CKI Test Stack Survey (was RE: CKI project introduction)

Veronika Kabatova vkabatov at redhat.com
Mon Aug 12 02:57:13 PDT 2019



----- Original Message -----
> From: "Tim Bird" <Tim.Bird at sony.com>
> To: vkabatov at redhat.com
> Cc: automated-testing at yoctoproject.org
> Sent: Saturday, August 10, 2019 1:53:12 AM
> Subject: RE: CKI Test Stack Survey (was RE: CKI project introduction)
> 
> > -----Original Message-----
> > From: Veronika Kabatova
> > 
> > ----- Original Message -----
> > > From: "Tim Bird" <Tim.Bird at sony.com>
> > >
> > > Thanks!
> > >
> > > Sorry this took so long for me to process (only 5 months!).
> > >
> > > I've created the following wiki page for the survey response:
> > > https://elinux.org/CKI_survey_response
> > >
> > > And it's linked from the main test stack survey page here:
> > > https://elinux.org/Test_Stack_Survey
> > >
> > 
> > Thanks! A lot of things have changed since I filled out the initial info.
> > I think I already have access to wiki editing, so I'll just go ahead and
> > update the page so I don't have to give you more work :)
> > 
> > 
> > > It's quite interesting to see how CKI compares with other systems.
> > > I'm not sure if it's a one-to-one correspondence, but I get the
> > > impression that CKI/Beaker is comparable in division of
> > > responsibilities to LKFT/LAVA.
> > >
> > > It would be interesting to do a comparison between Beaker and LAVA.
> > > They are quite different in implementation, although at a high level
> > > I think they do similar things.
> > >
> > 
> > I'm not familiar with LAVA beyond "it runs tests", but if you want any
> > info about Beaker, feel free to ask and I'll do my best to answer
> > everything.
> > 
> > > I added an "additional information" section.  Can you take a look and
> > > make sure that what I have there is correct?  For example, I think your
> > > main project web site is at:
> > > https://cki-project.org
> > >
> > > with project source repositories at:
> > > https://github.com/CKI-project
> > >
> > > and I think your open source tests are at:
> > > https://github.com/CKI-project/tests-beaker
> > >
> > > Can you recommend some good example tests from your collection, to
> > > examine to get a feel for your system?
> > >
> > > Specifically, if you have a very simple test, and a "characteristic"
> > > test that you could point to (give the repository URL for each test),
> > > that would help me understand your test definition (which is what I'm
> > > working on now, in preparation for the meetings in Lisbon).
> > >
> > 
> > The tests we have are maintained by kernel QE and the interface they use
> > to talk to Beaker varies a lot from team to team. However, we do actually
> > have a super simple example test -- basically "exit 0" wrapped as a Beaker
> > task [0].
> > 
> > A test that is still very simple but uses more Beaker features (result
> > reporting, log uploads etc.) is [1]. A more complicated one is the LTP Lite
> > wrapper, which includes known-issue detection and pulls code in from a
> > separate include directory.
> 
> OK great.  I added references to these to this page:
> https://elinux.org/Test_definition_survey
> 
> I am only part way through researching and recording the files and fields
> used by CKI tests.  I believe that this is the 'beaker' test format, right?
> I haven't gone through the LTP lite wrapper, but I noticed that there's a
> test for xfstests (under filesystems/xfs/xfstests) that has quite a few
> more files and fields.
> 

You can find the documentation of metadata in the Beaker docs here:

https://beaker-project.org/docs/user-guide/task-metadata.html

People do put in more info than the "required" fields suggest. In some cases
this is a leftover from RHTS (Beaker's predecessor) and copy-pasting, since
Beaker is still compatible with it; in other cases they simply use some
lesser-known features or just go all out in writing the Makefiles.

You don't need to put all the code you use into a single runtest file; you
can have helper scripts in the test directory that you call (which is the
case with the xfstests test you mentioned). You can also use common libraries
that get installed together with the test (from a different place) via the
RhtsRequires clause.


> > 
> > Most test definition information can be found in "Makefile" and "metadata"
> > (if present) files.
> It's not super important, but I have a question about the metadata file.
> I see a file called 'metadata', which has data similar
> to but not identical to the data that is emitted by the makefile
> when making the $METADATA target.  Does the system make
> a temporary $METADATA file using the makefile, and then somehow
> incorporate that data into the file called 'metadata' (which appears
> to have sections - one called General, and another 'restraint')?
> 

$METADATA (testinfo.desc) is for the task information (it can be generated
from the Makefile before the test starts), while the metadata file can
contain both info about the task (the "General" section) and for the test
harness (the "restraint" section). I *think* the metadata file only works
with the restraint harness (there are others, such as the older beah), so
people usually don't use it unless they need restraint-specific
functionality. CKI only uses restraint.

There might be something about RHTS compatibility involved as well, but I
don't know the details.

> > We also have an internal test definition "DB" which we are looking into
> > open sourcing but haven't gotten that far yet. This one contains
> > information about test maintainers, test waiving, which trees and
> 
> By "test waiving", do you mean tests that are skipped, or whose results
> are ignored, or something else?
> 

Yes, the second one. We run the tests but don't know how stable they are on
all the architectures and releases we need them on. So if they fail, we
ignore that failure. The test maintainer is still informed about the failure
and can debug it (and who knows, maybe it was a real bug), but the result of
the test run is marked as "pass".

You can find the waived tests marked with a construction sign in our reports,
like here:

https://lists.linaro.org/pipermail/linux-stable-mirror/2019-March/096629.html


> > architectures the test should be executed on, which modified files in the
> > kernel should trigger the test, etc.
> That's quite interesting.  We've talked before about creating some system
> to allow tests to somehow be associated with specific kernel code, so
> that the test manager could use more specific triggers for tests.  But we
> never actually implemented anything.
> 
> > Some of this information is also present in
> > the data/code of the tests themselves, but not all of it.
> 
> A lot of attributes of a test need to be customizable at a different
> level than the test itself (such as pass criteria that are dependent
> on a specific board, or kernel configuration).  I'm not sure if this is
> what you're talking about, but knowing where to store stuff
> besides with the test itself is a thorny issue.
> 

We don't go that deep here; what you describe is handled in the test itself.
E.g. a test that is not supported in VMs:

https://github.com/CKI-project/tests-beaker/blob/master/cpu/driver/runtest.sh#L122

Or a test that fails because of known bugs on older systems:

https://github.com/CKI-project/tests-beaker/blob/master/storage/blk/runtest.sh#L213

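As a made-up illustration of the pattern (not copied from those tests), such
a check inside runtest.sh looks something like this, where
rstrnt-report-result is restraint's result-reporting command and $TEST and
$OUTPUTFILE come from the harness environment:

    # Skip the test when running in a virtual machine (hypothetical example)
    if [ -n "$(virt-what 2>/dev/null)" ]; then
        echo "Test is not supported in VMs, skipping." | tee -a "$OUTPUTFILE"
        rstrnt-report-result "$TEST" SKIP 0
        exit 0
    fi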

The information I mean is e.g. extra HW requirements for Beaker machines,
filtering out unsupported HW or trees (e.g. a new kernel feature not present
in older RHELs), or passing parameters (env vars) the task needs.


> > 
> > 
> > Let me know if this helps or if you need more info. I'll try to update the
> > info page later today.
> It helps very much.  I saw your edits.
> 
> Thanks,
>  -- Tim
> 
> 

Veronika


