[Automated-testing] CKI Test Stack Survey (was RE: CKI project introduction)

Tim.Bird at sony.com Tim.Bird at sony.com
Fri Aug 9 16:53:12 PDT 2019


> -----Original Message-----
> From: Veronika Kabatova 
> 
> ----- Original Message -----
> > From: "Tim Bird" <Tim.Bird at sony.com>
> >
> > Thanks!
> >
> > Sorry this took so long for me to process (only 5 months!).
> >
> > I've created the following wiki page for the survey response:
> > https://elinux.org/CKI_survey_response
> >
> > And it's linked from the main test stack survey page here:
> > https://elinux.org/Test_Stack_Survey
> >
> 
> Thanks! A lot of things changed since I filled out the initial info. I think
> I already have access to wiki editing so I'll just go ahead and update the
> page so I don't have to give you more work :)
> 
> 
> > It's quite interesting to see how CKI compares with other systems.
> > I'm not sure if it's a one-to-one correspondence, but I get the impression
> > that CKI/Beaker is comparable in division of responsibilities to LKFT/LAVA.
> >
> > It would be interesting to do a comparison between Beaker and LAVA.
> > They are quite different in implementation, although at a high level
> > I think they do similar things.
> >
> 
> I'm not familiar with LAVA except "it runs tests" but if you want any info
> about Beaker feel free to ask and I'll do my best to answer everything.
> 
> > I added an "additional information" section.  Can you take a look and
> > make sure that what I have there is correct?  For example, I think your
> > main project web site is at:
> > https://cki-project.org
> >
> > with project source repositories at:
> > https://github.com/CKI-project
> >
> > and I think your open source tests are at:
> > https://github.com/CKI-project/tests-beaker
> >
> > Can you recommend some good example tests from your collection, to
> > examine to get a feel for your system?
> >
> > Specifically, if you have a very simple test, and a "characteristic" test
> > that you could point to (give the repository URL for each test), that
> > would help me understand your test definition (which is what I'm working
> > on now, in preparation for the meetings in Lisbon).
> >
> 
> The tests we have are maintained by kernel QE, and the interface they use to
> talk to Beaker varies a lot from team to team. However, we do actually have
> a super simple example test -- basically "exit 0" wrapped as a Beaker task [0].
> 
> A test that is still very simple but uses more Beaker features (result
> reporting, log uploads, etc.) is [1]. A more complicated one is the LTP Lite
> wrapper, which includes known-issue detection and spans into a different
> include directory.

OK great.  I added references to these to this page:
https://elinux.org/Test_definition_survey

I am only part way through researching and recording the files and fields used
by CKI tests.  I believe that this is the 'beaker' test format, right?  I haven't
gone through the LTP lite wrapper, but I noticed that there's a test for
xfstests (under filesystems/xfs/xfstests) that has quite a few more files
and fields.

> 
> Most test definition information can be found in "Makefile" and "metadata"
> (if present) files.
It's not super important, but I have a question about the metadata file.
I see a file called 'metadata', which has data similar to, but not
identical to, the data emitted by the Makefile when making the $METADATA
target.  Does the system generate a temporary $METADATA file using the
Makefile, and then somehow incorporate that data into the file called
'metadata' (which appears to have sections -- one called 'General', and
another 'restraint')?
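In case it's useful context for the question: my (possibly outdated)
understanding is that the legacy RHTS tooling generates the task metadata
at build time from the Makefile's $METADATA target, while restraint reads
the checked-in 'metadata' file directly, so the two carry overlapping but
not identical data.  A rough sketch of the two forms, with purely
illustrative names and values:

```
# Legacy rhts-style: the Makefile's $METADATA target emits the task
# description at build time, e.g. (illustrative excerpt):
#
#   $(METADATA): Makefile
#           @echo "Name:        /examples/simple"  > $(METADATA)
#           @echo "Description: trivial pass test" >> $(METADATA)
#           @echo "TestTime:    5m"                >> $(METADATA)
#
# restraint-style: a checked-in 'metadata' file carries equivalent
# information statically (illustrative):

[General]
name=/examples/simple
description=trivial pass test

[restraint]
entry_point=bash ./runtest.sh
max_time=5m
```

If that understanding is right, the 'metadata' file isn't derived from the
Makefile at run time; it's a static restraint-native alternative to the
generated data, which would explain the similar-but-not-identical contents.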

> We also have an internal test definition "DB" which we
> are looking into open sourcing but didn't get that far yet. This one
> contains information about test maintainers, test waiving, which trees and

By "test waiving", do you mean tests that are skipped, or whose results
are ignored, or something else?

> architectures the test should be executed on, which modified files in the
> kernel should trigger the test, etc.
That's quite interesting.  We've talked before about creating a system to
associate tests with specific kernel code, so that the test manager could
use more specific triggers for tests, but we never actually implemented
anything.
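For what it's worth, the core of such a trigger can be sketched in a few
lines of shell: match the changed paths from a commit against per-test
path patterns.  This is purely hypothetical -- the patterns, test names,
and file list below are made up, and it's not necessarily how CKI's
internal DB does it:

```shell
#!/bin/sh
# Hypothetical mapping of kernel source paths to tests that should run.
# Format: "glob-pattern test-name", one rule per line.
rules='
fs/xfs/*        filesystems/xfs/xfstests
mm/*            memory/ltplite
drivers/net/*   networking/basic
'

# In practice the changed files would come from something like:
#   git diff --name-only "$old".."$new"
changed_files="fs/xfs/xfs_inode.c mm/slab.c"

for f in $changed_files; do
    echo "$rules" | while read -r pattern test; do
        [ -n "$pattern" ] || continue
        case $f in
            $pattern) echo "trigger: $test (matched $f)" ;;
        esac
    done
done
# prints:
#   trigger: filesystems/xfs/xfstests (matched fs/xfs/xfs_inode.c)
#   trigger: memory/ltplite (matched mm/slab.c)
```

Note that shell "case" patterns do the glob matching here, so no external
tools are needed; a real implementation would of course want per-tree and
per-architecture dimensions as well.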

> Some of this information is also present in
> the data/code of the tests themselves, but not all of it.

A lot of attributes of a test need to be customizable at a different
level than the test itself (such as pass criteria that depend on a
specific board or kernel configuration).  I'm not sure if this is what
you're talking about, but knowing where to store such information, other
than with the test itself, is a thorny issue.

> 
> 
> Let me know if this helps or if you need more info. I'll try to update the
> info page later today.
It helps very much.  I saw your edits.

Thanks,
 -- Tim


