[Automated-testing] LTP and test metadata

Cyril Hrubis chrubis at suse.cz
Fri Aug 23 02:27:11 PDT 2019


Hi!
> Here are some initial impressions, unrelated to the discussion points between you and Daniel.
> 
> 1) I really like the enumeration in the README.md about the rationale and use cases for this
> work.  This is a good articulation of the purpose of this feature and its possible benefits.
> 
> There are a few typos - do you want me to submit a patch to fix them up, or just point them
> out?

Well, it's in git, so just send a patch or open a pull request :-).

> 2) I think it would be nice to add some more explanation of how the dependencies are expressed
> in the code, along with some examples, to the README.md.  I looked in some of the source
> files but didn't find any clear examples of how to add this to tests.  Maybe some tests already have
> this information, and you could reference those?

OK, I will add it to the README ASAP. Currently we make use of the
needs_foo fields of the tst_test structure, which are mostly bitflags for
things like needs_root, or NULL-terminated arrays of strings for
needs_kconfig and needs_drivers.
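
To give a rough idea, a test declares these in its tst_test structure
roughly as below. This is only a sketch: the exact field names differ a
bit between LTP versions (e.g. needs_kconfig vs needs_kconfigs) and the
CONFIG_EXT4_FS/loop values are just placeholders.

#include "tst_test.h"

static void run(void)
{
	tst_res(TPASS, "Dependencies were satisfied");
}

static struct tst_test test = {
	.test_all = run,
	/* bitflag: the test has to run as root */
	.needs_root = 1,
	/* NULL-terminated array of required kernel config options (placeholder) */
	.needs_kconfigs = (const char *const []) {
		"CONFIG_EXT4_FS",
		NULL
	},
	/* NULL-terminated array of required kernel drivers (placeholder) */
	.needs_drivers = (const char *const []) {
		"loop",
		NULL
	},
};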

> 3) json output seems like a good choice.
> 
> It would be nice to see a fragment of the json output, to see how the tst_test data is converted.

Sure, I can add it to the README as well. At this point it's just a 1:1
serialization of the C structure.
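
To give an idea, a 1:1 dump of the sketch above could end up looking
roughly like the fragment below. This is hand-written for illustration,
not actual output; the test name foo01 as well as the exact key names
and nesting are made up.

{
	"tests": {
		"foo01": {
			"needs_root": true,
			"needs_kconfigs": ["CONFIG_EXT4_FS"],
			"needs_drivers": ["loop"]
		}
	}
}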

> Also, you have all the json for all tests in a single file.  Is there any option to get just the json
> for a sub-set of tests, or for an individual test?  Writing a filter from the single global file for an individual
> test should be trivial, and could be external to your system, I suppose.

I just answered the same question in the mail to Daniel: the json file
produced along with the LTP build is supposed to be a catalogue of all
the tests, and the testrunner is supposed to filter it.
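
And just to illustrate how trivial such an external filter would be:
assuming the made-up layout from the fragment above, a hypothetical test
called foo01 and a file called ltp.json, it boils down to something like:

# print the metadata block for a single test (illustrative only)
jq '.tests.foo01' ltp.json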

> 4) The notion that the implementation might change as you get actual consumers is a good point.
> 
> I'll respond to points on the other thread separately.
> 
> I was just talking to Shuah Khan about dependency data in kselftest.  kselftest is in a unique
> position, because it lives inside the kernel source tree, but some of the same issues apply.  Right now
> I believe that kselftest only supports kernel config dependencies, but I need to do more research.
> 
> If we could get harmony between LTP, kselftest, Fuego, LKFT and 0-day in terms of dependency expressions
> for sub-tests, that would be very beneficial.  (I think that might be enough to establish a de facto
> standard in this area.)  I'm planning to compare different dependency formats as part of
> phase 1 of my test definition standards research.

That sounds like things are starting to move in the right direction, which
is awesome.

-- 
Cyril Hrubis
chrubis at suse.cz

