[Automated-testing] LTP and test metadata

daniel.sangorrin at toshiba.co.jp daniel.sangorrin at toshiba.co.jp
Sun Aug 18 22:49:21 PDT 2019


Hi Cyril!

> -----Original Message-----
> From: automated-testing-bounces at yoctoproject.org <automated-testing-bounces at yoctoproject.org> On Behalf
> Of Cyril Hrubis
> Sent: Thursday, August 15, 2019 1:25 AM
> To: automated-testing at yoctoproject.org
> Subject: [Automated-testing] LTP and test metadata
> 
> Hi!
> As promised I've continued to work on the test metadata
> export/extraction and apart from hacking on actual code I've done the
> most important part, wrote a README where I put all my ideas I've come
> to when I was keeping the implementation on the backburner.
> 
> You can see both the new proof of concept along with the documentation
> at:
> 
> https://github.com/metan-ucw/ltp/tree/master/docparse
> 
> Ideally I would love to get some feedback before I return to hacking on
> the code.

Thanks for your work.

Please check my summary and comments.

1) Test dependency metadata is stored in each test case (.c file) inside a "tst_test" structure. Dependencies are specified through variables whose names start with "needs_", such as needs_root, needs_kconfigs, or needs_tmpdir.
https://github.com/linux-test-project/ltp/blob/master/include/tst_test.h
https://elinux.org/Test_Dependencies
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/mkdir/mkdir05.c
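
For readers who have not seen it, the current scheme looks roughly like this (the field names are the ones from the tst_test.h linked above; the kconfig entry is just an example and the test body is elided):

    #include "tst_test.h"

    static void run(void)
    {
            /* actual test code */
    }

    static struct tst_test test = {
            .test_all = run,
            /* dependency metadata mixed in with the rest of the structure */
            .needs_root = 1,
            .needs_tmpdir = 1,
            .needs_kconfigs = (const char *[]) {
                    "CONFIG_EXT4_FS",
                    NULL
            },
    };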

[Comment] Perhaps it would be cleaner to separate the metadata from the actual data and functions, for example by adding a new element to "tst_test" called "metadata" that points to an array of "struct tst_metadata". These tst_metadata structs would contain the "needs_xxx" configuration options (see the sketch after this list).
- Using an array would allow specifying slightly different requirements depending on the test case variant or parameters.
- Separating the metadata would also simplify parsing because you would not need to filter out non-metadata elements.
- Downside: the change might be a bit intrusive.
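
To make the idea a bit more concrete, here is a purely hypothetical sketch; neither "struct tst_metadata" nor the ".metadata" element exist in LTP today, the names are only meant to illustrate the proposal:

    /* Hypothetical: all requirements grouped in their own structure. */
    struct tst_metadata {
            int needs_root:1;
            int needs_tmpdir:1;
            const char *const *needs_kconfigs;
    };

    /* One entry per test variant, terminated by an empty entry. */
    static const struct tst_metadata metadata[] = {
            { .needs_tmpdir = 1 },                  /* variant 0 */
            { .needs_root = 1, .needs_tmpdir = 1 }, /* variant 1 also needs root */
            {}
    };

    /*
     * struct tst_test would then only carry a pointer to the array:
     *         .metadata = metadata,
     */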

[Comment] We probably need a way to specify some requirements parametrically. For example: "this test needs 10MB*num_threads of memory", where num_threads is a test case parameter or another element.

[Comment] Sometimes you may need to specify requirements conditionally. For example, if the "-s" parameter is passed, then "root" permissions are needed. The same can happen with test variants.

[Note] Here I am assuming that you can pass parameters to the test cases (e.g. size, number of iterations, units, number of threads, etc.) and that the set of parameters could affect the dependencies/requirements.
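
To illustrate what I mean, here is a sketch extending the hypothetical tst_metadata structure from above; both field names are invented for this example:

    static const struct tst_metadata metadata[] = {
            {
                    /* parametric: needed memory = 10MB * num_threads,
                     * where num_threads is a test case parameter */
                    .min_mem_mb_per_thread = 10,
                    /* conditional: root is only needed when the test
                     * is run with the -s parameter */
                    .needs_root_with_opt = "s",
            },
            {}
    };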

2) Test writers can add additional metadata by writing comments in a specific format. The exact format is still an open question.

[Comment] Could you describe the format that you support in your proof-of-concept?

[Comment] It would be nice to have information such as the following (a hypothetical comment layout is sketched after this list):
- regular expressions to identify well-known errors, and a string explaining why that error may have occurred.
- a string for each variant that explains what that variant is testing.
- upstream commit ids that need to be in the kernel for the test to pass (this might be particularly useful for CVE tests).
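
Just as a starting point for the discussion, a hypothetical comment layout carrying this kind of information could look like the block below; the tags are invented and not something docparse currently understands:

    /*
     * doc: short description of what the test (or this variant) verifies
     *
     * known-error: "regex matching a well-known failure message"
     *              explanation of why that failure may occur
     *
     * variant 0: what variant 0 exercises
     * variant 1: what variant 1 exercises
     *
     * fixed-by: <upstream kernel commit id required for the test to pass>
     */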

[Comment] It is necessary to distinguish which metadata needs to go into the data structures and which needs to go into the documentation. Apart from the binary size, is there any reason not to put everything in the data structures (tst_test)?

3) docparse: a parser program that extracts the metadata (needs_xxx variables and commented documentation) from each test case into a single JSON file.

[Comment] How big is that JSON file? Would you create it on the fly, including only the tests and variants that you want to run (e.g. specified through a tag or a wildcard), or would you create it with all possible tests even though some of them will not run?

4) The JSON metadata file can be used by test runners/frameworks to skip test cases depending on hardware or software limitations, to dynamically select the board on which to execute a test, to specify the test cases to run in a more flexible way (e.g. with tags or wildcards), or to create a report with possible failure reasons. It should also make it possible to run some test cases in parallel.

[Comment] I would add an attribute to specify how many CPU cores a test case needs:
- 0: the test can share the CPU(s) with other test cases (e.g. a simple functional test).
- 1: the test needs to be the only one running on its CPU core (e.g. when checking for a hardware vulnerability).
- 2+: the test needs more than one CPU core (e.g. multi-core testing).
This information could be useful when you want to parallelize the execution of the test cases.
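
In the hypothetical tst_metadata sketch from above this would just be one more integer field (again, the name is made up):

    struct tst_metadata {
            /* ... requirement fields from the sketches above ... */

            /* 0 = can share CPU(s) with other test cases,
             * 1 = needs to be alone on its CPU core,
             * 2+ = needs that many dedicated cores */
            int needs_cpu_cores;
    };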

Thanks,
Daniel

> 
> --
> Cyril Hrubis
> chrubis at suse.cz
> --
> _______________________________________________
> automated-testing mailing list
> automated-testing at yoctoproject.org
> https://lists.yoctoproject.org/listinfo/automated-testing

