[yocto] QA cycle report for 2.4.4 RC1

richard.purdie at linuxfoundation.org
Tue Dec 4 07:57:07 PST 2018


On Wed, 2018-11-28 at 07:07 +0000, Jain, Sangeeta wrote:
> The test cases run for 2.4.4 RC1 are the same as those run for the
> 2.4 release.

Ok, but I'm not sure they're correct. Are the test cases we're talking
about also run for the 2.5 release? I'll explain more below.

> > Thanks for running the QA for this. I do have some
> > questions/observations.
> > 
> > Firstly, this report is a little misleading as there are only two
> > bugs mentioned but 8 failures. The full report shows other bugs
> > which were for example reopened or already open.
> In the report, only new bugs are mentioned. Bugs which were reopened
> or already open are not listed in the report mail but can be seen in
> the full report. However, if it's creating any confusion or
> inconvenience, going forward I can mention all the failing bugs in
> the report.

I think it would be useful to have a complete summary of the bugs
found, in particular the reopened ones, as otherwise it gives a
misleading view of the release status.

> > Taking the above bugs first, I believe they're both manual versions
> > of tests which are already automated. I believe the automated tests
> > have passed and there appears to be some kind of problem with the
> > manual execution, such as the wrong process being documented. I'm
> > not quite sure why these are being run manually? Was the list of
> > manual tests we received from Intel incorrect?
> The list of manual tests we run for 2.4.4 is the same as the one we
> ran for the 2.4 release. This test case was automated after the 2.4
> release and is still marked as manual in the 2.4 test plan in
> Testopia. Unfortunately, in the Yocto Project we don't track in which
> release a test case was converted to automated, and we have no way to
> update test cases in dot releases. I am trying to streamline the
> tracking of test case evolution so as to avoid such scenarios in
> future.

Taking:

https://bugzilla.yoctoproject.org/show_bug.cgi?id=13033

as an example, I believe the automated version is:

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/selftest/cases/prservice.py?h=rocko#n52

which was present in the first 2.4 release.

It's also present in pyro:

http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/selftest/prservice.py?h=pyro#n54

and morty:
http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/selftest/prservice.py?h=morty#n51
krogoth:
http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/selftest/prservice.py?h=krogoth#n52
jethro:
http://git.yoctoproject.org/cgit.cgi/poky/tree/meta/lib/oeqa/selftest/prservice.py?h=jethro#n51

so it has been there since at least 2.0.
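
In case it is useful context, the automated version is an ordinary
oe-selftest case. The sketch below shows roughly what such a case
looks like on rocko; the class name, test name and recipe choice are
purely illustrative and are not the actual contents of prservice.py:

    from oeqa.selftest.case import OESelftestTestCase
    from oeqa.utils.commands import bitbake, get_bb_var

    class ExamplePrCheck(OESelftestTestCase):
        # Illustrative only: build a small recipe and check that a
        # package revision (PKGR) is assigned, which is the value the
        # real prservice.py tests exercise via the PR service.
        def test_pkgr_is_assigned(self):
            bitbake("m4")
            pkgr = get_bb_var("PKGR", "m4")
            self.assertTrue(pkgr, "No PKGR value was set for m4")

Running the real module is then just a matter of pointing oe-selftest
at it, e.g. something like "oe-selftest -r prservice" on rocko (the
pre-2.4 branches use the older --run-tests syntax), rather than
following the manual steps in Testopia.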

So either I'm wrong about this being an automated version of the test,
or we really shouldn't be running this test manually; either way it
isn't a problem of test criteria changing between point releases.

I'd also note that nobody from QA has replied to my question in the
bugzilla.

I'm now worried about which test cases get run for 2.5, how these
differ from what we run against 2.6 and master, and how they compare
with what QA summarised in the recent test case documentation
exercise. Perhaps we need to document the manual test cases for the
2.5 release too, to ensure we have the right set?

Cheers,

Richard


