[Automated-testing] [Openembedded-architecture] Yocto Project LTS Proposal/Discussion

Tim.Bird at sony.com
Sat Oct 26 14:08:17 PDT 2019



> -----Original Message-----
> From: Richard Purdie on October 26, 2019 5:46 PM
> 
> On Sat, 2019-10-26 at 10:34 -0400, Rich Persaud wrote:
> > On Oct 26, 2019, at 7:11 AM, richard.purdie at linuxfoundation.org
> > wrote:
> > > On Fri, 2019-10-25 at 16:04 -0400, Rich Persaud wrote:
> > > > There may be lessons in Microsoft's move from diverse hardware to
> > > > VM-based testing:
> > > > https://www.ghacks.net/2019/09/23/former-microsoft-employee-explains-why-bugs-in-windows-updates-increased/
> > >
> > > I don't disagree, real hardware does have some distinct advantages.
> > > We also have to keep in mind that we're testing what the Yocto
> > > Project is offering. At some point YP has to trust that the
> > > underlying kernels we ship are "ok" and that upstream has covered
> > > some of this.
> >
> > Here's a quote from John Loucaides of Eclypsium, ex-Intel,
> > https://twitter.com/johnloucaides/status/1187839034661339136,
> > describing his PSEC video:
> >
> > "While hard, I don't believe supply chain security is unsolvable. I
> > believe it's a "tragedy of the commons" problem where everyone hopes
> > someone else will fix it. By working together on tools/projects, we
> > can change the incentives and create practical solutions."
> 
> Sure.
> 
> As a project we struggle to do what we do with the resources we have
> available today, to the point that we're burning out people, me
> included.
> 
> I have pushed and tried to have automated real hardware testing for
> over a decade. It's proving very difficult to get traction and make it
> happen.

Same here (well, not the "over a decade" part :-).

Automated testing on real hardware, as an industry, IMHO requires much
more coordination than we currently have.  For example, I believe a lot
of hardware testing is going to require standards for referencing and
manipulating off-board resources (like power measurement devices and
bus endpoints - things like video and audio capture, or CAN bus
emulators).  For tests to be shared between labs, the labs will need to
standardize how they control these resources and how they get data back
from them.  This is one of the things I hope we will eventually tackle
in our standards work.  We're starting to take the first steps on some
of it, and it's moving slowly. But it *is* moving.
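
To make that a bit more concrete, here's the kind of interface I have in
mind (a purely hypothetical sketch in Python; none of these class or
method names come from an existing framework):

    from abc import ABC, abstractmethod

    class PowerMonitor(ABC):
        """Off-board power measurement device attached to the DUT."""

        @abstractmethod
        def start_capture(self, sample_rate_hz: int) -> None: ...

        @abstractmethod
        def stop_capture(self) -> list:
            """Stop capturing and return the samples, in watts."""

    class CanBusEmulator(ABC):
        """Emulated CAN bus endpoint wired to the DUT."""

        @abstractmethod
        def send_frame(self, can_id: int, data: bytes) -> None: ...

        @abstractmethod
        def expect_frame(self, can_id: int, timeout_s: float) -> bytes: ...

    def power_regression_test(lab) -> bool:
        # The test only names the resource's role ("power-monitor"); how
        # the lab maps that role onto a real instrument is the lab's
        # business.  Agreeing on that mapping, and on interfaces like the
        # ones above, is what would let tests move between labs.
        monitor = lab.acquire("power-monitor")
        monitor.start_capture(sample_rate_hz=1000)
        lab.acquire("dut").run("stress-ng --cpu 4 --timeout 10")
        samples = monitor.stop_capture()
        return max(samples) < 5.0   # example power budget, in watts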

> 
> We could make a significant difference with virtualised testing, so
> we've done that; I've spent a considerable amount of time ensuring YP
> at least has robust testing there. There is much more I'd love to do
> but I personally don't have the hours in the day and I struggle to find
> people to help.
> 
> I'm a realist. I'd like YP LTS to happen. The bigger the barriers we
> put in its way, the less likely it is. I've always aimed to build
> systems and technologies that do allow incremental
> development/extension so there are at least options for this.
> 
> > > Virtual testing works well for the software stack itself, which is
> > > a large part of what YP offers, particularly where we can run
> > > upstream's tests for components. Where it struggles is on the
> > > variations of hardware that are out there, as the article
> > > highlights.
> > >
> > > As such, I think virtual testing for any YP LTS is the realistic
> > > option we have right now.
> >
> > Some upstream kernel testing is also done on virtual machines, for
> > similar reasons. They may expect downstream "distros" to have more
> > device-focused hardware coverage.
> >
> > To avoid YP users assuming that YP and/or the upstream kernel has
> > done hardware testing for YP LTS, we may want to document specific
> > testing actions that are expected from YP LTS users.  E.g. should YP
> > downstreams work directly with CKI via the Linux Foundation, to pool
> > hardware test results?  Should they report hw test results to other
> > YP LTS users?

Possibly the new kcidb project could help with this.
See https://github.com/kernelci/kcidb

We are currently planning on putting test results from multiple frameworks in
a single database.  The project is just starting out, but will hopefully develop
into a place where many different kinds of results can be pooled.
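
As a rough illustration of the kind of record I'd expect labs to pool
(the field names here are my own invention, not the actual kcidb schema,
which is still settling):

    import json

    # Illustrative only: a small, self-describing test-result record that
    # a lab could push to a shared database.  Field names are assumptions.
    result = {
        "origin": "yocto-autobuilder",   # which lab/CI produced the result
        "revision": {"git_commit": "deadbeef", "branch": "zeus"},
        "environment": {"machine": "qemux86-64", "hardware": "virtual"},
        "test": {"name": "oe-selftest.wic", "status": "PASS",
                 "duration_s": 412.0},
    }

    # Consumers could then query results across labs for a given revision.
    print(json.dumps(result, indent=2))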

> 
> We do document very clearly the testing that YP has done for each
> release. Users are already expected to figure out and understand what
> additional testing their end use case may need, and LTS is no
> different. Their hardware, for example, will look very different, even
> between users.
> 
> Pooling results works if the base configurations can be summarised and
> reproduced. I know the automated-testing people are working on ways to
> improve the ability to do that. For the YP side we do have some helpful
> technology, for example the hash representations of our configurations,
> which cover most of the software supply chain.
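
For anyone on the list not familiar with those hashes: the rough idea is
that every input that can affect the build output gets folded into one
hash, so results reported against the same hash describe the same
software configuration. A deliberately simplified sketch (this is not
BitBake's actual signature code):

    import hashlib

    # Simplified illustration of configuration/task hashing, not BitBake's
    # real implementation.
    def config_hash(recipe_version, task_code, variables, dep_hashes):
        h = hashlib.sha256()
        h.update(recipe_version.encode())
        h.update(task_code.encode())
        for name in sorted(variables):       # variables that affect the task
            h.update(("%s=%s" % (name, variables[name])).encode())
        for dep in sorted(dep_hashes):       # hashes of dependency tasks
            h.update(dep.encode())
        return h.hexdigest()

    # Two labs reporting results against the same hash are, to a first
    # approximation, exercising the same supply chain inputs.
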
> 
> > > > Business requirements for "LTS releases" include security fixes
> > > > and non-regression of production systems, i.e. the software
> > > > supply chain security topics discussed in [1] above.  There were
> > > > CKI [3] discussions about sharing test resources (human, machine)
> > > > for kernel testing.  Some CKI challenges and potential solutions
> > > > [4] may be applicable to Yocto LTS testing of packages beyond the
> > > > Linux kernel.  VM-based testing could be supplemented with pooled
> > > > test results from distributed testing on diverse hardware, across
> > > > the Yocto contributor community.  Even with VM-based testing,
> > > > IOMMU PCI passthrough can be used to give a VM direct access to a
> > > > PCI device, for device driver testing.
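
For what it's worth, driving that can be fairly simple. A sketch of the
idea with QEMU/KVM (it assumes the host device at 0000:01:00.0 has
already been bound to the vfio-pci driver, and the kernel/image names
are placeholders):

    import subprocess

    # Boot a test image under QEMU/KVM and pass one host PCI device
    # through via VFIO so the guest's driver talks to real hardware.
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "2048",
        "-nographic",
        "-kernel", "bzImage",
        "-append", "console=ttyS0 root=/dev/vda",
        "-drive", "file=core-image-minimal.ext4,if=virtio,format=raw",
        "-device", "vfio-pci,host=0000:01:00.0",   # passed-through device
    ]
    subprocess.run(cmd, check=True)
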
> > >
> > > We have already built a mechanism into our test results process
> > > for externally contributed test results and we are happy to
> > > collect that data, although we haven't seen much being contributed
> > > yet. I'd love to see more data coming in for any LTS though (or
> > > any other release for that matter).
> >
> > That's good news.  Could you share pointers/docs/samples on the
> > mechanism for external test result contributions?
> 
> I've worked on material which I gave to our tech writer but the
> resulting docs haven't been produced yet, which comes down to
> resources :(.
> 
> I've also posted many times over the last year about the Yocto QA
> developments, how the new process works and how people could
> contribute test results to that.
> 
> The QA page on the wiki:
> 
> https://wiki.yoctoproject.org/wiki/QA
> 
> has the basics. It's not extensively documented, but it's not a
> complex process either, and it is the process Intel/WindRiver use to
> contribute test results today.
> 
> >   Is this effort coordinated with ATS or CKI for pooling of hw-
> > specific test results?
> 
> No, it's hard enough to get YP testing to work without more parties
> involved. I was actually specifically told by managers to even drop the
> generic process as it was adding too much overhead and complexity. I
> refused, despite significant pressure, FWIW.
> 
> >   If we are able to attract contributions, could we eventually
> > associate YP LTS external test results with public BSP definitions
> > and hardware model/revision numbers?
> 
> I'd love to see it happen and I do understand and like your vision but
> sadly companies don't seem to see the value of putting resources into
> things like this.

The KernelCI project will be formally announced as a Linux Foundation
project next week.  So at least a few companies are starting to put
some resources towards community-oriented upstream testing
efforts.
 -- Tim


