[Automated-testing] Board management API discussion at ATS - my ideas

Milosz Wasilewski milosz.wasilewski at linaro.org
Mon Oct 28 03:50:50 PDT 2019


On Sat, 26 Oct 2019 at 23:41, <Tim.Bird at sony.com> wrote:

[cut]

> >
> > > > > or another alternative is to place a config script for the board
> > > > > management system in:
> > > > > /etc/test.d
> > > > > with each file containing the name of the command line used to
> > > > > communicate with that board management layer, and
> > > > > possibly some other data that is required for interface to the layer (e.g.
> > > > > the communication method, if we decide to
> > > > > support more than just CLI (e.g. port of a local daemon, or network
> > > > > address for the server providing board management),
> > > > > or location of that board management layer's config file.
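
If I read the proposal right, such a file might look roughly like
this (the file name and keys here are invented, not a proposed
format):

    # /etc/test.d/labgrid -- hypothetical entry
    command = labgrid-client
    method = cli
    config = /etc/labgrid/environment.yaml
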
> > > >
> > > > I agree, a command line interface (while limited) is probably enough to
> > > > see if we can find a common API.
> > >
> > > What would be the use case for CLI in case board management is an
> > > internal business of the testing framework?
>
> I'm not sure I understand the question.  If a framework has an internal
> API for doing board management, then that would be a candidate for
> modularizing (changing from monolithic to using a standardized API).
> That might be easy or hard to rationalize with the rest of the system,
> depending on the existing division of labor in the framework.

I'll try to clarify - what is the use for a CLI in this scenario? The
BM layer should implement an API that can be called from the scheduler
(based on your earlier comments). This means the scheduler would be
using the API directly. Why do you need a CLI?
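
To make the question concrete, here is how I understand the two
options - a rough Python sketch, with the module and command names
invented for illustration:

    # Option 1: the scheduler calls the BM layer's API directly
    from bm_layer import BoardManager  # hypothetical in-framework module
    bm = BoardManager()
    bm.power_on("am57xx-beagle-x15")

    # Option 2: the scheduler shells out to a standardized BM CLI
    import subprocess
    subprocess.run(["bm-tool", "power-on", "am57xx-beagle-x15"],
                   check=True)

If the scheduler and the BM layer live in the same framework, option 1
seems simpler; the CLI only seems to pay off when a foreign framework
needs to drive the same boards.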

>
> It sounds like LAVA dispatchers are the entities that do board control
> (turn on/off power, and transfer software to the board), but maybe
> these operations are split between different entities.

Now I don't understand the question :) What 'different entities' do
you have in mind?

>
> >
> > See above, lowest common denominator for sharing a board between test
> > frameworks.
> >
> > > > > = starting functions =
> > > > > Here are some functions that I think the board management layer
> > > > > should support:
> > > > >
> > > > > introspection of the board management layer's supported features:
> > > > > verb: list-features
> > > >
> > > > This could be used to expose optional extensions, maybe under an
> > > > experimental name until standardized (similar to how browsers expose
> > > > vendor-specific APIs). One example could be 'x-labgrid-set-gpio'.
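
For illustration, the output of such a verb could be one feature per
line, with experimental extensions carrying the vendor prefix (the
command name and feature names here are invented):

    $ bm-tool list-features
    power-on
    power-off
    transfer-image
    x-labgrid-set-gpio
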
> > > >
> > > > > introspection of the board layer's managed objects:
> > > > > verb: list-boards
> > > >
> > > > OK. It might be necessary to return more than just the name (HW type?,
> > > > availability?).
> > >
> > > hmm, how do you distinguish between 'x15' and 'am57xx-beagle-x15'?
> > > This is the same board but the former name comes from LKFT and the
> > > latter from KernelCI. Which name should the list-boards return? It
> > > will be really hard to unify board naming convention. There can be
> > > slight variations in hardware, additional peripherals, etc.
>
> Are these names human-generated and arbitrary, or do they pack
> some meaning used by the test framework (i.e. describe the hardware
> in some way that is used as part of test automation or scheduling)?

Pretty much arbitrary. 'am57xx-beagle-x15' is the DTB name:
https://github.com/torvalds/linux/blob/master/arch/arm/boot/dts/am57xx-beagle-x15.dts
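
So list-boards would probably need to return both a lab-local name
and something stable like the DTB name, e.g. (output format invented
for illustration):

    $ bm-tool list-boards
    x15-01  am57xx-beagle-x15  available
    x15-02  am57xx-beagle-x15  in-use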

>
> If the latter, then the salient attributes should be determined
> and we should adopt conventions for names.  And possibly come
> up with mechanisms to query those attributes outside of the
> board naming scheme.

This would be handy for more advanced tests but I don't really have an
example of such a test. We're not doing a lot of HW interface testing,
so maybe it's just a lack of coverage on our side.
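
If we did go that way, I'd expect something like a key/value dump per
board, so the names can stay arbitrary while the attributes carry the
meaning (the verb and keys are invented for illustration):

    $ bm-tool get-board-info x15
    device-type: am57xx-beagle-x15
    arch: arm
    peripherals: usb-otg hdmi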

[cut]

> > > > >   * do we need to consider security as part of initial API design?  (what
> > > > > about user ID, access tokens, etc.)
> > > >
> > > > I don't think so. Access controls on the network layer should be enough
> > > > to make an initial implementation useful.
> > > >
> > > > This doesn't need to be a downside, the current frameworks already
> > > > have this part covered and using separate NFS/HTTP servers for each
> > > > test framework in a shared lab shouldn't cause issues.
> > > >
> > > > > I've started collecting data about different board management layers at:
> > > > > https://elinux.org/Board_Management_Layer_Notes
> > >
> > > I think you confused LAVA (test executor) and KernelCI (test job
> > > requester) in the wiki.
> Oh probably.  I don't have a good handle on the interface between
> KernelCI and LAVA, and which piece does what job in their overall
> workflow.
>
> > > AFAIU KernelCI itself doesn't do any board
> > > management. It simply requests test jobs to be executed by connected
> > > labs. These are mostly LAVA labs but don't have to be. It's up to each
> > > lab how to execute the test job. In other words, KernelCI doesn't
> > > belong to this board management discussion.
>
> Does KernelCI not know anything about the boards?  Doesn't it have to
> at least know the architecture, to determine if it can request that a board
> execute a test for a particular image?
>
> Doesn't KernelCI now have some hardware tests?  Wouldn't it need to
> know what boards had hardware that was applicable to that test?
>
> Again - my ignorance is showing here.  But this sounds more like
> test scheduling again, and not board management.

I'll let KernelCI people expand on the details, but to my understanding
KernelCI does the build and then uses DTB names as board names to ask
LAVA labs to execute tests.
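
Roughly speaking, a submitted job definition names the board through
its device_type, along these lines (heavily trimmed, values made up -
see the LAVA job definition schema for the real thing):

    device_type: am57xx-beagle-x15
    job_name: kernelci boot test
    actions:
      - deploy: ...
      - boot: ...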

milosz

>
> >
> > Agreed. KernelCI only has test jobs and collects results. The labs are
> > completely independent regarding scheduling of tests to boards.
> >
> [rest snipped]
>  -- Tim
>

