[yocto] Autobuilder migration status.

Flanagan, Elizabeth elizabeth.flanagan at intel.com
Mon Mar 25 14:45:31 PDT 2013


On Thu, Mar 21, 2013 at 8:31 AM, Trevor Woerner <twoerner at gmail.com> wrote:
> I have looked at the output from a couple builds (nightly-fsl-arm,
> nightly-fsl-ppc, nightly-mips) and had a couple questions.
>
> Running "poky/oe-init-build-env" will produce a
> "build/conf/local.conf", but the nightly builder prefers to puts its
> configurations into "build/conf/auto.conf". Obviously there's nothing
> wrong with this, but I'm wondering why use auto.conf instead of
> local.conf? I'm guessing there's some nugget of information in this
> choice that I'm hoping to discover.

It's nothing particularly special; I do it to differentiate an
autogenerated configuration from the local.conf that is created by
oe-init-build-env. There may well have been another reason for it
originally, but the commit log never made it over from the o-hand.org
svn server, so that history is lost.
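
For reference, bitbake reads conf/auto.conf automatically if it
exists, alongside conf/local.conf, so the autobuilder can drop its
generated settings there without touching the user-facing file. As a
minimal sketch, a generated auto.conf might contain something like
the following (the values here are illustrative, not the
autobuilder's actual settings):

        MACHINE = "qemuarm"
        SDKMACHINE = "i686"
        DL_DIR = "/srv/autobuilder/downloads"
        SSTATE_DIR = "/srv/autobuilder/sstate"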

>
> Not all builds, but these three seem to follow similar steps:
> 1. prepare
> 2. configure+build core-image-sato core-image-sato-dev
> core-image-sato-sdk core-image-minimal core-image-minimal-dev
> 3. configure+build core-image-sato core-image-sato-dev
> core-image-sato-sdk core-image-minimal core-image-minimal-dev
> 4. configure+build meta-toolchain-gmae
> 5. configure+build meta-toolchain-gmae
> 6. finish up

For the most part, yes, that is correct.

> I can't help but wonder why the same builds are (apparently) done more
> than once?

SDKMACHINE and MACHINE

We build the qemu* targets twice: once with one SDKMACHINE, then
again with another. The same goes for the toolchain.

In reality, we actually build them out four times if you also count
the lsb builds.
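
As a sketch, the two passes might differ only in the SDKMACHINE line
written into auto.conf; the values below are the usual 32-bit and
64-bit SDK hosts, though I'm using them here purely for illustration:

        # first pass
        SDKMACHINE = "i686"
        # second pass
        SDKMACHINE = "x86_64"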

> Also, for me I think it would be better if #2 and #3 split
> out each of those build targets individually.

Building them together provides a slight performance increase. That
said, if an end user wants to build them individually:

This line in, for example, nightly-arm.conf:

        {'BuildImages': {'images': 'core-image-sato core-image-sato-dev core-image-sato-sdk core-image-minimal core-image-minimal-dev'}}

just becomes these lines:

        {'BuildImages': {'images': 'core-image-sato'}},
        {'BuildImages': {'images': 'core-image-sato-dev'}},
        {'BuildImages': {'images': 'core-image-sato-sdk'}},
        ...

> Seeing that, say,
> meta-fsl-arm failed wouldn't provide me with as much information as
> knowing that (for example) core-image-minimal passed, but
> core-image-sato failed. With the build the way it is currently, I'd
> have to dig through #2's log to see whether core-image-minimal was
> okay or not.

I agree. I'm going to be working on error logging soon, so we can
look at splitting up the errors a bit. I'm not positive on the
implementation yet, but I'm starting to think through the best way to
do it.

>
> Is the choice of build slave random? I've noticed that there seem to
> be 3 different slave hosts: debian, fedora, and suse. This is great!

Yes; however, you can tie a builder to a slave or a group of slaves:

[nightly-arm]
builders: [ab02, ab10]

> Although rare, sometimes the host does influence whether a build fails
> or succeeds, so I'm curious to know if a build's choice of slave will
> always be the same or is selected randomly.

Selected randomly, for the most part. Some builds, like eclipse-poky,
are tied to a smaller subset of slaves, as I don't want Java
installed on every builder.
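
Using the same syntax as the nightly-arm example above, that
restriction is just a narrower builders list (the slave name here is
hypothetical):

[eclipse-poky]
builders: [ab04]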

> Also, I've noticed that
> the version of the host software on the slaves can vary (e.g. I've
> noticed a suse 11.3 and a 12.2). I'm curious to know if this is on
> purpose.

Yes. We're trying to get distro coverage from the autobuilder as well
as release artifacts and QA.

-- 
Elizabeth Flanagan
Yocto Project
Build and Release


