[yocto] Build time data

Martin Jansa martin.jansa at gmail.com
Thu Apr 12 22:51:51 PDT 2012


On Thu, Apr 12, 2012 at 04:37:00PM -0700, Flanagan, Elizabeth wrote:
> On Thu, Apr 12, 2012 at 7:12 AM, Darren Hart <dvhart at linux.intel.com> wrote:
> 
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA1
> >
> >
> >
> > On 04/12/2012 01:00 AM, Martin Jansa wrote:
> > > On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
> > >> Darren,
> > >>
> > >> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> > >>> I run on a beast with 12 cores, 48GB of RAM, OS and sources on
> > >>> a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array
> > >>> for my /build partition. I run a headless Ubuntu 11.10 (x86_64)
> > >>> installation running the 3.0.0-16-server kernel. I can build
> > >>> core-image-minimal in < 30 minutes and core-image-sato in < 50
> > >>> minutes from scratch.
> > >
> > > why not use so much RAM for WORKDIR in tmpfs? I bought 16GB just to
> > > be able to do my builds in tmpfs and keep only more permanent data
> > > on RAID.
> >
> > We've done some experiments with tmpfs, adding Beth on CC. If I recall
> > correctly, my RAID0 array with the mount options I specified
> > accomplishes much of what tmpfs does for me without the added setup.
> >
> 
> This should be the case in general. For the most part, if you have a decent
> RAID setup (We're using RAID10 on the ab) with fast disks you should be
> able to hit tmpfs speed (or close to it). I've done some experiments with
> this and what I found was maybe a 5 minute difference, sometimes, from a
> clean build between tmpfs and RAID10.

5 minutes on a very small image like core-image-minimal (30 min) is 1/6
of the total build time :)

I have much bigger images and an even bigger ipk feed, so rebuilding from
scratch takes about 24 hours for one architecture.

And my system is very slow compared to yours; I measured
core-image-minimal-with-mtdutils at around 95 mins
http://patchwork.openembedded.org/patch/17039/
but that was with a Phenom II X4 965, 4GB RAM, RAID0 (3 SATA2 disks) for
WORKDIR and RAID5 (the same 3 SATA2 disks) for BUILDDIR (both as mdraid).
Now I have a Bulldozer AMD FX(tm)-8120 and 16GB RAM, still the same
RAID0, but a different motherboard.

The problem with tmpfs is that no amount of RAM is big enough to build the
whole feed in one go, so I have to build in steps (e.g. bitbake gcc for
all machines with the same architecture, then clean up WORKDIR and switch
to another arch, then bitbake small-image, bigger-image, qt4-x11-free,
...). qt4-x11-free alone is able to eat a 15GB tmpfs almost completely.
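
A rough sketch of that staged flow as a script; the machine names, the
tmpfs path, and the DRY_RUN guard are placeholders of mine, not anything
from this thread:

```shell
#!/bin/sh
# Staged per-architecture build so WORKDIR fits in tmpfs.
# DRY_RUN=1 (the default here) only prints the commands it would run.
DRY_RUN="${DRY_RUN:-1}"
TMPFS_WORKDIR="${TMPFS_WORKDIR:-/mnt/tmpfs-workdir}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. toolchain for every machine sharing one architecture
for m in machine-a machine-b; do
    run env MACHINE="$m" bitbake gcc
done

# 2. wipe the tmpfs WORKDIR before switching architecture
run rm -rf "$TMPFS_WORKDIR"

# 3. images for the next architecture, biggest recipe last
run env MACHINE=machine-c bitbake small-image bigger-image qt4-x11-free
```

With DRY_RUN=0 the same script performs the real builds; keeping the
cleanup between architectures is what keeps peak tmpfs usage bounded.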

> I discussed this during Yocto Developer Day. Let me boil it down a bit to
> explain some of what I did on the autobuilders.
> 
> Caveat first though. I would avoid using autobuilder time as representative
> of prime yocto build time. The autobuilder hosts a lot of different
> services that sometimes impact build time and this can vary depending on
> what else is going on on the machine.
> 
> There are four places, in general, where you want to look at optimizing
> outside of dependency issues. CPU, disk, memory, build process. What I
> found was that the most useful of these in getting the autobuilder time
> down was disk and build process.
> 
> With disk, spreading it across the RAID saved us not only a bit of time,
> but also helped us avoid trashed disks. More disk thrash == higher failure
> rate. So far this year we've seen two disk failures that have resulted in
> almost zero autobuilder downtime.

True for RAID10, but for WORKDIR itself RAID0 is cheaper, and even a
higher failure rate is not a big issue for WORKDIR: you just have to
cleansstate the tasks which were hit in the middle of the build.

> The real time saver however ended up being maintaining sstate across build
> runs. Even with our sstate on nfs, we're still seeing a dramatic decrease
> in build time.
> 
> I would be interested in seeing what times you get with tmpfs. I've done
> tmpfs builds before and have seen good results, but bang for the buck did
> end up being a RAID array.

I'll check if core-image-minimal can be built with just a 15GB tmpfs;
otherwise I would have to build it in 2 steps and the timing won't be
precise.
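
A quick pre-flight check along those lines; the ~15GB figure is the tmpfs
size mentioned above, while the default mount point is only a stand-in
for illustration:

```shell
#!/bin/sh
# Check free space on the tmpfs WORKDIR before trying a one-shot build.
# Point WORKDIR_MNT at your tmpfs mount; /tmp is only a stand-in default.
WORKDIR_MNT="${WORKDIR_MNT:-/tmp}"
NEED_KB=$((15 * 1024 * 1024))   # ~15GB expressed in KiB

# the fourth column of POSIX `df -P` output is the available space in KiB
avail_kb=$(df -Pk "$WORKDIR_MNT" | awk 'NR == 2 { print $4 }')

if [ "$avail_kb" -ge "$NEED_KB" ]; then
    echo "tmpfs looks big enough: try the build in one go"
else
    echo "tmpfs too small: split the build into steps"
fi
```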

> With a higher commit interval, the kernel doesn't try to sync the
> > dcache with the disks as frequently (eg not even once during a build),
> > so it's effectively writing to memory (although there is still plenty
> > of IO occurring).
> >
> > The other reason is that while 48GB is plenty for a single build, I
> > often run many builds in parallel, sometimes in virtual machines when
> > I need to reproduce or test something on different hosts.
> >
> > For example:
> >
> > https://picasaweb.google.com/lh/photo/7PCrqXQqxL98SAY1ecNzDdMTjNZETYmyPJy0liipFm0?feat=directlink
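
For reference, mount options in the spirit of what Darren describes might
look like this in /etc/fstab; the device, mount point, and commit value
are illustrative (commit= is the ext3/ext4 journal flush interval in
seconds):

```
# RAID0 build partition: skip atime updates and flush the journal rarely,
# so a build mostly stays in the page cache (illustrative values)
/dev/md0  /build  ext4  noatime,commit=6000  0  0
```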

-- 
Martin 'JaMa' Jansa     jabber: Martin.Jansa at gmail.com