[yocto] Build time data

Flanagan, Elizabeth elizabeth.flanagan at intel.com
Thu Apr 12 16:37:00 PDT 2012


On Thu, Apr 12, 2012 at 7:12 AM, Darren Hart <dvhart at linux.intel.com> wrote:

> On 04/12/2012 01:00 AM, Martin Jansa wrote:
> > On Thu, Apr 12, 2012 at 01:05:00PM +0530, Joshua Immanuel wrote:
> >> Darren,
> >>
> >> On Wed, 2012-04-11 at 17:30 -0700, Darren Hart wrote:
> >>> I run on a beast with 12 cores, 48GB of RAM, OS and sources on
> >>> a G2 Intel SSD, with two Seagate Barracudas in a RAID0 array
> >>> for my /build partition. I run a headless Ubuntu 11.10 (x86_64)
> >>> installation running the 3.0.0-16-server kernel. I can build
> >>> core-image-minimal in < 30 minutes and core-image-sato in < 50
> >>> minutes from scratch.
> >
> > Why not use some of that RAM for WORKDIR in tmpfs? I bought 16GB just
> > to be able to do my builds in tmpfs and keep only more permanent data
> > on RAID.
>
> We've done some experiments with tmpfs, adding Beth on CC. If I recall
> correctly, my RAID0 array with the mount options I specified
> accomplishes much of what tmpfs does for me without the added setup.
>

This should generally be the case. With a decent RAID setup (we're using
RAID10 on the autobuilders) and fast disks, you should be able to hit tmpfs
speed, or close to it. In my own experiments, the difference between tmpfs
and RAID10 on a clean build was at most about five minutes.
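
For anyone who wants to try the tmpfs approach Martin describes, a minimal
sketch would look something like the following (the mount point and size
here are illustrative; adjust for your RAM):

    # mount a tmpfs for the build output; leave headroom for the rest of the OS
    sudo mount -t tmpfs -o size=16G tmpfs /build/tmp

and then point the build at it in conf/local.conf:

    TMPDIR = "/build/tmp"

Keep in mind that everything in a tmpfs disappears on reboot, so you would
want your downloads and sstate cache on persistent storage.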

I discussed this during Yocto Developer Day. Let me boil it down a bit to
explain some of what I did on the autobuilders.

One caveat first, though: I would avoid treating autobuilder times as
representative of optimal Yocto build times. The autobuilders host a lot of
other services that can impact build time, so results vary depending on
what else is running on the machine.

Outside of dependency issues, there are four general areas to look at when
optimizing: CPU, disk, memory, and the build process itself. Of these, I
found disk and the build process the most useful in getting autobuilder
times down.

With disk, spreading I/O across the RAID array not only saved us a bit of
time, it also helped us avoid trashing disks: more disk thrash means a
higher failure rate. So far this year we've had two disk failures, and
thanks to the redundancy they resulted in almost zero autobuilder downtime.
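
As a rough sketch of that kind of disk setup (the device names are
hypothetical, not the actual autobuilder configuration), a RAID10 array for
a /build partition might be created like this:

    # four disks in RAID10: striped for speed, mirrored so one disk can fail
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde
    sudo mkfs.ext4 /dev/md0
    sudo mount -o noatime /dev/md0 /build

The mirroring is what lets a failed disk be swapped out without taking the
builders down.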

The real time saver, however, ended up being maintaining shared state
(sstate) across build runs. Even with our sstate cache on NFS, we're still
seeing a dramatic decrease in build time.
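
For reference, a sketch of what that can look like in conf/local.conf (the
paths here are hypothetical, not our actual layout):

    # keep the shared state cache on persistent, shared (here, NFS) storage
    SSTATE_DIR = "/nfs/sstate-cache"
    # optionally also pull from a read-only shared cache
    SSTATE_MIRRORS ?= "file://.* file:///nfs/sstate-cache/PATH"

With a populated cache, tasks whose inputs haven't changed are restored
from sstate rather than rebuilt, which is where the dramatic savings come
from.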

I would be interested in seeing what times you get with tmpfs. I've done
tmpfs builds before and have seen good results, but in terms of bang for
the buck, the RAID array ended up being the winner.
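
If anyone does run that comparison, a simple way to get comparable numbers
(assuming downloads and sstate already live outside TMPDIR, which is the
default layout) is:

    # wipe only the build output; downloads/ and sstate-cache/ are kept
    rm -rf tmp
    time bitbake core-image-minimal

Run that once per storage configuration and compare the wall-clock times.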


> With a higher commit interval, the kernel doesn't try to sync the
> dcache with the disks as frequently (eg not even once during a build),
> so it's effectively writing to memory (although there is still plenty
> of IO occurring).
>
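
(For context, the kind of mount option Darren is describing would look like
this in /etc/fstab; the device name and interval here are illustrative, as
his exact values aren't shown in this excerpt:)

    # ext4 on the RAID array; commit= stretches the journal flush interval
    # (in seconds), noatime avoids access-time writes during the build
    /dev/md0  /build  ext4  noatime,commit=6000  0  0
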
> The other reason is that while 48GB is plenty for a single build, I
> often run many builds in parallel, sometimes in virtual machines when
> I need to reproduce or test something on different hosts.
>
> For example:
>
> https://picasaweb.google.com/lh/photo/7PCrqXQqxL98SAY1ecNzDdMTjNZETYmyPJy0liipFm0?feat=directlink
>
>
> --
> Darren Hart
> Intel Open Source Technology Center
> Yocto Project - Linux Kernel



-- 
Elizabeth Flanagan
Yocto Project
Build and Release