[yocto] Recommended Hardware for building

Chris Tapp opensource at keylevel.com
Thu Oct 2 09:51:29 PDT 2014


On 2 Oct 2014, at 11:04, Burton, Ross <ross.burton at intel.com> wrote:

> On 2 October 2014 10:36, Oliver Novakovic <Oliver.Novakovic at alpine.de> wrote:
>> Can anyone recommend a reasonably performant hardware setup to use?
>> 
>> What should be considered? Are there any pitfalls? What about bottlenecks
>> in the build system?
>> 
>> Specifically:
>> 
>> How many cores are recommended? And how much cache is necessary?
>> How much main memory does Yocto really use? Is 32 GB sufficient or
>> should I go for 64?
>> 
>> Does it make sense to use two SSDs in RAID 0 to get faster builds?
> 
> As much of everything as you can afford.  :)  The build isn't heavy in
> any particular metric, so don't sacrifice RAM for SSDs for example.
> 
> RAID 0 over SSD would be nice and fast, but I prefer having a good
> amount of RAM and a tuned ext4 (no journal, long commit delay) so data
> doesn't actually hit the disk as frequently. Keeping the actual build
> directories on a separate disk is good for performance and means that
> losing that disk doesn't take any other data with it.
> 
> There are people that have 64GB in machines and then set TMPDIR to a
> tmpfs.  Surprisingly this isn't that much faster (5% or so), but it's
> a lot easier on the hardware and power consumption.
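
(For anyone who wants to try what Ross describes, both tweaks boil down
to a handful of commands. A rough sketch - the device name, mount point
and tmpfs size below are only examples:

  # ext4 without a journal (or tune2fs -O ^has_journal on an existing fs):
  mkfs.ext4 -O ^has_journal /dev/sdb1
  # and/or mount with a long commit interval and no atime updates:
  mount -o noatime,commit=300 /dev/sdb1 /mnt/build

  # or, with enough RAM, put the build output straight into a tmpfs:
  mount -t tmpfs -o size=48G tmpfs /mnt/tmpfs-build

and then point TMPDIR at the chosen mount in local.conf, e.g.
TMPDIR = "/mnt/tmpfs-build/tmp".)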

My experience:

I've got a quad-core with hyper-threading (so 8 usable cores) running at about 3.8 GHz, 16 GB of RAM, and multiple SSDs: one holds the metadata, downloads and top-level build areas (local.conf, etc.), and TMPDIR lives on a second SSD (so, as Ross says, I don't get a surprise when it wears out!).
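
In local.conf terms that split is just a couple of lines (the paths are
just examples):

  DL_DIR = "/ssd1/downloads"      # fetched sources on the first SSD
  TMPDIR = "/ssd2/build-tmp"      # everything bitbake generates on the second SSD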

I can build my images (basically an X11 image) in just under 60 minutes (once all the files have been fetched). I run with BB_NUMBER_THREADS and PARALLEL_MAKE both set to 16 to make sure the cores are kept as fully loaded as possible (others say that should be 8 and 8 to reduce scheduling overhead).
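
For reference, those two settings are just the following lines in local.conf:

  BB_NUMBER_THREADS = "16"     # number of bitbake tasks run in parallel
  PARALLEL_MAKE = "-j 16"      # passed to make inside each compile task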

During the build the system is CPU bound quite a bit of the time (so more cores should help), but there are significant periods where the build dependency chain means only two or three cores are active. Previously I recall comparing results with someone else and finding that having lots more cores (24, I think) didn't give a significant improvement in build time (certainly not enough to justify roughly three times the system cost).

I've never seen peak memory usage go much above 9 GB during a build, and the peaks generally coincide with linking activities for "big" items (gcc, eglibc). This is likely to go higher with more active threads.

I started out with a RAID-0 SSD build array, but I didn't really see any difference over a single high-spec (consumer) SSD. As Ross said, running a fast file system on the disk is a good idea.

--

Chris Tapp
opensource at keylevel.com
www.keylevel.com

----
You can tell you're getting older when your car insurance gets real cheap!



