[yocto] Using tmpfs for WORKDIR

Burton, Ross ross.burton at intel.com
Tue Aug 13 03:40:03 PDT 2013


Hi,

For a while I've been wondering about the possible performance
improvement from using a tmpfs as $WORKDIR, so that each package is
unpacked, compiled, installed, and packaged in a tmpfs, with only the
final artifacts moved to the deploy/sstate/sysroot directories in
persistent storage.  In theory, with lots of RAM and a relatively long
file system commit interval, this is effectively what happens anyway.
In practice there is still a lot of I/O during builds (occasionally
pausing the build here whilst buffers empty), despite 12G of my 16G of
RAM being used as page cache, and that I/O is what I was trying to
mitigate.
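
For anyone wanting to try the same thing, the setup is roughly as
follows.  The /mnt/tmpfs-work mount point is just illustrative, and
this assumes a bitbake.conf that derives WORKDIR from BASE_WORKDIR;
if yours doesn't, override WORKDIR itself instead:

  # Mount a tmpfs big enough for the largest single workdir (8G here):
  $ sudo mkdir -p /mnt/tmpfs-work
  $ sudo mount -t tmpfs -o size=8g tmpfs /mnt/tmpfs-work

  # conf/local.conf: put the per-recipe work directories in the tmpfs.
  # TMPDIR (and with it deploy/ and the sysroots) and sstate-cache/
  # stay on disk.
  BASE_WORKDIR = "/mnt/tmpfs-work"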

Last night I finally got around to testing this.  Unless you have an
ungodly amount of RAM, rm_work is mandatory: a 6G tmpfs wasn't
sufficient as a kernel build alone nearly fills it, but 8G was enough
for me.  For a fair comparison I therefore did the HDD timings both
with and without rm_work.  Each build was only done once, but the
machine was otherwise idle, so the error margins should be
respectable.  The benchmark was building core-image-sato for atom-pc
from scratch (with cached downloads).
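
For the record, each timed run looked roughly like this (MACHINE and
rm_work set in local.conf; the exact cleanup is my reading of what
"from scratch with cached downloads" should mean):

  # conf/local.conf for the rm_work runs:
  MACHINE = "atom-pc"
  INHERIT += "rm_work"

  # From scratch but with cached downloads: remove tmp and the shared
  # state cache, keep downloads/, then time the image build.
  $ rm -rf tmp sstate-cache
  $ time bitbake core-image-sato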

WORKDIR on HDD, without rm_work: ~68 minutes
WORKDIR on HDD, with rm_work:    ~71 minutes
WORKDIR in tmpfs, with rm_work:  ~64 minutes

Everyone loves graphs, so here's one I knocked up:  http://bit.ly/146B0Xo

Conclusion: even with the overhead of rm_work there's a measurable
performance advantage to using a tmpfs for the workdir (71 down to 64
minutes), but the build isn't massively I/O bound on commodity
hardware (an i7 with WD Caviar Green disks).  Either way it's a quick
and easy test (assuming enough RAM) to see how I/O bound your own
builds are.
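
If you want numbers rather than a gut feeling, watching the disk
while bitbake runs tells you a lot (vmstat is in procps, iostat in
the sysstat package):

  # A consistently high "wa" column means the CPUs are waiting on I/O
  # rather than compiling:
  $ vmstat 5

  # Extended per-disk statistics; sustained high %util and await on
  # the build disk also point at an I/O-bound build:
  $ iostat -x 5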

Ross


