[yocto] Using tmpfs for WORKDIR

Martin Jansa martin.jansa at gmail.com
Tue Aug 13 04:32:52 PDT 2013


On Tue, Aug 13, 2013 at 11:40:03AM +0100, Burton, Ross wrote:
> Hi,
> 
> For a while I've been wondering about the possible performance
> improvement from using a tmpfs as $WORKDIR, so each package is
> unpacked, compiled, installed, and packaged in a tmpfs and then the
> final packages moved to the deploy/sstate/sysroot in persistent
> storage.  In theory with lots of RAM and a relatively long file system
> commit duration this should be what effectively happens, but there is
> a lot of I/O still happening during builds (and occasionally pausing
> the build whilst buffers empty here) which I was trying to mitigate,
> despite 12G of my 16G RAM being used as the page cache.
> 
> Last night I finally got around to testing this.  Unless you've an
> ungodly amount of RAM the use of rm_work is mandatory (a 6G tmpfs
> wasn't sufficient as a kernel build almost fills that, 8G was
> sufficient for me) so I did the HDD times with and without rm_work for
> fair comparisons.  Each build was only done once but the machine was
> otherwise idle so error margins should be respectable.  The benchmark
> was core-image-sato for atom-pc from scratch (with cached downloads).
> 
> Work in HDD without rm_work: ~68 minutes
> Work in HDD with rm_work: ~71 minutes
> Work in tmpfs with rm_work: ~64 minutes
> 
> Everyone loves graphs, so here's one I knocked up:  http://bit.ly/146B0Xo
> 
> Conclusion: even with the overhead of rm_work there's a performance
> advantage to using a tmpfs for the workdir, but the build isn't
> massively I/O bound on commodity hardware (i7 with WD Caviar Green
> disks).  It's definitely a quick and easy test (assuming enough RAM)
> to see how I/O bound your own builds are.

I've been building in tmpfs for a long time, so I can add a few more comments.

1) With 64GB RAM + rm_work I can do my huge world builds (with 20+ layers)
entirely in tmpfs (the whole TMPDIR, not only WORKDIR). Before
http://git.openembedded.org/openembedded-core/commit/meta/classes/rm_work.bbclass?id=4067afcda78d17058f2aa8d7f82173d181e0aae4
I had to build in "steps".
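For reference, enabling rm_work globally and pointing the build at a tmpfs is a small conf/local.conf fragment (INHERIT += "rm_work" is the standard way to enable the class; the TMPDIR path is an example mount point):

```
# conf/local.conf
INHERIT += "rm_work"
# TMPDIR placed on a tmpfs mount (example path)
TMPDIR = "/mnt/tmpfs-build/tmp"
```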

2) rm_work.bbclass sets BB_SCHEDULER ?= "completion", but it doesn't
work as I would expect: many build directories are still kept around
before the rm_work task is executed. A simple solution for lazy people is
to build in "steps". Instead of
bitbake my-big-image
do
bitbake gcc-cross
bitbake virtual/kernel
bitbake webkit-gtk
bitbake my-big-image
This forces rm_work to be executed sooner (each target finishes, and
each step frees more space in tmpfs). This way I'm able to use a 14GB
tmpfs WORKDIR on my poor desktop with only 16GB RAM to build anything
(webkit builds for an x86-64 MACHINE are the worst).
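The "steps" workaround above can be sketched as a small wrapper that runs bitbake once per heavy intermediate target, so rm_work cleans tmpfs between steps instead of only near the end of one big image build (the target names are examples; BITBAKE is an override hook I've added for dry runs):

```shell
# build_in_steps: invoke bitbake once per target, in order, so that
# rm_work's cleanup runs after each step and peak tmpfs usage stays low.
build_in_steps() {
    # BITBAKE can be overridden (e.g. BITBAKE=echo for a dry run)
    bb="${BITBAKE:-bitbake}"
    for target in "$@"; do
        echo "== step: $target =="
        "$bb" "$target" || return 1
    done
}

# Usage:
#   build_in_steps gcc-cross virtual/kernel webkit-gtk my-big-image
```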

3) There is a bug in sstate dependency handling: too many
do_package_setscene tasks are executed in each build with rm_work, which
also causes package/packages-split directories to be created again in
tmpfs. See
http://git.openembedded.org/openembedded-core/commit/?id=6107ee294afde395e39d084c33e8e94013c625a9

4) Using tmpfs has 2 more advantages:
   - disks don't wear out so fast
   - the system is more responsive when you're using the same disk for
     something else, e.g. editing recipes while a build is already
     running, or when the nightly backup starts while a build is still
     running

Regards,

-- 
Martin 'JaMa' Jansa     jabber: Martin.Jansa at gmail.com