[yocto] sstate-cache clobbering

Trevor Woerner trevor.woerner at linaro.org
Wed Jul 3 22:06:58 PDT 2013


Let's say I have two build machines (A and B), both running the exact
same version of a given distribution. Also assume both machines have a
fully populated "downloads" directory but otherwise have not performed
any OE/Yocto builds.
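
For reference, both machines point their builds at that pre-populated
"downloads" directory via local.conf; the path below is just an
example, not my actual layout:

    # conf/local.conf on both machines (path is illustrative)
    DL_DIR = "/home/builder/downloads"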

On machine A I perform a fresh "bitbake core-image-minimal" and end up
with an sstate-cache of 733MB; the build summary says 324/1621 tasks
didn't need to be (re)run.
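
Concretely, the commands on machine A were along these lines (the size
comes from a plain du; the directory layout is just how I happen to
have it set up):

    # machine A: build from scratch, then check the cache size
    $ bitbake core-image-minimal
    $ du -sh sstate-cache/
    733M    sstate-cache/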

I then take this sstate-cache directory, copy it to machine B, and
perform a "bitbake core-image-minimal". This build takes under 5
minutes, and 1374/1621 tasks didn't need to be run.
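
The copy and rebuild were essentially the following (hostname and
paths are made up for illustration; SSTATE_DIR defaults to
${TOPDIR}/sstate-cache, so machine B picks up the copied directory
automatically):

    # machine A: ship the cache over
    $ rsync -a sstate-cache/ builder@machineB:/home/builder/build/sstate-cache/
    # machine B: rebuild the same image against the copied cache
    $ bitbake core-image-minimal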

So far this makes good sense. The sstate-cache is getting hit quite a
lot, and the second build is considerably faster than the first build
on machine A as a result.

Now I wipe machine B, then get it ready for a fresh build (i.e. put
the "downloads" directory in place, make sure it has all the necessary
host packages, etc.). Then on machine A I perform a "bitbake
core-image-minimal -c populate_sdk" and end up with an sstate-cache
that is 1.6GB in size. I copy machine A's 1.6GB sstate-cache to
machine B, and on machine B I perform a "bitbake core-image-minimal".

I would have expected this build on machine B to behave like the
previous one: finish in under 5 minutes with 1374/1621 tasks not
needing to run. Instead, this build takes 27 minutes and only 781/1621
tasks didn't need to be run.

Doesn't it seem strange that a larger sstate-cache built from the same
base image yields such a markedly lower sstate-cache hit rate?
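
In case it helps with diagnosis: my understanding (and I may well be
off here) is that bitbake-diffsigs can compare the signature data of a
task that was rerun on machine B against the corresponding .siginfo
file shipped inside the sstate-cache, which should show which input
changed. The filenames below are placeholders, not ones from my actual
build:

    # machine B: compare a rerun task's signature against the cached one
    $ bitbake-diffsigs \
          tmp/stamps/<arch>/<recipe>/*.do_compile.sigdata.<hash> \
          sstate-cache/<xx>/sstate-<recipe>-*-do_compile-<hash>.siginfo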


