[yocto] cannot re-use shared state cache between build hosts

Andrea Galbusera gizero at gmail.com
Fri Sep 1 08:04:00 PDT 2017


Hi Maciej,

On Fri, Sep 1, 2017 at 4:08 PM, Maciej Borzęcki <maciej.borzecki at rndity.com>
wrote:

> On Fri, Sep 1, 2017 at 3:54 PM, Andrea Galbusera <gizero at gmail.com> wrote:
> > Hi!
> >
> > I was trying to share sstate between different hosts, but the consumer
> > build system seems to be unable to re-use any sstate object. My scenario
> > is set up as follows:
> >
> > * The cache was populated by a pristine qemux86 core-image-minimal
> > build of morty. This was done in a crops/poky container (running in
> > Docker on Mac).
> > * The cache was then served via HTTP.
>
> Make sure that you use a decent HTTP server. A simple `python3 -m
> http.server` will quickly choke when the mirror is being checked. Also,
> running `bitbake -DDD -v` makes investigating this much easier.
>

To be honest, the current server was indeed set up with Python's
SimpleHTTPServer... As you suggest, I checked the verbose debug log and
noticed what is happening behind the apparently happy "Checking sstate
mirror object availability" step. After a first "SState: Successful fetch
test for", which I see correctly served with a 200 on the server side, the
tests for every other sstate object systematically fail with logs like
this:

DEBUG: SState: Attempting to fetch file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: Searching for 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz in paths:
    /home/vagrant/koan/morty/build/sstate-cache
DEBUG: Defaulting to /home/vagrant/koan/morty/build/sstate-cache/7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz for 7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: Testing URL file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: For url ['file', '', '7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz', '', '', OrderedDict()] comparing ['file', '', '.*', '', '', OrderedDict()] to ['http', '192.168.33.1:8000', '/sstate-cache/PATH', '', '', OrderedDict([('downloadfilename', 'PATH')])]
DEBUG: For url file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz returning http://192.168.33.1:8000/sstate-cache/7d/sstate%3Alibxml2%3Ai586-poky-linux%3A2.9.4%3Ar0%3Ai586%3A3%3A7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz;downloadfilename=7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz
DEBUG: checkstatus: trying again
DEBUG: checkstatus() urlopen failed: <urlopen error [Errno 9] Bad file descriptor>
DEBUG: SState: Unsuccessful fetch test for file://7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz

Nothing is reported server-side for any of these failures... My guess is
that bitbake keeps the HTTP connection alive between mirror checks, while
SimpleHTTPServer closes it after every request, so the follow-up checks
fail client-side before a request ever reaches the server. As you
recommend, I'll try to set up something more "decent" for the HTTP server
and see if it helps.
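
In the meantime, a threaded stand-in for `python3 -m http.server` might be
worth a try. Below is a minimal sketch (untested on my side; it assumes
the server is started from the directory that contains sstate-cache/, so
the /sstate-cache/PATH URLs keep working):

    # serve-sstate.py - threaded stand-in for `python3 -m http.server`
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from socketserver import ThreadingMixIn

    # Speak HTTP/1.1 so keep-alive connections survive between bitbake's
    # mirror checks instead of being torn down after every request (the
    # stock server answers with HTTP/1.0 and closes the socket).
    SimpleHTTPRequestHandler.protocol_version = "HTTP/1.1"

    class ThreadingSstateServer(ThreadingMixIn, HTTPServer):
        # Serve each availability check in its own thread so concurrent
        # requests don't queue behind one another.
        daemon_threads = True

    ThreadingSstateServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()

Run with `python3 serve-sstate.py` from the sstate mirror root; it listens
on the same port 8000 the mirror URL already points at.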



> > * The second host is a VM running Ubuntu 16.04 where I set
> > SSTATE_MIRRORS to point to the hosted sstate cache like this:
> >
> > SSTATE_MIRRORS ?= "\
> > file://.* http://192.168.33.1:8000/sstate-cache/PATH;downloadfilename=PATH"
> >
> > * I checked with curl that the VM can successfully get sstate objects
> > from the server (see the example check below).
> > * Then I start a new build (same metadata revisions, default
> > configuration for core-image-minimal), and each and every task runs
> > from scratch with no sstate cache re-use.
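> >
> > For example, a simple HEAD request against one of the cached objects
> > (path taken from the debug log above) comes back fine:
> >
> >   $ curl -I "http://192.168.33.1:8000/sstate-cache/7d/sstate:libxml2:i586-poky-linux:2.9.4:r0:i586:3:7da8fc3f7f5ed0102d23bdb86ac7ab32_package_qa.tgz"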
> >
> > Here are the two configurations from bitbake and /etc/lsb-release files:
> >
> > On the container used to seed sstate cache:
> >
> > Build Configuration:
> > BB_VERSION        = "1.32.0"
> > BUILD_SYS         = "x86_64-linux"
> > NATIVELSBSTRING   = "universal"
> > TARGET_SYS        = "i586-poky-linux"
> > MACHINE           = "qemux86"
> > DISTRO            = "poky"
> > DISTRO_VERSION    = "2.2.2"
> > TUNE_FEATURES     = "m32 i586"
> > TARGET_FPU        = ""
> > meta
> > meta-poky
> > meta-yocto-bsp    = "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
> >
> > $ cat /etc/lsb-release
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
> >
> > On the VM that should consume the cache:
> >
> > Build Configuration:
> > BB_VERSION        = "1.32.0"
> > BUILD_SYS         = "x86_64-linux"
> > NATIVELSBSTRING   = "Ubuntu-16.04"
> > TARGET_SYS        = "i586-poky-linux"
> > MACHINE           = "qemux86"
> > DISTRO            = "poky"
> > DISTRO_VERSION    = "2.2.2"
> > TUNE_FEATURES     = "m32 i586"
> > TARGET_FPU        = ""
> > meta
> > meta-poky
> > meta-yocto-bsp    = "morty:2a70e84643381eca0e7bf7928d4a3d56f9651128"
> >
> > $ cat /etc/lsb-release
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
> >
> >
> > To me, the only differing bit that should be able to invalidate sstate
> > cache objects is the value of NATIVELSBSTRING, which is "universal"
> > inside the container and "Ubuntu-16.04" on the VM. This sounds strange
> > to me, since both underlying systems are Ubuntu 16.04 (although not
> > exactly the same point release), as the /etc/lsb-release contents
> > confirm.
> >
> > Is the different NATIVELSBSTRING the root cause of everything being
> > re-built? If so, what causes the two values to differ, and what does
> > "universal" actually mean? To me it looks like a more generic and
> > inclusive term than any distro label, so I'm confused...
> >
> >
> --
> Maciej Borzecki
> RnDity
>