[yocto] systemd Version Going Backwards on Warrior

Robert Joslyn robert.joslyn at redrectangle.org
Tue Oct 29 22:12:14 PDT 2019


On Tue, 2019-10-29 at 20:30 +0100, Martin Jansa wrote:
> The PR server never knows which revision is really newer (in git).
> 
> It just returns max(LOCALCOUNT)+1 when it gets a query for a hash that
> isn't stored in the database yet.
> 
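As I understand it, the lookup is roughly equivalent to this sketch
against the PRMAIN_nohist table shown further down (an illustration of
the logic, not the actual PRserv code):

    sqlite3 prserv.sqlite3 "SELECT ifnull(max(value) + 1, 0) \
        FROM PRMAIN_nohist \
        WHERE version = 'AUTOINC-systemd-1_241+' AND pkgarch = 'qemux86';"

The result is then stored as the LOCALCOUNT for the new checksum.
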
> Either the build in question didn't use PRserv at all, or PRserv's
> cache was deleted between builds, or the builds were using the same
> buildhistory but different PRservs, or the first systemd SRCREV was
> reused from sstate created on another server that doesn't share the
> PRserv (so it was never built locally, and the local PRserv was never
> queried to store 511646b8ac as a LOCALCOUNT).
> 
> e.g. I've just built systemd with the 511646b8ac hash as:
> systemd_241+0+511646b8ac-r0.0webos4_qemux86.ipk
> but it was reused from sstate created on another Jenkins server, as
> shown by buildstats-summary:
> 
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 100.0% sstate reuse(124 setscene, 0 scratch)
> NOTE:   do_package_qa: 100.0% sstate reuse(10 setscene, 0 scratch)
> NOTE:   do_packagedata: 100.0% sstate reuse(51 setscene, 0 scratch)
> NOTE:   do_package_write_ipk: 100.0% sstate reuse(10 setscene, 0 scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(17 setscene, 0 scratch)
> 
> Then I reverted oe-core commit 8b9703454cb2a8a0aa6b7942498f191935d547ea
> to go back to the c1f8ff8d0de7e303b8004b02a0a47d4cc103a7f8 systemd
> revision.
> 
> This time it didn't find a valid sstate archive for it:
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 16 scratch)
> NOTE:   do_package_qa: 0.0% sstate reuse(0 setscene, 19 scratch)
> NOTE:   do_package: 15.8% sstate reuse(3 setscene, 16 scratch)
> NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 16 scratch)
> NOTE:   do_package_write_ipk: 0.0% sstate reuse(0 setscene, 19 scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)
> 
> and the resulting .ipk also has +0:
> systemd_241+0+c1f8ff8d0d-r0.0webos4_qemux86.ipk
> but no warning is shown, because in this case the version went from
> 511646b8ac to c1f8ff8d0d, which the version comparison treats as going
> forward, not backwards.
> 
> Removing the revert again doesn't trigger the warning either, because
> everything is again reused from sstate (so the QA checks won't get
> executed):
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 100.0% sstate reuse(16 setscene, 0 scratch)
> NOTE:   do_package_qa: 100.0% sstate reuse(19 setscene, 0 scratch)
> NOTE:   do_packagedata: 100.0% sstate reuse(16 setscene, 0 scratch)
> NOTE:   do_package_write_ipk: 100.0% sstate reuse(19 setscene, 0 scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)
> 
> And the local PRserv database still has only the c1f8ff8d0d hash,
> because 511646b8ac was never really queried against this local PRserv.
> 
> cache$ sqlite3 prserv.sqlite3 \
>     "select * from PRMAIN_nohist where version like 'AUTOINC-systemd-1%'"
> AUTOINC-systemd-1_241+|qemux86|AUTOINC+c1f8ff8d0d|0
> 
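If both revisions had gone through the same PRserv, you would expect a
second row with an incremented value, something like this hypothetical
state:

    AUTOINC-systemd-1_241+|qemux86|AUTOINC+c1f8ff8d0d|0
    AUTOINC-systemd-1_241+|qemux86|AUTOINC+511646b8ac|1
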
> Also, the systemd recipe uses a strange format:
> PV_append = "+${SRCPV}"
> Most recipes use "+git${SRCPV}" or "+gitr${SRCPV}" to make it clearer
> where this +0+hash came from.
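For illustration, a hypothetical recipe fragment using the clearer form:

    PV = "241"
    PV_append = "+git${SRCPV}"

would produce a package named e.g.
systemd_241+git0+511646b8ac-r0.0webos4_qemux86.ipk, making it obvious
that the 0 is the PRserv-managed AUTOINC counter and 511646b8ac is the
git revision.
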
> 
> So, long story short: the change is correct and PRserv should handle
> this. There are many cases where it will fail (e.g.
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=5399), but that's not
> a reason to start PE bumps everywhere.

I think this explains what I'm seeing and matches what Ross said. I did
set up the same test and was able to see the version going forward
properly when I have the PR server enabled. The file goes from

systemd_241+0+c1f8ff8d0d-r0.0_core2-64.ipk
to
systemd_241+1+511646b8ac-r0.0_core2-64.ipk

It wasn't obvious to me that the +0 would be incremented by the PR
server; I guess I never noticed it before. I already had the PR server
running for my production builds, but I didn't have it enabled for my
test builds where I got the error. I'll set up another PR server for my
test builds to prevent false alarms like this.
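For reference, this is roughly the setup I mean, with a hypothetical
host and port (PRSERV_HOST = "localhost:0" in local.conf would instead
let BitBake spawn a local PR server automatically):

    # Start a standalone PR server shared by the test builds:
    bitbake-prserv --host 192.168.7.1 --port 8585 --start

    # Then point each build at it in conf/local.conf:
    # PRSERV_HOST = "192.168.7.1:8585"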

Thanks for the help!

Robert



