[yocto] Yocto-ish upgrade-in-place strategy

Craig McQueen craig.mcqueen at innerrange.com
Mon May 4 21:45:44 PDT 2015


From: yocto-bounces at yoctoproject.org On Behalf Of Chris Morgan

On Saturday, May 2, 2015, Patrick Doyle <wpdster at gmail.com> wrote:
Rather than inventing this from scratch yet again, I would like to ask
what the Yocto-ish best practice is for deploying an embedded system
that has to support in-place upgrades.

It seems to me that this should be a fairly common scenario:
I have (or, rather am in the process of developing yet another) an
embedded application that will be running on a device whose power
supply is uncertain at best.  Consequently, I want to run from a
read-only squashfs rootfs, with some small amount of seldom changed
configuration data (most likely stored in a JFFS partition).

But I need a strategy to upgrade this system in place.  Since I am
running from a read-only squashfs, I can't apt-get or rpm upgrade
individual packages.  I must redeploy the entire image.

I can divvy up the flash however I want, so I am thinking that I would
like to use u-boot to boot a rescue image from one partition, that
would kexec the deployed image from a different partition.

Are there Yocto recipes, blogs, community experience with this sort of
thing, or should I invent my own solution?

Again, this feels like a common problem that others should have already
solved, and I would rather solve uncommon problems than re-solve
common ones.

--wpd

Is there a standard way? We've seen a few different approaches across Android systems (phones), Linux distributions, and things like Chromebooks.

In our case we are using two U-Boot, two kernel, and two root filesystem partitions, with the U-Boot environment controlling which set is active. SquashFS for the root fs, raw images for U-Boot and the kernel. An OverlayFS in another read/write partition sits on top of the rootfs and holds system configuration. Media and other stuff go into yet another btrfs partition that, like the OverlayFS one, isn't managed by the update system.

The approach is to update the second rootfs while the first one is running, swap the U-Boot environment to point at the other rootfs, and then reboot when appropriate. This lets us avoid downtime while downloading the update; we download in the background.
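
Concretely, that swap from userspace is just fw_printenv/fw_setenv (from u-boot-fw-utils); something like the following, where 'active_set' is an illustrative variable name rather than anything standard:

    # Flip the active kernel/rootfs set; 'active_set' is a made-up variable.
    current=$(fw_printenv -n active_set)   # "1" or "2"
    if [ "$current" = "1" ]; then
        fw_setenv active_set 2
    else
        fw_setenv active_set 1
    fi
    # U-Boot's boot command picks the kernel/rootfs partitions from
    # ${active_set} on the next reboot.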

We build everything with Yocto, but AFAIK we don't have much Yocto-specific machinery for the update system, because we don't have an upgrade partition but rather two full sets of partitions.

Thoughts?



I’m working on this for a BeagleBone Black type system, which uses eMMC (i.e. disk partitions). I’m considering:

Partition 1: FAT16 “BOOT”, with MLO, u-boot.img, and custom uEnv.txt (U-Boot rules to append)
Partition 2: ext4 “KERNEL1”, which contains a zImage with attached initramfs, and device tree
Partition 3: ext4 “KERNEL2”, which contains a zImage with attached initramfs, and device tree
Partition 4: ext4 “DATA”, a read/write filesystem

The DATA partition holds the SquashFS root image(s), named /lib/firmware/rootro1 and/or /lib/firmware/rootro2.

At boot, U-Boot loads the custom rules from uEnv.txt. These check for the presence of a BOOT2 file on the DATA partition: if it exists, U-Boot boots the kernel from KERNEL2, otherwise from KERNEL1. It passes the kernel arguments:
    rootrw=/dev/mmcblk1p4
    rootro=/mnt/rootrw/lib/firmware/rootro1 -- or rootro2 depending on whether booting KERNEL1 or KERNEL2.
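
For reference, a rough sketch of what the uEnv.txt rules could look like (names, partition numbers and load addresses are illustrative, not the exact rules I'm using; it assumes the stock AM335x environment runs ${uenvcmd} and that the generic 'test -e' filesystem command is available):

    # Illustrative uEnv.txt sketch -- not the exact rules in use.
    mmcdev=1
    kaddr=0x82000000
    fdtaddr=0x88000000
    pick_slot=if test -e mmc ${mmcdev}:4 /BOOT2; then setenv kpart 3; setenv roimg rootro2; else setenv kpart 2; setenv roimg rootro1; fi
    load_os=load mmc ${mmcdev}:${kpart} ${kaddr} zImage; load mmc ${mmcdev}:${kpart} ${fdtaddr} am335x-boneblack.dtb
    set_args=setenv bootargs console=ttyO0,115200n8 rootrw=/dev/mmcblk1p4 rootro=/mnt/rootrw/lib/firmware/${roimg}
    uenvcmd=run pick_slot; run load_os; run set_args; bootz ${kaddr} - ${fdtaddr}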

The kernel contains an initramfs (built with initramfs-framework) which mounts the DATA partition at /mnt/rootrw. It then loop-mounts the SquashFS image named by the 'rootro' kernel argument (e.g. /mnt/rootrw/lib/firmware/rootro1) at /mnt/rootro. Finally it mounts an OverlayFS with the rootrw mount as the writable layer over the read-only rootro mount.
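
The initramfs side, roughly, as an initramfs-framework style module (a simplified sketch, not the exact module; "overlayroot" is a made-up module name, and it assumes initramfs-framework's conventions of exposing kernel parameters as bootparam_* shell variables and mounting the final root at ${ROOTFS_DIR}):

    #!/bin/sh
    # Simplified sketch of an initramfs-framework module. rootrw=... and
    # rootro=... from the kernel command line appear as $bootparam_rootrw
    # and $bootparam_rootro.

    overlayroot_enabled() {
        [ -n "$bootparam_rootro" ] && [ -n "$bootparam_rootrw" ]
    }

    overlayroot_run() {
        mkdir -p /mnt/rootrw /mnt/rootro

        # Read/write DATA partition, e.g. /dev/mmcblk1p4
        mount "$bootparam_rootrw" /mnt/rootrw

        # Loop-mount the selected SquashFS image,
        # e.g. /mnt/rootrw/lib/firmware/rootro1
        mount -t squashfs -o loop,ro "$bootparam_rootro" /mnt/rootro

        # Writable layer on top of the read-only image; mainline overlayfs
        # (3.18+) requires a workdir, older out-of-tree versions differ.
        mkdir -p /mnt/rootrw/upper /mnt/rootrw/work
        mount -t overlay overlay \
            -o lowerdir=/mnt/rootro,upperdir=/mnt/rootrw/upper,workdir=/mnt/rootrw/work \
            "$ROOTFS_DIR"
    }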

This is still a work in progress, but it seems to be working well for me so far.

Then, I need to have an upgrade image, which is an archive of the following (a packaging sketch appears after the list):

- SquashFS rootro image
- Kernel with attached initramfs
- Device tree
- Any metadata for the upgrade, README, etc.
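
Building the bundle can be as simple as a tarball plus a checksum manifest; the file names here are just illustrative, not a fixed format:

    # Illustrative packaging step -- file names are made up.
    sha256sum rootro.squashfs zImage-initramfs am335x-boneblack.dtb > sha256sums
    tar -czf upgrade-bundle.tar.gz \
        rootro.squashfs zImage-initramfs am335x-boneblack.dtb \
        sha256sums README metadata.txt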

The user can upload it to the device through a web interface, or something like that. Then it gets processed after upload (a rough shell sketch of these steps follows the list):

- The integrity is verified somehow (e.g. against a hash).
- The kernel and device tree are copied to whichever of the KERNEL1/KERNEL2 partitions is not currently in use.
- The SquashFS rootro gets copied to /lib/firmware/rootro1 or rootro2, whichever is not currently in use.
- The partition 4 file BOOT2 is created or deleted, as needed, to cause U-Boot to boot the "other image".
- Reboot.
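
In shell terms, that processing could look roughly like this (illustrative only; partition numbers and paths follow the layout above, and error handling is mostly left out):

    #!/bin/sh
    # Rough sketch of applying an uploaded bundle -- not production code.
    set -e
    BUNDLE=/mnt/rootrw/upload/upgrade-bundle.tar.gz
    TMP=/mnt/rootrw/upload/unpack

    mkdir -p "$TMP"
    tar -xzf "$BUNDLE" -C "$TMP"
    ( cd "$TMP" && sha256sum -c sha256sums )    # integrity check

    # Work out which slot is idle: if we booted rootro1/KERNEL1, write slot 2.
    if grep -q 'rootro=.*rootro1' /proc/cmdline; then
        KERNEL_PART=/dev/mmcblk1p3
        RO_IMAGE=/mnt/rootrw/lib/firmware/rootro2
        NEXT_SLOT=2
    else
        KERNEL_PART=/dev/mmcblk1p2
        RO_IMAGE=/mnt/rootrw/lib/firmware/rootro1
        NEXT_SLOT=1
    fi

    # Copy kernel + device tree into the idle KERNELn partition.
    mkdir -p /mnt/kernel
    mount "$KERNEL_PART" /mnt/kernel
    cp "$TMP/zImage-initramfs" /mnt/kernel/zImage
    cp "$TMP/am335x-boneblack.dtb" /mnt/kernel/
    umount /mnt/kernel

    # Copy the new read-only rootfs image into place.
    cp "$TMP/rootro.squashfs" "$RO_IMAGE"
    sync

    # Only now flip the BOOT2 flag, so U-Boot picks the freshly written slot.
    if [ "$NEXT_SLOT" = "2" ]; then
        touch /mnt/rootrw/BOOT2
    else
        rm -f /mnt/rootrw/BOOT2
    fi
    sync
    reboot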

The BeagleBone Black U-Boot implements an incrementing 'bootcount', stored in RTC scratch registers, I believe. A Linux kernel driver could be written to let the kernel or a userspace app reset it to 0 once the system is known to have booted successfully. U-Boot could then take some alternative action if bootcount grows too large (meaning the new image is not booting successfully), such as reverting to the older image, if present.
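
U-Boot's stock boot counter (CONFIG_BOOTCOUNT_LIMIT, with the bootlimit and altbootcmd environment variables) is intended for exactly this. A sketch of how it might tie in, where use_other_slot is a made-up variable the pick_slot rule above would have to honour:

    # bootcount increments on every boot; once it exceeds bootlimit,
    # U-Boot runs altbootcmd instead of bootcmd.
    bootlimit=3
    altbootcmd=echo boot count exceeded bootlimit, trying other slot; setenv use_other_slot 1; run bootcmd

Resetting bootcount from Linux would still need the custom driver mentioned above, or direct writes to the RTC scratch register.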

Craig McQueen
