[meta-virtualization] Question: Integration of LXC

Mark Asselstine mark.asselstine at windriver.com
Fri May 25 08:24:30 PDT 2018


On Thu, May 24, 2018 at 3:06 PM, Mark Asselstine
<mark.asselstine at windriver.com> wrote:
> On Wed, May 23, 2018 at 2:15 PM, Bruce Ashfield
> <bruce.ashfield at gmail.com> wrote:
>>
>>
>> On Wed, May 23, 2018 at 7:19 AM, Nicolai Weis <nicolai.weis at web.de> wrote:
>>>
>>> On Tue, May 22, 2018 at 7:38 AM, Nicolai Weis <nicolai.weis at web.de>
>>> wrote:
>>>
>>> > > Hi all,
>>> > >
>>> > > In a project I want to integrate LXC containers into
>>> > > core-image-minimal for the MinnowBoard Turbot. I'm using the krogoth
>>> > > branch for the meta layers (I have to stay on this branch).
>>> > > As described in the README file, I added meta-virtualization, meta-oe,
>>> > > meta-networking, meta-filesystems and meta-python to bblayers.conf.
>>> > > At the end of local.conf I added BBFILE_PRIORITY_openembedded-layer =
>>> > > "4" and IMAGE_INSTALL_append = " lxc" (lxc_2.0.0.bb).
>>> > >
>>> > > After the successful build I booted the image and created an
>>> > > lxc-busybox container with the busybox template (should I use a
>>> > > different template?). When I tried to start the container, the error
>>> > > "1079 failed initializing cgroup support" appeared, and lxc-checkconfig
>>> > > showed "cgroup-namespaces: required". I tried to solve this by adding
>>> > > cgroup-lite to IMAGE_INSTALL_append, but it didn't change anything
>>> > > (it only made the cgroup-namespaces line disappear and showed "Cgroup
>>> > > clone_children flag: enabled"). I'm using sysvinit instead of systemd.
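(For reference, the sequence being described looks roughly like the
following; the busybox template is the one shipped with lxc and 'Test' is
just an illustrative container name.)
---
# create a container from the busybox template shipped with lxc
lxc-create -t busybox -n Test

# report which kernel/cgroup prerequisites lxc thinks are missing
lxc-checkconfig

# start the container in the foreground so the error output is visible
lxc-start -n Test -F
---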
>>>
>>>
>>> > What kernel are you using? This typically means that the kernel config
>>> > isn't correct. If you aren't using linux-yocto, then the fragments we
>>> > have in the layer are likely not being applied, and you'll need to
>>> > manually configure your kernel with the right options.
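(A quick way to check this is to inspect the running kernel's configuration
on the target; a sketch, assuming the kernel exposes /proc/config.gz, i.e.
it was built with CONFIG_IKCONFIG_PROC.)
---
# dump the running kernel's configuration and look at the namespace/cgroup
# options that lxc-checkconfig also inspects
zcat /proc/config.gz | \
    grep -E 'CONFIG_CGROUPS|CONFIG_NAMESPACES|CONFIG_PID_NS|CONFIG_NET_NS|CONFIG_VETH'
---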
>>>
>>> In the minnowboard image I'm using kernel version
>>> 4.4.26-rt19-yocto-preempt-rt. I also tried it with a QEMU image using
>>> kernel version 4.4.26-yocto-standard.
>>
>>
>> You are fine with both of these kernels. I know that we've run LXC against
>> them in the past.
>>>
>>> > Otherwise, it could be that the cgroups are not being mounted properly
>>> > by your recipe, so have a look and confirm whether cgroup or cgroup2
>>> > filesystems are in fact mounted in your image.
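(On the running image this can be checked with something like the following
sketch.)
---
# the kernel's view of what is currently mounted
grep cgroup /proc/mounts

# with cgroup v1, the individual controller hierarchies normally show up here
ls /sys/fs/cgroup
---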
>>>
>>> I did the mounting with the cgroups-mount script in /bin (which comes
>>> from adding cgroup-lite to the image). Inside /sys/fs/cgroup the mounted
>>> controller directories are there: blkio, cpu, cpuacct, cpuset, debug,
>>> devices, freezer, memory and net_cls. I also checked the kernel with the
>>> cgroups_check-config.sh script from Docker. Its output showed the
>>> following missing options, but I don't think they are the reason the
>>> lxc-busybox container doesn't start:
>>> - Generally necessary, missing: CONFIG_VETH,
>>>   CONFIG_IP_NF_TARGET_MASQUERADE, CONFIG_NETFILTER_XT_MATCH_ADDRTYPE,
>>>   CONFIG_NETFILTER_XT_MATCH_IPVS, CONFIG_IP_NF_NAT,
>>>   CONFIG_DEVPTS_MULTIPLE_INSTANCES
>>> - Optional features, missing: CONFIG_CGROUP_PIDS,
>>>   CONFIG_BLK_DEV_THROTTLING, CONFIG_CFQ_GROUP_IOSCHED, CONFIG_CGROUP_PERF,
>>>   CONFIG_CGROUP_HUGETLB, CONFIG_CGROUP_NET_PRIO, CONFIG_CFS_BANDWIDTH,
>>>   CONFIG_RT_GROUP_SCHED, CONFIG_IP_VS, CONFIG_IP_VS_NFCT, CONFIG_IP_VS_RR
>>>
>>> Running lxc-start -n Test -F showed that "start.c: 1079 failed
>>> initializing cgroup support" comes right after "cgfsng.c: 431 no systemd
>>> controller mountpoint found". If I try to mount a systemd hierarchy (I'm
>>> using sysvinit, so mounting systemd makes no sense to me) I get a lot of
>>> other failures.
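(For reference, the "no systemd controller mountpoint found" message from
lxc's cgfsng driver can usually be worked around on a non-systemd system by
mounting an empty, named cgroup v1 hierarchy; a sketch of the manual
workaround, not something the recipes do automatically.)
---
# create a controller-less, named cgroup hierarchy that lxc recognizes as
# the "systemd" mountpoint; no actual systemd is involved
mkdir -p /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
---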
>>
>>
>> Both lxc and docker have, over time, evolved very specific expectations
>> about mounts (in particular cgroup mounts), and it gets worse as newer
>> versions move from cgroup to cgroup2 mount structures.
>>
>> It could also just be that our sysvinit support has bit-rotted. I know that
>> I've been using systemd for quite a long time and haven't heard of a lot of
>> sysvinit testing.
>>
>> My suggestion is that we need to dive into the code to see what mount
>> structure that LXC version is looking for, and then either figure out how
>> to mount it manually (which I've done in the past) or get cgroup-lite to
>> take care of it.
>>
>> Bruce
>
> I actually had a cgroup-lite uprev in flight, which has now been sent
> to the list. I also plan to uprev LXC in the next week or so, along
> with making some changes, such as dropping the lxc-setup package,
> which has its roots in early iterations of the systemd/sysvinit split
> and can be done differently now. If I find the time to get this done
> you should see the series soon.

Just to follow up on this: with the cgroup-lite change sent and merged
yesterday, I am able to create and run a container using the lxc busybox
template.

I needed to update the template to 'binary_copy passwd' and adjust the
'chmod' path for this from '/bin/passwd' to '/usr/bin/passwd', along with
manually tweaking the memory cgroup (via
'echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy'). I will not consider
digging in to fix these until I confirm they are still an issue after the
LXC uprev.
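(Roughly, the path tweak and the memory cgroup tweak look like the
following; the template path is the usual install location for the lxc
package and may differ on other images.)
---
# point the template's chmod at the real passwd location on this image
sed -i 's|/bin/passwd|/usr/bin/passwd|' /usr/share/lxc/templates/lxc-busybox

# enable hierarchical accounting on the memory controller so the container's
# memory cgroup can be set up
echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
---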

The following are the only changes to my local.conf
---
MACHINE = "qemux86-64"
DISTRO_FEATURES += "virtualization"
PACKAGE_CLASSES = "package_ipk"
IMAGE_INSTALL_append += "cgroup-lite lxc"
---

So no systemd etc..
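(For anyone trying to reproduce this, the build and boot side is roughly as
follows; the image and machine names match the local.conf above.)
---
# build the image with the local.conf changes above and boot it under qemu
bitbake core-image-minimal
runqemu qemux86-64

# then, on the booted target, the container steps earlier in the thread
# apply unchanged (lxc-create -t busybox -n <name>, lxc-start -n <name> -F)
---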

This is all on 'master', so you will have to attempt to recreate it on krogoth.

Mark

>
> Mark
>
>>
>>> > Bruce
>>>
>>>
>>> > > Do I have to change some configurations in the lxc-recipe? I would be
>>> > > very
>>> > > grateful if you can give me some information about it.
>>> > >
>>> > > Thanks
>>> > > Nicolai
>>
>>
>>
>>
>> --
>> "Thou shalt not follow the NULL pointer, for chaos and madness await thee at
>> its end"
>>

